I’m not entirely sure this fits the request, but I really liked the idea demonstrated here.

2 Likes

The first time I looked at a Satie score, I remember thinking, “man, this just looks like an etude”. It didn’t look all that impressive on paper. So much of the Satie “essence”, for me, is about the interpretation and performance of the piece (the How), rather than the notes on the page (the What).

To me, it really doesn’t matter which sequencing approach you use because, as the very name implies, a sequence is always going to be a coarse representation of what is happening in the music. There needs to be some intermediate interpretation layer that turns the notation or sequence into the sound that comes out of the speaker.

4 Likes

I think Batumi can do variable phase offset with CV and slider control? I’ve achieved something similar by having a single square-wave LFO with gently modulated speed, and then deriving additional streams from that wobbly “master clock”, e.g., with counters, probability, etc. Marbles is amazing for this kind of thing.

I think the difference is that my setup could not apply independent rubato to an individual voice, though?
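In software terms, the wobbly-master-clock idea might look something like this. A rough Python sketch, nothing like the actual modules, and all of the numbers (period, wobble amount, probabilities) are made up:

```python
import random

def wobbly_clock(base_period=0.5, wobble=0.05, n_ticks=16, seed=1):
    """A 'master clock' whose period drifts slowly (like a square-wave LFO
    with gently modulated speed). Returns tick times in seconds."""
    rng = random.Random(seed)
    t, drift, times = 0.0, 0.0, []
    for _ in range(n_ticks):
        # slow random walk on the period: gradual rubato rather than jitter
        drift = max(-wobble, min(wobble, drift + rng.uniform(-0.01, 0.01)))
        t += base_period + drift
        times.append(round(t, 3))
    return times

master = wobbly_clock()

# derive further streams from the wobbly master:
divided = master[::4]                                        # a /4 counter
rng = random.Random(2)
probabilistic = [t for t in master if rng.random() < 0.6]    # probability gate
```

Because everything is derived from the same drifting clock, the streams stay related to each other while the whole ensemble breathes, which is roughly what the hardware patch does.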

3 Likes

do you have a specific synthesizer or sound module you are looking to use with the hypothetical midi/sequencing? One thing to try is to see what you can do at the synth-patch-programming end to imitate performance dynamics: assign some very slow LFOs, math functions, random sources and similar to things like your envelope attack time, sustain levels and decay times. Anything that creates slight timbral changes can also change the feel of the timing, even if it is still very on-grid, and might get you a bit closer to what you are looking for if your sequencing methods can’t.
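A rough sketch of that suggestion in Python, assuming a simple per-step model (the LFO period, scaling amounts and function name are all arbitrary):

```python
import math
import random

def performance_mod(step, base_attack=0.01, base_decay=0.3, seed=7):
    """Per-step envelope tweaks: a very slow LFO plus a little random,
    so an on-grid sequence still drifts in feel over a phrase."""
    rng = random.Random(seed + step)
    slow_lfo = math.sin(2 * math.pi * step / 64)          # one cycle per 64 steps
    attack = base_attack * (1 + 0.5 * slow_lfo)           # breathe over the phrase
    decay = base_decay * (1 + 0.2 * rng.uniform(-1, 1))   # slight per-note variance
    return attack, decay

# envelope settings for one 64-step phrase
envelopes = [performance_mod(s) for s in range(64)]
```

The point is that the modulation is slow and small: the grid is untouched, but no two notes get exactly the same envelope.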

3 Likes

Interesting stuff here - I think I’ve figured something out in VCV Rack using the NYSTHI modules. Does anyone know how best to integrate VCV Rack with Ableton?

lots of cool sequencers for iOS
@brambos’s and @junklight’s among them …
Rozeta and Autony

although what’s being considered here is music
and how the playing of music
makes music
what I was thinking about with this thread >

…timing, rhythmically musical

‘What is music?’

2 Likes

:gift_heart: holy shit that guy did my favorite S versions. blew my mind to see him recently still at it.

never in a million years is a sequencer going to pull that off

3 Likes

i don’t suppose anybody knows of one of these for ios that can output midi directly from the app?

it’d be fun to put my long-disused theory knowledge to work in the context of modular!

i find myself thinking of Lorca’s duende

4 Likes

If you don’t play, get in front of a piano a bit to get a feel for hammer gradations.
Ciccolini was famous for Satie, but listen to him playing another composer, Mozart; what he’s internalized from Satie is more noticeable when you hear something familiar played so differently.

Particularly this album from late in his life - K. 332 is so good.

Forget using a sequencer (boring timing) or a quantizer (no modal shifts). Use a random source on things you’ve written.

That’s if you want to write music that stays interesting for longer than a few bars.
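That “random source on things you’ve written” idea, sketched in Python. The spread amounts are arbitrary, and notes are assumed to be (time, pitch, velocity) tuples:

```python
import random

def loosen(notes, time_spread=0.02, vel_spread=12, seed=3):
    """Apply a random source to written material: nudge note onsets
    and velocities, leaving the pitches (the writing) alone."""
    rng = random.Random(seed)
    out = []
    for t, pitch, vel in notes:
        t2 = max(0.0, t + rng.uniform(-time_spread, time_spread))
        v2 = min(127, max(1, vel + rng.randint(-vel_spread, vel_spread)))
        out.append((round(t2, 4), pitch, v2))
    return out

# a rigid 16th-note line, then the loosened version
written = [(i * 0.25, 60 + i % 5, 90) for i in range(8)]
loosened = loosen(written)
```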

1 Like

I’ve personally never been able to impart that kind of depth to phrasing using a sequencer. I feel that micro-timing adjustments and tempo changes are extremely difficult to program abstractly, but very intuitive to feel and play concretely. I’m not a great pianist, so sometimes I’ll just play a simplified version of the melody / chord sequence I’m working on by focusing only on the rhythm, and then correct / add in the missing notes to what I’ve played afterwards in Ableton.

Frankly, this inability to create subtlety and nuance in timing with purely pre-programmed electronic instruments is what drove me to learn to play acoustic instruments. If you don’t play any, I highly encourage you to pick one you like and give it a try. Immediacy is extremely rewarding, even when your vocabulary is very restricted in the beginning. And acoustic sound is often a beautiful complement to electronic sound.

5 Likes

You should also check out the Voltage Runner / VCSQ2 by Ginko, a sequencer driven by CVs/LFOs rather than clocks. Generally speaking, your gates and pauses need not be tied to your sequence. I don’t see a better way of achieving your goals than modular.

1 Like

I’ve never paid attention to the Wogglebug before. Would you mind explaining if and how it’s been utilized? Was there self-patching involved? Which outputs did you use?

I really like the results. I’ve been using modules such as the ADDAC207 and others to get hits on note changes, which works wonders with non-grid melodies such as the one above. It’s the actual sequencing that’s gotten me intrigued! Good work.

3 Likes

Wogglebug has two non-stepped random CV outputs - one smooth and one ‘woggle’, which is basically bouncy; both, I think, are related to a master random from a shift register. I’ve always just found it somehow musical. From memory (this was many years ago), it’s just those two outputs into the A-156, with two channels of QMMG pinged by the trigger outs on the quantiser.
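Out of curiosity, here’s a loose software imitation of that smooth-vs-‘woggle’ behaviour - this is not the actual Wogglebug circuit, just two outputs chasing the same stepped random source, with all constants invented:

```python
import random

def wogglebug_like(n=200, rate=0.05, seed=4):
    """Two random CVs derived from one stepped random source:
    'smooth' slews toward it (one-pole lag); 'woggle' follows it with a
    lightly damped spring, so it overshoots and bounces."""
    rng = random.Random(seed)
    target = rng.uniform(-1, 1)
    smooth = woggle = vel = 0.0
    smooth_out, woggle_out = [], []
    for i in range(n):
        if i % 25 == 0:                               # new stepped random value
            target = rng.uniform(-1, 1)
        smooth += rate * (target - smooth)            # one-pole slew
        vel += 0.1 * (target - woggle) - 0.05 * vel   # springy, underdamped
        woggle += vel
        smooth_out.append(smooth)
        woggle_out.append(woggle)
    return smooth_out, woggle_out

smooth_cv, woggle_cv = wogglebug_like()
```

Both outputs head for the same destinations, but the woggle one gets there the fun way.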

6 Likes

Really like where this thread has gone. Interesting how it has been interpreted by different folk.

As someone quite new to eurorack I’ve really enjoyed the imperfections in timing that the interaction of modules can bring, esp compared to the computer.

Those look like interesting modules.

On a similar note, I picked up a 4ms QPLFO and love how it adapts to changing clocks. Someone in the Magneto thread also mentioned that you can clock and tap the module at different rates to get shifting timings. Thinking about it, delay becomes a compositional effect at that point, especially with low feedback - more like moving notes around…

5 Likes

I would very much love to hear an example of this!

I believe the forthcoming SDS Digital Accord Sequarallel, being a multi-channel MIDI sequence recorder and player (thus something very unusual in Eurorack), would be one way to achieve what OP is after (assuming that, after all this time, OP is still after anything).

http://www.freshnelly.com/sequarallel/sequarallel.htm

I forget whether it can work without quantisation for those ‘human’ imperfections in timing, but it certainly has very fine timing as well as CV input for MIDI functions like probability (true to form, Sandy has packed it with countless features, many beyond my understanding). Anyway, there is no other module quite like this one; anyone interested in sequencers or MIDI-to-CV functionality might wish to investigate.

I was eager and desperate to buy a sequarallel myself until eventually settling on the 1010 blackbox; although sequarallel is far more powerful and interesting in what it can do with sequences, blackbox has multi-channel stereo sampling (with four-voice polyphony), looping and even a reasonable granulator - all of which I knew I would be able to use.

As far as hardware sequencers go, the Yamaha QY700 has a DAW-style piano-roll note editor as well as ample resolution for capturing nuanced performances (480 ppq). I used the smaller QY70 for a while, and it was, for lack of a better word, the most music-theory-minded sequencer I’ve tried. I eventually sold it because the interface felt too cramped on the handheld device, but the QY700 fixes this with more dedicated buttons. You can even humanize performances with “groove quantize”, which works much like Ableton’s groove template feature: you choose a preset or user rhythm template, and the timing, velocity and gate length of that rhythm are applied to your phrase on a scale from 0% to 200%.
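A hedged sketch of how a groove-quantize pass like that could work - this is my guess at the general idea, not Yamaha’s actual algorithm, and the note/template fields are invented:

```python
def groove_quantize(notes, template, strength=1.0, grid=0.25):
    """Pull each note's timing, velocity and gate toward a per-grid-slot
    template. strength 0.0..2.0 mirrors the QY's 0%..200% scale
    (values above 1.0 over-apply the groove).
    notes and template entries are dicts; template has per-slot
    'offset' (from the grid line), 'vel' and 'gate' targets."""
    out = []
    for n in notes:
        slot = int(round(n["time"] / grid)) % len(template)
        tgt = template[slot]
        # target time = nearest grid line plus the template's offset
        t_target = round(n["time"] / grid) * grid + tgt["offset"]
        out.append({
            "time": n["time"] + strength * (t_target - n["time"]),
            "vel": n["vel"] + strength * (tgt["vel"] - n["vel"]),
            "gate": n["gate"] + strength * (tgt["gate"] - n["gate"]),
        })
    return out

# a two-slot "swing" template applied to four rigid 16th notes
template = [{"offset": 0.0, "vel": 100, "gate": 0.2},
            {"offset": 0.03, "vel": 70, "gate": 0.1}]
notes = [{"time": i * 0.25, "vel": 90, "gate": 0.15} for i in range(4)]
grooved = groove_quantize(notes, template)
```

At `strength=0.5` the notes only move halfway toward the template, which is the useful part: you can dial in just a hint of the groove.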

My favorite feature of the QY series has to be the phrase editor. You can build your own auto-accompaniment phrases and transpose them to fit any chord type. Best of all, out-of-scale notes are allowed in the source pattern, so you can write chromatic approaches to scale notes and so on - much more sophisticated than the usual approach of locking notes strictly to the selected scale.
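For the curious, a naive way to implement that kind of chord-aware transposition - definitely not the QY’s actual algorithm; this toy version only remaps the chord third and carries everything else (including chromatic notes) as literal offsets:

```python
def retarget_phrase(phrase, src_root, dst_root, src_third=4, dst_third=3):
    """Retarget a phrase (list of MIDI note numbers) from one chord to
    another: shift by the root difference, bend the source chord's third
    to the target chord's quality, leave other notes as literal offsets."""
    shift = dst_root - src_root
    out = []
    for note in phrase:
        n = note + shift
        # if this note was the source chord's third, move it to the target third
        if (note - src_root) % 12 == src_third:
            n += dst_third - src_third
        out.append(n)
    return out

# a C major phrase with a chromatic approach (D# -> E), retargeted to A minor:
# the Es (major thirds of C) become Cs (minor thirds of A)
phrase = [60, 64, 63, 64, 67]
am = retarget_phrase(phrase, src_root=60, dst_root=57)
```

A real implementation would also need to handle fifths, sevenths and tensions per chord type, which is presumably where the QY’s chord tables come in.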

Interesting thread for sure. I think there are a few different ways to look at this and at what people are trying to achieve. I don’t have much to offer as technical advice, but the imperfections are something I think about a lot.

To me, human imperfection can be modelled with random or algorithmic manipulation of some sterile (for lack of a better word) source data. I don’t personally think that is all that useful, though. At best it can represent a poor human performance. Haphazard hesitations or dynamic changes through a phrase are what happen when a performer’s attention is focused on the wrong thing - perhaps they are just learning the piece, or generally distracted, or too caught up in trying not to make “mistakes”, or rushing (ahem, me), or they lack sufficient motivation or connection to approach it musically. There is also the theoretical element of understanding what is actually going on from a musical perspective and where your focus should be directed.

When there is intention behind the phrasing, pauses, dynamic development, balancing of voices and so on - basically the musical expression of performance - that is when you get something really worth the effort to achieve. That is much harder, however, because it cannot be achieved randomly or (traditionally) algorithmically; it requires intent. The decisions must be guided. Actually, the key point is that they have to be decisions, not accidents. This is of course perfectly possible to do offline, by editing MIDI data or, to a lesser extent, by interpreting score annotations. However, the best current interface for achieving it is, in my opinion, real-time input from an instrument (acoustic, MIDI or otherwise) by a practised performer.

Performers develop an active connection with the sound produced by their performance. Their decisions are based on instant auditory feedback on the impact of their actions. Tiny intuitive adjustments are made subconsciously, based on skills developed through intentional practice over time. Music is, after all, a listening experience; we can’t achieve the same level of control and nuance when it is approached visually (in editors) or not in real time (when looping, or assessing results by playing back modifications). There is too great a disconnect between the actions or decisions and the resulting sound.

I guess that, ultimately, is the difference (as someone mentioned upthread) between human imperfection and human perfection in musical performance. There will always be some level of imperfection, of course, but that is not actually the thing we are (or perhaps should be) trying to add with the humanise button. The flaws may make it sound human, but they don’t make it sound “musical”.

The crux of all that, for me, is that emulating what can be achieved better and more easily through existing means is perhaps a misguided goal. If the goal is to bring “human”, nuanced expression to the phrasing of melodic lines, the solution is to use a real-time expressive tool. On the other hand, if the goal is to break free of the grid and bring greater dynamic and rhythmic variation to our music, there are loads of cool and interesting techniques and tools for that, which other people are much better versed in than I am. It’s probably not going to sound like Satie, and that’s probably a good thing.

To be clear, I place no greater value or validity on real-time performed music. Generative, random, locked to a grid, noise, a hoover through a distortion pedal, whatever - it’s all equally intriguing and valuable to me. The fact that I am writing this while procrastinating on some recording I can’t get right suggests that, at the moment, perhaps I prefer the concept of generative or sequenced music.

9 Likes

wow, i love these chunky mechanical keys. i’ve been enjoying the squarp pyramid for midi sequencing (making very long pattern lengths when i don’t want to work in “loops”), but i really don’t like the buttons.
would love to replace them with mechanical keys…

1 Like