Interesting thread for sure. I think there are a few different ways to look at this and at what one is trying to achieve. I have nothing much to offer from a technical advice standpoint, but the imperfections are something I think about a lot.

To me, human imperfection can be modelled with random or algorithmic manipulation of some sterile, for lack of a better word, source data. I don't personally think that is all that useful though. It can at best represent a poor human performance. Haphazard hesitations or dynamic changes through a phrase are what happens when a performer's attention is focused on the wrong thing. Perhaps they are just learning a piece, or are generally distracted. Or perhaps they got too caught up in trying not to make "mistakes", or are rushing (ahem, me), or don't actually have sufficient motivation or connection to approach it musically. There is also the theoretical element of understanding what is actually going on from a musical perspective and where your focus should be directed.
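
To make that concrete, here is a rough Python sketch of what I mean by random manipulation (the note format and the jitter amounts are just illustrative assumptions): it nudges timing and velocity at random, which is more or less what a basic humanise button does, and why it imitates inattention rather than intent.

```python
import random

def naive_humanize(notes, timing_jitter=0.02, velocity_jitter=8, seed=None):
    """Randomly nudge 'sterile' notes: (start_beat, duration, pitch, velocity)."""
    rng = random.Random(seed)
    out = []
    for start, dur, pitch, vel in notes:
        start = max(0.0, start + rng.uniform(-timing_jitter, timing_jitter))  # haphazard hesitation / rush
        vel = max(1, min(127, vel + rng.randint(-velocity_jitter, velocity_jitter)))
        out.append((start, dur, pitch, vel))
    return out

# A perfectly quantized bar: four quarter notes, identical velocity.
sterile = [(float(beat), 1.0, 60 + 2 * beat, 80) for beat in range(4)]
print(naive_humanize(sterile, seed=1))
```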

When there is intention behind the phrasing, pauses, dynamic development, balancing of voices, etc., basically the musical expression of a performance, that is when you get something really worth putting the effort in to achieve. That is much harder, however, as it cannot be achieved randomly or (traditionally) algorithmically, because it requires intent. The decisions must be guided. Actually, the key point is that they have to be decisions, not accidents. This can of course be done offline by editing midi data or, to a lesser extent, by interpreting score annotations. However, the current best interface to achieve this is, in my opinion, real-time input from an instrument (acoustic or midi or otherwise) by a practised performer.

Performers develop an active connection with the sound produced by their performance. Their decisions are based on instant auditory feedback on the impact of their actions. Tiny intuitive adjustments are made subconsciously, based on skills developed through intentional practice over a period of time. Music is, after all, a listening experience; we can't achieve the same level of control and nuance when it is approached visually (in the case of editors) or not in real time (in the case of looping, or assessing the results by playing back modifications). There is too great a disconnect between the actions or decisions and the resulting sound.

I guess ultimately that is the difference (as someone mentioned up thread) between human imperfection and human perfection in musical performance. There will always be a level of imperfection, of course, but that is not actually the thing we are (or perhaps should be) trying to add with the humanise button. The flaws may make it sound human but they don't make it sound "musical".

The crux of all that for me is the fact that emulation of what can be achieved better and more easily through existing means is perhaps a misguided goal. If the goal is to bring “human” nuanced expression to the phrasing of melodic lines, the solution is to use a real-time expressive tool to do so. On the other hand, if the goal is to break free of the grid and bring greater dynamic and rhythmic variation to our music, there are loads of cool and interesting techniques and tools, which other people are much better versed in than I, to do that. It’s probably not going to sound like Satie and that’s probably a good thing.

To make it clear, I place no greater value or validity on real-time performed music etc. Generative, random, locked to a grid, noise, a hoover through a distortion pedal, whatever. It’s all equally intriguing and valuable to me. The fact that I am writing this while procrastinating on finishing some recording I can’t get right means that currently, perhaps I prefer the concept of generative or sequenced music.

9 Likes

wow i love these chunky mechanical keys. i’ve been enjoying the squarp pyramid for midi sequencing (making very long pattern lengths if i don’t want to work in “loops”) - but i really don’t like the buttons.
would love to replace them with mechanical keys…

1 Like

wholeheartedly agreed—these are the details that separate impressionism from a mess.

i’ve been finding some success in the replication thereof using score editors lately—the combination of semi-abstract written articulations and value-accurate midi specification seems to ‘perform’ fairly well.

3 Likes

Will see if I can dig up something

1 Like

the buttons are the core reason I couldn’t seriously use the pyramid, really terrible feel :frowning:

What kind of hardware do you own? If it's eurorack modules, there are many sequencing techniques that can suit your needs. A quick example would be using a Rene, mki or mkii, with its logic functions.
Another one would be using a combination of sequencers and switches.

In the iPad territory, you could try using Fugue Machine or the new paahe sequencer from the same developer.

I have not been able to find this one. Is it out yet?

1 Like

The "secret" to this kind of thing is to use more than one sequencer. :slight_smile: Similar to the decades-old use-two-sequencers-of-different-lengths-then-sum-and-quantize-their-outputs trick for generating novel pitch sequences, you can employ multiple sequencers here too.
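
As a quick aside, here is a tiny Python sketch of that older sum-then-quantize idea (the step values and the scale are arbitrary assumptions): two short sequences of different lengths, summed and snapped to a scale, give a much longer line before anything repeats.

```python
from math import lcm

seq_a = [0, 3, 7, 10, 5]        # 5 steps, semitone offsets
seq_b = [0, 2, 4, 2, 7, 9, 12]  # 7 steps
c_minor = [0, 2, 3, 5, 7, 8, 10]

def quantize(semitone, scale=c_minor):
    """Snap a semitone offset to the nearest degree of the scale."""
    octave, pc = divmod(semitone, 12)
    return octave * 12 + min(scale, key=lambda d: abs(d - pc))

length = lcm(len(seq_a), len(seq_b))   # 35 steps before the combined pattern repeats
melody = [quantize(seq_a[i % len(seq_a)] + seq_b[i % len(seq_b)]) for i in range(length)]
print(melody)
```

The same kind of thinking is what the envelope trick below uses, just aimed at gesture instead of pitch.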

I'll explain below using modular hardware because that's what I know. I'm certain you can do these things inside a computer, probably more easily, since spinning up multiple sequencers there should be trivial.

Much dissatisfaction with step sequencing can be addressed by introducing variations in gesture, removing the robotic feel of each note sounding the same. Musical gesture is primarily contained in the note envelope, the Attack/Decay/Sustain/Release of each note.

The energy applied to an acoustic instrument such as a piano or violin or guitar is reflected in the gesture. The characteristics of the instrument (resonance, materials and their pitch dampening effects etc) will filter the sound according to the energy of the gesture as well. I’ll focus on ADSR stuff below but applying similar approaches to the filter of your voice will make the gesture even more satisfying.

Ingredients

  • Seq1: Rene or other sequencer with at least 2 CV outputs (Beatstep Pro fits this role as well).
  • Seq2: Any other sequencer, if you don’t have another sequencer then a couple summed LFOs will work too.
  • Env1: An envelope generator that has a controllable decay/release, or even better, full cv over ADSR. But at minimum, decay/release.

Process

  1. Program the length and melody into Seq1. Use CV1 for pitch like you might normally do. Use CV2 to control filter like you might normally do.
  2. Feed Seq1 and Seq2 the same pulse/clock; even though they are of different lengths, you want them both advancing and holding their voltages at the same time.
  3. Use Seq1 to trigger Env1 but
  4. Use Seq2 to control the decay/release on Env1. In this way, the note envelopes will be different for each note in Seq1 but also won’t be the same for each pass through, so there will be variability in the note envelope–the touch–of the sound. This is what a sensitive musician brings to a piece like Gymnopédie.
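
If you'd rather try the idea in software first, here is a rough Python analogue of steps 1-4 (the note values, CV range, and attenuation depth are all made up for illustration): because Seq2 is a different length, the decay lands a little differently on every pass through the melody.

```python
from math import lcm

seq1_pitches = [60, 62, 63, 67, 65, 63, 62, 60]   # 8-step melody (MIDI note numbers)
seq2_decay_cv = [0.2, 0.9, 0.5, 0.35, 0.7]        # 5-step "CV" sequence, 0..1

base_decay = 0.40   # seconds
cv_depth   = 0.15   # heavy attenuation: the CV only moves the decay by +/- 0.15 s

for step in range(lcm(len(seq1_pitches), len(seq2_decay_cv))):   # 40 steps per full cycle
    pitch = seq1_pitches[step % len(seq1_pitches)]
    cv = seq2_decay_cv[step % len(seq2_decay_cv)]
    decay = base_decay + cv_depth * (cv - 0.5) * 2    # Env1 decay for this note
    print(f"step {step:2d}: note {pitch}, decay {decay:.2f}s")
```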

From this initial setup you can experiment with things to taste:

  • Attenuate the CV going into Env1 decay/release so that it is subtle enough for you, and keep that circuit in place as you experiment with the other ideas below. This attenuation is likely to be very extreme, like nearly turned all the way down. At least it is for me when I do this kind of thing.
  • Try Seq2 step counts that are longer than Seq1 or shorter than Seq1 to see if you like the feel of either one better.
  • Try summing Seq2 and Seq1, then attenuating, then controlling Env1 decay/release–this will allow the pitch to have an influence on the decay/release. The piano, for example, has more natural sustain in lower registers than higher registers. Try inverting Seq1 CV in this instance.
  • Integrate a slew into some of your CV chains so that effects carry over in the time domain in less "matched up" ways. The idea is to "break" the concept of the step, so slewing the CV across step boundaries can be useful here (see the sketch after this list). You may need to add a Sample&Hold that is run by the same pulse/clock as Seq1 and Seq2, so that your decay/release isn't changing mid-note.
  • Integrate noise into the CV chain for Env1 decay/release. The noise may need to be slewed and/or sample&hold for summing with Seq2. This will add variability.
  • If Env1 has CV control of attack add in another sequencer and do all of the above but for Attack.
  • If Env1 has CV control of sustain add in another sequencer and do all of the above but for Sustain.
  • If you don’t have a zillion sequencers use a few LFOs with different unrelated-to-the-tempo-of-Seq1 cycles and waveshapes, sums of LFOs, etc. Batumi + the Pulp Logic Mix-B is my go-to for this kind of thing in Eurorack.
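
To show the slew and sample&hold ideas from the list above in the same software terms (the rates and the clock division are arbitrary choices): the slew smears changes across steps, and the S&H freezes the value on each clock pulse so nothing shifts mid-note.

```python
import random

def slew(signal, rise=0.3):
    """One-pole lag: each output only moves a fraction of the way toward the input."""
    out, y = [], signal[0]
    for x in signal:
        y += rise * (x - y)
        out.append(round(y, 3))
    return out

def sample_and_hold(signal, clock):
    """Hold the current value; only update when the clock fires (True)."""
    out, held = [], signal[0]
    for x, pulse in zip(signal, clock):
        if pulse:
            held = x
        out.append(held)
    return out

rng = random.Random(0)
noise = [rng.random() for _ in range(16)]   # the "integrate noise" idea
clock = [i % 4 == 0 for i in range(16)]     # one pulse every four samples
print(sample_and_hold(slew(noise), clock))
```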

Once you get comfortable with this in terms of pitch and gesture perhaps work in effects as well:

  • Apply similar approaches to reverb: how much of the note passes to the reverb circuit via a gate/VCA, how long the reverb tail is, the dampening filter on the reverb, a dampening filter on the note that is sent to the reverb (which is different from the dry signal note)
  • Apply similar approaches to delay
  • Apply similar approaches to pitch shift and tape effects.
  • Apply similar approaches to the tempo. This requires a clock source that can take a CV in for control of tempo. It’s super effective for adding life to a patch.
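
And for the tempo bullet, sketched the same way (base BPM, depth, and the modulation shape are just example choices): a slow cycle nudging the clock makes the step lengths breathe instead of staying rigid.

```python
import math

base_bpm = 72
depth = 4      # +/- 4 BPM of drift
steps = 16

for step in range(steps):
    cv = math.sin(2 * math.pi * step / steps)   # one slow cycle over 16 steps
    bpm = base_bpm + depth * cv
    step_seconds = 60.0 / bpm / 4               # duration of a sixteenth note
    print(f"step {step:2d}: {bpm:5.1f} BPM, {step_seconds:.4f}s per 16th")
```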

I do all these things in modular and it results in super lengthy CV signal paths, with all kinds of things getting mixed in as control. I find, for myself, that the magic of gesture and subtlety is almost entirely contained in the control signals. It's absolutely possible using a step sequencer; it's more a matter of what you do with the signals your step sequencer gives you, whether you can add things together and attenuate them, etc.

Hope this is useful to OP and others working on this.

tldr; this is a process, not a piece of gear

36 Likes

Very nice write-up, thank you. You articulated some of my impressions as I read through the thread. It’s not all just about the sequencer—it’s about what you do with the entire patch.

1 Like

+1 for the Eloquencer

I'm sorry, I misspelled it. I meant to say "phase sequencer" (although Paahe seems like a good name for one, right?)
It's called Polyphase

some non-77 fx involved

The phrasing is dependent on the time between each keystroke. I hit them in a certain pattern, hold for a while, then release some and play some new ones. This was probably 2 FM oscs with maybe a 3/3 carrier/modulator setting, with the carriers tuned to different note values.

4 Likes

It’s a great effect! Will try something similar with my Ensoniq SQ-80. Thanks for sharing, if you put out any longer tracks of this type of thing I’d like to know! :slight_smile:

1 Like

If you want a sequencer that is steeped in the composition mentality, might I suggest you pick up a Roland MC-202. It may drive you up the wall but it is very free form. It allows for specification of both note and gate duration and offers little protection to the programmer.

The learning curve is steep and you have only CV outs, unless you just want to use the built-in synth. A bit of an off-the-wall suggestion, but it really is a "micro-composer".

1 Like

It is, and a piece of gear I use to follow (roughly) this procedure is a Beatstep Pro, using its two pitched sequencers - sequencer A for the gates, pitch and velocity to the filter or envelope, and sequencer B routing its pitch and velocity to different CV-controlled parameters.
If you set sequencer B to have fewer steps than sequencer A, you get a long string of non-repeating CV. You can also use probability / random.
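
Roughly speaking, the "long string" comes from the two pattern lengths only lining up every least-common-multiple steps; a quick sanity check (the step counts here are just an example):

```python
from math import lcm

steps_a = 16   # sequencer A: gates, pitch, velocity
steps_b = 13   # sequencer B: pitch/velocity routed to other CV parameters

print(lcm(steps_a, steps_b))   # 208 steps before A and B line up again
```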

4 Likes

You can also use the drum gates of the Beatstep Pro to trigger envelopes and LFOs. It is such a useful piece of gear.

I’m super curious about the Vector from Five12. It looks like it has some wonderful features and with the expander has tons of output options for both midi and CV/Gate

1 Like

Hi Marcus
i just got one and i am very interested in what it can do because of my experience with Numerology – did you get one?

No I didn’t. But I’d like to know your thoughts. I’m not a great user of sequencers…
i haven't found ones yet that I really like. I do like aspects of the Elektron ones with the ease of polyphony + probability.

1 Like

i know what you mean, i was never a sequencer junkie until just recently

I have the Eloquencer and it's ok, not great -- not what i had hoped for
i have an Analog Rytm and, since the updates that let randomization be applied to the parameters, it's become much more interesting, but i use M4L to randomize the seq

This one [the Vector] is new, so i am digging it; i spent last night just fooling around, and today i am going to deep-dive the manual and the videos to get a feel for it. I really dug Numerology Pro so i am hopeful about integrating it.

I was successful in making it my master clock via USB to my former main clock [a Circuit]

Recently though i have been using M4L, LessConcepts and other Max patches for sequencing

i also recently grabbed “Klee”