i find myself thinking of lorca’s duende

4 Likes

If you don’t play, get in front of a piano a bit to get a feel for hammer gradations.
Ciccolini was famous for Satie, but listen to him playing another composer, Mozart; what he’s internalized from Satie is more noticeable when you hear something familiar played so differently.

Particularly this album from late in life, K332, so good

Forget using a sequencer (boring timing) or a quantizer (no modal shifts). Use a random source on things you’ve written.

That’s if you want to write music that stays interesting for longer than a few bars.

1 Like

I’ve personally never been able to impart that kind of depth to phrasing using a sequencer. I feel that micro-timing adjustments and tempo changes are extremely difficult to program abstractly, but very intuitive to feel and play concretely. I’m not a great pianist, so sometimes I’ll just play a simplified version of the melody / chord sequence I’m working on by focusing only on the rhythm, and then correct / add in the missing notes to what I’ve played afterwards in Ableton.

Frankly, this inability to create subtlety and nuance in timing with purely pre-programmed electronic instruments is what drove me to learn to play acoustic instruments. If you don’t play any, I highly encourage you to pick one you like and give it a try. Immediacy is extremely rewarding, even when your vocabulary is very restricted in the beginning. And acoustic sound is often a beautiful complement to electronic sound.

5 Likes

You should also check out the Voltage Runner / VCSQ2 by Ginko, which is a sequencer driven by CVs/LFOs rather than clocks. Generally speaking, your gates and pauses need not be tied to your sequence. I don’t see a better way of achieving your goals than modular.

1 Like

I’ve never paid attention to the Wogglebug before. Would you mind explaining if and how it’s been utilized? Was there self-patching involved? Which outputs did you use?

I really like the results. I’ve been using modules such as the ADDAC207 and others to get hits on note changes, which works wonders with non-grid melodies such as the one above. It’s the actual sequencing that’s gotten me intrigued! Good work.

3 Likes

Wogglebug has two non-steppy random CV outputs - one smooth and one ‘woggle’ which is basically bouncy, both I think related to a master random from a shift register. I’ve always just found it somehow musical. From memory (many years ago) this is just those two outs into the A-156 with two channels of QMMG pinged by the trig outs on the quantiser.

6 Likes

Really like where this thread has gone. Interesting how it has been interpreted by different folk.

As someone quite new to eurorack I’ve really enjoyed the imperfections in timing that the interaction of modules can bring, esp compared to the computer.

Those look like interesting modules.

On a similar note I picked up a 4ms QPLFO - and love how it adapts to changing clocks. Someone also mentioned in the Magneto thread that you can both clock and tap the module at different rates to get shifting timings. Actually thinking about it - delay becomes a compositional effect at that point, especially with low feedback. More like moving notes around…

5 Likes

I would very much love to hear an example of this!

I believe the forthcoming SDS Digital Accord Sequarallel, being a multi-channel MIDI sequence recorder and player (thus something very unusual in Eurorack), would be one way to achieve what OP is after (assuming that, after all this time, OP is still after anything).

http://www.freshnelly.com/sequarallel/sequarallel.htm

I forget whether it can work without quantisation for those ‘human’ imperfections in timing, but it certainly has very fine timing as well as CV input for midi functions like probability (true to form, Sandy has packed it with countless features, many beyond my understanding). Anyway, there is obviously no other module like this one; anyone interested in sequencers or MIDI to CV functionality might wish to investigate.

I was eager and desperate to buy a sequarallel myself until eventually settling on the 1010 blackbox; although sequarallel is far more powerful and interesting in what it can do with sequences, blackbox has multi-channel stereo sampling (with four-voice polyphony), looping and even a reasonable granulator - all of which I knew I would be able to use.

As far as hardware sequencers go, the Yamaha QY700 has a DAW-style piano roll note editor as well as ample resolution for capturing nuanced performances (480ppq). I used the smaller QY70 for a while and it was, for lack of a better word, the most music-theory-minded sequencer I’ve tried. I eventually sold it due to the interface feeling too cramped on the handheld device, but the QY700 fixes this with more dedicated buttons. You can even humanize performances by using “groove quantize”, which basically works like Ableton’s groove template feature: you choose a preset or user rhythm template and apply the timing, velocity and gate length from that rhythm to your phrase on a scale from 0% to 200%.
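A rough sketch of how a groove-quantize pass like that could work (the data layout and function name here are my own invention, not Yamaha’s):

```python
def groove_quantize(notes, template, strength, grid=120):
    """Pull each note's timing, velocity, and gate toward a rhythm
    template by `strength` (0.0 = unchanged, 1.0 = match the template,
    up to 2.0 = exaggerate, like the QY700's 0%..200% range).
    notes: dicts with 'time', 'vel', 'gate'; times in ticks at 480ppq.
    template: one dict per grid step with 'offset', 'vel', 'gate'."""
    out = []
    for n in notes:
        step = round(n["time"] / grid)             # nearest grid step
        slot = template[step % len(template)]
        target_time = step * grid + slot["offset"]
        out.append({
            "time": round(n["time"] + (target_time - n["time"]) * strength),
            "vel":  round(n["vel"]  + (slot["vel"]  - n["vel"])  * strength),
            "gate": round(n["gate"] + (slot["gate"] - n["gate"]) * strength),
        })
    return out
```

At 0% the phrase passes through untouched; at 100% every note takes on the template’s push/pull, dynamics and articulation.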

My favorite feature from the QY series has to be the phrase editor. Basically you can form your own auto-accompaniment phrases and transpose them to fit any chord type. The best thing is that out-of-scale notes are allowed in the source pattern, allowing you to do chromatic approaches to scale notes etc. Much more sophisticated than the usual approach of locking notes strictly to the selected scale.
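I don’t know Yamaha’s exact algorithm, but one way to model that kind of chord-aware phrase transposition (function name and mapping scheme are hypothetical) is to map chord tones degree-for-degree and carry non-chord notes as chromatic offsets:

```python
def fit_phrase(phrase, src_chord, dst_chord):
    """Re-target a phrase written over src_chord so it implies dst_chord.
    Chord tones map degree-for-degree; any other note keeps its chromatic
    offset from the nearest chord tone at or below it, so approach notes
    survive the move. Chords are sorted lists of MIDI note numbers."""
    out = []
    for note in phrase:
        below = [i for i, p in enumerate(src_chord) if p <= note]
        i = below[-1] if below else 0
        out.append(dst_chord[i] + (note - src_chord[i]))
    return out
```

This is only a guess at the behavior; a real implementation likely weighs chord-scale context more carefully, but it shows how out-of-scale source notes can be carried along rather than snapped away.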

Interesting thread for sure. I think there are a few different ways to look at this and what is trying to be achieved. I have nothing much to offer from a technical advice standpoint but the imperfections are something I think about a lot.

To me, human imperfection can be modelled with random or algorithmic manipulation of some sterile, for lack of a better word, source data. I don’t personally think that is all that useful though. It can at best represent a poor human performance. Haphazard hesitations or dynamic changes through a phrase are what happens when a performer’s attention is focused on the wrong thing. Perhaps just learning a piece or just generally distracted. Or perhaps they just got too caught up in trying not to make “mistakes” or are rushing (ahem, me) or don’t actually have sufficient motivation or connection to approach it musically. There is also the theoretical element of understanding what is actually going on from a musical perspective and where your focus should be directed.

When there is intention behind the phrasing, pauses, dynamic development, balancing of voices, etc., basically the musical expression of performance, that is when you get something really worth putting the effort in to achieve. That is much harder, however, as it cannot be achieved randomly or (traditionally) algorithmically because it requires intent. The decisions must be guided. Actually, the key point is they have to be decisions, not accidents. This is of course perfectly possible to do offline by editing MIDI data or, to a lesser extent, by interpreting score annotations. However, the current best interface to achieve this is in my opinion real-time input from an instrument (acoustic or MIDI or otherwise) by a practised performer.

Performers develop an active connection with the sound produced by their performance. Their decisions are based on instant auditory feedback on the impact of their actions. Tiny intuitive adjustments are made in a subconscious way based on the skills developed through intentional practice over a period of time. Music is, after all, a listening experience; we can’t achieve the same level of control and nuance when it is approached visually (in the case of editors) or not in real-time (in the case of looping or assessing the results by playing back modifications). There is too great a disconnect between the actions or decisions and the resulting sound.

I guess ultimately that is the difference (as someone mentioned up thread) between human imperfection and human perfection in musical performance. There will always be a level of imperfection of course but that is not actually the thing we are (perhaps, should be) trying to add with the humanise button. The flaws may make it sound human but they don’t make it sound “musical”.

The crux of all that for me is the fact that emulation of what can be achieved better and more easily through existing means is perhaps a misguided goal. If the goal is to bring “human” nuanced expression to the phrasing of melodic lines, the solution is to use a real-time expressive tool to do so. On the other hand, if the goal is to break free of the grid and bring greater dynamic and rhythmic variation to our music, there are loads of cool and interesting techniques and tools, which other people are much better versed in than I, to do that. It’s probably not going to sound like Satie and that’s probably a good thing.

To make it clear, I place no greater value or validity on real-time performed music etc. Generative, random, locked to a grid, noise, a hoover through a distortion pedal, whatever. It’s all equally intriguing and valuable to me. The fact that I am writing this while procrastinating on finishing some recording I can’t get right means that currently, perhaps I prefer the concept of generative or sequenced music.

9 Likes

wow i love these chunky mechanical keys. i’ve been enjoying the squarp pyramid for midi sequencing (making very long pattern lengths if i don’t want to work in “loops”) - but i really don’t like the buttons.
would love to replace them with mechanical keys…

1 Like

wholeheartedly agreed—these are the details that separate impressionism from a mess.

i’ve been finding some success in the replication thereof using score editors lately—the combination of semi-abstract written articulations and value-accurate midi specification seems to ‘perform’ fairly well.

3 Likes

Will see if I can dig up something

1 Like

the buttons are the core reason I couldn’t seriously use the pyramid, really terrible feel :frowning:

What kind of hardware do you own? If it’s eurorack modules, there are many sequencing techniques that can suit your needs. A quick example would be using a Rene, mki or mkii, with its logic functions.
Another one would be using a combination of sequencers and switches.

In the iPad territory, you could try using Fugue Machine or the new paahe sequencer from the same developer.

I have not been able to find this one. Is it out yet?

1 Like

The “secret” to this kind of thing is to use more than one sequencer. :slight_smile: It’s similar to the decades-old trick of running two sequencers of different lengths, summing their outputs, and quantizing the result to generate novel pitch sequences; the same multi-sequencer idea applies here.
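For anyone reading along, that pitch trick can be sketched in a few lines of Python (the scale and sequence values are arbitrary):

```python
import itertools

# Two step sequencers of different lengths, stepped by the same clock:
# their summed voltages only repeat every lcm(4, 3) = 12 steps, and
# quantizing the sum to a scale yields a long, musical pitch cycle.
MAJOR = [0, 2, 4, 5, 7, 9, 11]            # semitone offsets in one octave

def quantize(semitone, scale=MAJOR):
    octave, pc = divmod(semitone, 12)
    nearest = min(scale, key=lambda s: abs(s - pc))
    return octave * 12 + nearest

seq_a = itertools.cycle([0, 3, 7, 10])    # 4-step sequence
seq_b = itertools.cycle([0, 2, 5])        # 3-step sequence

melody = [quantize(next(seq_a) + next(seq_b)) for _ in range(12)]
```

Neither sequence is interesting on its own; the summed, quantized result wanders in a way that sounds composed rather than looped.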

I’ll explain below using modular hardware because that’s what I know. I’m certain you can do these things (probably more easily) inside a computer, where spinning up multiple sequencers should be trivial.

Much dissatisfaction with step sequencing can be addressed by introducing variations in gesture, removing the robotic feel of each note sounding the same. Musical gesture is primarily contained in the note envelope, the Attack/Decay/Sustain/Release of each note.

The energy applied to an acoustic instrument such as a piano or violin or guitar is reflected in the gesture. The characteristics of the instrument (resonance, materials and their pitch-dampening effects, etc.) will filter the sound according to the energy of the gesture as well. I’ll focus on ADSR stuff below, but applying similar approaches to the filter of your voice will make the gesture even more satisfying.

Ingredients

  • Seq1: Rene or other sequencer with at least 2 CV outputs (Beatstep Pro fits this role as well).
  • Seq2: Any other sequencer, if you don’t have another sequencer then a couple summed LFOs will work too.
  • Env1: An envelope generator that has a controllable decay/release, or even better, full cv over ADSR. But at minimum, decay/release.

Process

  1. Program the length and melody into Seq1. Use CV1 for pitch like you might normally do. Use CV2 to control filter like you might normally do.
  2. Feed Seq1 and Seq2 the same pulse/clock; even though they are of different lengths, you want them both holding their voltage at the same time.
  3. Use Seq1 to trigger Env1 but
  4. Use Seq2 to control the decay/release on Env1. In this way, the note envelopes will be different for each note in Seq1 but also won’t be the same for each pass through, so there will be variability in the note envelope–the touch–of the sound. This is what a sensitive musician brings to a piece like Gymnopédie.
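As a toy model of steps 1–4 (the sequence values are arbitrary, just to show the drift):

```python
# Seq1 (8 steps) supplies pitch and fires Env1; Seq2 (5 steps), run from
# the same clock, sets Env1's decay. Because 8 and 5 share no factor,
# each pitch meets a different decay on every pass; the pairing only
# repeats after 40 steps.
seq1_pitch = [60, 63, 67, 70, 67, 63, 60, 58]   # 8-step melody (MIDI)
seq2_decay = [0.10, 0.40, 0.25, 0.80, 0.55]     # 5-step decay CV (s)

notes = [(seq1_pitch[i % 8], seq2_decay[i % 5]) for i in range(16)]
```

The first note of the melody sounds with a 0.10 s decay on the first pass but a 0.80 s decay on the second, which is exactly the kind of per-note variation in touch described above.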

From this initial setup you can experiment with things to taste:

  • Attenuate the CV going into Env1 decay/release so that it is subtle enough for you, and keep that circuit in place as you experiment with other ideas below. This attenuation is likely to be very extreme, like nearly turned all the way down. At least it is for me when I do this kind of thing.
  • Try Seq2 step counts that are longer than Seq1 or shorter than Seq1 to see if you like the feel of either one better.
  • Try summing Seq2 and Seq1, then attenuating, then controlling Env1 decay/release–this will allow the pitch to have an influence on the decay/release. The piano, for example, has more natural sustain in lower registers than higher registers. Try inverting Seq1 CV in this instance.
  • Integrate a slew into some of your CV chains so that effects carry over the time domain in less “matched up” ways. The idea is to “break” the concept of the step, so slewing across steps in the CV can be useful here. You may need to add a Sample&Hold that is run by the same pulse/clock as Seq1 and Seq2, so that your decay/release isn’t changing mid-note.
  • Integrate noise into the CV chain for Env1 decay/release. The noise may need to be slewed and/or sample&hold for summing with Seq2. This will add variability.
  • If Env1 has CV control of attack add in another sequencer and do all of the above but for Attack.
  • If Env1 has CV control of sustain add in another sequencer and do all of the above but for Sustain.
  • If you don’t have a zillion sequencers use a few LFOs with different unrelated-to-the-tempo-of-Seq1 cycles and waveshapes, sums of LFOs, etc. Batumi + the Pulp Logic Mix-B is my go-to for this kind of thing in Eurorack.
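Several of those suggestions combined (inversion, summing, noise, heavy attenuation) might look like this in code; all the voltages and scaling constants here are made up, to taste:

```python
import random

def decay_cv(seq1_v, seq2_v, atten=0.05, base=0.3):
    """One step's decay time: invert Seq1 (so higher notes decay faster),
    sum with Seq2, stir in a little sampled noise, attenuate hard, then
    offset up to a usable base decay time (seconds)."""
    noise = random.uniform(-1.0, 1.0)        # sampled & held once per step
    mixed = -seq1_v + seq2_v + 0.2 * noise   # invert, sum, add noise
    return base + atten * mixed              # extreme attenuation + offset
```

Note how small the attenuation factor is relative to the base: most of the musicality lives in a few percent of wobble around a stable center, which matches the "nearly turned all the way down" attenuator setting.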

Once you get comfortable with this in terms of pitch and gesture perhaps work in effects as well:

  • Apply similar approaches to reverb: how much of the note passes to the reverb circuit via a gate/VCA, how long the reverb tail is, the dampening filter on the reverb, a dampening filter on the note that is sent to the reverb (which is different from the dry signal note)
  • Apply similar approaches to delay
  • Apply similar approaches to pitch shift and tape effects.
  • Apply similar approaches to the tempo. This requires a clock source that can take a CV in for control of tempo. It’s super effective for adding life to a patch.

I do all these things in modular and it results in super lengthy CV signal paths with all kinds of things getting mixed into the control signals. I find, for myself, that the magic of gesture and subtlety is almost entirely contained in the control signals. It’s absolutely possible using a step sequencer; it’s more a matter of what you do with the signals your step sequencer gives you, whether you can add things together and attenuate, etc.

Hope this is useful to OP and others working on this.

tldr; this is a process, not a piece of gear

36 Likes

Very nice write-up, thank you. You articulated some of my impressions as I read through the thread. It’s not all just about the sequencer—it’s about what you do with the entire patch.

1 Like

+1 for the Eloquencer