OK, I’ll share this simple feedback patch that you can experiment with for hours. I’m using two Serge modules, but you can easily do this with a multi-filter and Maths. I’m using the VCFQ and the DUSG.
Connections:
VCFQ LP out => mixer
VCFQ BP out => 1V/Oct DUSG (left side)
VCFQ NOTCH out => IN DUSG (left)
DUSG Pulse Out (left) => VCF on VCFQ
DUSG Out (left) => IN VCFQ
DUSG BP OUT (right) => VC Q on VCFQ
Turning the knobs slowly has a major influence on what’s happening. The right side of the DUSG makes the frequency jump higher, which is important for creating something more interesting. You could also try patching the right side of the DUSG to the output of the filter for more complex connections.
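
If it helps to hear the shape of the idea outside the rack: below is a toy digital sketch (not a model of the Serge modules) of a filter inside a feedback loop whose cutoff is nudged by a slewed copy of its own output, loosely analogous to the DUSG-to-VCF connection above. All the constants (`fb_gain`, the slew coefficient, the cutoff mapping) are made-up illustration values.

```python
import numpy as np

def feedback_patch(n=48000, fb_gain=0.95, seed=0):
    """Toy sketch: a one-pole low-pass inside a feedback loop, with a
    slew-limited follower of its own output pushing the cutoff around,
    loosely analogous to the DUSG -> VCF connection in the patch."""
    rng = np.random.default_rng(seed)
    drive = 0.01 * rng.standard_normal(n)  # tiny excitation to keep the loop alive
    y = np.zeros(n)
    state = 0.0   # filter memory
    cutoff = 0.1  # normalized one-pole coefficient (0..1)
    slew = 0.0    # slow level follower, standing in for the DUSG
    for i in range(n):
        inp = drive[i] + fb_gain * state          # feedback around the filter
        state += cutoff * (inp - state)           # one-pole low-pass
        slew += 0.0005 * (abs(state) - slew)      # slew-limited follower
        cutoff = min(0.9, max(0.01, 0.1 + 0.5 * slew))  # self-modulated cutoff
        y[i] = state
    return y

out = feedback_patch()
```

With `fb_gain` below 1 the loop stays bounded; the fun in the hardware version comes from pushing the equivalent knobs right up to (and past) that edge.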

I’m gonna patch up a version of this on my Voltage Research Lab, thank you!

No, it does that itself! You can change quite a few parameters, including the rate of change and response. As is, that demo represents most of its range of sounds, but I expect it would get more interesting if you built a patch around and inside it, rather than thinking of it as an effect.

This is huge for the Kayn fans—so much music is helping me get just the tiniest glimpse of his process:

Yes indeed! Ilse is doing a great job making Kayn’s work available to everyone, which is incredible. We have so much to learn from his works, and we’ve only just started!

Well, alright, if we’re all getting feedbacky…

I didn’t patch this with Cybernetics or even Kayn in mind, but most all my feedback patches have inbuilt governors and controls, because I often don’t really like the sound of feedback. It pinches my head in unpleasant ways; perhaps it will tickle some of yours.

Interesting! Would you care to share the patch? Controlling the sound of feedback is indeed difficult; that’s exactly why I think it’s a great challenge. The results can be so surprising that, in my experience, it’s worth it. I can work for weeks on one sound, but when I get it right, it’s amazing. It has such depth that I can listen to it forever.

Just the usual building blocks: sample and hold with rate control; multing the same voltage source to complementary destinations (sometimes through parallel VC processing chains with attenuverters and slew limiters); envelope followers, VCAs, and clock divisions. I frequently employ delay lines with frequency shifting in the feedback path, which I learned indirectly from Kayn and Vink. I’d been doing something similar with pitch shifters, which I later learned is the basis of the “shimmer” effect, but once I started patching a frequency shifter that way, I was hooked:
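
Two of those building blocks are easy to sketch in code, if that helps anyone map the terms to behavior: a sample-and-hold with a rate divider, and a slew limiter producing a smoothed copy of the same voltage for a complementary destination. The rate and slew constants here are arbitrary illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_and_hold(source, rate_div):
    """Sample every rate_div-th value and hold it until the next sample."""
    idx = (np.arange(len(source)) // rate_div) * rate_div
    return source[idx]

def slew_limit(x, coeff):
    """One-pole slew limiter: the output chases the input at a fixed rate."""
    y = np.zeros_like(x)
    s = 0.0
    for i, v in enumerate(x):
        s += coeff * (v - s)
        y[i] = s
    return y

noise = rng.uniform(-1, 1, 1000)          # a random voltage source
stepped = sample_and_hold(noise, 100)     # stepped random voltage
smooth = slew_limit(stepped, 0.05)        # slewed copy of the same voltage
```

Sending `stepped` and `smooth` (or `stepped` and `-stepped`) to two destinations is the “same source, complementary destinations” move in miniature.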

I’m finally getting around to listening to the first Roland Kayn YouTube video posted in the first thread. It’s really brilliant stuff. It raises the philosophical question of what role the ‘artist’ has in this. The starting parameters are all set by the composer here, but to release it into the world there has to be a decision about what is ‘worthy of release’ to be heard by the listener. Or is that thrown out entirely when setting up a cybernetic system for music? Does the composer theoretically have no role in determining the final outcome once the parameters are set in motion? Are the machines just improvising with themselves? This stuff is so inherently beautiful that, IMO, some decisions must have been made about the final product. Or is the system just so well composed that it guarantees this outcome? Contrast this, for example, with generative works by Eno, or even late-’60s electric Miles, where extensive editing was done to make a final work. Is none of that happening here?

These are all very interesting questions, and I’d like to share my view on cybernetic systems and composition. From an artistic point of view, a cybernetic system is something that exists outside of what we perceive as “music”. You could say that “music” or “sound” is one of the elements inside the system, like “volts” and “electrons”, existing with or without our intervention.

But I also believe that the composer is responsible for “composing” the system. Instead of notes, she makes connections and makes sure the system will produce music that is interesting. In that sense, we can definitely claim that it is “our music” when we compose with a cybernetic system. It’s we who decide what the result sounds like. And that is not easy: it takes weeks or months of work to build a system as interesting as Kayn’s, something versatile, in which all the “accidents” that happen are carefully set up by the composer.

Personally, I think this interplay between humans and technology is what makes cybernetic music the music of the future. There’s a lot of talk about how we humans can interact with AI systems so complex that we cannot fully understand them. It’s a really interesting topic that I discuss a lot with my Patreon supporters who are interested in feedback systems.

Just dropping this because it hasn’t been mentioned yet.

This is very fascinating; I love thinking about this stuff, as it really gets into how art and music are defined. To my very noob knowledge, I believe this can be thought of as just another tool in the arsenal, or word in the vocabulary, of composition. I find it not actually too different from free improvised music, where a system of humans makes collective choices about the total outcome based on feedback. To me it doesn’t matter whether a machine or humans are doing it, for it to be called ‘music’. Also, unless the machines play on infinitely, there has to be another human ‘artistic’ force that decides to end the piece, and then decides what goes onto the listening medium for other humans. This, I believe, automatically makes it a human choice of ‘art’ or ‘music’ as it’s released into the world. IMO it also doesn’t matter at what point the creation starts; take, for example, Duchamp, who, just by deciding an object is ‘art’, makes it art.

I personally have not vested too much time or energy into learning about AI, so I’m coming at this from a purely human standpoint.

Are you using frequency shifting in the feedback path to manage positive feedback? I’m surprised by the claim in this paper that it can be used to add 15-20 dB more gain without howling! Excited to try this on the Hordijk tonight!

In my experience, much of this is strongly frequency- and delay time-dependent. Lower-frequency sounds and slow delays don’t feed back very effectively - or so my patches suggest.

Just following up: this does appear to work. You can introduce a significant amount of additional gain if you frequency-shift the signal in a delay feedback loop. While the frequency shifter does color the sound, a lot of interesting effects can be achieved without descending into overblown sounds.

Of course, I then had the thought that this must apply to phase shifters as well - and indeed there seems to be some literature suggesting as much.
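
For anyone who hasn’t met one outside a pedal: a phase shifter is built from allpass stages, which pass every frequency at the same level and only rotate its phase. Here is a minimal first-order allpass sketch (coefficient and test frequency are arbitrary illustration values), showing the flat magnitude response that makes it safe to drop into a feedback path:

```python
import numpy as np

def allpass1(x, a):
    """First-order allpass: H(z) = (-a + z^-1) / (1 - a*z^-1).
    Magnitude response is flat; only the phase is shifted."""
    y = np.zeros_like(x)
    x1 = y1 = 0.0
    for n, xn in enumerate(x):
        yn = -a * xn + x1 + a * y1   # difference equation of the allpass
        y[n] = yn
        x1, y1 = xn, yn
    return y

sr = 8000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 200 * t)  # a test sine
y = allpass1(x, 0.5)             # same amplitude out, shifted phase
```

Cascading several of these (with the coefficient under voltage control) is essentially what a phaser module does.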

Can you guys be more specific? What kind of modules are you talking about? Maybe you can give us an example of a simple patch?

Set up a feedback delay patch. I used a Roland DD-8 pedal, but you could use a module. You can create the feedback loop with a mixer/crossfader: send something into the mixer, send the mixer’s output to the delay, and route the delay’s output back into the mixer/crossfader. Adjust the mixer/crossfader level on that return to create audible delay feedback.

Now, instead of going straight back into the mixer from the delay, go through something else first. A filter is quite common, but you can instead use a frequency shifter or a phase shifter. With the latter two, you can introduce more gain into the feedback, either when you mix back in or with some other gain module.
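
A quick numerical sketch of why the frequency shifter buys you extra gain before howling (all the numbers here - sample rate, delay, shift amount, loop gain - are made up for illustration): each trip around the loop moves the signal off the loop’s resonant frequency, so energy can’t pile up in phase at one pitch.

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (like scipy.signal.hilbert); even length assumed."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0
    h[1:N // 2] = 2.0
    return np.fft.ifft(X * h)

def freq_shift(x, shift_hz, sr):
    """Single-sideband frequency shift: every component moves by shift_hz."""
    t = np.arange(len(x)) / sr
    return np.real(analytic(x) * np.exp(2j * np.pi * shift_hz * t))

def delay_loop(x, delay, g, sr, shift_hz=0.0, passes=30):
    """Sum of successive trips around a delay -> (shift) -> gain loop."""
    y = x.copy()
    trip = x.copy()
    for _ in range(passes):
        trip = np.roll(trip, delay) * g
        trip[:delay] = 0.0           # np.roll wraps; zero the wrapped samples
        if shift_hz:
            trip = freq_shift(trip, shift_hz, sr)
        y = y + trip
    return y

sr, delay = 8000, 80                   # loop resonance at sr/delay = 100 Hz
t = np.arange(4000) / sr
x = np.sin(2 * np.pi * 100 * t)        # sine sitting right on the resonance
plain = delay_loop(x, delay, 0.99, sr)                   # trips add in phase
shifted = delay_loop(x, delay, 0.99, sr, shift_hz=5.0)   # energy keeps moving
```

With the same loop gain, the unshifted loop rings up toward a howl while the shifted loop stays far tamer - which is the effect the paper quantifies as extra usable gain.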

Yes, that’s clear! I’ll give it a try. The outputs of the Serge VCFQ have a phase difference of 90 degrees, so it’s probably a really good filter to try, because I don’t really have a phase shifter (I think). Is there a phase shifter in the Serge system?

Yes, though I don’t know if R*S offers it in Eurorack. It’s not an especially distinctive phase shifter; pretty much any module will do.

@swannodette: Thanks for this link. It never occurred to me to try this use case no. 5, but I’ll have to give it a go when I’m back in front of the modular.

This is another idea you can all experiment with:
Create a simple feedback patch that you like. Take its output as the program (main) signal into a ring modulator. Then take a frequency from the Res.EQ (or just another oscillator, but make sure it’s a complex waveform) and use it as the carrier signal in the same ring modulator. Use FX freely to make the sounds come to life (a bit of short reverb usually does the job). You could also think of a way to make the sounds move in the stereo field. This is an example I did a few days ago.
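
For anyone new to ring modulation: it’s just four-quadrant multiplication, and multiplying two tones leaves only their sum and difference frequencies - neither input pitch survives, which is why even a simple feedback patch comes out sounding alien. A minimal sketch with two sines standing in for the patch output and the Res.EQ band (the frequencies are arbitrary illustration values):

```python
import numpy as np

sr, n = 8000, 8000
t = np.arange(n) / sr
program = np.sin(2 * np.pi * 440 * t)  # stand-in for the feedback patch output
carrier = np.sin(2 * np.pi * 100 * t)  # stand-in for one Res.EQ band
ring = program * carrier               # ring mod = four-quadrant multiply

# spectrum: peaks land at 440-100 = 340 Hz and 440+100 = 540 Hz,
# with nothing left at 440 or 100 Hz
spec = np.abs(np.fft.rfft(ring)) / n
```

With a complex carrier waveform, every partial generates its own sum and difference pair, which is where the rich inharmonic clang comes from.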
