Disquiet Junto Project 0543: Technique Check

A way of playing/improvising that I enjoy a lot is playing acoustic instruments into long delays with feedback. The delay creates a sort of loop, but audio processing in the signal chain changes the characteristics of the sound in a cumulative way: with each round through the delay line, the sound is further transformed. It feels a bit like you’re ‘sculpting’ the sound.

The basic technique is quite old: it was first implemented with tape loops over 50 years ago by artists like Terry Riley and Pauline Oliveros, and later by Brian Eno & Robert Fripp (it’s often called Frippertronics). Modern digital equipment and software allow for more intuitive, instrument-like control of the audio processing, and for on-the-fly routing and re-routing of audio signals. One of the setups I use a lot is a Max-controlled Ableton set where return tracks are set up as audio processing chains containing the delays, and are then fed into themselves and each other using sends.

Things can quickly get cluttered though: if you keep playing notes into the system and everything gets fed back into itself, the sound is likely to get crowded. One technique that mitigates this issue is to have a pitch shifter in the signal chain that shifts everything down an octave: with each round through the delay line, everything is pitched down, until after a couple of rounds it dies in a highpass filter. I’ve found this to be a nice setup for exploring harmonies and modes. As you play, you’re accompanied by the delayed, pitched-down material you played before.
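For anyone curious, the behaviour is easy to sketch in a few lines of Python (a toy model, not the actual Max/Ableton patch - the feedback amount and filter cutoff here are made up for illustration):

```python
# Toy model of an octave-down feedback delay: each trip through the delay
# line halves the pitch and loses some level, until the highpass filter
# removes copies that have fallen below its cutoff.

def feedback_echoes(freq_hz, level=1.0, feedback=0.8, highpass_hz=60.0):
    """Trace a note through successive delay passes.

    Returns the (frequency, level) of each audible echo. The octave-down
    pitch shifter halves the frequency per pass; echoes below the
    highpass cutoff are considered filtered out and the loop stops.
    """
    echoes = []
    while freq_hz >= highpass_hz and level > 0.01:
        echoes.append((freq_hz, round(level, 3)))
        freq_hz /= 2.0        # pitch shifter: down one octave per pass
        level *= feedback     # feedback gain: each pass is quieter
    return echoes

print(feedback_echoes(440.0))
```

With these numbers, a 440 Hz note survives two octave drops (220 Hz, then 110 Hz) before the 55 Hz copy falls below the filter cutoff - the “dies after a couple of rounds” behaviour described above.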

This little piece I made is an exploration of the Phrygian mode. It’s a bit rough: I just did a couple of takes and this seemed the nicest. Just a bit of compression and minimal EQ afterwards to tame some of the louder parts, but I left it pretty much as I recorded it. Hope you enjoy it, and I’m looking forward to listening to what everybody came up with.



Promenade M180

In my works as Ossimuratore that I am presenting here at the Junto Project, I am testing some musical-aesthetic hypotheses, i.e. that (1) the emotional reaction to music listening is highly context-sensitive, and that music’s context sensitivity is mainly about (2) memory activation and (3) contrast building.

I try to excite short-term memory by means of ostinato, i.e. non-evolving, constantly repeating thematic material.

To set extreme contrast I use inharmonic electronic sounds in two ways: serially (where layers of electronic noises alone act as a mental purge before presenting tonal material) and symphonically (where, by overlaying noises on tonal material, I try to protect the listener from long-term memories of nauseatingly bad tonal pop music).

These aesthetic hypotheses came to me mostly while reading the great book On Repeat by Elizabeth Margulis.

Made with NI Maschine+.


“The Silent, Understood” by Our Quiet Fog

Recorded, mixed & mastered May 27 2022 by Jim Lemanowicz at Blissville Electro-Magnetic Laboratories of Massapequa.

©2022 Jim Lemanowicz

Process notes -
I have been using a system like this across many different technologies - free and proprietary, soft and hard, dissonant and consonant. There are many variations, but like most of the sound experiments I do on my own, if I stir in just enough ingredients, it can take on enough of a life of its own that I am basically collaborating with something interesting. There are many outputs from this type of system, in whole or in part - generated MIDI tracks to use on other sounds later, generated audio to use in later pieces, etc. I’m going to keep it simple for this example and also edit and release a piece of the generated audio as it was originally heard.

Start - Start with some MIDI notes - these can be played by your hands, or just looped MIDI or CV - short or long notes, many or few notes. I call both of these parameters “density” because that is what I like to deal with: the idea of density at any given time, how denser landscapes can mask elements out or merge elements together, and how less dense landscapes can bring out details in the same basic sounds.

Random - Feed that into something that randomizes that input to another note. This usually requires something digital. Choose something that can give you control of how far away from the original it goes.

Modify - and all the while thinking of controls that could be modulated.

  • velocity randomization
  • arpeggiate - interesting to see how that works against short or long notes
  • speed - interesting to see how that works against how many notes you chose to give it per bar

Quantize - in this step you are choosing how much to tame all this randomness, in terms of atonality. You are generally going to pick a scale or a partial scale. I find that scales with few notes are easier to see subtleties in.
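The Start → Random → Quantize skeleton above can be sketched in a few lines of Python (a hypothetical toy version - the scale, chance, and spread values are stand-ins, not the internals of Ableton’s Random or Scale devices):

```python
import random

# Toy pipeline: nudge each incoming MIDI note by a random interval
# ("Random" step), then snap it to a chosen scale ("Quantize" step).

SCALE = [0, 2, 3, 7, 9]  # pitch classes of a five-note scale; few notes
                         # make the subtleties easier to follow, as noted.

def randomize(note, chance=0.65, spread=8):
    """With some probability, move the note up or down by up to `spread` semitones."""
    if random.random() < chance:
        note += random.randint(-spread, spread)
    return note

def quantize(note, scale=SCALE):
    """Snap a MIDI note to the nearest pitch class in the scale."""
    pitch_class = note % 12
    nearest = min(scale, key=lambda pc: min((pitch_class - pc) % 12,
                                            (pc - pitch_class) % 12))
    return note - pitch_class + nearest

random.seed(1)
looped_input = [57, 54, 60, 54]          # A3, F#3, C4, F#3
out = [quantize(randomize(n)) for n in looped_input]
print(out)  # randomized, then tamed back onto the scale
```

The 65% chance and the 8-note spread echo the settings described later in this post; everything else is illustrative.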

Pick a sound - timbre is not science, you just like it or you don’t. I typically pick sounds that have 1-2 parameters I can modulate.

Pick an envelope - plucky, slow attack, slow release - these are just three. Again, this speaks towards “density”

Pick effects - again, I pick something that may have 1-2 parameters I can modulate. Again, density.

Modulate - LFO elements that change the 2-4 parameters I chose that I liked to modulate. Instead of LFO, map things to controller knobs, sliders, buttons and do it manually as you go. I tend to do both at the same time - with at least one random LFO. Sometimes, I go full-auto and just make breakfast while it does its thing. That is what I did here.

What I did

I used Ableton Live in the horizontal “arrangement view” for this and created two tracks - one for MIDI and my sound generator, and just one more (armed to record) to capture resampled audio from the MIDI track. You can get more complex and feed this any number of MIDI tracks at once, which in turn feed the MIDI sound generator track - in this way, you can also capture the generated MIDI. You can also run the system several times and each time create a new audio track to capture different “takes.”

I also should be clear that while I understand many of the settings I play with, I really can’t process more than one or two at a time in my head or I just can’t do this, so I tend to forget why I do things and wonder what the heck I did later on. It’s part of the fun for me. I present this as scientifically as I can here but realize that I have not ever sat here and typed so much while I was setting up a system. I am much too impatient for this!!!

Start - I created an empty MIDI clip of five bars and set it to loop. I set my grid for quarter notes. I then drew in legato notes A3, F#3, C4, F#3, each 5 quarter notes long. An Am6 inversion.

Random - I chose to drag in one Ableton Random device with a 65% chance of generating a note above or below the original and set it in such a way that it can have 8 choices of notes (choices 2 and scale 4). I duplicated this device and deactivated the second one. I intend to feed this a random square LFO to flip this on and off.

Modify -

  • I added a Velocity device to allow for about a 50% chance of randomness and an s-curve (preset “Dynamic II” with random on 43 and out low up to 25).
  • I picked an Arpeggiator device with settings to place after the Random devices that included “Random Once” and “Swing 16” and set it for 8th notes, retrigger off.
  • Set Live’s tempo to 40 bpm.
  • then inserted a Chord device to feed the Arp device more interesting sounds - a version of a “Noir” preset that takes the original note and adds three more - one 3 steps above, another 12 steps above and the last at 6 steps under.
  • somewhere during all this MIDI madness, to listen to what I was doing, I chose a basic EIC2 grand piano (legacy Live Pack) sound for this.

Quantize - I then dragged in a Scale device set to C Iwato (intervals in half-steps are 1 - 4 - 1 - 4 - 2)

Sound - Decided I would keep the piano for today and altered some of the settings to give it more sustain and body for starters.

Effects - Since I record under the name “Our Quiet Fog” for piano-and-echo generative things like this, I’m just using Live’s Echo device in line with the instrument MIDI track today: 60% feedback, 30% reverb, slight modulation, wobble, ducking. Left delay at 1/4 note, right at 1/8. At the end was a channel rack preset I created myself that includes a way to bump the volume manually, as well as a limiter (because this can get crazy with some other sounds), and allows some stereo field manipulation.


  • MIDI - I used one 0.25 Hz Random Max4Live LFO to flip the second Random device on and off by turning the offset all the way down to -100 and then changing the maximum control to 1% for this output. I used another output from this LFO device, set to a 50-100% range, and pointed it at the Arpeggiator’s Rate to allow for a roughly 1/8 or slower rate. I then used another output and assigned it to the Arpeggiator’s Groove control to shift around from straight to different swung grooves.
  • Sound - I chose to try to keep this to one LFO and chose to map the Release control of the piano to 100-50% (so in opposition to all the other modulation so far) and then the Color control to 10-90%.
  • Effect - mapped Feedback to 45-10% and Dry/Wet to 40-60%. I had one modulator left, so I went with something sure to add a little chaos back in:
    - LFO2! I decided that I was going to let this go on for a while and that, to add some variety, I would drag in a second modulator to do a 1 Hz random modulation on LFO1’s depth control.

This might sound too abrupt, so I decided to step down to 20 bpm, and I’m happy that the echo feedback is smoothing out some roughness. I like this, so I went ahead, made sure my backup software was off, saved it as is, and then hit record and just let it go. I set a stopwatch and tweaked the echo a little bit more (I originally had higher settings than what I wrote above, where the modulation outputs from LFO1 were set to something like Feedback 75-10% and Dry/Wet 10-60%). At about 3 min in, I left it alone, went to take a quick shower, and listened a bit while I was drying off and dressing.

Came back at about the 13 min mark on my stopwatch, half-dressed, and decided it was transition time. I thought: slower, different key center, and fewer notes. Make changes at 30 second intervals. First I changed the Arp rate to be slower, then a slight bit faster, moved to C#, brought the Arp repeats down about 20%, moved to D#, adjusted LFO2’s action on LFO1 to be a bit more consistent and more likely to be up (50-100% vs the standard 0-100%), and let it go again for another few minutes while I got dressed.

At around 24 minutes in, I decided to do some kind of finish: slow down the Arp rate more, increase the chance of feedback in the echo, and then switch off the piano. Crossing fingers… Not too bad. I am not sure about the end; the echo is still repeating, and I reach for my channel rack to slowly turn the gain down to zero. Stop the stopwatch and then drag out the loop in Live to its full 26+ minutes. Turn off looping, disable record arm, disable the MIDI track.

Ready to review, I find a spot near 14 min and draw in a fade during the quick part, yet close enough to the transition, and just listen. About 3 minutes in from 14, I hear an obvious key change and think, OK, this is getting long, so that helped. At 18 another key change, and I decide to give it a fade out there for the purposes of the Junto. I decide this piece will be called “The Silent, Understood.” I do an export to WAV and check the file on disk. 4:36. A bit long for my Juntos, but whatever. It could have been 26 minutes!

Future ideas - use a part of the earlier fast section for another piece, extend this piece to include more at the end and then maybe see if the absolute ending of this session could also be its own piece. Things to do later! It occurs to me that I could have shut the piano off to avoid fades but I don’t mind, this album, if it ever becomes one, could incorporate the idea of fades as the concept of sorts.

I did some more experimenting with my as-yet-unused Ozone 9 plugin (I’ve been using Elements) and settled on just a classical preset with some reduced LUFS-iness.

Art - captured 06 Feb 2010 with a Sony Cybershot in Massapequa, NY, USA


I really like this a lot! The bass is the place!


This is not gonna be anything so interesting and new but here it is:

I rely heavily on generative processes. A couple years ago, I recorded several pieces using some free generative tools to try them out: NOD-E (Reaktor Ensemble by Antonio Blanca), transition (CodeFN42), Nova3 (tonecarver), and some others–all of them fun and useful. I liked the generative process a lot but I ended up not liking my compositions enough so I did not publish them; but I continued to use generative heavily.

These days I use an iOS sequencer called ZOA which relies on Conway’s Game of Life for note generation. ZOA has 4 playheads and they can each be run into a pattern sequencer to create further variation and randomization. I usually set these playheads to different speeds (e.g., whole, half, quarter, 8th notes) and run the MIDI into different instruments.

For this submission, instead of using a completely random constellation of cells, I looked up oscillators in the Game of Life on a nice webpage. These go on forever; they are self-sustaining (hence the title of the track). I forgot which one I used for the melody, but it’s either Figure8 or Pseudo-Barberpole.

I ran the slowest playhead into a piano that forms the bass structure. The slightly faster playhead went into a pad. The fastest ones into a plucky sound. I had never done generative drum parts so to try that, I also ran a separate ZOA session for drums, using blinkers–the simplest oscillator shape in Game of Life. It’s a bit sloppy but oh well…
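Out of curiosity, here is a small Python sketch of the underlying idea - a blinker oscillating under Conway’s rules while a playhead sweeps the grid and turns live cells into notes. The grid, scale mapping, and playhead logic are my own simplifications, not how ZOA actually works:

```python
# A blinker (period-2 oscillator) evolves under Conway's Game of Life
# while a playhead scans one column per step, emitting a note per live cell.

def life_step(cells):
    """One Conway step on a set of (x, y) live cells."""
    from collections import Counter
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

SCALE = [60, 62, 64, 67, 69]  # map grid rows to a pentatonic scale

blinker = {(1, 0), (1, 1), (1, 2)}   # vertical bar of three live cells
cells = blinker
for step in range(4):
    column = step % 3                 # playhead sweeping columns 0..2
    notes = sorted(SCALE[y] for (x, y) in cells
                   if x == column and y < len(SCALE))
    print(f"step {step}: column {column} -> notes {notes}")
    cells = life_step(cells)
```

The blinker flips between a vertical and a horizontal bar every generation, so the playhead keeps finding notes forever - the “self-sustaining” property mentioned above.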

I didn’t manipulate the resulting MIDI at all. Usually, a generative process results in MIDI that is a bit rigid and random as is, so I think generative alone only gets you halfway to a good composition; you then have to take bits and pieces and sequence them in different ways, create loops, manipulate the MIDI manually, etc. I don’t always find the time and patience for that. I enjoy being carried away by the generative process and later just publishing it close to its original form. It’s going to get boring soon, but I can work on editing generative results more seriously at that point.


I use an iOS app, FAC Envolver, that takes an incoming signal and lets me set a peak threshold to generate notes on a scale and send them over MIDI. The generator in this case is a rhythmic patch on a Sub37, which I use to generate the notes and send them to other instruments. I use another iOS app, Rozeta LFO, to modulate the channel volume and pan for the different instruments to get some movement. By playing with the filter and resonance on the Sub37, I can change the character of the Envolver-generated notes.
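A rough Python sketch of that envelope-to-notes idea (my own guess at the logic, not FAC Envolver’s actual algorithm - the scale, threshold, and note-picking rule are made up):

```python
# When the incoming signal's envelope crosses a threshold, emit a MIDI
# note from a scale; louder peaks pick higher scale degrees.

SCALE = [57, 60, 62, 64, 67]  # an A-minor-ish pitch pool to choose from

def notes_from_envelope(envelope, threshold=0.5, scale=SCALE):
    """Emit one note per upward threshold crossing."""
    notes, above = [], False
    for level in envelope:
        if level >= threshold and not above:
            degree = min(int((level - threshold) / (1 - threshold) * len(scale)),
                         len(scale) - 1)
            notes.append(scale[degree])
        above = level >= threshold
    return notes

# A rhythmic source (like the Sub37 patch) rises above the threshold
# three times, so three notes come out:
env = [0.1, 0.6, 0.7, 0.2, 0.9, 0.95, 0.3, 0.55, 0.1]
print(notes_from_envelope(env))
```

Opening the Sub37’s filter would raise the peaks, which in this toy version selects higher scale degrees - a crude stand-in for the way filter and resonance change the character of the generated notes.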


Probably not my best example, but it’s a good idea that I’ve used a lot.

Given your user name, here’s a previous Junto where I played a 3/4 bassline on a fretless through an Ampeg emulation!


Scrambled Solfeggios [disquiet0543]

These are a few of my favourite things… Using old online e-synths to create sounds (rather than music), in this case the Ancient Solfeggio Synthesizer I found in 2012. Then, putting the result into ambient v.3 software and improvising on it. Lots of reverb, always lots of reverb. Then, manipulating the original synth sounds in Audacity, using its simplified version of Paulstretch, copying and cutting bits and modifying them to create drones (especially ones to cover abrupt endings and accidents). Then, overlaying the Ambient version, multitracking it and balancing the whole thing until it sounds like this…


Willy Wonka. Awkward silences. Moffenzeef.


  • LFO control over panning
  • LFO control over track volume
  • LFOs controlling other LFOs they’ve never even met
  • Phase offset of LFOs to create variation

Hey All,
I think I have used this same technique before for this same prompt but since it is what I mainly do I am repeating myself which is ok cause I am all about the loop.
My technique is Slice to MIDI in Ableton. It takes a track, slices it up for you automatically, and spreads it across the keyboard. I have 61 keys, which works better for me than just a 16-square pad.
I piddle around and try to find a loop that I like, and keep doing that, creating layers of loops and editing the loops so they work together. I will often create scenes that I can launch to create different parts. This track uses an old ELP track from a previous STBB Forever beat battle.
I also like to add vocal tracks, especially other people’s poetry. I like that it gets me reading poetry and actually saying the poems out loud several times, allowing me to pick up on more than I would by just scanning while reading. I find that I am attracted to word combinations and phrases more than the overall poem, much like how I sample tracks.
I use the Poetry Foundation website and usually search by topic. In this case it was “Summer.” This poem had a very tight rhyme scheme and meter that went well with the “we got the beat” feel of the track.
I did not know much about Conrad Aiken, so I read his bio on the website. I found out that at age 11, his father killed his mother and then himself with a gun in the room next to him, and he went in to find their bodies. I cannot imagine the devastating effect this had on him, but given the news recently, it is something that is happening too much in this country. I am resigned to the fact that this is just who we are at the moment, and I see no solutions being reached.
I DJed the 8th grade dance Friday night taking requests and using Spotify. This group of 8th graders really got into it and were dancing and singing the lyrics. The girls were all decked out in dresses and the boys, while cleaned up, were no match in the fashion department.
It lifted my spirits and gave me hope that life can still have wonderful moments. The final song of the night of course was “Don’t Stop Believing” by Journey which was very fitting.
Hope all are well.

Peace, Hugh


My technique for today is nothing extravagant: layering of sounds. By my definition, that means sending the same MIDI data to different instruments so that they play in unison - if appropriate, pitched by full octaves, changed in note length, and with different effects (especially delays and reverbs).
I love to employ this for pianos…

For this track I decided to exaggerate the approach: I played a simple piano-theme (left hand chords, right hand simple melody) and layered that:

  • 4 tracks with the full MIDI (among them 3 pianos)
  • 4 tracks with the MIDI of the left hand (among them 2 Hauschka Toolset piano-derived sounds)
  • 4 tracks with the right hand’s MIDI

In addition to that there is a mono pad-line with long notes. This has 5 layers.

Different layers are active in different sections; starting from 2:06, all layers play in unison. No other instruments are added.
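The layering recipe above boils down to something like this Python sketch (the layer names and example phrase are illustrative, not the actual track):

```python
# Layering: the same MIDI notes are sent to several instruments, each
# layer optionally transposed by whole octaves and given its own
# note-length treatment.

def layer(notes, octave_shift=0, length_scale=1.0):
    """Copy a note list [(pitch, length)] with octave transposition
    and note-length adjustment."""
    return [(pitch + 12 * octave_shift, length * length_scale)
            for (pitch, length) in notes]

theme = [(60, 1.0), (64, 0.5), (67, 0.5)]   # a simple right-hand phrase

layers = {
    "piano 1": layer(theme),                  # as played
    "piano 2": layer(theme, octave_shift=-1), # an octave down
    "pad":     layer(theme, octave_shift=1, length_scale=4.0),  # long notes
}
for name, notes in layers.items():
    print(name, notes)
```

Since every layer is derived from the one MIDI source, they stay in unison by construction - only octave, note length, and downstream effects differ.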


I like creating generative pieces by patching square-wave LFOs into logic modules and using the output to trigger drum voices and oscillators. I patched together two squares and a triangle LFO, and used AND, XOR, NOR, and XNOR logic to generate triggers for sampled drum hits, a DFAM, Beads, and a Softpop 2. I set the patch off and played around with the LFO rates to create phasing loops, and used a sample-and-hold of the triangle to generate pitch for the voices.
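The logic-gate part of this patch is easy to sketch in Python - two square LFOs at different rates feeding the four gates, each producing its own interlocking trigger pattern (the rates and step count are arbitrary, and this ignores the triangle LFO and sample-and-hold):

```python
# Two square LFOs sampled once per step feed four logic gates; each
# gate's output is a distinct trigger pattern for a drum voice.

def square(rate, steps):
    """Square LFO: `rate` cycles over `steps`, half the period high."""
    period = steps // rate
    return [1 if (t % period) < period // 2 else 0 for t in range(steps)]

STEPS = 16
a = square(2, STEPS)   # slow LFO: 2 cycles over 16 steps
b = square(4, STEPS)   # faster LFO: 4 cycles

gates = {
    "AND":  [x & y for x, y in zip(a, b)],        # fires when both high
    "XOR":  [x ^ y for x, y in zip(a, b)],        # fires when they differ
    "NOR":  [1 - (x | y) for x, y in zip(a, b)],  # fires when both low
    "XNOR": [1 - (x ^ y) for x, y in zip(a, b)],  # fires when they agree
}
for name, pattern in gates.items():
    print(f"{name:5s}", "".join("x" if v else "." for v in pattern))
```

Detuning one LFO rate slightly (which integer steps can’t show) is what produces the slowly phasing loops described above: the gate patterns drift against each other instead of repeating exactly.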


OT alert: just wanted to say I, too, am loving the Cornell audio bird ID. the Merlin app doesn’t work for me, I have to use their online uploader, but no bother, I have IDed maybe thirty new birds this month and am absolutely hooked. enjoyed your track!


Thanks Alex! Likewise.


So, I’ve been busy with lots of work stuff, not least of which is our strike action (hopefully we’re getting close to winning).

But I had time to make this, as I really like the assignment. So what’s my technique? Pretty much stream of consciousness and running things through loops. Also, I used Stochas again because I love it.


My technique: approaching audio processing chains as a playable instrument - one that completely transforms the experience of singing or playing another instrument into them, sending the original sounds into weird, far-off territories.

For this composition, I built an effects chain in Ableton for processing vocals from my mic and exposed its params as knobs on my MIDI controller. This allowed me to quickly find interesting sweet spots as I turned the knobs while trying different vocal timbres.

I recorded 4 loops on 4 separate tracks, each track having different chain parameters and the raw recorded vocal content.

Then I added back the 4 original, unprocessed vocal tracks.

Overall, there were now 8 tracks - 4 pairs of 100% dry and 100% processed vocal material.

I hit record and after a few takes live-mixed a composition.

The end result was pretty weird already, and I felt like adding more to it so I recorded one layer of percussion played on a Buchla Easel. All other sounds are the dry and processed vocals.

For the curious, here’s the processing chain I used, mostly done with Ableton Live’s stock plugins:

  • EQ > (Pitch)Shifter > (Overdrive) Pedal > Modulated VCA (using Modulators pack + Utility) > Auto Filter > EQ

I’d love recommendations of artists that work with similar techniques - processing an acoustic instrument or voice with Eurorack or software-based FX chains.

My inspiration came from the work of Sarah Belle Reid and this video: Singing Into Synthesizers! Techniques and Concepts for Performing With Vocalists! - YouTube


A lot of this track was made using a technique I love: taking a piece of audio, then using the Ableton Push 2 and the Simpler device to retrigger, chop, manipulate, and play around with the sample to create something new. There are many ways to manipulate samples, but I like this method because it’s fast, I can improvise musically without a lot of fussing around, and I can do it fully on the Push hardware, no computer or mouse needed. Simpler is simple, and the 64 pads and 8 knobs on the Push 2 make playing with samples fun, fast, and easy. Happy accidents occur, and things usually sound to my liking with minimal fussing about, so I can just flow and have fun.
Another attribute I like to add to songs is my own voice, but altered, since I’m not much of a traditional singer. Bonus points when the voice can turn into percussion or another instrument. I often take these voice recordings and chop and play with them in Simpler as described above. The intro and middle of this song have some of my early experiments with a talkbox pedal. It’s harder than it looks! And kind of a wet mess. But I’ll keep playing with it. I used the Circuit Mono Station as the instrument into the talkbox. So I’m not sure if the talkbox will turn into a favorite method for voice manipulation, but I know I’ll keep messing around with vocal sounds.


OK, not one but two techniques: one that I used to employ a lot (combining a few different types of reverb in return tracks, adding spacing effects to each, and blending them by sending different amounts from each track, combining them in the way most pleasing to me) and one that I started doing a while ago (combining differently sounding drum racks and varying the patterns of each, so as to get more varied results). The track itself started with some recordings I made while fooling around in VCV Rack (the scratching sound).


My technique: Employ extensive pitch shifting on instruments

My primary DAW, Logic, has multiple tools to aid in music creation, with the transpose tool being the most consequential. This technique has been used on nearly all of my disquiet contributions to achieve unique sounds. No longer am I stuck with a single pitch or timbre; I can alter my audio tracks by 36 semitones in either direction.

On Rototom Rampage, a single set of rototoms graces the song, but it is the transposition tool that does the heavy lifting. The lead rototom track was lowered by nine semitones to provide even more contrast with the higher pitched rototoms, with some being increased by as many as 11 semitones.
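As a quick aside, the maths behind those transpositions is simple: shifting by n semitones corresponds to a frequency (or playback-rate) ratio of 2^(n/12). A small sketch:

```python
# Semitone transposition as a frequency ratio: each semitone is a factor
# of the twelfth root of two, so n semitones = 2 ** (n / 12).

def transpose_ratio(semitones):
    """Frequency / playback-rate ratio for a semitone shift."""
    return 2 ** (semitones / 12)

# The lead rototom lowered by nine semitones plays at about 0.595x pitch;
# a part raised 11 semitones lands just under double:
print(round(transpose_ratio(-9), 3))   # ~0.595
print(round(transpose_ratio(11), 3))   # ~1.888
```

This is also why a 36-semitone range reaches three octaves in either direction: 36/12 = 3, i.e. a factor of 8 up or down.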

Other effects such as time stretching, filters, and phasers were applied later to glitch up the rototoms, but the pitch shifting from the transposition remains this track’s bread and butter.
