Disquiet Junto Project 0543: Technique Check

Each Thursday in the Disquiet Junto group, a new compositional challenge is set before the group’s members, who then have just over four days to upload a track in response to the assignment. Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. It’s weekly so that you know it’s there, every Thursday through Monday, when you have the time.

Deadline: This project’s deadline is the end of the day Monday, May 30, 2022, at 11:59pm (that is, just before midnight) wherever you are. It was posted on Thursday, May 26, 2022.

These are the instructions that went out to the group’s email list (at tinyletter.com/disquiet-junto):

Disquiet Junto Project 0543: Technique Check
The Assignment: Share a tip from your method toolbox.

Step 1: Think of some technique — small or larger, simple or complex — that you employ when making music.

Step 2: Make a piece of music employing that technique.

Step 3: When sharing the piece of music, describe the technique so that others might employ it.

Eight Important Steps When Your Track Is Done:

Step 1: Include “disquiet0543” (no spaces or quotation marks) in the name of your tracks.

Step 2: If your audio-hosting platform allows for tags, be sure to also include the project tag “disquiet0543” (no spaces or quotation marks). If you’re posting on SoundCloud in particular, this is essential to subsequent location of tracks for the creation of a project playlist.

Step 3: Upload your tracks. It is helpful but not essential that you use SoundCloud to host your tracks.

Step 4: Post your track in the following discussion thread at llllllll.co:

Project discussion takes place on llllllll.co: https://llllllll.co/t/disquiet-junto-project-0543-technique-check/

Step 5: Annotate your track with a brief explanation of your approach and process.

Step 6: If posting on social media, please consider using the hashtag #DisquietJunto so fellow participants are more likely to locate your communication.

Step 7: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

Step 8: Also join in the discussion on the Disquiet Junto Slack. Send your email address to marc@disquiet.com for Slack inclusion.

Note: Please post one track for this weekly Junto project. If you choose to post more than one, and do so on SoundCloud, please let me know which you’d like added to the playlist. Thanks.

Additional Details:

Deadline: This project’s deadline is the end of the day Monday, May 30, 2022, at 11:59pm (that is, just before midnight) wherever you are. It was posted on Thursday, May 26, 2022.

Length: The length is up to you.

Title/Tag: When posting your tracks, please include “disquiet0543” in the title of the tracks, and where applicable (on SoundCloud, for example) as a tag.

Upload: When participating in this project, be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.

Download: It is always best to set your track as downloadable and to allow for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution, allowing for derivatives).

For context, when posting the track online, please be sure to include the following information:

More on this 543rd weekly Disquiet Junto project – Technique Check (The Assignment: Share a tip from your method toolbox) – at: https://disquiet.com/0543/

More on the Disquiet Junto at: https://disquiet.com/junto/

Subscribe to project announcements here: https://tinyletter.com/disquiet-junto/

Project discussion takes place on llllllll.co: https://llllllll.co/t/disquiet-junto-project-0543-technique-check/


Hopefully I haven’t shared this previously but it’s a beaut tip I learned from David Graham.

It involves writing a bassline in 3/4 to go into a track in 4/4.

The result makes the bass propulsive as it shifts each bar.
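A quick way to see why the 3-over-4 bassline feels propulsive: the pattern’s downbeat lands one beat earlier in the bar on each repetition, realigning every three bars. A minimal Python sketch (the function name and defaults are mine, purely illustrative):

```python
def downbeat_positions(pattern_beats=3, bar_beats=4, repeats=8):
    """Beat within the bar (0-indexed) where each repetition of a
    3-beat bassline starts when played over 4/4 bars."""
    return [(i * pattern_beats) % bar_beats for i in range(repeats)]

# The start point walks backwards through the bar, then realigns:
print(downbeat_positions())  # [0, 3, 2, 1, 0, 3, 2, 1]
```

After four repetitions (12 beats, i.e. three 4/4 bars) the bassline lands back on the downbeat, which is where the shifting, cyclical pull comes from.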


The project went live last night via automated tweet and Disquiet.com post, and now it’s live here — and @bassling, in Australia, got his video here while I was sleeping. :)


i’ve fallen back in love with the residents, and am working on some one minute cells of music inspired by the thinking behind their magnificent “commercial album”

this one is built out of an improvised guitar rhythm which was then replayed twice through ableton live’s “granulator ii” to provide all kinds of glitchy embellishments in the side channels

it’s a quick way to add some off-kilter, free-floating patterns

drums inspired by the smile’s “the opposite”



Birds Of Connecticut - every night, thousands of birds migrate through the skies of Connecticut. I like to go outside just before sunrise and record the different birds that are in the area. This morning between 5:00 am and 5:30 am I recorded a Chipping Sparrow, Baltimore Oriole, Tufted Titmouse, American Goldfinch, Black-capped Chickadee, Red-winged Blackbird, White-breasted Nuthatch, Chimney Swift, Carolina Wren, Northern Cardinal, and Gray Catbird. Now, I’m not able to identify all of these birds on my own. I use the Merlin Bird ID app created by the Cornell Ornithology Lab. I import the bird tracks into Apple Garageband, then add a few guitar drone tracks using a tc electronic aeon string sustainer. I find the entire process and the resulting track to be fun to create and very relaxing to listen to. I hope you do too.



Simulated birdsong. Woozy loopy clarinets. VHS tape disintegration.


  • Asymmetric loop lengths
  • LFO control over panning
  • LFO control over track volume
  • Phase offset of LFOs to create variation
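These ingredients are easy to prototype outside a DAW. A hedged sketch of the LFO idea in plain Python (the rates and ranges are arbitrary stand-ins, not anyone’s actual patch): one sine LFO drives pan, and a 90-degree phase-offset copy drives volume, so the two movements drift against each other instead of lining up.

```python
import math

def lfo(t, freq_hz, phase=0.0):
    """Sine LFO in [-1, 1] at time t (seconds)."""
    return math.sin(2 * math.pi * freq_hz * t + phase)

def pan_and_gain(t, freq_hz=0.1):
    """Same LFO rate for pan and volume, but offset 90 degrees in phase,
    so loudness peaks never coincide with the extremes of panning."""
    pan = lfo(t, freq_hz)                            # -1 hard left, +1 hard right
    gain = 0.5 + 0.5 * lfo(t, freq_hz, math.pi / 2)  # 0..1
    return pan, gain
```

Running several of these at slightly different rates (the “asymmetric loop lengths” idea) means the combined state takes a long time to repeat exactly.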

The technique I’m sharing: I use sound memory instead of actual samples in my composition. Even when it’s “incorrect,” I find how I remembered the sound to be more interesting than using a direct quote.

I think this comes from the fact that I was a classical pianist who refused to learn how to read for years. I memorized everything until I finally was forced by my teachers to learn to read. This was my first rebellion against the schooling of music. In honor of my youthful protest, I wrote this all in my head.

As I composed the main theme, it kept blending in my head with the song, “You Are My Sunshine,” probably because of C-D-E-F-A (“you ne-ver no-tice [how much I love you]”). So I put the song in my transition sections.

The Bach-like counterpoint isn’t a real counterpoint. I just thought of the sound in my head and didn’t carefully count intervals, chase fugues, or anything “formal.” Everything in this piece is from sound memory. In the end, it seems like the materials will weave into some magical counterpoint, but shimmer away. It’s a total sham! Or is it?



The playlist is now rolling:

Thanks for the prompt, Marc!

I really needed something to take my mind away from administrivia, so it was good to open Live and draft a quick tune.


Hi, “rhythmic delays” is my technique for this junto.
This is something I use a lot. This isn’t the most extreme example of it, but it’s really important for this track, and peculiar because it’s all acoustic instruments (I do use this a lot on electric guitars).

I recorded this one last year on a “hotel room” setup: I had my laptop, a great microphone, a very good audio interface, a small requinto, a harmonica, some hand percussion, and a soprano ukulele.
Back home I decided to finish a second track, which I released earlier this year, but this one stayed in the vault, unused and unfinished.

My rhythmic-delays technique becomes more prominent as the track advances. I set up a couple of delay lines (mostly straight 1/8th notes, a 1/4-note echo, and a syncopated dotted-1/8th-note one). I add more delay as the track progresses and gradually increase the feedback. It’s quite simple and effective.
And it gives a weird “robotic” feel to a pure acoustic performance.
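For anyone who wants to try this, here’s a rough sketch of the arithmetic behind tempo-synced delay times, plus a naive single-tap feedback delay line. All names and numbers are mine, not the poster’s actual setup:

```python
def delay_times_ms(bpm, divisions=(0.5, 1.0, 0.75)):
    """Tempo-synced delay times in ms for straight 1/8, 1/4 and
    dotted-1/8 notes (divisions are in quarter-note units)."""
    quarter_ms = 60000.0 / bpm
    return [quarter_ms * d for d in divisions]

def feedback_delay(dry, delay, feedback, tail=0):
    """Wet signal of a single delay tap: each echo is fed back in,
    scaled by `feedback`, so repeats decay gradually.
    `dry` is a list of samples, `delay` is in samples."""
    n = len(dry) + tail
    wet = [0.0] * n
    for i in range(delay, n):
        src = dry[i - delay] if i - delay < len(dry) else 0.0
        wet[i] = src + feedback * wet[i - delay]
    return wet
```

Raising `feedback` over the course of a track, as described above, makes each note smear into longer and longer trails.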

Comments are welcome (this is one of those tracks that could remain in my vault for decades, and I appreciate that the junto challenges push me to revisit it and just share it)


A way of playing/improvising that I enjoy a lot is playing acoustic instruments into long delays with feedback. The delay creates a sort of loop, but audio processing in the signal chain changes the characteristics of the sound in a cumulative way: with each round through the delay line, the sound is further transformed. It feels a bit like you’re ‘sculpting’ the sound.

The basic technique is quite old, first implemented with tape loops over 50 years ago by artists like Terry Riley and Pauline Oliveros, and later by Brian Eno & Robert Fripp (it’s often called Frippertronics). Modern digital equipment and software allow for more intuitive, instrument-like control of the audio processing and for on-the-fly routing and re-routing of audio signals. One of the setups I use a lot is a Max-controlled Ableton set where return tracks are set up as audio processing chains with the delays, and are then fed into themselves and each other using sends.

Things can quickly get cluttered though: if you keep playing notes into the thing and everything gets fed back into itself, things are likely to get crowded. One technique that mitigates this issue is to have a pitch shifter in the signal chain that shifts everything down an octave: with each round through the delay line, everything is pitched down, until after a couple of rounds it dies in a highpass filter. I’ve found this to be a nice setup to explore harmonies and modes. As you play, you’re accompanied by the delayed, pitched down stuff you played before.
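The octave-down trick can be sketched abstractly: follow a single note through the feedback loop and watch it fall until the filter removes it. A toy model (the cutoff frequency and round count are my own invented numbers):

```python
def recirculate(pitch_hz, cutoff_hz=60.0, max_rounds=12):
    """Trace one note through a feedback delay whose chain contains a
    one-octave-down pitch shifter; a highpass filter at cutoff_hz
    eventually silences it, keeping the loop from getting crowded."""
    heard = []
    f = float(pitch_hz)
    for _ in range(max_rounds):
        if f < cutoff_hz:
            break           # the highpass filter kills this repeat
        heard.append(f)
        f /= 2.0            # one pass through the pitch shifter
    return heard
```

So an A4 played into the loop is heard a few more times, an octave lower each pass, before disappearing, which is what leaves room for new notes.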

This little piece I made is an exploration of the Phrygian mode. It’s a bit rough: I just did a couple of takes and this seemed the nicest. Just a bit of compression and minimal EQ afterwards to tame some of the louder parts, but I left it pretty much as I recorded it. Hope you enjoy it, and I’m looking forward to listening to what everybody came up with.



Promenade M180

In the works as Ossimuratore that I am presenting here at the Junto Project, I am testing some musical-aesthetic hypotheses: (1) that emotional reaction to music listening is highly context sensitive, and that music’s context sensitivity is mainly about (2) memory activation and (3) contrast building.

I try to excite short-term memory by means of ostinato, i.e. non-evolving, constantly repeating thematic material.

To set extreme contrast I use inharmonic electronic sounds in two ways: serially (where layers of electronic noises alone act as a mental purge before presenting tonal material) and symphonically (where, by overlaying noises on tonal material, I try to protect the listener from long-term memory of nauseatingly bad tonal pop music).

These aesthetic hypotheses came to me mostly while reading Elizabeth Margulis’s great book On Repeat.

Made with NI Maschine+.


“The Silent, Understood” by Our Quiet Fog

Recorded, mixed & mastered May 27 2022 by Jim Lemanowicz at Blissville Electro-Magnetic Laboratories of Massapequa.

©2022 Jim Lemanowicz

Process notes -
I have been using a system like this across many different technologies, free and proprietary, soft and hard, dissonant and consonant. There are many variations, but like most of the sound experiments I do on my own, if I stir in just enough ingredients, it can take on enough of a life of its own that I am basically collaborating with something interesting. There are many outputs from this type of system, in whole or in part: generated MIDI tracks to use on other sounds later, generated audio to use in later pieces, etc. I’m going to keep it simple for this example and also edit and release a piece of the generated audio as it was originally heard.

Start - Start with some MIDI notes - these can be played by hand or just looped MIDI or CV - short or long notes, many or few. I call both of these parameters “density” because that is what I like to deal with: the idea of density at any given time, how denser landscapes can mask elements out or merge them together, and how less dense landscapes can bring out details in the same basic sounds.

Random - Feed that into something that randomizes that input to another note. This usually requires something digital. Choose something that can give you control of how far away from the original it goes.

Modify - all the while, think of controls that could be modulated:

  • velocity randomization
  • arpeggiate - interesting to see how that works against short or long notes
  • speed - interesting to see how that works against how many notes you chose to give it per bar

Quantize - in this step you are choosing how much to tame all this randomness, in terms of atonality. You are generally going to pick a scale or a partial scale. I find that scales with few notes make it easier to hear the subtleties.

Pick a sound - timbre is not science, you just like it or you don’t. I typically pick sounds that have 1-2 parameters I can modulate.

Pick an envelope - plucky, slow attack, slow release - these are just three. Again, this speaks towards “density”

Pick effects - again, I pick something that may have 1-2 parameters I can modulate. Again, density.

Modulate - LFO elements that change the 2-4 parameters I chose that I liked to modulate. Instead of LFO, map things to controller knobs, sliders, buttons and do it manually as you go. I tend to do both at the same time - with at least one random LFO. Sometimes, I go full-auto and just make breakfast while it does its thing. That is what I did here.
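The core of the pipeline above (seed notes → randomize → quantize to a small scale) is easy to mock up outside any DAW. A hedged Python sketch: the seed notes match the A3/F#3/C4 clip described below, the scale is the C Iwato set used in the Quantize step, and everything else (function names, velocity range, spread) is my own stand-in, not Ableton’s devices:

```python
import random

IWATO_C = {0, 1, 5, 6, 10}  # C Iwato pitch classes (half-step intervals 1-4-1-4-3)

def quantize(note, scale=IWATO_C):
    """Snap a MIDI note down to the nearest pitch class in the scale."""
    while note % 12 not in scale:
        note -= 1
    return note

def generate(seed_notes, spread=8, rng=None):
    """Randomize pitch within +/-spread semitones, randomize velocity,
    then tame the result by quantizing to the scale."""
    rng = rng or random.Random(42)  # seeded so the sketch is reproducible
    events = []
    for n in seed_notes:
        pitch = quantize(n + rng.randint(-spread, spread))
        velocity = rng.randint(40, 110)
        events.append((pitch, velocity))
    return events

# A3, F#3, C4, F#3 as MIDI numbers:
print(generate([57, 54, 60, 54]))
```

However wild the randomization gets, every emitted pitch lands in the scale, which is the “taming” the Quantize step describes.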

What I did

I used Ableton Live in the horizontal “arrangement view” for this and created two tracks - one for MIDI and my sound generator, and just one more (armed to record) to capture resampled audio from the MIDI track. You can get more complex and feed this any number of MIDI tracks at once, which in turn feed the MIDI sound generator track - in this way, you can also capture the generated MIDI. You can also run the system several times and each time create a new audio track to capture different “takes.”

I also should be clear that while I understand many of the settings I play with, I really can’t process more than one or two at a time in my head or I just can’t do this, so I tend to forget why I do things and wonder what the heck I did later on. It’s part of the fun for me. I present this as scientifically as I can here but realize that I have not ever sat here and typed so much while I was setting up a system. I am much too impatient for this!!!

Start - I created an empty MIDI clip of five bars and set it to loop. I set my grid for quarter notes. I then drew in legato notes A3, F#3, C4, F#3, each of 5 quarter notes long. Am6 inversion.

Random - I chose to drag in one Ableton Random device with a 65% chance of generating a note above or below the original and set it in such a way that it can have 8 choices of notes (choices 2 and scale 4). I duplicated this device and deactivated the second one. I intend to feed this a random square LFO to flip this on and off.

Modify -

  • I added a Velocity device to allow for about a 50% chance of randomness and an s-curve (preset “Dynamic II” with random on 43 and out low up to 25).
  • I picked an Arpeggiator device with settings to place after the Random devices that included “Random Once” and “Swing 16” and set it for 8th notes, retrigger off.
  • Set Live’s tempo to 40 bpm.
  • then inserted a Chord device to feed the Arp device more interesting sounds - a version of a “Noir” preset that takes the original note and adds three more - one 3 steps above, another 12 steps above and the last at 6 steps under.
  • somewhere during all this MIDI madness, to listen to what I was doing, I chose a basic EIC2 grand piano (legacy Live Pack) sound for this.

Quantize - I then dragged in a Scale device set to C Iwato (intervals in half-steps are 1 - 4 - 1 - 4 - 3).

Sound - Decided I would keep the piano for today and altered some of the settings to give it more sustain and body for starters.

Effects - Since I record under the name “Our Quiet Fog” for piano-and-echo generative things like this, I just used Live’s Echo device in line with the instrument MIDI track today: 60% feedback, 30% reverb, slight modulation, wobble, ducking. Left delay at 1/4 note, right at 1/8. At the end was a channel rack preset I created myself that includes a way to bump the volume manually, as well as a limiter (because this can get crazy with some other sounds), and allows some stereo-field manipulation.


  • MIDI - I used one .25 Hz Random Max4Live LFO to flip the second Random device on and off by turning the offset all the way down to -100 and then changing the maximum control to 1% for this output. I used another output from this LFO device set to 50-100% range and pointed it at the Arpeggiator’s Rate to allow for a roughly 1/8 or slower rate. I then used another output and assigned it to the Arpeggiator’s Groove control to shift around from straight to different swung grooves.
  • Sound - I chose to try to keep this to one LFO and chose to map the Release control of the piano to 100-50% (so in opposition to all the other modulation so far) and then the Color control to 10-90%.
  • Effect - mapped Feedback to 45-10% and Dry/Wet to 40-60%. I had one modulator left, so I went with something sure to add a little chaos back in:
    -LFO2! - I decided that I was going to let this go on for a while and that to add some variety, I would drag in a second modulator to do a 1Hz random modulation on LFO’s depth control.

This might sound too abrupt, so I decided to step down to 20 bpm instead, and am happy that the echo feedback is smoothing out some roughness. I like this, so I went ahead, made sure my backup software was off, saved it as is, and then hit record and just let it go. I set a stopwatch and tweaked the echo a little bit more (I originally had higher settings than what I wrote above, where the modulation outputs from LFO1 were set to something like Feedback 75-10% and Dry/Wet 10-60%). At about 3 min in, I left it alone and went to take a quick shower, and listened a bit while I was drying off and dressing.

Came back at about the 13 min mark on my stopwatch, half-dressed, and decided it was transition time. I thought: slower, different key center, and fewer notes. Make changes at 30-second intervals. First I changed the Arp rate to be slower, then a slight bit faster, moved to C#, brought the Arp repeats down about 20%, moved to D#, adjusted LFO2’s action on LFO1 to be a bit more consistent and more likely to be up (50-100% vs the standard 0-100%), and let it go again for another few minutes while I got dressed.

At around 24 minutes in, I decided to do some kind of finish: slow down the Arp rate more, increase the chance of feedback in the echo, and then switch off the piano. Crossing fingers… Not too bad. I am not sure about the end (the echo is repeating too), so I reach for my channel rack to slowly turn the gain down to zero. I stop the stopwatch and then drag out the loop in Live to its full 26+ minutes, turn off looping, disable record arm, and disable the MIDI track.

Ready to review, I find a spot near 14 min, draw in a fade during the quick part, close enough to the transition, and just listen. About 3 minutes in from 14, I hear an obvious key change and think: OK, this is getting long, so that helped. At 18 there’s another key change, and I decide to give it a fade-out there for the purposes of the Junto. I decide this piece will be called “The Silent, Understood.” I do an export to WAV and check the file on disk: 4:36. A bit long for my Juntos, but whatever. It could have been 26 minutes!

Future ideas - use a part of the earlier fast section for another piece, extend this piece to include more at the end, and then maybe see if the absolute ending of this session could also be its own piece. Things to do later! It occurs to me that I could have shut the piano off to avoid fades, but I don’t mind; this album, if it ever becomes one, could incorporate the idea of fades as a concept of sorts.

I did some more experimenting with my as-yet-unused Ozone 9 plugin (I’ve been using Elements) and settled on just a classical preset with some reduced lufs-iness.

Art - captured 06 Feb 2010 with a Sony Cybershot in Massapequa, NY, USA


I really like this a lot! The bass is the place!


This is not gonna be anything so interesting and new but here it is:

I rely heavily on generative processes. A couple of years ago, I recorded several pieces using some free generative tools to try them out: NOD-E (Reaktor Ensemble by Antonio Blanca), transition (CodeFN42), Nova3 (tonecarver), and some others, all of them fun and useful. I liked the generative process a lot, but I ended up not liking my compositions enough, so I did not publish them; I have continued to use generative tools heavily, though.

These days I use an iOS sequencer called ZOA which relies on Conway’s Game of Life for note generation. ZOA has 4 playheads and they can each be run into a pattern sequencer to create further variation and randomization. I usually set these playheads to different speeds (e.g., whole, half, quarter, 8th notes) and run the MIDI into different instruments.

For this submission, instead of using a completely random constellation of cells, I looked up oscillators in Game of Life on a nice webpage. These go on forever; they are self-sustaining (hence the title of the track). I forget which one I used for the melody, but it’s either Figure8 or Pseudo-Barberpole.

I ran the slowest playhead into a piano that forms the bass structure. The slightly faster playhead went into a pad, and the fastest ones into a plucky sound. I had never done generative drum parts, so to try that I also ran a separate ZOA session for drums, using blinkers, the simplest oscillator shape in Game of Life. It’s a bit sloppy, but oh well…
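For the curious, Conway’s Game of Life is only a few lines of code, and the blinker’s period-2 behavior is what makes it a steady trigger source. A sketch (the row-reading function is my guess at how a ZOA-style playhead might work, not the app’s actual logic):

```python
from collections import Counter

def life_step(cells):
    """One Game of Life generation; cells is a set of (x, y) live cells."""
    neighbors = Counter((x + dx, y + dy)
                        for (x, y) in cells
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
    # Survive with 2-3 neighbors; birth with exactly 3.
    return {c for c, n in neighbors.items()
            if n == 3 or (n == 2 and c in cells)}

# The "blinker": simplest oscillator, flips between horizontal and vertical.
BLINKER = {(0, 1), (1, 1), (2, 1)}

def row_triggers(cells, row=1):
    """Read one grid row as step-sequencer triggers (hypothetical playhead)."""
    return sorted(x for (x, y) in cells if y == row)
```

Reading a row each generation gives an alternating three-hits / one-hit pattern, which is exactly the kind of simple, self-sustaining rhythm described above.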

I didn’t manipulate the resulting MIDI at all. Usually a generative process yields MIDI that is a bit rigid and random as is, so I think generative alone only gets you halfway to a good composition; you then have to take bits and pieces and sequence them in different ways, create loops, manipulate the MIDI manually, etc. I don’t always find the time and patience for that. I enjoy being carried away by the generative process and later just publishing it close to its original form. It’s going to get boring soon, but I can work on editing generative results more seriously at that point.


I use an iOS app, FAC Envolver, that takes an incoming signal and lets me set a peak threshold to generate notes on a scale and send them over MIDI. The generator in this case is a rhythmic patch on a Sub37, which I use to generate the notes and send them to other instruments. I use another iOS app, Rozeta LFO, to modulate the channel volume and pan of the different instruments to get some movement. By playing with the filter and resonance on the Sub37 I can change the character of the Envolver-generated notes.
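The core of that envelope-to-MIDI idea fits in a few lines: follow the signal’s peaks, and emit a scale note on each upward threshold crossing. A rough sketch of the general concept, not FAC Envolver’s actual algorithm (the decay constant and scale are my own placeholders):

```python
def envelope(signal, decay=0.9):
    """Crude peak follower: jumps up instantly, decays exponentially."""
    env, out = 0.0, []
    for s in signal:
        env = max(abs(s), env * decay)
        out.append(env)
    return out

def notes_from_peaks(signal, threshold, scale=(60, 62, 64, 67, 69)):
    """Emit the next scale note each time the envelope rises through
    the threshold (one note per peak, cycling through the scale)."""
    notes, above, idx = [], False, 0
    for e in envelope(signal):
        if e >= threshold and not above:
            notes.append(scale[idx % len(scale)])
            idx += 1
        above = e >= threshold
    return notes
```

Lowering the threshold catches quieter peaks and so generates denser note streams, which is analogous to tuning the app’s peak threshold against the Sub37’s rhythmic patch.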


Probably not my best example, but it’s a good idea that I’ve used a lot.

Given your user name, here’s a previous Junto where I played a 3/4 bassline on a fretless through an Ampeg emulation!


Scrambled Solfeggios [disquiet0543]

These are a few of my favourite things… Using old online e-synths to create sounds (rather than music), in this case the Ancient Solfeggio Synthesizer I found in 2012. Then putting the result into ambient v.3 software and improvising on it. Lots of reverb, always lots of reverb. Then manipulating the original synth sounds in Audacity, using its simplified version of Paulstretch, copying and cutting bits and modifying them to create drones (especially ones to cover abrupt endings and accidents). Then overlaying the Ambient version, multitracking it, and balancing the whole thing until it sounds like this…


Willy Wonka. Awkward silences. Moffenzeef.


  • LFO control over panning
  • LFO control over track volume
  • LFOs controlling other LFOs they’ve never even met
  • Phase offset of LFOs to create variation