Disquiet Junto Project 0422: Chapter Cascade

wow thanks! I’m having a great time doing stuff in ChucK but wondering if I can go a level deeper. I’ll check out those libraries and your GitHub.


Yeah, right, so ChucK is super cool. I remember when it first came out, and nothing else was being that smart about time in a specialized language. Up until then it was only down in Haskell land that people had thought that carefully about time structure in a language - at least as far as I saw. I did a pretty big project a few years ago doing sequencing/scoring in Clojure, too, and that was great.

And Python is not that. Python is kind of terrible, charmless glue where you can do anything, but it’s not very elegant or nice.

So the other thing I should have said is: when I’ve used Python for these things, it’s because I had the idea for my approach in my head up front, and I just needed some terrible charmless glue to sort of splurge it out. It’s actually a pretty bad composition environment, and I’m using it just for mechanics.


Just something simple and far away sounding this week. It’s just a couple of the Spitfire Audio LABS libraries with some reverb/delay. Keeping it simple.

Contrary to the prompt, though, I added a small C part on top of where the A and B parts combine, because I kept hearing it in my head whenever that section played.


I’ve done most of my programming in FORTRAN (FORTRAN! man I’m old) with a bit of C so the discovery a few years back that there were programming languages for music was pretty mind-blowing. I heard Python was what the cool kids were using so I worked through a book. I like ChucK because of the combo of fast start up and the ability to tinker with the tiny bits like individual samples. I’d eventually like to build standalone apps to do some of the stuff I like to do with ChucK but I need to update my skills.


I also am old and I also have written a lot of FORTRAN.

And depending on which cool kids, the cool kids use Rust or Haskell. But you know, https://xkcd.com/224/ is always true, just the language pairs change.

If you are serious about music work at the sample level, C++ is really where it’s at. I’m just using Python to screw around with prototypes of ideas and make messes. (I’ve been thinking that the technique I used to make the dischoir would be cool as a standalone C++ engine that you could play properly.) You would never write a serious VST or rack plugin in Python.

Also C++ (especially modern C++ like C++14) is a really great language. But it’s harder to get started when you are just banging out a dumb idea.

But nothing will beat a domain-specific language like ChucK for expressivity inside the domain. The fact that you can bind time to operators so naturally (`1::second => now`, or whatever the exact syntax is) is something you won’t find easily elsewhere, let alone in a performant setting.
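A rough Python analogue of that ChucK time model, for anyone who hasn’t seen it: each “shred” is a generator that yields the duration it wants to wait, and a tiny scheduler advances a shared virtual clock. This is purely a sketch with invented names - ChucK’s real engine is sample-accurate and strongly timed, nothing like this.

```python
import heapq

# Rough analogue of ChucK-style time: each "shred" is a generator
# that yields how long it wants to wait; the scheduler keeps a
# virtual clock and resumes whichever shred is due next.
# (A sketch only -- ChucK's actual engine is sample-accurate.)

def run_shreds(shreds, until=4.0):
    events = []                      # (virtual_time, shred_index)
    heap = [(0.0, i) for i in range(len(shreds))]
    heapq.heapify(heap)
    while heap:
        now, i = heapq.heappop(heap)
        if now >= until:
            continue                 # past the end of the "performance"
        try:
            wait = next(shreds[i])   # roughly `dur => now` in ChucK
        except StopIteration:
            continue                 # this shred is done
        events.append((now, i))
        heapq.heappush(heap, (now + wait, i))
    return events

def metronome(period):
    while True:
        yield period

# two "shreds" ticking at different rates on one shared clock
events = run_shreds([metronome(1.0), metronome(1.5)])
print(events)
# -> [(0.0, 0), (0.0, 1), (1.0, 0), (1.5, 1), (2.0, 0), (3.0, 0), (3.0, 1)]
```

The point of the analogy is that time is the synchronization primitive: a shred never polls a clock, it just states how far to advance, which is what makes the ChucK version so expressive.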


I think I might have taken a bit of a personal approach to this project.

To create “short bursts of music” I used the Buchla Easel for one part of the composition, and the Waldorf Quantum for the other.

I created a fairly complex, random, self-running patch on the Buchla Easel, which spits out short bursts of random notes and noise. This was routed through the KOMA BBD Delay, with its delay time controlled by the Make Noise Wogglebug.

On the Quantum I chose a patch with two particle oscillators set to granular, and then I manipulated them in numerous ways during the recording. This is basically live slicing and manipulation of a sample.

I’d recommend following the track through to its end, as there are quite a lot of interesting textures developing.

I have also created a video for this piece:



The playlist is now rolling:

fantastic :slight_smile:


After a lot of meddling about and several tries, this is what I came up with. Even though it might not seem like it, this piece is the result of combining two very different small pieces - but I simply wasn’t able to produce something I liked just by cutting them up and combining them. I had to “interweave” them (if that’s a word…). So, here, I present the result in 60 seconds…


I recorded two differently prepared pianos, one with contact microphones, the other with a Zoom H2n in surround mode. Then I split both recordings into tiny parts, which I arranged in Ableton Live. A dialogue between two differently prepared, played, and recorded upright pianos.
This is my first contribution - I only discovered this exciting project recently.


Played with arpeggiated Cello lines, mangled them through Granulator, played a slower line on a Casio CZ and mixed them together.


Two very different audio generators, after some heated discussion, find some common ground while remaining true to themselves.

More specifically, a custom Reaktor patch that is uncontrolled and inconsistent by design, and NI’s Straylight.


Local Fall
• Key: Bb bebop
• BPM: 60
• Time signature: 4/4
• DAW: Reaper
• Instruments: NI Maschine 2, NI Session Horns
• Plug-ins:
• Compose a piece of music made up of lots of very short bursts. You will have an A line and a B line, which will be tonally and aesthetically distinct from each other. These will alternate back and forth for however long you desire. Consider a length of about a second, or less, for each sliver of sound. And then finally, at the very end, have the A and B lines combine.
• Watched this video: https://www.youtube.com/watch?v=PYgsaeGWNHA
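The alternation the prompt describes is simple enough to sketch in a few lines of Python. This is purely illustrative - strings stand in for the one-second audio slivers, and the function name is invented:

```python
# Sketch of the prompt's structure: alternate short bursts of an A
# line and a B line, then have both sound together at the very end.
# Strings stand in for ~1 s audio slivers; names are illustrative.

def chapter_cascade(a_line, b_line):
    timeline = []
    for a, b in zip(a_line, b_line):
        timeline.append([a])                   # one short burst of A
        timeline.append([b])                   # one short burst of B
    timeline.append([a_line[-1], b_line[-1]])  # finally, A and B combine
    return timeline

print(chapter_cascade(["A0", "A1"], ["B0", "B1"]))
# -> [['A0'], ['B0'], ['A1'], ['B1'], ['A1', 'B1']]
```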


To start with, I went back to a lot of old samples, brutally chopping and panning. Then I added a few changes to make it all a bit less brutalist. Maybe I’m going soft?


I used the sounds of a car accelerating and a car crashing interwoven using shorter and shorter durations. Here are attributions for the sounds:

I did this in ChucK. You can view my source code here.
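The real source is ChucK (linked above), but the “shorter and shorter durations” idea can be sketched in Python. The starting length, decay factor, and floor below are invented for illustration, not taken from the actual piece:

```python
# Interweave two sources, with each sliver a fixed fraction of the
# previous one's length. All the numbers here are made up.

def shrinking_schedule(start_ms=1000.0, decay=0.75, floor_ms=20.0):
    """Return (source, length_ms) pairs until slivers hit the floor."""
    schedule, length, source = [], start_ms, "accelerating"
    while length >= floor_ms:
        schedule.append((source, round(length, 1)))
        # alternate between the two car recordings
        source = "crashing" if source == "accelerating" else "accelerating"
        length *= decay
    return schedule

for src, ms in shrinking_schedule():
    print(f"{src:>12}  {ms} ms")
```

A geometric decay like this gives the accelerating-toward-the-crash feel; a linear shrink would work too, it just arrives more gently.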


Versioned it.

Source code here. Did some post-processing in Ableton Live.


I loved this one. I used it as a jumping-off point and then just dove into concrete with the whole idea. It’s pretty simple instrumentation: guitar, distortion, octaver, laptop bitcrusher, reverb, and a basic freeware drum sound in Logic sent through the aforementioned mangling tools and stompboxes.

I did this under a working title that now seems like sleep-deprived gibberish:

One Theoretical Explanation of the Rhythmic Suspension Attributes the Change to Exogenous Cascades Introduced at an Indeterminable Stage in the Procedure.


Another great piece of work: I really enjoy the glitchy feel and the varied chop sizes. It maintains great texture throughout without feeling like it’s leaning on the glitches as a novelty. The audio snippets are also short enough to feel like the little snippets I imagined (vs. mine, which were mostly too long).


I really appreciate how live (and lively) this sounds (the initial riff reminds me of the Knight Rider theme). In my own submission I kept focusing on short snippets of short lines; taking short (and long) snippets of full-length lines was a great idea. The other sample elements also added great textural variety. Nice work!


Thanks for the feedback :slight_smile: I was quite careful not to let the glitchiness take over, as you mention.
I would say the sound is very deliberately grounded by the piano from the Eilish song and the recurrence of her voice, which to me also served as the emotional hook of the track - so I could let all this glitchy chaos happen around it without losing the backbone.

I listened to yours and I do think you undersell yourself a bit. I really like the synth sounds in it and the feeling that it’s building towards something. But then it’s over, unfortunately, just as it feels like it’s beginning :pensive:

And maybe you didn’t adhere so much to the little snippets part of the constraint - but I didn’t follow the A/B instruction in the prompt (I have to admit I was a little fuzzy in my understanding of the exact steps this week). So we’re both guilty of bending the rules!

1 Like