Disquiet Junto Project 0424: Fluctuating Rhythm

I made this in situ at the arboretum across the road from my work. A nice way to spend my lunch break. I had my laptop set up on a table under a melaleuca tree, running a 2018 Max patch that randomises sine tones, noise and samples, triggered and manipulated by incoming acoustic information via the mic input (the laptop’s pinhole mic in this case), e.g. degrees of amplitude and pitch. It is a fairly unpredictable patch, and it might not be immediately evident that there’s an explicit correlation between the environmental sounds at the site (wind, birds, traffic) and the patch’s output, but there’s a loose interplay, and I really like the instances of space, presence, etc. The Max output comes straight out of the laptop’s stereo speakers, and this, along with the environment itself, was recorded with my handheld recorder fairly close to the laptop so I could capture the spatial placement of the sounds being produced in Max (such as the shifting placement of the insect trills and bells).
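
For the technically curious, the input side of the patch is roughly this kind of thing. Below is a loose Python sketch of the idea, not the actual Max patch; the names and thresholds are assumptions:

    # Loose sketch of the patch's input side (illustrative only):
    # follow the mic's amplitude envelope, then use the level to
    # decide how likely a sine/noise/sample event is to fire.
    def envelope_follower(samples, decay=0.995):
        env, levels = 0.0, []
        for s in samples:
            env = max(abs(s), env * decay)  # fast attack, slow decay
            levels.append(env)
        return levels

    def trigger_probability(level, floor=0.02, ceiling=0.5):
        # louder environment -> more likely to trigger an event
        return min(1.0, max(0.0, (level - floor) / (ceiling - floor)))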

8 Likes

Listening to the crickets and frogs singing so happily after the flooding rains, so different from the quiet nights not so long ago, during the dry, with the fires so close. I hum with them, and make vocal noises with the insects.
I edited and layered the single recording and mixed it.

5 Likes

The playlist is now rolling:

This is written in the Processing programming language. The video portion is generated by Daniel Shiffman’s Flocking code.

I wrote the sound-generation code. I use two pulse oscillators, one for the x-axis and one for the y-axis.
The average position of the boids is mapped to the frequency of each respective oscillator. So, if the boids move, in general, up, the pitch of one oscillator goes up. Similarly, if the boids move to the right, the second oscillator increases in pitch. Finally, the standard deviations of the boids’ locations are mapped to the widths of the oscillators. I’m glossing over some details, but that’s the basic idea.

Here is the crux of my sound mapping code:

    # Average boid positions and velocities drive the two channels.
    x_avg = self.get_x_avg()
    y_avg = self.get_y_avg()
    x_velocity_avg = self.get_x_velocity_avg()
    y_velocity_avg = self.get_y_velocity_avg()
    pos = (x_avg / w) * 2 - 1  # mean x mapped to a stereo position in [-1, 1]

    # The flock's spread sets each oscillator's pulse width.
    x_std_dev = std_dev([b.location[0] for b in self.boids])
    y_std_dev = std_dev([b.location[1] for b in self.boids])

    # Syntax: .set(  freq,               width, amp,            add, pos)
    ch_1.set(       x_avg, x_std_dev / (w / 2),   1, x_velocity_avg, pos)
    ch_2.set(-(h - y_avg), y_std_dev / (h / 2),   1, y_velocity_avg, pos)

The full code is here. I would recommend running the code, as the video is only a random three-minute sample of the idea.

Of course the screen wraps around. (The boids are flying on the surface of a torus.) The way I’ve written it, the audio also “wraps around”. It might make sense to use absolute frequencies instead, and have the app stop after the pitch goes out of hearing range. Maybe I will try that.
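
For illustration, the two options look something like this (a Python sketch with assumed names and ranges, not the actual app code):

    # Current behaviour: position wraps on the torus, so frequency
    # jumps from the top of the range back to the bottom at the edge.
    def wrapped_freq(x, w, f_min=110.0, f_max=880.0):
        return f_min + ((x % w) / w) * (f_max - f_min)

    # Alternative: an absolute frequency that keeps climbing; the app
    # could stop once this leaves the audible range (~20 kHz).
    def absolute_freq(x, f_min=110.0, hz_per_pixel=0.5):
        return f_min + x * hz_per_pixel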

I didn’t realize until after finishing that nature was supposed to affect the rhythm. Also, this is a simulation of nature, rather than nature itself. So, I kind of broke the rules. Sorry.

2 Likes

Oops! Strayed quite a way from the brief on this one!

OK … the original idea was to use weather data to ‘conduct’ the tempo of a piece … but I ended up using weather data from https://rp5.ru in TwoTone [app.twotone.io] to generate some ‘music’, which I then layered, morphed and messed with …
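
For what it’s worth, the abandoned ‘conducting’ idea would have looked something like this hypothetical Python sketch (the wind-speed input and BPM range are assumptions):

    # Hypothetical sketch of the original idea: rescale a weather
    # series (say, hourly wind speed) into a tempo curve.
    def weather_to_tempo(wind_speeds, bpm_min=60.0, bpm_max=140.0):
        lo, hi = min(wind_speeds), max(wind_speeds)
        span = (hi - lo) or 1.0  # avoid dividing by zero on flat data
        return [bpm_min + (w - lo) / span * (bpm_max - bpm_min)
                for w in wind_speeds]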

Have a great week! h u :slight_smile:

3 Likes

Non-submission (with acknowledgment to @DetritusTabuIII!), as I’ve applied the technique shared by Brian Crabtree in project 223 and layered all the takes – including the interruption at the end.

4 Likes

I tidied this up into a slightly more formalized statement. Thanks for the prompt to write about the prompts.

https://disquiet.com/2020/02/14/reverse-engineering-musical-composition-prompts/

2 Likes

Hey all, I started by using some nature sounds from Freesound (including one from the marvelous uploader klankbeeld), since I don’t really have the technology to record outside, plus it is freaking cold out there with the wind blowing. I focused on the performance aspect, improvised the parts, and tried to accompany the field-recording tracks and keep it kinda flowing tempo-wise. There is a great freedom in getting away from the tempo and notes of a melody, but I hope it retains something.

Peace, Hugh

3 Likes

After reading “Philosophy Is a Public Service” by Jonathon Keats, I decided I had to do something with trees. So I wrote a program in ChucK to generate tones from a fractal tree. These trees grow exponentially, so my processor could only handle three generations of tree growth.

I’m too tired to explain in plain English how I did this, so I’ll just include the code in-line. It’s quite short and could even be a little shorter if I spent more time on it. Full code is here.

// L-system axiom (generation 0); symbols: 0 = leaf, 1 = branch,
// -2 and -1 stand in for the "[" and "]" brackets
[0] @=> int axiom[];

// Rewrite rules: 0 -> 1 [ 0 ] 0, 1 -> 1 1
fun int[] apply_rules(int tree[]) {
    get_next_gen_length(tree) => int next_gen_length;
    int next_gen[next_gen_length];
    0 => int j;
    for (0 => int i; i < tree.cap(); i++) {
        tree[i] => int symbol;
        if (symbol == 0) {
            1 => next_gen[j];
            -2 => next_gen[j + 1];
            0 => next_gen[j + 2];
            -1 => next_gen[j + 3];
            0 => next_gen[j + 4];
            j + 5 => j;
        } else if (symbol == 1) {
            1 => next_gen[j];
            1 => next_gen[j + 1];
            j + 2 => j;
        } else {
            symbol => next_gen[j];
            j + 1 => j;
        }
    }
    
    return next_gen;
}

fun int get_next_gen_length(int tree[]) {
    // Sum the expansion length of each symbol under the rewrite rules.
    0 => int len;
    for (0 => int i; i < tree.cap(); i++) {
        tree[i] => int symbol;
        if (symbol == 0) {
            len + 5 => len;
        } else if (symbol == 1) {
            len + 2 => len;
        } else {
            len + 1 => len;
        }
    }
    return len;
}
    
// Walk the tree and sound a chord: each 0/1 symbol adds a Moog voice,
// while the bracket symbols shift pitch and the filter parameter.
fun void sound_notes(int tree[]) {
    60 => int note;
    .5 / tree.cap() => float gain;  // split the gain across all voices
    .5 => float param;
    for (0 => int i; i < tree.cap(); i++) {
        if (tree[i] == 0) {
            // leaf: play the current note
            Moog moog => dac;
            note => Std.mtof => moog.freq;
            0.8 => moog.noteOn;
            gain => moog.volume;
            param => moog.filterQ;
            <<< "volume:", moog.volume() >>>;
            <<< "filterQ", moog.filterQ() >>>;
        } else if (tree[i] == 1) {
            // branch: play an octave above the current note
            Moog moog => dac;
            note + 12 => Std.mtof => moog.freq;
            0.8 => moog.noteOn;
            gain => moog.volume;
            param => moog.filterQ;
            <<< "volume:", moog.volume() >>>;
            <<< "filterQ", moog.filterQ() >>>;
        } else if (tree[i] == -2) {
            // "[": step the pitch down and halve the filter parameter
            note - 1 => note;
            param / 2.0 => param;
        } else {
            // "]": step the pitch up and pull the filter parameter toward 1
            note + 1 => note;
            (param + 1.0) / 2.0 => param;
        }
    }

    // let the generation ring for five seconds
    5::second => now;
}

// play the axiom, then three rewritten generations
sound_notes(axiom);

apply_rules(axiom) @=> int tree[];
sound_notes(tree);

apply_rules(tree) @=> tree;
sound_notes(tree);

apply_rules(tree) @=> tree;
sound_notes(tree);

4 Likes

Last week, when the Hold Music email arrived, it occurred to me that there have been a disproportionate number of Junto projects about telephones.

1 Like

With apologies to Franz, Peter, & Sviatoslav; Marc & Jonathon; and Listeners Like You (especially if you have perfect pitch), I proudlyish present “Schubert in a Tube”.

Yesterday, I recorded snowmelt flowing through a plastic culvert under the road by my house.

Like so.

I then high-passed & gated this recording to minimize the general rauschen of the water and emphasize the intermittent higher blips and blops, and fed this stream (lol) to two copies of BeatTrack.kr in SuperCollider (one for each stereo channel). I used their estimated tempi to control the speeds of four copies each of Peter Schreier and Sviatoslav Richter performing Franz Schubert’s “Wasserflut” from the song cycle Winterreise:

Take it away, Wikipedia

“Wasserflut” (“Flood”):
The cold snow thirstily sucks up his tears; when the warm winds blow, the snow and ice will melt, and the brook will carry them through the town to where his sweetheart lives.

BeatTrack has “biases to 100-120 bpm”, but the performance, although very fluid (lol) in tempo, is more like 60-70 bpm, so I scaled it by half. Probably BeatTrack2 would be better, but I don’t know how to use it (which features, etc.). The tempi are then smoothed out using VarLag with various parameters.
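
In rough Python terms, the tempo post-processing amounts to the following (an analogue for illustration only, not the actual SuperCollider code; a one-pole lag stands in for VarLag):

    # Halve the tracked tempo (BeatTrack favours 100-120 bpm, the
    # performance sits nearer 60-70), then smooth the estimates.
    def postprocess_tempi(tempi, coeff=0.9):
        smoothed = []
        current = tempi[0] / 2.0
        for t in tempi:
            target = t / 2.0
            current = coeff * current + (1.0 - coeff) * target  # one-pole lag
            smoothed.append(current)
        return smoothed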

Finally, the performances are panned (lol) & mixed back together with the original recording of the snowmelt and also (softly) the high-passed/gated version.

The SuperCollider code is on my GitHub although there are probably better ways to do every aspect of that.

3 Likes

This was a really great project for me. I haven’t played my bass in months, not having had much of a reason to. This project came into my inbox, and it seemed like something I could do without really worrying about whether the results were good enough for one person or another. Anyway, I went outside hoping to hear the birds that are usually chirping, but they were silent. Instead I turned from the tree and saw the slowly moving clouds and synchronized with them. The clouds soon revealed the sun; maybe you can hear that part. Eventually the clouds covered the sun back up, and the birds I was expecting began to join in.

4 Likes

This is true. An obsession.

1 Like

Judging by your Twitter posts, you spend some time on conference calls.

When I interviewed you last, I’d been thinking about how many Junto projects capture a variety of everyday activities and give insight into the diversity of the community.

Those projects reveal participants’ environments and it can be surprising what we learn about each other.

The other thing is I sometimes look back on my recordings and get a sense of how different times of the year will lead to different moods and instrumentation.

1 Like

For this track I went down to the river, sampled the water sounds into the SpaceCraft Granular app and played with it for a bit. I left the river sounds recorded with my iPhone in the background, and I occasionally mixed them in and out of the track as I saw fit.

I also made a video for it on my YouTube channel that you can watch here: https://www.youtube.com/watch?v=rxVQEZ3mbe0

2 Likes

@BennDeMole I like how you’ve added your voice and it’s a really atmospheric piece.

@tristan_louth_robins The environment and stereo field work well together.

@DetritusTabuIII Love the warbling but I’m sober at the moment. Can imagine this would be more visceral if I were drunk!

@tatecarson It’s great hearing the detail of the strings, and the description of the birds going silent reminds me that something’s up when that happens.

@WhiteNoise Beaut results and it’s a good effect the way the sounds from the environment sit within it. Enjoyed watching it unfold too.

2 Likes

I set up a series of sine waves running at frequencies suggested by the Fibonacci sequence. Some are running at audio frequency and some are acting as LFO modulation sources. It ended up sounding more like nature in a metallic windstorm… but I’m reasonably happy with the outcome. I have a short video that inspired this, which I’ll add later.
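
Something along these lines, as a Python sketch (the values are assumptions, not the actual patch settings):

    import math

    # Fibonacci numbers read as Hz, split into audio-rate sines and
    # sub-audio LFOs that modulate them (illustrative values only).
    def fibonacci(n):
        a, b = 1, 1
        seq = []
        for _ in range(n):
            seq.append(a)
            a, b = b, a + b
        return seq

    freqs = fibonacci(16)                    # 1, 1, 2, 3, 5, 8, 13, ...
    lfos = [f for f in freqs if f < 20]      # below ~20 Hz: modulation sources
    tones = [f for f in freqs if f >= 20]    # audible sine partials

    def sample(t):
        # each LFO gently modulates the level of the summed sines
        mod = 1.0 + 0.3 * sum(math.sin(2 * math.pi * f * t) for f in lfos) / len(lfos)
        return mod * sum(math.sin(2 * math.pi * f * t) for f in tones) / len(tones)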

2 Likes

‘Pipe Sentinels’ consists of recordings that my feedback-based installation of the same name captured in the gardens of a museum in south London. Placed in a small bamboo field, the installation recorded its environment, processed the audio according to shifting environmental data and played it back through speakers into the same surroundings, before it was recorded again.
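
In a toy Python sketch, one pass of the loop might look like this (the installation’s actual processing and sensor inputs aren’t specified here; this is only an illustration):

    # One step of the feedback loop: process a recorded block according
    # to an environmental reading, then play it back to be re-recorded.
    def feedback_step(block, env):
        # env is a normalised environmental reading in [0, 1]
        wet = 0.2 + 0.6 * env                     # environment shifts the wet/dry mix
        return [(1.0 - wet) * s + wet * (s ** 3)  # toy waveshaping as the "processing"
                for s in block]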

The recordings were edited, layered and turned into this short composition.

2 Likes

I took my trumpet outside to our shed whilst Storm Dennis was starting here in the North of England. I placed my Tascam DR-05 on the floor to record both the external sounds of wind and rain and my trumpet. I had no preconceived ideas of melody and tried to respond to the weather. I wanted it to be as ‘in the moment’ as possible and do it in one take, so excuse the noodling! I left the wind-noise distortion in towards the end (about 1 min 50 secs), so be prepared!

2 Likes

Went a slightly different way than instructed… I took a recording of water sloshing irregularly from a drainpipe. In Ableton I then extracted the groove from it and applied it to a drum track full of artificial noises (metal scraping and banging). I played two piano melodies and applied convolution reverb to one, with the water slosh as the reverb space. Then I chopped up the water recording and had it play back via random MIDI notes, but in the same groove as the original. Finally, I added some LFOs that took their timing from the peaks of the water waveform and applied them to the reverb parameters of the other piano section.
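
The groove-extraction step, sketched in Python with numpy (an illustration of the idea, not Ableton’s algorithm; the threshold and gap values are assumptions):

    import numpy as np

    # Find onset times in the irregular water recording and keep them
    # as a timing grid (a "groove") to impose on other material.
    def extract_groove(audio, sr, threshold=0.2, min_gap=0.05):
        env = np.abs(audio)              # crude amplitude envelope
        onsets, last = [], -min_gap
        for i, v in enumerate(env):
            t = i / sr
            if v > threshold and (t - last) >= min_gap:
                onsets.append(t)         # one slosh = one groove event
                last = t
        return onsets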

2 Likes