Disquiet Junto Project 0424: Fluctuating Rhythm

This week’s project is a collaboration with the artist Jonathon Keats, whom we’ve worked with previously. There are discussions underway to potentially do a concert in the San Francisco Bay Area connecting to some of his art. More details as that idea comes into focus.

Disquiet Junto Project 0424: Fluctuating Rhythm
The Assignment: Employ nature as your conductor.

Step 1: Compose or choose a work of music. (The work can involve any number of instruments or can be purely electronic.)

Step 2: Perform the work outdoors, employing nature as your conductor. (Any natural phenomenon may be enlisted to keep time during your performance. Examples include the sway of a tree in the wind, the flow of a stream, or the circling of a flock of birds before a storm. Consider a phenomenon that fluctuates with environmental conditions, such that your rhythm varies in ways that situate your work in the landscape.)

Background: This is a collaboration with the artist and experimental philosopher Jonathon Keats, who is working on a global initiative to enlist natural systems as official time standards. Read more here:

http://nautil.us/issue/79/catalysts/philosophy-is-a-public-service

Seven More Important Steps When Your Track Is Done:

Step 1: Include “disquiet0424” (no spaces or quotation marks) in the name of your track.

Step 2: If your audio-hosting platform allows for tags, be sure to also include the project tag “disquiet0424” (no spaces or quotation marks). If you’re posting on SoundCloud in particular, this is essential to subsequent location of tracks for the creation of a project playlist.

Step 3: Upload your track. It is helpful but not essential that you use SoundCloud to host your track.

Step 4: Post your track in the following discussion thread at llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0424-fluctuating-rhythm/

Step 5: Annotate your track with a brief explanation of your approach and process.

Step 6: If posting on social media, please consider using the hashtag #disquietjunto so fellow participants are more likely to locate your communication.

Step 7: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

Additional Details: Deadline: This project’s deadline is Monday, February 17, 2020, at 11:59pm (that is, just before midnight) wherever you are. It was posted on Thursday, February 13, 2020.

Length: The length is up to you. Shorter is often better. Let nature take its course.

Title/Tag: When posting your track, please include “disquiet0424” in the title of the track, and where applicable (on SoundCloud, for example) as a tag.

Upload: When participating in this project, post one finished track with the project tag, and be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.

Download: Consider setting your track as downloadable and allowing for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution, allowing for derivatives).

For context, when posting the track online, please be sure to include the following information:

More on this 424th weekly Disquiet Junto project — Fluctuating Rhythm / The Assignment: Employ nature as your conductor — at:

https://disquiet.com/0424/

This is a collaboration with the artist and experimental philosopher Jonathon Keats.

More on the Disquiet Junto at:

https://disquiet.com/junto/

Subscribe to project announcements here:

http://tinyletter.com/disquiet-junto/

Project discussion takes place on llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0424-fluctuating-rhythm/

There’s also a Disquiet Junto Slack. Send your email address to twitter.com/disquiet for Slack inclusion.

The image associated with this project is by Chris Murphy.

6 Likes

The project is now live. Thanks, folks.

You always have such great project ideas, apparently 424 of them. Do you ever run out? How far out do you have ideas planned? Do you get collaborative input on them, or is it all your own? I find this long-running project fascinating.

1 Like

It was too cold to go outside, so I dug into my folder of old videos and found a clip that struck me as soothing and meditative…

I built a small piece of music using 95% field recordings, taken “on the road” in Taiwan, Tokyo, Osaka, and Brooklyn. They have been looped, mangled - and a few reversed - to create this small piece. I added just a nearly inaudible artificial background drone and a couple of subby kicks.
The piece was then synchronized (by hand) to the video of waves. The waves are from Øresund; the video was filmed at a beach in the northern part of Zealand, Denmark.

The video can be seen here: https://youtu.be/hYpm0JhYJUU

Thank you again, for an inspiring challenge!

6 Likes

3 Likes

Thanks for asking. The best way I have come to explain the Junto project development process is that the vast majority of the projects come out of a “reverse-engineering” scenario. Something – a natural phenomenon, a bit of math, a cultural or historical tidbit, a bit of text in a novel, a report in the science pages, a stray observation – is noted, and then I work to figure out how that source concept could become a Junto project: How can we probe the source concept by investigating it through music and sound and, by extension, online collaboration? (When the Junto first got started, back in early 2012, an especially important founding concept was the idea of non-verbal communication. The Junto was a way for us to communicate across cultures musically/sonically, and to pursue ideas musically/sonically.) Then, having selected one of these topics, I break it down into steps.

For example, there’s an upcoming project that resulted from the t-shirt a friend was wearing. It depicted a common mathematical sequence in an unfamiliar way. We’re going to take that unfamiliar way (it’s visual rather than numerical), and imagine it as a graphic score. That way we’ll “hear” the source concept.

Sometimes we repeat past projects, or tweak previous ones. Some are proposed by other people, such as this one, which was proposed by an artist we’ve worked with in the past. I have a long list, and often those are delayed because other ideas present themselves and are acted on immediately.

Also, I spend a lot of time thinking about the sequence of projects, making sure they’re balanced, that we alternate heavy concept ones with straightforward ones, and ones that require wholly original production with something sample-based, and so forth. Again, thanks for asking.

12 Likes

Took my guitar to a nearby water view.

Played a simple chord progression and, since I couldn’t look away from my picking hand for long, I took cues about the breeze from my sweaty face.

9 Likes

I made this in situ at the arboretum across the road from my work. A nice way to spend my lunch break. I had my laptop set up on a table under a melaleuca tree, running a 2018 Max patch that randomises sine tones, noise, and samples triggered/manipulated by incoming acoustic information via the mic input (the laptop pinhole mic in this case), e.g. degrees of amplitude and pitch. It is a fairly unpredictable patch, and it might not be immediately evident that there’s an explicit correlation between the environmental sounds of the site (wind, birds, traffic) and the patch’s output, but there’s a loose interplay, and I really like the instances of space, presence, etc. The Max output is simply coming out of the laptop’s stereo speakers, and this - along with the environment itself - was recorded with my handheld recorder up fairly close to the laptop so I could capture the spatial placement of the sounds being produced in Max (such as the shifting placement of the insect trills and bells).

10 Likes

Listening to the crickets and frogs singing so happily after flooding rains, different from the quiet nights not so long ago during the dry and the fires so close. I hummed with them, and made vocal noises with the insects.
Then I edited and layered the single recording and mixed it.

6 Likes

The playlist is now rolling:

This is written in the Processing programming language. The video portion is generated by Daniel Shiffman’s Flocking code.

I wrote the sound generation code. I use two pulse oscillators. One for the x-axis and one for the y-axis.
The average position of the boids is mapped to the frequency of each respective oscillator. So, if the boids move, in general, up, the pitch of one oscillator goes up. Similarly, if the boids move to the right, the second oscillator increases in pitch. Finally, the standard deviations of the boids’ locations are mapped to the widths of the oscillators. I’m glossing over some details, but that’s the basic idea.

Here is the crux of my sound mapping code:

    # Average boid positions drive each oscillator's frequency.
    x_avg = self.get_x_avg()
    y_avg = self.get_y_avg()
    x_velocity_avg = self.get_x_velocity_avg()
    y_velocity_avg = self.get_y_velocity_avg()

    # Normalize the average x position from [0, w] to a pan of [-1, 1].
    pos = (x_avg / w) * 2 - 1

    # The spread of the flock drives each oscillator's pulse width.
    x_std_dev = std_dev([b.location[0] for b in self.boids])
    y_std_dev = std_dev([b.location[1] for b in self.boids])

    # Syntax: .set(  freq,               width, amp,            add, pos)
    ch_1.set(       x_avg, x_std_dev / (w / 2),   1, x_velocity_avg, pos)
    ch_2.set(-(h - y_avg), y_std_dev / (h / 2),   1, y_velocity_avg, pos)

The full code is here. I would recommend running the code, as the video is only a random three-minute sample of the idea.
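The snippet above depends on the sketch’s own state (`self`, `w`, `h`, `ch_1`, `ch_2`), so it won’t run on its own. As a rough self-contained illustration of the same mapping (canvas dimensions and function names here are my own stand-ins, not the author’s code; the amp and add arguments are omitted for brevity):

```python
import statistics

# Hypothetical canvas size, standing in for the sketch's w and h.
W, H = 640.0, 360.0

def map_boids(xs, ys):
    """Map boid positions to (freq, width, pan) tuples per the post:
    average position -> oscillator frequency, spread (std dev) -> pulse
    width, and normalized average x -> stereo pan in [-1, 1]."""
    x_avg = statistics.mean(xs)
    y_avg = statistics.mean(ys)
    x_std = statistics.pstdev(xs)
    y_std = statistics.pstdev(ys)
    pan = (x_avg / W) * 2 - 1                   # [0, W] -> [-1, 1]
    ch1 = (x_avg, x_std / (W / 2), pan)         # x-axis oscillator
    ch2 = (-(H - y_avg), y_std / (H / 2), pan)  # y-axis oscillator
    return ch1, ch2
```

So a flock spread across the whole canvas yields maximum pulse width (1.0) and a centered pan (0.0), while a tight flock near one edge yields a narrow pulse hard-panned to that side.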

Of course the screen wraps around. (The boids are flying on the surface of a torus.) The way I’ve written it, the audio also “wraps around”. It might make sense to use absolute frequencies instead, and have the app stop after the pitch goes out of hearing range. Maybe I will try that.

I didn’t realize until after finishing that nature was supposed to affect the rhythm. Also, this is a simulation of nature, rather than nature itself. So, I kind of broke the rules. Sorry.

3 Likes

Oops! Strayed quite a way from the brief on this one!

OK … the original idea was to use weather data to ‘conduct’ the tempo of a piece … but I ended up using weather data from https://rp5.ru in TwoTone [app.twotone.io] to generate some ‘music’, which I then layered, morphed, and messed with …

Have a great week! h u :slight_smile:

3 Likes

Non-submission (with acknowledgment to @DetritusTabuIII!), as I’ve applied the technique shared by Brian Crabtree in project 223 and layered all the takes – including the interruption at the end.

4 Likes

I tidied this up into a slightly more formalized statement. Thanks for the prompt to write about the prompts.

https://disquiet.com/2020/02/14/reverse-engineering-musical-composition-prompts/

3 Likes

Hey All, I started by using some nature sounds from Freesound (including one from the marvelous uploader klankbeeld), since I don’t really have the technology to record outside, plus it is freaking cold out there with the wind blowing. I focused on the performance aspect, improvised the parts, and tried to accompany the field-recording tracks and keep it kinda flowing tempo-wise. There is a great freedom in getting away from tempo and the notes of a melody, but I hope it retains something.

Peace, Hugh

4 Likes

After reading “Philosophy Is a Public Service” by Jonathon Keats, I decided I had to do something with trees. So I wrote a program in ChucK to generate tones from a fractal tree. These trees grow exponentially, so my processor could only handle three generations of tree growth.

I’m too tired to explain in plain English how I did this, so I’ll just include the code in-line. It’s quite short and could even be a little shorter if I spent more time on it. Full code is here.

[0] @=> int axiom[];

fun int[] apply_rules(int tree[]) {
    get_next_gen_length(tree) => int next_gen_length;
    int next_gen[next_gen_length];
    0 => int j;
    for (0 => int i; i < tree.cap(); i++) {
        tree[i] => int symbol;
        if (symbol == 0) {
            1 => next_gen[j];
            -2 => next_gen[j + 1];
            0 => next_gen[j + 2];
            -1 => next_gen[j + 3];
            0 => next_gen[j + 4];
            j + 5 => j;
        } else if (symbol == 1) {
            1 => next_gen[j];
            1 => next_gen[j + 1];
            j + 2 => j;
        } else {
            symbol => next_gen[j];
            j + 1 => j;
        }
    }
    
    return next_gen;
}

fun int get_next_gen_length(int tree[]) {
    0 => int len;
    for (0 => int i; i < tree.cap(); i++) {
        tree[i] => int symbol;
        if (symbol == 0) {
            len + 5 => len;
        } else if (symbol == 1) {
            len + 2 => len;
        } else {
            len + 1 => len;
        }
    }
    return len;
}

fun void sound_notes(int tree[]) {
    60 => int note;
    .5 / tree.cap() => float gain;
    .5 => float param;
    for (0 => int i; i < tree.cap(); i++) {
        if (tree[i] == 0) {
            Moog moog => dac;
            note => Std.mtof => moog.freq;
            0.8 => moog.noteOn;
            gain => moog.volume;
            param => moog.filterQ;
            <<< "volume:", moog.volume() >>>;
            <<< "filterQ", moog.filterQ() >>>;
        } else if (tree[i] == 1) {
            Moog moog => dac;
            note + 12 => Std.mtof => moog.freq;
            0.8 => moog.noteOn;
            gain => moog.volume;
            param => moog.filterQ;
            <<< "volume:", moog.volume() >>>;
            <<< "filterQ", moog.filterQ() >>>;
        } else if (tree[i] == -2) {
            note - 1 => note;
            param / 2.0 => param;
        } else {
            note + 1 => note;
            (param + 1.0) / 2.0 => param;
        }
    }

    5::second => now;
}

sound_notes(axiom);

apply_rules(axiom) @=> int tree[];
sound_notes(tree);

apply_rules(tree) @=> tree;
sound_notes(tree);

apply_rules(tree) @=> tree;
sound_notes(tree);
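The exponential growth mentioned above is easy to see by re-implementing the same rewrite rules outside ChucK. A quick Python sketch of the rules (this is my own restatement, not the author’s code):

```python
def apply_rules(tree):
    """One L-system generation, mirroring the ChucK rules above:
    0 -> 1 -2 0 -1 0,   1 -> 1 1,   any other symbol copied through."""
    out = []
    for symbol in tree:
        if symbol == 0:
            out += [1, -2, 0, -1, 0]
        elif symbol == 1:
            out += [1, 1]
        else:
            out.append(symbol)
    return out

tree = [0]  # the axiom
lengths = []
for _ in range(4):
    lengths.append(len(tree))
    tree = apply_rules(tree)
print(lengths)  # [1, 5, 14, 34]
```

Each generation more than doubles the symbol count, so a few generations is all a real-time synthesis loop can comfortably chew through.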

4 Likes

Last week, when the Hold Music email arrived, it occurred to me that there has been a disproportionate number of Junto projects about telephones.

1 Like

With apologies to Franz, Peter, & Sviatoslav; Marc & Jonathon; and Listeners Like You (especially if you have perfect pitch), I proudlyish present “Schubert in a Tube”.

Yesterday, I recorded snowmelt flowing through a plastic culvert under the road by my house.

Like so.

I then high-passed & gated this recording to minimize the general rauschen of the water and emphasize the intermittent higher blips and blops, and fed this stream (lol) to two copies of BeatTrack.kr in SuperCollider (one for each stereo channel). I used their estimated tempi to control the speeds of four copies each of Peter Schreier and Sviatoslav Richter performing Franz Schubert’s “Wasserflut” from the song cycle Winterreise:

Take it away, Wikipedia

“Wasserflut” (“Flood”):
The cold snow thirstily sucks up his tears; when the warm winds blow, the snow and ice will melt, and the brook will carry them through the town to where his sweetheart lives.

BeatTrack has “biases to 100-120 bpm”, but the performance, although very fluid (lol) in tempo, is more like 60-70 bpm, so I scaled it by half. Probably BeatTrack2 would be better but I don’t know how to use it (which features, etc). The tempi are then smoothed out using VarLag with various parameters.
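VarLag in SuperCollider lag-smooths a control signal toward each new value; a crude offline analogue of the halve-then-smooth step is a one-pole filter over the tracker’s tempo estimates. This Python sketch is my own approximation, not the SuperCollider code (the sample values are invented):

```python
def smooth(values, coeff=0.9):
    """One-pole (exponential) smoother: a rough stand-in for the lag
    VarLag applies to the beat tracker's jumpy tempo estimates."""
    out = []
    state = values[0]
    for v in values:
        state = coeff * state + (1 - coeff) * v
        out.append(state)
    return out

# Halve the tracker's 100-120 bpm estimates toward the performance's
# actual 60-70 bpm range, then smooth out the jumps.
raw_bpm = [112, 118, 104, 120, 110]
halved = [bpm / 2 for bpm in raw_bpm]
smoothed = smooth(halved)
```

A higher `coeff` means slower, more fluid tempo changes, at the cost of lagging further behind the performance.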

Finally, the performances are panned (lol) & mixed back together with the original recording of the snowmelt and also (softly) the high-passed/gated version.

The SuperCollider code is on my GitHub although there are probably better ways to do every aspect of that.

4 Likes

This was a really great project for me. I haven’t played my bass in months, not having had much of a reason to. This project came into my inbox, and it seemed like something I could do without really worrying about whether the results were good enough for one person or another. Anyway, I went outside hoping to hear the birds that are usually chirping, but they were silent. Instead I turned from the tree and saw the slowly moving clouds, and synchronized with them. The clouds soon revealed the sun; maybe you can hear that part. Eventually the clouds covered the sun back up, and the birds I was expecting began to join in.

6 Likes

This is true. An obsession.

1 Like