Disquiet Junto Project 0493: AudioCorrect

Each Thursday in the Disquiet Junto group, a new compositional challenge is set before the group’s members, who then have just over four days to upload a track in response to the assignment. Membership in the Junto is open: just join and participate. (A SoundCloud account is helpful but not required.) There’s no pressure to do every project. It’s weekly so that you know it’s there, every Thursday through Monday, when you have the time.

Deadline: This project’s deadline is the end of the day Monday, June 14, 2021, at 11:59pm (that is, just before midnight) wherever you are. It was posted on Thursday, June 10, 2021.

These are the instructions that went out to the group’s email list (at tinyletter.com/disquiet-junto):

Disquiet Junto Project 0493: AudioCorrect

The Assignment: Think about the utility and the useful failures inherent in autocorrect and apply this to your music.

Thanks to Alan Bland (@morgulbee) for proposing this project.

Step 1: Think about how autocorrect works on your phone, how it sometimes does indeed take your haphazard typing and recognize what you had intended, and yet how also (probably more often) it subtly or even drastically alters the meaning of your intended message.

Step 2: Numerous musical equivalents and approximations of autocorrect already exist as algorithms, such as pitch and tempo quantizers and autotune. Consider the ones you have used or want to explore.

Step 3: Create a piece of music by either (A) using/abusing one of the musical autocorrect concepts from Step 2 or (B) imagining your own autocorrect algorithm and creating what the result might sound like.

Seven More Important Steps When Your Track Is Done:

Step 1: Include “disquiet0493” (no spaces or quotation marks) in the name of your tracks.

Step 2: If your audio-hosting platform allows for tags, be sure to also include the project tag “disquiet0493” (no spaces or quotation marks). If you’re posting on SoundCloud in particular, this is essential to subsequent location of tracks for the creation of a project playlist.

Step 3: Upload your tracks. It is helpful but not essential that you use SoundCloud to host your tracks.

Step 4: Post your tracks in the following discussion thread at llllllll.co:

https://llllllll.co/t/disquiet-junto-project-0493-audiocorrect/

Step 5: Annotate your tracks with a brief explanation of your approach and process.

Step 6: If posting on social media, please consider using the hashtag #disquietjunto so fellow participants are more likely to locate your communication.

Step 7: Then listen to and comment on tracks uploaded by your fellow Disquiet Junto participants.

Additional Details:

Deadline: This project’s deadline is the end of the day Monday, June 14, 2021, at 11:59pm (that is, just before midnight) wherever you are. It was posted on Thursday, June 10, 2021.

Length: The length of your finished track is up to you.

Title/Tag: When posting your tracks, please include “disquiet0493” in the title of the tracks, and where applicable (on SoundCloud, for example) as a tag.

Upload: When participating in this project, be sure to include a description of your process in planning, composing, and recording it. This description is an essential element of the communicative process inherent in the Disquiet Junto. Photos, video, and lists of equipment are always appreciated.

Download: It is always best to set your track as downloadable and to allow for attributed remixing (i.e., a Creative Commons license permitting non-commercial sharing with attribution, allowing for derivatives).

For context, when posting the track online, please be sure to include this following information:

More on this 493rd weekly Disquiet Junto project – AudioCorrect (The Assignment: Think about the utility and the useful failures inherent in autocorrect and apply this to your music) – at: https://disquiet.com/0493/

Thanks to Alan Bland for proposing this project.

More on the Disquiet Junto at: https://disquiet.com/junto/

Subscribe to project announcements here: https://tinyletter.com/disquiet-junto/

Project discussion takes place on llllllll.co: https://llllllll.co/t/disquiet-junto-project-0493-audiocorrect/

There’s also a Disquiet Junto Slack. Send your email address to twitter.com/disquiet for Slack inclusion.

4 Likes

The project is now live.

https://soundcloud.com/user-507251108/correct-disquiet0493

3 Likes

Hey All,
I must admit I was a little intimidated when I first met famed demo producer Skip Petersun. I first became aware of him when I watched his YouTube video showing how to plug headphones into a computer. When I met him in the studio in the garage that he shares with his car, he impressed me when he pulled not one but two microphones out of a laundry basket full of over 500 power adapters. He really got me to sound like a hundred dollar bill. Hope you implode.

Thanks Marc, I had a long day and this was just what I needed.

Peace, Hugh

9 Likes

The Disquiet Junto project this week prompted my mind to wander back to a piece of music that I’ve struggled to revisit.

Back in 2016 I was inspired by Morphine to explore an open tuning on my guitar, settling on four strings tuned DDAE.

Thinking back, the tuning inspired a variety of music.

Recently I’d used the song ‘Reflections’ in my soundtrack for The Lost World and my partner had singled out the song for praise.

So I tried to correct the piece by generating MIDI in Ableton Live, quantising and then setting a scale.

The scale still eludes me, as it starts in G minor and then has a couple of changes, including a stray semitone in the melody and then it rises a tone.

Anyway, I wanted to hear it as performed on a Rhodes-style electric piano and tried removing the bum notes (either my playing or the Live-generated ones, but mostly Ableton).

5 Likes

The playlist is now rolling. Thanks, folks.

For this I experimented by taking a few drone pads and some crispy field recordings and adding a correction-algorithm VST, in this case the Auburn Sounds Graillon 2. Subtle, but it does work for me. Might use this method again.

For the name I typed “J” into the Google search bar, and the autocompletion suggested Java. (Maybe because I am a software engineer and quite often google code-related stuff.)

6 Likes


A textural adventure made with O-vox doing pitch correction on the voice. Sounds from wingpinger, diy liquid foam and a pulsar synthesis vst plus others. Pulsar vst is free and was originally posted here on lines.

6 Likes

This one is hard to explain. I have an 11-note first-species counterpoint. The cantus firmus is constant. However, harmonic mistakes in the counterpoint are corrected. The counterpoint has the following structure:

  1. 1 wrong note
  2. 1 correct note
  3. 1 correct note, 1 wrong note
  4. 2 correct notes
  5. 2 correct notes, 1 wrong note
  6. 3 correct notes

and so on until there are 11 correct notes. So, basically, there is a wrong note at the end of each phrase, then the phrase is played again with a corrected note. And the phrases get longer and longer.
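The phrase structure described above can be sketched as a small plan generator (a hypothetical reconstruction of my own; the function name and the (correct, wrong) tuple encoding are not from the original post):

```python
def phrase_plan(total=11):
    """Return (correct, wrong) note counts per phrase, per the structure above.

    Each phrase of length n first appears with its final note wrong,
    then repeats with that note corrected, until n reaches `total`.
    """
    plan = []
    for n in range(1, total + 1):
        plan.append((n - 1, 1))  # n-1 correct notes, then 1 wrong note
        plan.append((n, 0))      # the same phrase, fully corrected
    return plan

# The first six entries match the numbered list above:
# (0, 1), (1, 0), (1, 1), (2, 0), (2, 1), (3, 0)
```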

That’s the first part of autocorrection.

Then I added the audio from the trailer for the movie The Brain That Wouldn’t Die (1962). The film was in the public domain in the United States from the day of its release due to a flawed copyright notice. I added pitch correction to this audio so that it would be in the same mode (Dorian) that the counterpoint piece is in.

5 Likes

Real-time recording created in Pure Data.

5 Likes

I always liked De-Click tools and how they fail. So I started with a field recording (first 30 seconds mostly unchanged) and added the iZotope Declicker (I recommend it!), although the field recording did not have any clicks to remove. The sound was mostly unchanged, until I increased “click widening” to max, changed “sensitivity” over time as a track envelope, and added more declickers. At the end I had five instances of the declicker and strange psychedelic noises. Reaper was freezing all the time, otherwise I would have tried to go up to 10 instances. De-clicking must be CPU-expensive …

So all melodic material is from the field recording (unchanged at the beginning and the end) corrected by declickers, plus some SuperMassive to soften the atmosphere and some beats to sell it as experimental techno, to be danced to slowly and drunkenly, with a fluorescent drink in each hand.

There is also a zero-beat version here.

5 Likes

First I made a bit of a normal, human-sounding riff with drums, just played. Then I quantized it, changed the timbre of the melody, and made it stricter, but I also overdid it: the melody takes some out-of-scale notes and the frequency drifts in the sound design itself, while I manually adjusted the beat to be super-straight. Then I put it together in reverse order: first the beat is straight with the melody off; in part 2 the beat loosens a bit, but the harmony becomes a bit softer; finally the beat is syncopated and loose, while the sound of the melody is more raw.

5 Likes


A repeating 4 note melody (bells) and 3 note bass line is “autocorrected” to various scale modes, selected somewhat at random.
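A minimal sketch of this kind of mode-based “autocorrection”, assuming MIDI note numbers and snapping each note to the nearest pitch of a randomly chosen mode (the mode table, snap rule, and function names are my assumptions, not the poster's actual patch):

```python
import random

# Scale degrees (semitones above the root) for a few common modes.
MODES = {
    "dorian":     [0, 2, 3, 5, 7, 9, 10],
    "phrygian":   [0, 1, 3, 5, 7, 8, 10],
    "lydian":     [0, 2, 4, 6, 7, 9, 11],
    "mixolydian": [0, 2, 4, 5, 7, 9, 10],
    "aeolian":    [0, 2, 3, 5, 7, 8, 10],
}

def snap(note, mode, root=0):
    """Move a MIDI note to the nearest pitch class in the given mode."""
    pcs = {(root + d) % 12 for d in MODES[mode]}
    for off in (0, 1, -1, 2, -2, 3, -3):  # prefer the smallest shift
        if (note + off) % 12 in pcs:
            return note + off
    return note

def autocorrect(melody, root=0):
    """'Autocorrect' a repeating melody to a randomly selected mode."""
    mode = random.choice(list(MODES))
    return mode, [snap(n, mode, root) for n in melody]
```

Re-running `autocorrect` on the same repeating melody gives a different mode each pass, which is one way to reproduce the “selected somewhat at random” effect.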

3 Likes

Hello, again I misinterpreted the prompt, oh well! Here’s a super rough go at it.
This was more of a generative algorithm, or a set of rules that I used to produce a chord progression.
The synth is two plugins layered (Korg M1 and an XILS). Drums are auto-tuned in three different tracks,
which is kind of like auto-complete, I was thinking. Shuffled by automating the delay wet/dry mix.
Really needs a bassline. Completely fumbled on that one.

Instructions:

  • Look up “auto-complete” on Wikipedia
  • Number each paragraph
  • Count out words, take the word that matches the paragraph number (must have length > 4)
    • Count out letters and travel around the circle of fifths.
    • Take that chord
    • Even number of letters then major else minor

circle of fifths, starting from f
f c g d a e b f# c# g# d# a#

paragraph, word, chord selected

1 Autocomplete A#m7

2 speeds EMaj7

3 Algorithms G#min9

4 software F#Maj9

5 writes Em

6 existing F#m

7 automatic C#Maj7

8 completes C#m9
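Read literally, the letter-counting rule above can be sketched like this (a rough reconstruction of my own; the posted chords add sevenths and ninths, and a couple of qualities deviate from the even/odd rule, so treat it as an approximation):

```python
# Circle of fifths starting from F, as listed above.
CIRCLE = ["F", "C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#"]

def word_to_chord(word):
    """Count letters, walk the circle of fifths, then pick major or minor."""
    letters = [c for c in word if c.isalpha()]
    root = CIRCLE[(len(letters) - 1) % len(CIRCLE)]  # F counts as step 1
    quality = "Maj" if len(letters) % 2 == 0 else "m"
    return root + quality
```

For example, “speeds” (6 letters) lands on E, and “software” (8 letters) lands on F#, matching the roots in the table above.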

5 Likes

Squarewave source being quantized then gradually mangled by vocoders, formant effects, a wave folder, a spectral resonator (revealing the uncanny image of a horse) and, eventually, a distortion effect. The wave folder and vocoder were modulated by several envelope followers that were in turn feeding back on themselves.

Not strictly a correction (although I am using quantizers and pitch correction in the mix) but had fun with fed-back envelope followers as part of this experiment.

4 Likes

I had an idea immediately on reading this prompt, and then it took me a few runs to get something satisfactory together. This called to mind an interview with Daniel Menche, talking about Kataract, one of the first ‘proper’ noise albums I got into many years ago. That album is a dense layering of waterfall field recordings, which obviously have a similar sonic profile to white noise. He was discussing the first few minutes, where he fed some waterfalls into a noise-reduction program, which tried to automatically determine what the “real” sound behind all that noise was. The results were quite odd and interesting (You can hear Kataract here: Kataract (extended version) | Daniel Menche )

I’ve recently been working on a track for a ‘blank tape’ compilation, where the only sound source permitted is blank tapes. (Coming soon on Steep Gloss! Here’s an earlier installment in the series: __________ [blank tape compilation vol. 1] | Various Artists | Steep Gloss ) This hiss-heavy track, which is empty of any ‘real’ audio content, seemed a perfect candidate for trying this process out myself. I tried a few different denoising VSTs, but they were all too manual, allowing me to adjust parameters to get a ‘good’ result. I didn’t want a ‘good’ result though: the point was to hear the automatic process failing at the impossible task of denoising a piece of noise.

I eventually found audiodenoise.com, which uses “experimental audio algorithms”, and ran bits of my track through with various different parameter choices, until I had something I liked. It needed a bit more though, so I added another layer of ‘autocorrection’. I took the relevant section of the Menche interview, and ran it through Google Translate many times, traversing the world linguistically. That led to some interestingly mangled results, with a lot of meaning lost in translation, and some strange new meanings introduced. I found a speech synthesis site to convert these texts to speech as best it could (and with a Scottish accent).

The finished track is three iterations of automatic noise reduction and three iterations of automatic translate-mangled texts (plus the ghost of an earlier abandoned attempt layered very low in the mix). Please enjoy this oddity that no human hand would have made.

5 Likes

i used some notes from a previous track (C A E G) and swiped on my phone keyboard a few times - got the words: bach, cash, case, can
then i ran these through a cipher decoder Online calculator: Substitution cipher decoder
and got rsab aseb asel ast so played the notes A B / A E B / A E / A

these were played thru iris 2, supermassive, m ez convolution reverb
and another instance of iris 2 with randomized fifths of the notes
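One way to read the mapping from cipher output to notes is to keep only the letters that name notes (a–g), which reproduces the sequence played above; this is my own reading of the process, not necessarily the poster's exact method:

```python
def letters_to_notes(word):
    """Keep only letters that are note names (a-g), uppercased."""
    return [ch.upper() for ch in word.lower() if ch in "abcdefg"]

# Applied to the decoded words above:
# "rsab" -> A B, "aseb" -> A E B, "asel" -> A E, "ast" -> A
```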

5 Likes

Often, when I’m setting up the mics and getting levels, I’ll do a little a cappella to get things started. It’s not exactly Aretha Franklin – or Cher, for that matter – but would Aretha personally play your birthday party for a modest fee? Or at least some drink tickets? Probably not (call me).

I love to play things with unwieldy sonic attachments to see if I can get it to work somehow. If I recall, this is a pitched and buffered kalimba take where I play flams and sparse notes to get this syncopated chugging to issue forth. The synth line is also pitched about randomly and I didn’t bother much getting it locked in – each playthrough is different and this one is close enough for now. There’s also a glass track, on a much longer timescale, that was doubled and pitched again, but ended up playing a minor role in the mixdown. Finally, the vocal at the end is processed with Ambient from Audio Bulb which has a number of pitch steps running off a quantizer, though I could hardly explain it (I think I did some scrubbing to get the cut up effect heard on this take).

I notice now there’s a bit of feedback pooling on the left side of the mix that seems to be coming from some sewer water and a conspicuous drain downtown – has me wondering where the sidewalk truly ends. Perhaps we all float down here… and there are secrets in the ooze. :clown_face:

Have a lovely week, all.

4 Likes


I decided to “use” the Cubase pitch correct for this project. I tried it out and finally applied it to a “Lofi String” and a Pluck sample from the Cubase content. This gave very unusual sound effects, especially when the correction only changes some parts of the sound.
After starting with such a strange sound, I somehow had the idea (now inexplicable to me) to look for a frog sample on Freesound. This is used as a pluck sound (auto-warped etc.) and can be heard right at the beginning and in another guise. I put together a drum kit with special samples (thunder, door, …) and thus assembled a rather unusual collection of sounds for the whole track. Again, something I never would have created without these weekly inspirations!

The first “corrected” sound appears at 1:02, the second is added at 1:21.

5 Likes