Thanks, got it, cheers
I rarely work this way, but for this project I simply let my whims guide me, working with no particular plan and just listening to the sound as it evolved through the various processing I chose.
Processing included various pitch shifting, grain delays, reverbs, EQ, feedback, dynamic processing, and hand placement of multiple copies of clips. The only sounds used were the three given snippets; the only software was Ableton Live and the M4L convolution reverb.
*EDIT: Replaced with a shorter arrangement. Still long enough, says you!
Remix of three 20-second pieces from net labels. Details of source material below.
My goal here was to make something drastically new out of the source material without losing the overall vibe I get from the source pieces, which sound dark and difficult to me.
Rough idea before embarking on the track was to make a playable sampler patch from the source audio, and use sections of the audio for rhythm.
Using VSTs: New Sonic Arts Granite, Unfiltered Audio Indent, Pentode Audio TRW-1, Sonic Charge Permut8 and Echobode, Valhalla SpaceModulator, Brainworx bx_limiter.
Bassline - short section of audio in Ableton Simpler, pitched down, low-pass filter, chorus, saturation, distortion.
Rhythm - sliced Zraerza in Ableton Simpler and tapped out beats. Also made some 1-bar loops from each audio piece and let Ableton’s Follow Actions sequence them.
Master Bus: Echobode, Space Modulator, and bx_limiter.
Excuse the length. Some day, Ableton will display the length (in minutes and seconds) of an arrangement - and then I’ll be able to preempt my penchant for long tracks before bounce-down. Hopefully the track will engage you for the full 4:54…
Hope you enjoy!
But I don’t think your 4:54 was too long at all.
What is this magic screenshot you’re showing me?!
Cheers man, Eanna
I wasn’t sure about doing this Junto, but here you go. This didn’t really seem to come together until I added a beat to the altered clips. I did looping and verb, as well as my own plugin Swoosh, on the three sources. Finished it off with audio-to-MIDI on the “The Station” clip.
Very cool shifting timbres
Thank you for taking the time to clearly document your process and share your thoughts. I appreciate your candid words and enjoyed reading them.
I have done this - too long, as usual…
Factory 3 Mix
- An interesting collection of music for this piece. Could have used a heads-up on Zraerza - that feedback killed me the first time I played it ;)
- The sounds led me to an industrial feel. It wasn’t until I posted it that I noticed the picture Marc used for the project.
- My piece had six tracks. I used iZotope Iris 2 on each track to play all the bits of the three 20-second samples.
- 1 - The opening thump and click, played throughout, came from “The Station and the Underclass.”
- 2 - The clarinet sound playing a very slow progression was from the opening feedback in “Zraerza” - played down four octaves in Iris 2.
- 3 & 4 - A part of “Cloud Scissors” played on both tracks, one panned hard left and the other hard right. In Iris 2, both tracks were set to play forward and backward.
- 5 & 6 - Another part of “The Station and the Underclass,” automated to come in and out with each note in Track 2. Track 6 was played a whole tone above Track 5.
My Ohmage to XX Committee. Gone but not forgotten.
I’m still exploring the world of stretching sounds so what I did was pretty minimal which I hoped would reveal some interesting subtleties hidden in the morass.
I downloaded the requisite tracks, isolated the 20-second clips, and stretched them all out to the same 5:40 length (about 17x). The two loud and continuous tracks were then essentially cross-faded while the quieter and sparser final track was just left as it was.
Yeah, I’d intended the rail photo (I think that shot is from Switzerland) to serve as a symbol of inter-European movement, but a number of the resulting tracks have used a kind of slow-tempo industrial pulse.
After creating source items from the first 20 seconds of the originals, I loaded them into the Reaktor ensemble Grainstates SP and tweaked settings to arrive at three bed tracks, which were bolstered with Kombinat Dva and SDRR. Then I chopped and looped bits from two of the source items for a brief intro. Next, I loaded the source items into Glitchmachines’ Cataract to create a rhythmic part, and treated the result with ValhallaRoom and Trash 2. Finally, I chopped another bit from one of the source items, gave it plenty of ValhallaRoom reverb, and placed it at the end of the piece. To help glue everything together, the master track received some compression from ReaXcomp, some excitement from Trash 2, and a touch of ValhallaRoom. Final normalization and compression was done in Audacity.
I’m a bit late this week, but here is my attempt:
Sampled a few snippets, tweaked them in SoundForge, layered and further tweaked in Acid Pro. Fairly minimal processing - mostly just pitch related, plus some reverb, and creatively automated EQ.
I managed to squeeze in a Disquiet project despite an incredibly busy new school year. I was hoping to reinvent Eno’s semi-stochastic Music for Airports album, but I ended up going in a slightly different direction (and perhaps falling back into old habits!)
Pitch-shifting and time-stretching were the name of the game here. I started by pitching the source samples down as low as they could go, and lovely tones emerged. Feedback squalls became thoughtful long tones. Then I pitch-shifted and time-stretched the excerpts to match the subsections of the golden mean. The length of time to fill determined how much I time-stretched each sample: long stretches of time to fill meant severely time-stretched, down-pitched samples.
The feedback squall reappears to punctuate transitions between sections.
The other central component of this piece is a slow increase in gain to the golden mean point. I put a limiter on the track to keep the level even, but the gradually increasing gain creates distortion and changes the timbre of familiar samples.
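As a sketch of one way to get "subsections of the golden mean" - dividing a duration at successive golden-ratio points - here is a small JavaScript function. The recursive subdivision scheme is my guess at what was done; only phi itself is standard.

```javascript
// Sketch: divide a total duration at successive golden-mean points.
// The recursive scheme is an assumption about the poster's method.
const PHI = (1 + Math.sqrt(5)) / 2; // ≈ 1.618

function goldenSections(totalSeconds, depth) {
  if (depth === 0) return [totalSeconds];
  const major = totalSeconds / PHI;   // the larger section
  const minor = totalSeconds - major; // the remainder, subdivided again
  return [major, ...goldenSections(minor, depth - 1)];
}

// e.g. goldenSections(340, 3) -> [~210.1, ~80.3, ~30.7, ~18.9]
```

Each sample would then be stretched to fill its section, so later (shorter) sections get less stretching than the long opening one.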
The title comes from a piece of advice my kid gave me after I led her choir rehearsal tonight.
I trimmed the source audio down to the first 20 seconds of each, then loaded each 20-second track into the AUM mixer on three separate channels. Since they were all noise tracks, I decided to remix noise with noise, so I ran the audio out into my old Monotron Delay. Essentially, the track consisted of an improvised mix of the three tracks plus the Monotron unit, with a little reverb from my analog mixer.
Thanks. Junto has been quite a surprising prompt for me.
My disquiet0244 submission was created using code. I used SoX for audio file manipulation, Node.js for wrapping SoX commands, and Essentia for audio analysis (i.e., beat detection). My process was as follows:
- trim tracks to first 20 sec
- strip beginning silence
- Essentia extraction (beat detection)
- slice tracks at detected beats
- normalize slices
- combine slices from all tracks into a single sequence
The result avoids consecutive beats from the same track until the end. The sequence seeks to maintain the relative distribution of each track as the sequence progresses.
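One way to get that interleaving - avoiding same-track neighbours while keeping each track's relative share - is a greedy pick of whichever track has the most slices remaining, skipping the previously used track whenever an alternative exists. This is my reconstruction of the described behaviour, not the submitter's actual code.

```javascript
// Greedy interleave: repeatedly take a slice from the track with the most
// slices remaining, avoiding the previously used track when possible.
// A reconstruction of the described sequencing, not the original code.
function interleave(slicesByTrack) {
  const remaining = slicesByTrack.map(s => [...s]);
  const out = [];
  let prev = -1;
  while (remaining.some(s => s.length > 0)) {
    // Candidate tracks: non-empty, and not the previous pick if avoidable.
    let candidates = remaining
      .map((_, i) => i)
      .filter(i => remaining[i].length > 0 && i !== prev);
    if (candidates.length === 0) {
      // Only the previous track has slices left (repeats only at the end).
      candidates = remaining.map((_, i) => i).filter(i => remaining[i].length > 0);
    }
    // Picking the largest remaining pool keeps the sequence roughly
    // proportional to each track's slice count as it progresses.
    const pick = candidates.reduce((a, b) =>
      remaining[b].length > remaining[a].length ? b : a);
    out.push(remaining[pick].shift());
    prev = pick;
  }
  return out;
}

// e.g. interleave([["a1", "a2", "a3"], ["b1"], ["c1", "c2"]])
```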