I’m interested in how people take recorded audio and transform it into sequences/arrangements - particularly from a musique concrete perspective.
Like a lot of folks on here, seemingly, I use less conventional tools (Ciat Lonbarde, mics, lots of effects) to generate sounds, often layered. I often end up hitting the record button and jamming it out like a live performance, then a little post EQ and compression, and job done. It’s not often that I take the time to go into, e.g., Live’s Arrangement view and start moving things around, but I feel like challenging my working methods to be more considered, or to be able to recall arrangements and work over them.
ooo, great thread! especially considering I’ve been really deep into this way of working lately. also funny that you posted that link to the UCI course - I’m considering going to grad school, hopefully for that exact sort of thing.
my friend Jonathan and I work together as Outer Heaven and just put out this new track (piece?) that I think exemplifies how we’re working right now:
it’s part of what is turning into an entirely new album, inspired by me digging through my large collection of recordings and samples - field recordings, loops from my modular system, tape loops, bits of noise, samples (the vast majority from videogames), live recordings of our performances, some high-quality recordings we made at Jon’s parents’ house in Arlington while on tour, etc - and realizing that I was basically hoarding all these sounds and never using them. so we decided to self-impose a rule to use only pre-existing samples for this next album, to finally put this huge collection to use.
it’s made me think a lot about composing, given that my only real input is how I effect, layer, and mix these sounds that I don’t otherwise modify. overall I try to let the materials speak for themselves - I like exploring what relationships emerge between two random pieces of sound, things that were recorded years apart or on entirely different mediums (Jon’s paradigm is very digital, mine very analog). the piece above emerged after I found a fantastic recording of Jon from tour that was very spaced out and incidental, and tried to follow that thread.
in a nutshell, I think my compositional method is one that tries to embrace emergent relationships. I try to avoid the trap that academically rooted musique concrete sets - putting a huge amount of consideration into every sound, and thus losing the intuitive and subconscious element of music-making that I find essential to what I do.
if I have any rules, they’d be “keep things subtly moving, don’t overthink things, let the material lay a path and follow it”.
I tend to use many stems from different unconventional instruments, most of them Max patches - runglers and other bespoke synths. I also use a lot of granular sounds derived from recordings. So finding stuff that works together tonally is often difficult, or I need to shift my perception of what’s OK a bit…
I tend to either layer sounds in a DAW, aiming to do so intuitively and often using any dynamic changes within the material to naturally prompt a movement in the music. These will most likely be longer fades, with a more ambient/spatial arrangement. Here’s an example:
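As a side note, that “let the dynamic changes prompt movement” idea can be roughed out in code too. Here’s a minimal sketch (Python with numpy and soundfile; the `stems/drone.wav` path and the 6 dB threshold are just placeholders, not anything from my actual sessions) that tracks an RMS envelope over a stem and prints the moments where it swells or drops, which could serve as fade/arrangement markers in the DAW:

```python
# Minimal sketch: find dynamic swells/drops in a stem to use as arrangement cues.
# Assumes numpy + soundfile are installed; "stems/drone.wav" is a placeholder path.
import numpy as np
import soundfile as sf

audio, sr = sf.read("stems/drone.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)          # fold to mono for analysis

win = int(0.5 * sr)                     # 500 ms analysis window
hop = win // 2
rms = np.array([
    np.sqrt(np.mean(audio[i:i + win] ** 2))
    for i in range(0, len(audio) - win, hop)
])
rms_db = 20 * np.log10(rms + 1e-9)

# A "cue" is any window where the level jumps or falls by more than 6 dB
# relative to the previous window -- crude, but enough to mark fade points.
for idx in range(1, len(rms_db)):
    delta = rms_db[idx] - rms_db[idx - 1]
    if abs(delta) > 6:
        t = idx * hop / sr
        label = "swell" if delta > 0 else "drop"
        print(f"{t:7.2f}s  {label:5s}  ({delta:+.1f} dB)")
```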
Or I’ll create a sound set that works together and find a way of auto-sequencing it, perhaps using randomisation to select samples and randomised/modulating LFOs to define envelopes or effects parameters. Here’s an example of that (sorry for the cross-post of the video to anyone who’s already seen it…):
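If it helps to see that auto-sequencing idea outside of a patch, here’s a bare-bones sketch (Python with numpy and soundfile; the `samples/` folder, the event count, and the LFO ranges are arbitrary placeholders, and the files are assumed to already be at 44.1 kHz) that picks random samples, scatters them across a timeline, and shapes each one with a slow randomised LFO as its amplitude envelope:

```python
# Sketch of randomised sample sequencing: random sample choice, random placement,
# and a slow randomised LFO used as each event's amplitude envelope.
# Assumes numpy + soundfile; "samples/" is a placeholder folder of 44.1 kHz WAVs.
import glob
import random
import numpy as np
import soundfile as sf

SR = 44100
DURATION = 60                                   # one minute of output
out = np.zeros(DURATION * SR)

paths = glob.glob("samples/*.wav")
for _ in range(24):                             # 24 randomly placed events
    sample, sr = sf.read(random.choice(paths))
    if sample.ndim > 1:
        sample = sample.mean(axis=1)
    sample = sample[: len(out) // 2]            # keep events shorter than half the timeline

    start = random.randint(0, len(out) - len(sample) - 1)
    lfo_hz = random.uniform(0.05, 0.4)          # slow, randomised LFO rate
    phase = random.uniform(0, 2 * np.pi)
    t = np.arange(len(sample)) / SR
    env = 0.5 * (1 + np.sin(2 * np.pi * lfo_hz * t + phase))   # 0..1 envelope

    out[start:start + len(sample)] += sample * env * 0.5

out /= max(1.0, np.max(np.abs(out)))            # normalise to avoid clipping
sf.write("auto_sequence.wav", out, SR)
```

The same structure could just as easily drive effect parameters instead of amplitude - swap the envelope for a filter cutoff or send level.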
Occasionally (though it’s something I’d like to do more of) I’ll build some sort of instrument and improvise with it. This is a real-time rungler patch being controlled by a QuNeo:
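For anyone unfamiliar with the rungler idea (as I understand it, after Rob Hordijk’s Benjolin): one oscillator provides a data bit, a second one clocks an 8-bit shift register, and the register’s last few bits form a stepped value that feeds back into both oscillators’ pitch. A bare-bones sketch of just that feedback loop (Python with numpy and soundfile; every constant here is an arbitrary choice, not taken from the actual patch):

```python
# Bare-bones rungler sketch: osc A provides the data bit, osc B clocks an 8-bit
# shift register, and the last three bits form a stepped "rungler" value that
# feeds back into both pitches. All constants are arbitrary choices.
import numpy as np
import soundfile as sf

SR = 44100
N = SR * 10                                     # 10 seconds of output
out = np.zeros(N)

phase_a = phase_b = 0.0
freq_a, freq_b = 110.0, 3.0                     # base rates (Hz), chosen arbitrarily
register = [0] * 8
prev_clock = 0
rungler = 0.0

for n in range(N):
    # Two phasors; their square outputs act as data (A) and clock (B).
    phase_a = (phase_a + (freq_a + rungler * 400.0) / SR) % 1.0
    phase_b = (phase_b + (freq_b + rungler * 30.0) / SR) % 1.0
    data = 1 if phase_a < 0.5 else 0
    clock = 1 if phase_b < 0.5 else 0

    # On the clock's rising edge, shift the register and read in the data bit.
    if clock and not prev_clock:
        register = [data] + register[:-1]
        # Last three bits through a crude DAC -> stepped value in 0..1.
        rungler = (register[-1] * 4 + register[-2] * 2 + register[-3]) / 7.0
    prev_clock = clock

    out[n] = 2.0 * phase_a - 1.0                # listen to osc A as a saw wave

sf.write("rungler_sketch.wav", out * 0.3, SR)
```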
Generally speaking, finding ways of automating the composition process - to create a range of sound materials, or to suggest ways they might go together - has been a real discovery for me. I find I finish a lot more tracks that way.
Reading about compositional approaches is also very valuable. For example, after hearing the Meng Qi Sound + Process episode and watching the performance below, I really want to make something that’s more of an electro-acoustic performance: