I’m currently rendering hours of unreleased music in a bid to draw a line under it all and put it out into the world in some format. The reason I’ve not done this before, however, is the issue that is facing me once again:
I can’t handle how static some of it sounds, and the only way I can find around this is to completely rework (or even re-produce) the tracks (which is basically like making a new track… not exactly helpful in my goal of clearing out these stacks of unreleased tracks!)
As a basic idea to help move things along, I was thinking of using external sound sources to control filters. An LFO is too uniform on the rendered tracks as they’re basically broken into 4-bar chunks of bass, beats, mids, atmosphere etc., so a wave-controlled filter is still quite uniform and predictable. An erratic external sound source - amplified record crackle, for instance, or rain, perhaps - might achieve a more enjoyable result. The only issue is: I don’t know where to begin. I’ve done it on my MS-20 Mini, but I want to do it with rendered audio rather than a synth tone, so I’d be working within Ableton using a plugin, I imagine. Is this something I would need to delve into something like Reaktor to achieve? Does what I’m looking to do make sense to anyone (have I actually managed to express myself at all?) and/or has anyone done anything similar ITB?
I don’t - as daft as it sounds, I’ve always been mildly intimidated by it! I do have Reaktor, but have only used ensembles that others have made.
An envelope follower does seem like exactly what I’m looking for, but I’ve only ever encountered them in modular systems. Forgive my ignorance on a lot of this stuff - I’m coming from a largely hip-hop perspective, at least in terms of my own productions, and although I have been using synths for a long time, I’m still very new to a lot of the production techniques that breaking out of the hip-hop mentality can offer.
An envelope follower really is more or less what you describe on the MS-20, actually! It takes an audio signal (record crackle or something) and “smoothes it out” into an envelope that you can then (hopefully) send to modulate something else. I’m not quite sure how the workflow compares between Ableton and Reaktor, but there should be a Reaktor ensemble someone has made if you search for “envelope follower”, and it might do what you want.
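In case it helps demystify things: under the hood an envelope follower is conceptually tiny - rectify the audio, then smooth it with a fast attack and a slow release. Here’s a rough pure-Python sketch (not any plugin’s actual code; the coefficient values are made up for illustration):

```python
def envelope_follower(signal, attack=0.9, release=0.999):
    """Rectify the input, then smooth it with a one-pole filter that
    rises quickly (attack) and falls away slowly (release)."""
    env = 0.0
    out = []
    for x in signal:
        rect = abs(x)  # full-wave rectification
        # pick the faster coefficient when the signal jumps above the envelope
        coeff = attack if rect > env else release
        env = coeff * env + (1.0 - coeff) * rect
        out.append(env)
    return out
```

Feed record crackle through something like this and you get an erratic, wobbly control signal you could map onto a filter cutoff - which is essentially what the MS-20’s external signal processor is doing with patch cables.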
That’s massively helpful - thank you! Using a cassette loop to control the pitch of the MS-20 was the first thing I did with the patch leads after I’d played around with the SQ-1 to make some basic sequences. I didn’t completely know what it would do, but I applied what I thought was logic to the issue and it did more or less what I imagined (/hoped) it would. For somebody who has done very little with patch cables that weren’t connected to guitar pedals, it felt like quite an exciting breakthrough!
Max For Live was the first recommendation my brain jumped to as well.
If you use premade M4L stuff rather than rolling your own it’s the same as using Ableton devices, they’re just in a different folder in the browser.
Besides an envelope follower look into the default LFO device. It has a “jitter” control that can be attenuated and smoothed which adds some nice unpredictability to the usual waves, but when I’m doing the exact kind of thing that I believe you’re describing I’ll take it further by pulling up 3-4 LFOs and have them modulating each other’s frequency and amplitude, with degrees of jitter on each. You can quickly get totally non-repetitive but musically useful modulation signals to map to any automatable parameter in your session, and dial in a range that makes sense.
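To make the “LFOs modulating each other with jitter” idea concrete, here’s a small Python sketch of the concept - not Ableton’s actual LFO device, and all the frequencies, depths and smoothing values are invented numbers; it just shows why stacking cross-modulation and lowpassed noise stops the result from ever repeating:

```python
import math
import random

def jittered_lfos(n_samples, sr=100.0,
                  base_freqs=(0.5, 0.13, 0.07), jitter=0.2):
    """Return one control signal in [0, 1] built from three sine LFOs.
    Each LFO's frequency is nudged by the previous LFO's output plus
    smoothed random jitter, so the chain never settles into a loop."""
    phases = [0.0] * len(base_freqs)
    noise = [0.0] * len(base_freqs)
    out = []
    for _ in range(n_samples):
        prev = 0.0
        for i, f in enumerate(base_freqs):
            # one-pole lowpassed noise, roughly a "smoothed jitter" control
            noise[i] = 0.99 * noise[i] + 0.01 * random.uniform(-1.0, 1.0)
            freq = f * (1.0 + 0.5 * prev + jitter * noise[i])
            phases[i] += 2.0 * math.pi * freq / sr
            prev = math.sin(phases[i])
        # scale the final LFO into 0..1 for mapping to a parameter
        out.append(0.5 * (prev + 1.0))
    return out
```

The point is the chain: each stage’s output bends the next stage’s rate, so small amounts of jitter compound into movement that’s non-repetitive but still smooth enough to be musical.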
Besides looking into LFOs to remedy this, I’d recommend trying to inject small events here and there, in a place and way that makes musical sense.
My process is that I record stuff live into a DAW and then do final adjustments. At this adjustment stage I often add small things that add unpredictability to the track. And I’ve found that these small pinches of spice really make the “basic track” come together and breathe. I often use small melodic elements, chirps or swooshes or drum fills/hits, or even just noise fading in and out, but anything that makes sense in the context of your track is fair game.
EDIT: this might seem like a gimmicky way of adding interest to something that doesn’t stand on its own, but I’ve arrived at a different conclusion. Many artists I adore use similar elements. Besides, this technique can scale to fit anything - it can be a -40 dB noise swoosh from 2:00 to 2:45 or it can be Amen fills every 2 bars.
Low frequency noise through an envelope follower can do the trick. Or, depending on what you have for inputs on the LFO, noise, random voltage, wave folded audio, etc to modulate the LFO. Or the same techniques to modulate a filter or VCA that you pass your audio through. Working in a DAW, you can make a copy of the audio on a different track and move it forward or backwards slightly in time to create subtle phase effects… or slightly detune one of the tracks… use mix automation to fade the modified tracks in and out.
I had absolutely no idea you could do that with Live’s LFOs - that’s fantastic! I’d ruled LFOs out simply because, on the hardware synths I have, you can only vary between standard, simple wave types, resulting in relatively predictable movement. More complex or randomised waveforms are a whole different story, though!
To be honest, it’s not structural changes that I have the main issue with. I’m obsessive about drum programming, for example, which can add a lot of momentum to the structural or compositional movement of a track. The other things you suggested are excellent calls, though, and things I totally love in the work of others whilst perhaps not doing enough in my own.
The key thing I’m looking for is textural movement, really - ways to make a sample (a rendered synth loop, for example) dance around the frequencies to add interest and unpredictability on repeated listens.
That’s a good call - these are all things I’d do naturally with synths and MIDI (detuning the second oscillator, for example) but have been overlooking in audio work, leading me to become bored by them and view them as somewhat staid and “on rails” if that makes sense.
When working with material that is already recorded and I don’t want to rework, to which I just want to add that special “something”, I do what I call “giving it a dub treatment”. Basically you leave everything as it is, add a few effects on sends, play the music and, while it is playing, play with the effects, introducing changes and movement into the whole thing. It works best for me with hardware controllers/effects because it allows fluid value changes in a very fast way, instead of changing things manually with the mouse. Another plus is that you are reacting to changes happening in the music in real time, instead of working on smaller parts of the composition one at a time.
It’s funny you say this as it’s exactly what I’ve been doing with AUM on my iPad, and “playing the mixing desk” (à la the dub scientists) is also the basis of how I played Live live (back when I used to leave the house to play music to rooms full of strangers).
With Aum I’ve been sending a loop to a bus and then sending the bus to a number of channels, all running different effects. The next logical step would be to map them to a hardware controller but I haven’t tried that yet.
With Live in a live scenario I’ve always had dedicated channels split out for bass, drums, mids etc., with 2 send effects (the Reaktor RE-201 and MF-101 simulations from the user banks).
Both of these options are a lot of fun in-the-moment, but I have a mental obstacle regarding “real” tracks (i.e. ones which get nailed to the wall as “finished”) in that I expect something else from them. I know that probably sounds ridiculous…
Then I would probably advise what others have already advised: use LFOs to bring movement, especially when using one LFO to modulate another. Also, if you are using Reaktor, you might use something like Grainstates FX as a send effect, send some tracks to it and set it rather low in the mix. But its effects are rather glitchy (though with a lot of movement), so your mileage may vary.
Lots of outstanding suggestions! I’m noticing that a lot of the discussion so far focuses on control sources (e.g. envelope followers/LFOs) and destinations (e.g. effect bus sends).
Are you set on what kind of effects you want to be using?
One direction that might be especially rewarding for what you are describing is using your stock convolution plugins for more radical cross synthesis instead of reverb. (Diego Stocco has a lot of inspiring material in that vein.) It’s easy to control and implement within a DAW, and it offers endless potential for timbre and rhythm variation.
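For anyone unfamiliar with why convolution does this: every sample of one signal fires off a scaled copy of the other, so when you swap the usual impulse response for a snippet of music, its timbre gets imprinted onto the first signal’s rhythm. A naive, deliberately slow Python sketch of the underlying operation (real plugins use FFT-based convolution, but the result is the same):

```python
def cross_synthesize(a, b):
    """Direct convolution of two short mono signals: each sample of `a`
    triggers a scaled copy of `b`, smearing b's spectrum across a's
    amplitude contour."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        if x == 0.0:
            continue  # silent samples trigger nothing
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out
```

So if `a` is a drum loop and `b` is, say, a bowed cymbal, every hit in the loop excites the cymbal’s spectrum - reverb is just the special case where `b` is a room’s impulse response.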
I’m not, and I have to admit that I don’t know of one for Live (although I’m very new to this area of production, so have done anything but an exhaustive search). Seeing what people have done (and continue to do) with modules such as Braids is no small inspiration for what I’m looking to achieve, definitely.
I’m not remotely set on anything at all, to be completely honest! I have some very concrete principles about texture and tone - the sonic palette, I guess you could say - but I’m very keen to re-examine my practices with a view to maximising the tools which are available. I spent the better part of 20 years making forms of instrumental hip-hop, and although I was pulled away from samples into the sound of analogue synthesizers at a relatively early stage, explorations into the more esoteric production methods such things can afford have been very tentative at best.
I have to admit: Diego Stocco is a new name to me, but I’ve just had a quick search for his work and it looks incredible. I don’t think I’ve ever actually used a convolution reverb at all (I have a dual tank spring reverb unit that I tend to use for all my reverb and have never really considered that I needed anything else). The idea of using a CR for something other than reverb, then, is a fascinating idea that instantly appeals!