How do you approach making music with multiple parts or voices and make it sound coherent and spacious and balanced?
I find when I try to write or improvise music with more than one part, it very quickly becomes a pile-up. Each part on its own takes up a lot of space, and so combined it becomes impossible to pick either one out and hear it. At the same time, when I’m putting a part together, it feels “wrong” somehow to just have a couple notes plunking every other bar.
My musical background started with piano and keyboards, and I never really played in bands or ensembles, so I think my instincts are to fill all the space, since that's what a solo piano usually has to do. It doesn't seem to translate well to putting together multi-part music. With the synth voices I come up with, I also feel like each part fills up all the space and doesn't leave much room for other sounds.
I realize to some degree that some of the answer is probably just ‘have more restraint’, but I’m hoping that some people here can share their thoughts or approaches on this.
Maximalism and oversaturation are valid aesthetics! I enjoy them in fact. But I want to be able to also pull in a wider variety of sounds that play well together and not have them run over each other.
Compositionally speaking, this is exactly the problem counterpoint tries to solve (almost literally… one could make the argument that early counterpoint techniques were used in sacred choral music to fully maximize the acoustics of the space they inhabited).
edit: i misunderstood xenus_dad’s original question, thinking it was about vocal music.
i think you’re right it’s not a mixing problem, that’s a later question. mainly compositional, making space in the spectrum by giving your voices pitches that allow for air in between. but also how it’s recorded greatly affects how it sounds. human voices have a natural tendency to go well together. if a handful of people sing together in a room, i think it will most likely produce a likeable, natural sound.
it’s much easier to capture that natural blend of voices if they’re actually recorded as one instrument, commonly with some type of stereo mic setup. then the acoustics of the room will also be very significant.
also, what type of vocal music are you talking about? “multi-voice music” could be anything : ) could you give us some example or explain it a bit more?
classical/church choir music covers a wide spectral range and often “fills up all the space” as you say, blending the voices together with help of natural reverb, making it difficult to pick out individual voices. i guess this is not the type of multi-voice music you’re talking about?
some vocal duos have the luck of having vocal qualities that are just in perfect timbral harmony with each other – like abba or simon & garfunkel for instance. i know the latter sometimes sat next to each other recording into the same mic.
that might only be for super professional singers with excellent mic technique, but it might be worth trying.
perhaps you’re only recording your own voice multiple times. then you can try positioning yourself at different distances from the mic for different takes. set up multiple mics in the room and blend between them. sing in both your modal voice and falsetto to create timbral and harmonic space.
finally some mixing points if you’re multi-tracking: don’t underestimate panning for creating space in the mix. treat your vocal tracks individually, but more importantly as a group. a method could be to try working in pairs, mixing two voices well together, then another set of two, then mix those pairs together.
I should be clear, I just mean any musical voice or part, not vocals specifically. I edited my post to be clearer. I meant “voice” in the compositional sense rather than humans making sounds with their lungs and mouths
I realize there’s a super broad range of musical forms this could cover. Let’s maybe start by thinking of some more conventional current musical forms: pop song (so mixture of synth/production and “real” instruments) or techno/etc, or “band” with drums plus bass player plus maybe guitar plus maybe vocals. Small ensemble music.
The problem of the various parts of the music making space for each other seems to me like it would be common to all those forms, and many others besides.
haha, of course! well, some of the things I wrote still apply. the main thing is composition and orchestration. spread out your instruments and phrases across a wide spectral range. choose and make sounds that are of different timbral qualities: wet and dry, hollow and thick, shimmering and matte, raw and brittle, snappy and droney etc. they will complement each other and create a rich sound world.
some synths are difficult to mix, always creating fat walls of sound that leave no room for other sounds. if the synth you’re using has built-in effects, i often find them to be a problem when mixing. filtering, eqing, re-amping etc can then fix many problems. setting volume is the main thing to get right. and panning is great for creating space.
When I was studying taiko drumming, the director occasionally would remind us about ma, or negative space… leaving in some rests to give the music time to breathe. Often in taiko that also involves slow fluid movement anticipating the next strike, but it could also involve a moment of stillness. If you’re nervous or excited while taking a solo, it can be hard not to just fill every second with flurries of notes.
In my music now, I have embraced the drone… which seems at first like the opposite of leaving open space. But drones don’t have to be overwhelming, and open space doesn’t have to be complete silence.
I think in my current project, I kind of accidentally did better with giving it some space. Part of it is the nature of some of the sounds I was working with, but part of it was going for a longer format, where I felt I needed variety in dynamics and density. It also made me “think slower” while improvising, and think in terms of transitions between sections that weren’t necessarily compatible if they overlapped in time.
This so speaks to the problem I’ve been trying to solve for the past few months, so I’ll be following this thread with interest! My short-term approach has been to carve out restraint in editing, but I’d like to get to restraint as a starting point.
I am not sure if this is helpful for anyone else but since I also struggle with this, here is some of my approach.
Use fewer notes at a time. Seems obvious, but it’s a choice. Maybe it doesn’t need to be a full chord. Use a dyad. Let something else play that 5th or 7th or whatever.
Let it breathe. Let the envelopes help you. Start a new sound as the previous sound is fading. Use a sound that rises slowly with a sound that decays to create sinuous textures.
Be sparse but fill space with noise. Be restrained with melodic elements but fill space with texture.
Use call and response. Let one part answer another part. Let voices finish each other’s sentences.
Use only a sound’s sweet spots. Rather than trying to use one sound across a lot of octaves or timbres, find where it actually sounds best and only use those spots. Then pair it with sounds that have sweet spots that complement each other well. Think about the range of acoustic instruments and mimic those relationships.
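The “let it breathe” point above can be sketched numerically. Here’s a toy Python illustration (linear envelopes on a shared timeline, purely made up, not real DSP): one voice decays while the next rises, so their energy interleaves instead of stacking.

```python
# Toy sketch of overlapping envelopes: a fading voice hands off to an
# entering voice. Values are amplitudes in the range 0..1.
STEPS = 10

decay = [max(0.0, 1 - t / STEPS) for t in range(STEPS + 1)]  # fading voice
rise = [min(1.0, t / STEPS) for t in range(STEPS + 1)]       # entering voice

# The total energy stays roughly constant: a smooth handover, no pile-up.
combined = [d + r for d, r in zip(decay, rise)]
print(combined)
```

The point isn’t the arithmetic, of course; it’s that two parts can share a moment without both being loud in it.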
I pretty much always multi-track, so even if I go all out and end up with a mess of overlapping voices, I can spend a long time editing it down, creating space for the various parts. When working like this I like to think of it like a sculpture: you start out with a big block and carve out all the spaces, starting with broad/large cuts and getting finer and finer with the detail until you end up with something you find well balanced… The process itself reveals patterns to you as you go along, and you may get parts that gel together in a way you would never come up with on your own while playing. Editing can be quite rewarding and fun like this!
The whole edit process can be made wayyy easier by putting more thought into your sound design before laying anything down. In the past I too liked making each sound fat and full. These days I put some thought into designing/choosing each voice… so if you have a fat Moog-style bass, you don’t want more fat sounds… you may complement it with a thinner, reedy sound… like making a thin sound using a pulse wave or just a thinner-sounding synth/instrument… likewise combining different filter types, and then textures, and then dynamics, and then silences…
I tend to favour specific timbres in synths, I like resonance and sines… But if I make every part the one sound I like I end up with horrible mush…
I am always reminded of when I was in school, during music department concerts, there would be a flute choir and a ukulele orchestra and I could never stand either of them… Even brass bands can get to me sometimes, because when you have 30 instruments with the same timbre, even when occupying different registers, it gets kinda mucky.
In a standard rock band, by contrast, even the lead guitar and rhythm guitar use drastically different playing styles (strummed with a little distortion vs plucked higher up the neck with reverb and fuzz, for example). Even a piano has quite a different timbre between lows, mids and highs, compared to, say, a flute + piccolo.
In my own modular music, I get around this problem by ensuring that each voice sounds like a different family of instrument. Maybe try describing the texture/timbre of each sound, and try to make the next part not share too many of those descriptors. This needs to happen in conjunction with occupying different registers and not needing every part to play all at once.
I think playing double bass in an orchestra as a kid helped me realise the importance of space, once you have sat in a concert counting 80 bars and turning pages and pages of rests you realise how a lot of orchestral music only uses 1/3 of the orchestra at any given time.
Finally, specifically in a modular context, if I find my composition is too crowded I will put one or two of the parts through a VCA and connect it to a really slow LFO, with the whole negative portion of the waveform fully closing the VCA, so by its very nature the part is only turned up half the time.
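For the curious, the half-time behavior of that patch can be sketched in Python. This is a toy model, not real DSP: a bipolar sine LFO driving a VCA that treats any negative control voltage as fully closed. All the numbers (0.05 Hz rate, sample rate) are illustrative.

```python
import math

def lfo(t, freq=0.05):
    """Bipolar sine LFO in -1..1; 0.05 Hz gives a 20-second cycle."""
    return math.sin(2 * math.pi * freq * t)

def vca_gain(cv):
    """A VCA driven by bipolar CV: any negative voltage closes it fully."""
    return max(0.0, cv)

# Sample one full LFO cycle: the part is audible only while the LFO is positive.
gains = [vca_gain(lfo(t / 10)) for t in range(200)]  # 20 s at 10 samples/s
open_fraction = sum(g > 0 for g in gains) / len(gains)
print(f"VCA open about {open_fraction:.0%} of the cycle")  # roughly half
```

Any bipolar LFO shape works the same way here; an asymmetric wave (or an offset added to the CV) shifts the duty cycle away from 50/50 if you want the part present more or less often.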
Ignoring questions of eq / frequency bands, there are a couple of things that have helped me out recently.
The first is so silly but I’ll mention it anyway: reduce those envelope release times. I find that when I am working on a part for one instrument at a time, I tend to do two things that negatively affect an eventual effort to combine it with other parts: I favor a specific frequency band, and I favor long-ass envelopes. And that’s fine when I am working on that part in isolation, but I need to have in mind that once I am working with parts together, I will likely start by switching up the filters (or at least the cutoff frequency used) and shortening up those envelopes.
There are some sequencing strategies that I’ve found really helpful to solve this problem as well. I think in general these can be summarized as falling under the same umbrella of: introducing variability.
First is decoupling each step’s pitch from its duration. I have really come to love this strategy for sequencing even individual voices, but when doing this and working with multiple voices, I find that the variability introduced creates a sense of space that I never quite do well enough trying to introduce on my own. You can accomplish this by having an analog sequencer that can be driven by another sequencer, or a digital sequencer that allows for modulation of step length.
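One way to see why decoupling pitch from duration buys you variability: if the two sequences have different lengths, their pairing keeps shifting, so a short pattern yields a much longer cycle before repeating. A quick Python sketch (the pitch names and tick durations are made up for illustration):

```python
from itertools import cycle, islice

# A 5-note pitch sequence advanced by an independent 3-step duration
# sequence. Because the lengths differ, each pitch keeps landing on a
# different duration.
pitches = ["C", "E", "G", "A", "D"]  # what to play
durations = [1, 2, 4]                # how long to hold it, in clock ticks

events = list(islice(zip(cycle(pitches), cycle(durations)), 15))
for pitch, dur in events:
    print(f"{pitch} for {dur} tick(s)")
# The pitch/duration pairing only repeats every 15 events (lcm of 5 and 3).
```

On hardware this corresponds to one sequencer feeding pitch CV while a second, differently-sized sequencer feeds gate length or clock modulation.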
A second, related approach is driving sequences of different voices via something like a clock divider. This is nice because it allows for the possibility of some overlap between voices (if you use like a 3 & 4 output, for example), but plenty of opportunity for them to work in isolation as well.
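The clock-divider idea above can be sketched as gate patterns. Using /3 and /4 outputs as in the example, the two voices coincide only once per twelve clocks and otherwise take turns (purely illustrative Python, no real clock involved):

```python
def divider_gates(division, steps):
    """Gate fires on every Nth clock pulse, like one clock-divider output."""
    return [step % division == 0 for step in range(steps)]

voice_a = divider_gates(3, 12)  # the /3 output
voice_b = divider_gates(4, 12)  # the /4 output

for step, (a, b) in enumerate(zip(voice_a, voice_b)):
    marks = ("A" if a else ".") + ("B" if b else ".")
    print(f"step {step:2d}: {marks}")
# The voices only land together on step 0 (every 12 clocks = lcm of 3 and 4),
# so overlap is rare and each part gets plenty of room alone.
```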
Finally, using logic to drive sequences works for this as well. As an example, I am working on a song with three separate voices, two in a similar frequency band. I was struggling to get them to feel distinct from each other, until I used the inverted gate out from one sequence to trigger the other. The two parts now feel related, but not sitting on top of each other.
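The inverted-gate trick can be shown with a toy gate pattern (these values are hypothetical, just to show the interlocking):

```python
# Hypothetical gate pattern for voice A; voice B fires only when A's gate
# is low, i.e. B is driven by A's inverted gate output.
voice_a_gate = [1, 0, 1, 1, 0, 0, 1, 0]
voice_b_gate = [1 - g for g in voice_a_gate]  # the inverted gate out

# The two parts interlock: never both sounding on the same step,
# yet B's rhythm is entirely derived from A's, so they feel related.
print(list(zip(voice_a_gate, voice_b_gate)))
```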
Anyway, not super complicated stuff, but these are all strategies I’ve been employing recently to solve this problem in my songs.
If two totally independent melodies are played together, chances are they won’t work together. Ways to have them related would include having them play in the same rhythm, leave space for each other (question/answer) or separate enough to be distinguishable (like fast/slow).
Work out melodies in different ranges (bass/melody or melody/float-on-top) or with different “roles” (warmth/percussive or foreground/background).
If the intervals between notes are dissonant, there should probably be some resolution later on.
Listen to music you like and try to identify what makes different melodies work when they sound at the same time; anything from Alban Berg to ABBA to Autechre should work…
My method of working these days gets around a lot of this stuff in a different way than some of the advice that I suggested… but that is the stuff I do when I work in the way that I’m imagining the OP is working… which seems like maybe writing all of the parts at the same time?
What I do now usually is just create samples. I’ll mess around with whatever sound source and make 5, 10, 20, 100 samples in a given key/scale/mode/whatever. They’ll usually be based around some theme or melody, but might just be one shots or drones or anything else. Sometimes I will spend a lot of time on sound design up front, but often I will leave things pretty unfinished or bare.
Then I will start messing with those samples with fx or Norns or throw them in a sampler or whatever else and start resampling and messing them up, generating a new pool of samples.
Then I will take those and start laying them out in the DAW, layering, eq’ing, mixing, grouping tracks and adding any further processing (fx chains with modulation on groups of sounds can help glue them together)… at this point I’m usually working in sections.
If anything needs to be added to those, I’ll just write in new parts responding to whatever I’ve assembled through editing. It’s an iterative process, so this might happen any number of times.
Would love to find a faster and more immediate way to do this, but haven’t yet. Sorry if this was too off topic!
I really appreciate all these answers. I currently run everything into a stereo pair through an analogue mixer, and I probably could benefit from breaking that out. I may need to invest in an ES-9 or a higher input channel interface (mine has 4 channels in if you ignore the adat). Multitracking would certainly help with editing and mixing later on, which might tame some of the mess.
For some reason, I think I’ve had a bias towards “doing it live” as much as possible, especially with modular things. Which, if you can pull it off, is great! But it might be making things more difficult than they need to be, especially as I don’t currently produce music that I’m happy with regularly.
(Making things more difficult and/or complicated and/or expensive before I can do the work is a tendency I suffer from across the board and which I could do to gently resist. Different topic though!)
The thing that blew my mind when I learned about it is that with a lot of voices, too many overlapping overtones can basically create their own fundamental. So the notes you choose in your composition can affect how crowded your music sounds, separate from fixing things by EQing frequency bands. I’m still very much a beginner, but it was one of those unknown unknowns for why my music sounded off. This article explains it way better than I can https://evenant.com/music/3-beginner-orchestration-mistakes/
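A rough way to see the crowding side of this is to lay out the overtone series of the same interval voiced low versus voiced higher. The sketch below uses approximate equal-tempered frequencies, and the “close pair” count is a crude illustration, not a psychoacoustic model:

```python
def overtones(fundamental, count=8):
    """First `count` harmonics of a note (idealized harmonic series)."""
    return [fundamental * n for n in range(1, count + 1)]

# The same major third, voiced low (C2 + E2) and two octaves up (C4 + E4).
# Frequencies are approximate equal-tempered values in Hz.
low_third = overtones(65.41) + overtones(82.41)
high_third = overtones(261.63) + overtones(329.63)

def crowded(partials, band=30.0):
    """Count adjacent partials closer than `band` Hz (a crude mud proxy)."""
    s = sorted(partials)
    return sum(1 for a, b in zip(s, s[1:]) if b - a < band)

print("low-register close pairs:", crowded(low_third))
print("mid-register close pairs:", crowded(high_third))
# The same interval produces far more near-collisions down low, which is
# one reason low chords are usually voiced in wide intervals.
```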
perhaps a spectral mixer like the jumble henge could be something for you, since it’s basically built to separate inputs by frequency bands and panning in a simple and effective way.
or get a cheap adat interface and expand your audio interface with 8 channels for multitracking. personally, i probably wouldn’t make much music i enjoyed myself either if i weren’t able to fade out and in, mute and unmute separate tracks after it had been recorded. simple stereo recordings are fine, but then i would have to make a lot more practice runs rather than just record 40 minutes’ worth and cut it down to 6 : )
I’ll take a look at attacking this from a mixing perspective. It feels like more of a compositional issue than a mixing/EQ issue to me, but maybe a bit of EQ & pan would take care of some of the problems I think I have.
Regarding ADAT expansion, everything seems to point to the Behringer ADA8200, which would make it the first time I’ve considered a Behringer product!
One of my “quick and dirty” live improv gig sauces is to have the band pass filter on my RC-505’s input effect constantly engaged… I have a low-to-medium depth on it and I just tend to move it up and down the frequency spectrum as I add in parts… Deliberately keeping it tuned just by ear on the spot… It’s no magic bullet, but simply by moving that filter about creatively I can play two instruments against each other… say a piano sample and a Rhodes sample… back to back… without it coming across as “too much”… Recording would obviously be a very different thing, but it works for live in a pinch.
i’ve been thinking about this a lot lately. would love any recommendations for books on orchestration if anyone has any. will hopefully have something more productive to add but for now i really love the way the voices are layered in this piece