choosing and layering timbres: (synthesizer) orchestration

Here are (to my mind) some fundamental problems in making music: how do I color the sounds I am making? What techniques allow a collection of “instruments” to layer into a cohesive unit of sound? What techniques differentiate sounds into “voices” and allow them to coexist well in a mix? What musical effects can I achieve by altering timbre (either while “mixing” or over time in a piece)? In classical music, to my understanding, these problems fall under the name of orchestration.

Several times in bookstores I’ve picked up books on orchestration and flipped through them to see if I could glean anything useful to my music from them. I haven’t bought any of these books, which I guess means so far the answer is “no”. It can be quite hard to translate techniques for a classical orchestra into the context of, e.g., synthesizer music (what’s the difference in synthesizer terms between a clarinet, an oboe, and a saxophone? do we take violins to mean a saw sound, or a lead more generally?).

So I’d like to learn from you all. When making music, designing sounds into “voices”, what are some of your favorite techniques, effects, problems? How do you go from synthesizer to patch to voice?

I’ll share a piece I made where this idea of orchestration was central to the piece. It’s inspired by Ravel’s Bolero and, more obviously perhaps, by Clark’s Beacon, particularly the “one sequence” feel and lovely clangy reverberations the lead gets at some points.

The conceit of the piece was to never change the melody, just the timbre. There are four parts: the lead sequence, a bass part that emphasizes parts of the lead sequence, a chord sound that marks time at different points of the piece, and then drums. I actually did minimal tweaking of the synth patch’s parameters, maybe just one parameter that, to my ear, away from the project file, sounds like going from a more square-wave sound (hollow, juicy, blue; it’s hard to put words to what a square wave evokes, isn’t it?) to a more saw sound (a bit thinner, brighter, less interiority). Instead I had three or maybe even four different copies of the sound going through different effects chains, and most of the drama of the piece comes from modulating the levels of the different chains (a sketch of the idea follows below). Notable effects included distortion into reverb, formant-shifting, chorus, etc.
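For anyone curious, here’s a minimal sketch of that parallel-chain construction in Python/numpy. The specific effects (a tanh waveshaper standing in for distortion, a few comb taps standing in for reverb) are illustrative stand-ins, not the actual plugins I used:

```python
import numpy as np

SR = 44100
t = np.linspace(0, 4, 4 * SR, endpoint=False)
dry = np.sign(np.sin(2 * np.pi * 110 * t))  # square-ish source voice

def soft_clip(x, drive=4.0):
    # crude distortion stand-in: tanh waveshaper
    return np.tanh(drive * x)

def comb_verb(x, delay_s=0.045, feedback=0.6, taps=6):
    # crude "reverb" stand-in: a few decaying comb-filter taps
    y = x.copy()
    d = int(delay_s * SR)
    for i in range(1, taps + 1):
        y[i * d:] += (feedback ** i) * x[:-i * d]
    return y

chains = [dry, soft_clip(dry), comb_verb(soft_clip(dry))]

# the "drama": slowly crossfade the chain levels over the piece
levels = [np.linspace(1, 0, len(t)),       # dry fades out
          np.abs(np.sin(np.pi * t / 4)),   # distorted copy swells mid-piece
          np.linspace(0, 1, len(t))]       # distortion-into-verb fades in

mix = sum(l * c for l, c in zip(levels, chains))
mix /= np.max(np.abs(mix))                 # normalize
```

The point is just that the source never changes; only the balance between the chains does.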

As I write this, I begin to wonder how such drastic modification of parameters might sound if I were trying to fit this sound into a mix with other elements without having it steal the show. Have you attempted this? What works well?

By the way, above I tried to model something I’d like us to attempt in this thread. Inspired by Namelessness, I think it would be especially useful here to discuss techniques in general terms: to my mind it matters little which synthesizer, reverb, or distortion plugins I used to create my piece, and it might even be more useful to other readers interested in adapting your techniques if you explain them in terms of more fundamental building blocks. I don’t think it makes sense to be super strict about this idea, but it might be worthwhile to keep in mind.

Anyway, looking forward to hearing your thoughts!

43 Likes

Definitely worth looking into spectralism… music composed for orchestra that makes timbre and its transformations an explicit concern.

Started in the ’70s. Risset, Scelsi, Radulescu are places to start. Interesting overlaps with early computer music, too: Spectralism - Routledge Encyclopedia of Modernism


Great topic sparking many thoughts on orchestration vs arrangement, the standardization of the forces in Western orchestras and its relationship to harmonic treatises, monochords, harmony of the spheres, Bartok, extended techniques, non-Western approaches, etc, etc…

I’ll try to organize some ideas and slowly add to this post as I am able. In the meantime, here are some initial thoughts.

7 Likes

i love the aim and execution of voicing on your track!
this thread will be a really cool resource for us all

layering synth parts (and samples) is what i love most about music creation and arranging but…i’m not the greatest at describing what i think of when working on stuff

before thinking thru how i might explain my own musical methods i can share a few songs that show what i like and display principles i base my work on

this was the first producer i thought of and the most influential for how i mentally approach synth arranging, voicing, mixing (in context)

next in mind was this album by portable sunsets

voicing and mix on this masterpiece sprung to mind next (cause i just listened earlier today)

8 Likes

Balance via contrast is the goal but hard to do in reality.

If one element is becoming more dominant in volume/spectral range/density/duration, the complementary element should be quieter, more spacious, filtered, and sparse.

It can be any opposing characteristic; for timbral qualities it’s much more subjective than some of the things I mentioned, like volume and brightness. With electronic music, organic/synthetic is a natural consideration, I think. People like Flying Lotus marry the two worlds beautifully; it’s very difficult to make work, but it’s a great combination if you can pull it off.

Which opposing characteristics you choose to pull out is personal taste. I like frequencies from ~35-100 Hz, so I try to make room for them by having fewer mid-rangey instruments or high-pass filtering the ones that are there.
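As a rough sketch of that high-pass move in Python/scipy (the 120 Hz cutoff is just an assumed starting point, not a rule):

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100

def make_room_for_bass(x, cutoff_hz=120.0, order=4):
    """High-pass a mid-range part so it stays out of the ~35-100 Hz band."""
    sos = butter(order, cutoff_hz, btype="highpass", fs=SR, output="sos")
    return sosfilt(sos, x)

# e.g. a pad that would otherwise crowd the low end
t = np.linspace(0, 2, 2 * SR, endpoint=False)
pad = np.sin(2 * np.pi * 80 * t) + np.sin(2 * np.pi * 440 * t)
pad_hp = make_room_for_bass(pad)  # 80 Hz component attenuated, 440 Hz kept
```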

King Tubby Meets Rockers Uptown has been a huge blueprint lately

5 Likes

I often think of creating voices as sculpture, and arranging the song more like painting, but maybe a better approach is the voices as individual works, and the song/track as the show/installation presenting that work. You have to balance the individual pieces to create the presentation you want in total, which might mean putting smaller pieces on one side of the room and allowing the largest piece to dominate one wall or the center of the room. The room here is the total frequency spectrum. There will also be unused space in that room (or not!).

One thing that is hard is taking a sound that is great in isolation and paring it back to fit the song. (To me, for synth-based work, mixing and songwriting are intertwined/inseparable.) It can feel like you’re mutilating it, but really a little pruning is healthy and probably necessary. In general I use the board for minor EQ tweaks and try to get the sound right as much as possible before hitting it.

Adding effects presents additional issues, since they tend to add to the original sound’s frequency spectrum - especially reverb, distortion, and delay. The more effects on a voice, the fewer voices you probably need, since reverb, for example, is going to take up a lot more space. An orchestra has only one reverb (the room/hall) affecting it en masse.

Lastly I think randomization for synths is important but I’ve babbled a bit now so I’ll just leave it at that.

6 Likes

I, for one, would love to hear more. I think this is a wonderful topic to discuss and want every post in this thread to be very long.

3 Likes

lol well ok… randomization is relatively unique to synths, and I feel that it can add the right amount (subtle or wild, it’s up to you) of timbral variation/movement in a way that doesn’t feel like just going from point a to b (often realized as turning the cutoff/decay/reverb amount up over the course of a song - making something “more” as the track progresses). Compare LFOs, which can sometimes feel repetitive/predictable (it’s going up, now down - now up again!). Of course, multiple LFOs to multiple destinations are another way to negate that effect.
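If it helps, here’s a tiny control-signal sketch of the difference in Python/numpy: a plain LFO next to slewed sample-and-hold randomness (the rates and the one-pole smoother are arbitrary choices for illustration):

```python
import numpy as np

SR = 1000  # a control rate is plenty for modulation signals
dur = 8.0
n = int(dur * SR)
t = np.arange(n) / SR

# the predictable version: one LFO (up, down, up again...)
lfo = 0.5 + 0.5 * np.sin(2 * np.pi * 0.25 * t)

# the randomized version: sample-and-hold noise, smoothed so it doesn't click
rate_hz = 2.0
steps = np.random.uniform(0, 1, int(dur * rate_hz))
sah = np.repeat(steps, n // len(steps))[:n]

def slew(x, coeff=0.995):
    # one-pole smoother, like a slew limiter on a modular
    y = np.empty_like(x)
    acc = x[0]
    for i, v in enumerate(x):
        acc = coeff * acc + (1 - coeff) * v
        y[i] = acc
    return y

cutoff_mod = slew(sah)  # map to e.g. cutoff: 200 * 2 ** (4 * cutoff_mod)
```

Mapped to cutoff or wavetable position, the slewed random signal wanders instead of cycling.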

Adding that I’m into OP’s track and think it is effective and I like the construction (same voice, multiple effected copies mixed in and out, which I’ve never done).

4 Likes

song-as-gallery vs. song-as-painting is a really interesting comparison. I’m quite partial to song-as-painting, because it reminds me that it’s possible to use different kinds of brush strokes, if that isn’t stretching the metaphor too far. Something I often struggle with in working within a DAW is “song-as-sheet music” where my desire to create patterns that fit neatly in the piano roll and gridlines of the DAW leads to music that feels a little too much like clockwork.


An interesting technique I wanted to mention is layering monosynths, either recorded or copies of the same plugin patch, to create chords rather than just using a polysynth. Doing this reminds me more of vocal harmony than of chords played on a piano. It might be appropriate when you want subtle timbral variation (or mixer differences like panning) between the constituent notes in your chord, but it actually forces me, at least, to think about things like “voice leading” and “counterpoint” more than, or differently than, I might with a single polysynth voice.
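A bare-bones sketch of the layered-monosynth chord in Python/numpy, with per-voice detune, brightness, and pan standing in for the timbral variation (the voice itself is just a few summed harmonics, not any particular synth):

```python
import numpy as np

SR = 44100
t = np.linspace(0, 2, 2 * SR, endpoint=False)

def mono_voice(freq, detune_cents=0.0, brightness=3, pan=0.0):
    """One 'monosynth' note: a few saw-ish harmonics, slightly detuned and panned."""
    f = freq * 2 ** (detune_cents / 1200)
    wave = sum(np.sin(2 * np.pi * f * k * t) / k for k in range(1, brightness + 1))
    # constant-power pan, pan in [-1, 1]
    left, right = np.cos((pan + 1) * np.pi / 4), np.sin((pan + 1) * np.pi / 4)
    return np.stack([left * wave, right * wave])

# an A minor triad as three independent "players", not one polysynth
chord = (mono_voice(220.00, detune_cents=-4, brightness=3, pan=-0.5)
         + mono_voice(261.63, detune_cents=+2, brightness=5, pan=0.0)
         + mono_voice(329.63, detune_cents=+5, brightness=2, pan=+0.5))
chord /= np.max(np.abs(chord))
```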

8 Likes

Somewhat related, I recall an interview with Eric Mouquet (the Deep Forest guy; not a fan, but I found the interview interesting regardless) where he said one of his favorite things to do was to try to get as close as possible to the same patch on multiple monosynths and then play them together as a poly. Supposedly a much bigger sound.

What I like about multitracking monosynths is the control you have over legato and glides which can otherwise be unpredictable on a single poly instrument. A lot of interesting phrasing possibilities in there.

6 Likes

I’m so glad you made this thread; I have been wanting to reply but needed time to gather my thoughts and let them stew – I spend a lot of time considering orchestration in synth voice design.

Fundamentally, I think about all synth sounds as collections of overtones. Additive synthesis really helped me to understand this concept, along with looking at spectrograms of voices and instruments. Square waves contain only odd harmonics, saw waves contain all harmonics, sine waves are just the fundamental with no overtones, white noise is all frequencies at once, etc.
Here’s an article with some pictures.
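You can hear these claims directly by building the waveforms additively; a quick Python/numpy sketch:

```python
import numpy as np

SR = 44100
t = np.linspace(0, 1, SR, endpoint=False)

def additive(f0, harmonics):
    """Sum sine partials: harmonics is a list of (number, amplitude) pairs."""
    return sum(a * np.sin(2 * np.pi * f0 * n * t) for n, a in harmonics)

f0 = 110.0
saw    = additive(f0, [(n, 1 / n) for n in range(1, 40)])     # all harmonics, 1/n
square = additive(f0, [(n, 1 / n) for n in range(1, 40, 2)])  # odd harmonics only
sine   = additive(f0, [(1, 1.0)])                             # fundamental alone
```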
The internal bore of a wind instrument has an effect on the overtones it produces. Here’s some math. Conical bore: sax, oboe, cornet. Cylindrical bore: flute (open on both ends), clarinet, trumpet, trombone. A french horn (mostly conical) points away from the audience and has a hand shoved into it, so it’s verrry low-pass-filtered.
Strings are kind of a funny thing, because they produce sound by applying noise to excite a string (bowhair is like a million little plucks all happening very fast, with random jumps and skips). Notice in the diagram in the previous link that whether an instrument/resonator is open or closed at the end(s) has an effect on the timbre - strings are closed on both ends, so they have all harmonics present (saw wave). But the combination of variations in stimulus from the bow, resonance from the violin body, and how many are playing, will have an effect on what we hear.
Also, this is fun - an image of various resonances in a violin body:

If I’m trying to model a resonant body, I will usually use convolution - I’ll record myself knocking on a desk or a book or a cardboard box, and run a synth through a convolution that has the knocking sound as the impulse response. Knock on a bigger thing to simulate a larger resonating body.
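In Python that might look like the following (scipy’s fftconvolve; the filenames are placeholders, and it assumes mono files):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# hypothetical files: a synth line and a recorded knock on a desk/box
sr, synth = wavfile.read("synth.wav")   # placeholder filename, mono assumed
_,  knock = wavfile.read("knock.wav")   # placeholder filename, mono assumed

synth = synth.astype(np.float64)
knock = knock.astype(np.float64)
knock /= np.max(np.abs(knock))          # normalize the impulse response

# convolve: the synth now "rings" the knocked object's resonances
out = fftconvolve(synth, knock)
out /= np.max(np.abs(out))
wavfile.write("synth_through_box.wav", sr, (out * 32767).astype(np.int16))
```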

Here’s another graph you might already know – the Fletcher-Munson curves. They show how loudly we perceive different frequencies at different playback levels:

Think of it like an upside-down filter on our ears. A bass tone needs more energy to sound as loud as a mid/high tone at the same dB level, so our ears are more sensitive to highs than to lows. And we are especially sensitive around 2-6 kHz, particularly at low listening levels, as well as around 200 Hz-1 kHz at most levels. We perceive those frequencies as louder than they actually are. It’s interesting from a psychoacoustic standpoint, but not deeply applicable, except that it shows that we are super sensitive to high frequencies and mids, which happen to correlate to the sounds we use for phonemes (spoken language). All this is to say, if I want someone to pay attention to something, I will emphasize these frequencies. If it isn’t in the foreground, I’ll try not to emphasize them in the mix, and will probably cut them at least a bit.
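If you want numbers, the standard A-weighting curve is a rough approximation of the inverse of one of these contours at moderate levels. A small Python sketch (this is the IEC definition of A-weighting, not the Fletcher-Munson data itself):

```python
import math

def a_weight_db(f):
    """IEC A-weighting in dB: a rough inverse of an equal-loudness contour."""
    ra = (12194.0**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194.0**2)
    )
    return 20 * math.log10(ra) + 2.0

for f in [40, 100, 500, 1000, 3000, 10000]:
    print(f"{f:>6} Hz: {a_weight_db(f):+6.1f} dB")
# 40 Hz comes out around -34 dB: we need far more energy down there to
# perceive equal loudness, while ~1-4 kHz sits near 0 dB.
```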

I almost always put an LPF on every synth voice I use (if I’m not using convolution), just because the full presence of all the overtones has always sounded abrasive and unnatural to me (since I was in a car seat hearing Peter Gabriel on the radio). Sometimes I’ll keyboard-track the filter, especially if I’m using a very wide range of the instrument, but not usually – I use the LPF as a way of keeping the voice’s range in check. I rarely take the resonance very high; usually I don’t use it at all unless I want a nasal/oboe sound to really cut through something. Or I’ll do an LFO sweep with the resonance up but still below 50% and send the thing through a long reverb if I want the differently accentuated harmonics to swirl around in a pad/ambient/drone thing. Also, concert halls have reverb, and as we get farther from a sound source, the air absorbs high frequencies faster, so distant sounds have less high end; high-frequency content therefore also plays a role in perceived distance.
If a synth is taking a lead/solo, I’ll give it more high frequency content so the material will slice through.
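A sketch of the keyboard-tracked LPF idea in Python/scipy (the “harmonics kept” factor is an assumed knob; tying cutoff to the note keeps brightness proportional across the range):

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100

def keytracked_lpf(x, note_hz, harmonics_kept=8.0, order=2):
    """Low-pass a voice with cutoff tied to the note, so the top of the
    spectrum stays in proportion as the part moves around the keyboard."""
    cutoff = min(note_hz * harmonics_kept, SR / 2 * 0.95)
    sos = butter(order, cutoff, btype="lowpass", fs=SR, output="sos")
    return sosfilt(sos, x)

t = np.linspace(0, 1, SR, endpoint=False)
for note_hz in [110.0, 440.0]:
    # naive additive saw, partial count kept below Nyquist
    saw = sum(np.sin(2 * np.pi * note_hz * k * t) / k for k in range(1, 40))
    voiced = keytracked_lpf(saw, note_hz)  # similar "brightness" at both pitches
```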

In mixing, I think about synths as very close-mic’d instruments that need aggressive EQ or else they take up too much space in a mix. I’ll cut out vast bands of their frequency content so they only take up what I need from them.

In traditional orchestration, piano music is usually fleshed out and assigned to different parts of the orchestra. Sometimes it’s the whole shebang, but generally, unless it’s a full tutti section, the voices are restricted to switching among either the instrument families (string orchestra or brass choir, for instance) or instruments with different timbres and similar ranges, like clarinet/oboe/flute with violins/violas. This tends to give the arrangement somewhere to go – everyone playing at once is cool, but not for very long. It’s waayyy more interesting if we can switch it up and move the music around to different instruments, volumes, and frequency ranges.
Within that, it’s very interesting to stack the winds in the orchestra, say flute under clarinet under oboe, and then to switch up the order. There are all kinds of deep analyses of how different composers like to stack their winds. I think it’s interesting to do a 3-voice chord and have each voice with a different timbre and slight tuning or phase variations through the same filter.

Actual orchestral instruments have some limitations that synthesizers don’t. Playable range, for instance. Or that wind instruments can’t sustain a note/phrase/section longer than a breath. Sometimes I’ll say “okay this is a tenor line for square waves” and not let it go too far up or down in pitch. Other times I’ll say “it’s time for this synth bass to take a piccolo solo” (ex. TB-303 playing outside of its intended range and giving rise to acid).

Another thing I really like to do is dynamic timbres – I have some Max patches that use 2 different wavetables and mix between them. One begins on one timbre and fades over to another as the note sustains. The first timbre will usually be brighter and last a short amount of time, like the length of the attack portion of the envelope, and somewhere along the D/S portion I’ll fade into a mellower timbre. Another patch randomly chooses a mix between 2 timbres, and I usually keep these more closely related – if one is much brighter or more resonant than the other, the contrast between the two is very stark and it gets distracting for me.
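Here’s the fade-between-two-wavetables idea as a small Python/numpy sketch (a generic reconstruction of the concept, not the actual Max patches; the table contents and fade time are arbitrary):

```python
import numpy as np

SR = 44100
TABLE_LEN = 2048

# two single-cycle wavetables: bright (saw-ish) and mellow (almost sine)
phase = np.linspace(0, 1, TABLE_LEN, endpoint=False)
bright = sum(np.sin(2 * np.pi * k * phase) / k for k in range(1, 16))
mellow = np.sin(2 * np.pi * phase) + 0.2 * np.sin(4 * np.pi * phase)

def play(freq, dur, fade_s=0.15):
    """Start on the bright table, fade to the mellow one as the note sustains."""
    n = int(dur * SR)
    idx = (np.arange(n) * freq * TABLE_LEN / SR).astype(int) % TABLE_LEN
    mix = np.clip(np.arange(n) / (fade_s * SR), 0, 1)  # 0 -> bright, 1 -> mellow
    return (1 - mix) * bright[idx] + mix * mellow[idx]

note = play(220.0, dur=1.0)
```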

Also! It’s interesting to listen to a Bach piece for solo cello, and to conceive of a solo synth piece. OR! Here’s one of my favorite Messiaen pieces, first in its original form for 6(!) Ondes Martenot, and then as a movement in Quartet for the End of Time (which he put together in a POW camp during WWII, after a guard found out he was a composer).

and HERE is a video of (the sadly late) Richard Lainhart playing the same piece for Buchla 200e and Haken Continuum.

That’s all I can squeeze out of my brain right now, but thanks for the opportunity to share!

23 Likes

Thank you for this generous reply!! Lots to think about here.

I found this really illuminating! There’s so much to think about when there’s a microphone involved, and a whole degree of artistry there. Fun to consider what “mic’ing” a synth might look like (extra reverb and a high/low cut to simulate distance, for instance).

This too! I think you can hear this in some of the Weeknd’s stuff for instance, where the arrangement is really spare, and yet there’s a real subtle life to the parts, with layers coming in and out to differentiate different parts of the song sonically.

I’m still thinking too about the differences and similarities between visual art and music. I guess it’s really in sound design (and orchestration!) where you see layering play a similar role.

6 Likes