I’m so glad you made this thread, I have been wanting to reply to it but needed time to gather my thoughts and let them stew – I spend a lot of time considering orchestration in synth voice design.
Fundamentally, I think about all synth sounds as collections of overtones. Additive synthesis really helped me to understand this concept, along with looking at spectrograms of voices and instruments. A square wave is all odd harmonics, a saw wave is all harmonics, a sine wave is just the fundamental with no overtones at all, white noise is all frequencies at once, etc.
Here’s an article with some pictures.
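If you want to poke at that idea directly, here’s a minimal additive sketch in Python (numpy only; the partial counts and sample rate are just arbitrary choices of mine):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def additive(f0, recipe, dur=1.0):
    """Sum sine partials; 'recipe' is a list of (harmonic_number, amplitude)."""
    t = np.arange(int(SR * dur)) / SR
    return sum(amp * np.sin(2 * np.pi * f0 * n * t) for n, amp in recipe)

f0 = 220.0

# Saw: every harmonic n, amplitude falling off as 1/n.
saw = additive(f0, [(n, 1 / n) for n in range(1, 21)])

# Square: odd harmonics only, same 1/n rolloff.
square = additive(f0, [(n, 1 / n) for n in range(1, 21, 2)])

# Sine: the fundamental alone, no overtones.
sine = additive(f0, [(1, 1.0)])
```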
The internal bore of a wind instrument has an effect on the overtones it produces. Here’s some math. Conical bore: sax, oboe, cornet. Cylindrical bore: flute (open on both ends), clarinet, trumpet, trombone. A cylinder closed at one end, like the clarinet, emphasizes the odd harmonics, while conical and open pipes give the full harmonic series. A french horn (conical, for the record) points away from the audience and has a hand shoved into the bell, so it’s verrry low-pass-filtered.
Strings are kind of a funny thing, because they produce sound by exciting a string with what is essentially noise (bowhair is like a million little plucks all happening very fast, with random jumps and skips). Notice in the diagram in the previous link that whether an instrument/resonator is open or closed at the end(s) has an effect on the timbre - a string is fixed at both ends, so it has all harmonics present (saw wave). But the combination of variations in stimulus from the bow, resonance from the violin body, and how many players there are will shape what we actually hear.
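That “million little plucks” idea is basically the intuition behind Karplus-Strong synthesis: a burst of noise exciting a feedback delay line. Strictly speaking it models a pluck, not a bow, but it’s the same noise-excites-string spirit. A bare-bones sketch (the decay constant is just my taste):

```python
import numpy as np

SR = 44100  # sample rate in Hz

def karplus_strong(f0, dur=2.0, decay=0.996):
    """Pluck a 'string': fill a delay line with noise, then let it
    recirculate while averaging neighbors (a gentle lowpass each pass)."""
    period = int(SR / f0)                   # delay length sets the pitch
    buf = np.random.uniform(-1, 1, period)  # the noise burst is the 'pluck'
    out = np.empty(int(SR * dur))
    for i in range(len(out)):
        j = i % period
        out[i] = buf[j]
        buf[j] = decay * 0.5 * (buf[j] + buf[(j + 1) % period])
    return out

note = karplus_strong(220.0)  # an A3-ish pluck that decays on its own
```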
Also, this is fun - an image of various resonances in a violin body:
If I’m trying to model a resonant body, I will usually use convolution - I’ll record myself knocking on a desk or a book or a cardboard box, and run a synth through a convolution that has the knocking sound as the impulse response. Knock on a bigger thing to simulate a larger resonating body.
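In code terms that’s just convolving the synth with the recording. A quick sketch with scipy (the filenames are hypothetical, and I’m assuming mono WAV files):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical files: a dry synth line, and a recording of me knocking on a box.
sr, synth = wavfile.read("synth_dry.wav")
_, knock = wavfile.read("knock_cardboard_box.wav")

# Treat the knock as an impulse response: every sample of the synth
# "rings" a copy of the knock, as if the box were resonating continuously.
wet = fftconvolve(synth.astype(float), knock.astype(float))
wet /= np.max(np.abs(wet))  # normalize so it doesn't clip

wavfile.write("synth_through_box.wav", sr, (wet * 32767).astype(np.int16))
```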
Here’s another graph you might already know – the Fletcher-Munson curve. It’s a graph of how we perceive frequencies at different volumes:
Think of it like an upside-down filter on our ears. A bass frequency needs more energy to sound as loud as a mid/high frequency at the same actual dB level, so our ears are more sensitive to highs than to lows. We’re especially sensitive around 2-6k at low levels, as well as 200-1k at most levels, so we perceive those frequencies as louder than they measure. It’s interesting from a psychoacoustic standpoint, but not deeply applicable, except that it shows we are super sensitive to high frequencies and mids, which happen to correlate with the sounds we use for phonemes (spoken language). All this is to say: if I want someone to pay attention to something, I will emphasize those frequencies. If it isn’t in the foreground, I’ll try not to emphasize them in the mix, and will probably cut them at least a bit.
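If you want actual numbers for that sensitivity, A-weighting is the standard rough approximation of the equal-loudness contours at moderate levels. A little sketch:

```python
import numpy as np

def a_weighting_db(f):
    """A-weighting curve (IEC 61672) in dB: a rough stand-in for how much
    louder or quieter a frequency reads to our ears than it measures."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20 * np.log10(ra) + 2.00

for freq in [50, 200, 1000, 3000, 10000]:
    print(f"{freq:>6} Hz: {a_weighting_db(freq):+.1f} dB")
# 50 Hz lands around -30 dB relative to 1 kHz; 3 kHz actually comes out positive.
```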
I almost always put an LPF on every synth voice I use (if I’m not using convolution), just because the full presence of all the overtones has always sounded abrasive and unnatural to me (since I was in a car seat hearing Peter Gabriel on the radio). Sometimes I’ll keyboard-track the filter, especially if I’m using a very wide range of the instrument, but not usually – I use the LPF as a way of keeping the voice’s range in check. I rarely take the resonance very high; usually I don’t use it at all unless I want a nasal/oboe sound to really cut through something. Or I’ll do an LFO sweep with the resonance up but still below 50% and send the thing through a long reverb if I want the differently accentuated harmonics to swirl around in a pad/ambient/drone thing. Also, concert halls have reverb, but as we get farther from a sound source, air absorbs the high frequencies faster, so distant sounds have less high-end – high-frequency content plays a role in perceived distance too.
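About keyboard tracking specifically: in code it’s just letting the cutoff scale with the played pitch. A minimal sketch with a Butterworth lowpass standing in for the synth filter (base cutoff, reference pitch, and tracking amount are all arbitrary knobs):

```python
from scipy.signal import butter, lfilter

SR = 44100  # sample rate in Hz

def tracked_lpf(signal, note_hz, base_cutoff=2000.0, ref_hz=261.63, track=0.5):
    """Lowpass whose cutoff follows the played note.
    track=0 is a fixed cutoff; track=1 moves the cutoff 1:1 with pitch."""
    cutoff = base_cutoff * (note_hz / ref_hz) ** track
    cutoff = min(cutoff, SR / 2 * 0.95)   # keep it below Nyquist
    b, a = butter(2, cutoff / (SR / 2))   # 2nd-order lowpass
    return lfilter(b, a, signal)

# e.g. filtered = tracked_lpf(saw, note_hz=880.0)  # higher note, brighter cutoff
```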
If a synth is taking a lead/solo, I’ll give it more high frequency content so the material will slice through.
In mixing, I think about synths as very close-mic’d instruments that need aggressive EQ or else they take up too much space in a mix. I’ll cut out vast bands of their frequency content so they only take up what I need from them.
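The bluntest version of that carving, in code, is literally notching out a whole band with a band-stop filter (the band edges here are just an example):

```python
from scipy.signal import butter, lfilter

SR = 44100  # sample rate in Hz

def carve_band(signal, lo_hz, hi_hz):
    """Cut a band out of the signal entirely with a Butterworth band-stop."""
    b, a = butter(2, [lo_hz / (SR / 2), hi_hz / (SR / 2)], btype="bandstop")
    return lfilter(b, a, signal)

# e.g. gut a pad's low-mids so a vocal or snare keeps that space:
# pad_carved = carve_band(pad, 200.0, 1000.0)
```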
In traditional orchestration, piano music is usually fleshed out and assigned to different parts of the orchestra. Sometimes it’s the whole shebang, but generally, unless it’s a full tutti section, the voices are restricted to switching among instrument families (string orchestra or brass choir, for instance) or among instruments with different timbres but similar ranges, like clarinet/oboe/flute with violins/violas. This tends to give the arrangement somewhere to go – everyone playing at once is cool, but not for very long. It’s waayyy more interesting if we can switch it up and move the music around to different instruments, volumes, and frequency ranges.
Within that, it’s very interesting to stack the winds in the orchestra, say flute under clarinet under oboe, and then to switch up the order. There are all kinds of deep analyses of how different composers like to stack their winds. I think it’s interesting to do a 3-voice chord and have each voice with a different timbre and slight tuning or phase variations through the same filter.
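Here’s roughly what that stacked 3-voice chord might look like in code – three different harmonic recipes, a few cents of detune on each voice, all summed into one shared filter (every number here is taste, not a rule):

```python
import numpy as np
from scipy.signal import butter, lfilter

SR = 44100
t = np.arange(SR * 2) / SR  # two seconds

def partials(f0, pairs):
    """Sum of sine partials: 'pairs' is (harmonic_number, amplitude)."""
    return sum(a * np.sin(2 * np.pi * f0 * n * t) for n, a in pairs)

def detune(f, cents):
    return f * 2 ** (cents / 1200)

# A minor triad: each voice gets its own recipe and a few cents of drift.
root  = partials(detune(220.00, -4), [(n, 1 / n)    for n in range(1, 16)])     # saw-ish
third = partials(detune(261.63, +3), [(n, 1 / n)    for n in range(1, 16, 2)])  # square-ish
fifth = partials(detune(329.63, -2), [(n, 1 / n**2) for n in range(1, 16)])     # mellower

# One filter over the whole stack, so the voices blend into a single body.
b, a = butter(2, 3000 / (SR / 2))
chord = lfilter(b, a, root + third + fifth)
```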
Actual orchestral instruments have some limitations that synthesizers don’t. Playable range, for instance. Or that wind instruments can’t sustain a note/phrase/section longer than a breath. Sometimes I’ll say “okay this is a tenor line for square waves” and not let it go too far up or down in pitch. Other times I’ll say “it’s time for this synth bass to take a piccolo solo” (e.g. the TB-303 played outside its intended bass range, which is how acid happened).
Another thing I really like to do is dynamic timbres – I have some Max patches that use 2 different wavetables and mix between them. One begins on one timbre and fades over to another as the note sustains. The first timbre will usually be brighter and last a short time, about the length of the attack portion of the envelope, and somewhere along the decay/sustain portion I’ll fade into a mellower timbre. Another patch randomly chooses a mix between the 2 timbres, and I usually keep those more closely related – if one is much brighter or more resonant than the other, the contrast between the two is very stark and it gets distracting for me.
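A stripped-down version of that first crossfade patch (not my actual Max patch, just the same idea in Python): two single-cycle wavetables, one bright and one mellow, with the mix ramping from bright to mellow over the early part of the note.

```python
import numpy as np

SR = 44100
N = 2048  # wavetable length in samples

# Two single-cycle tables: bright (lots of harmonics) vs mellow (few, fast rolloff).
phase = np.linspace(0, 2 * np.pi, N, endpoint=False)
bright = sum(np.sin(n * phase) / n    for n in range(1, 32))
mellow = sum(np.sin(n * phase) / n**2 for n in range(1, 5))

def play(f0, dur=2.0, fade_time=0.15):
    """Read both tables at pitch f0, crossfading bright -> mellow over
    fade_time seconds (roughly the attack/early-decay of the note)."""
    n_samps = int(SR * dur)
    idx = ((np.arange(n_samps) * f0 * N / SR) % N).astype(int)  # table read position
    mix = np.clip(np.arange(n_samps) / (SR * fade_time), 0, 1)  # 0=bright, 1=mellow
    return (1 - mix) * bright[idx] + mix * mellow[idx]

note = play(220.0)
```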
Also! It’s interesting to listen to a Bach piece for solo cello and then to conceive of a solo synth piece. OR! Here’s one of my favorite Messiaen pieces, first in its original form for 6(!) Ondes Martenot, and then as a movement in Quartet for the End of Time (which he put together while he was in a POW camp during WWII, after a guard found out he was a composer).
and HERE is a video of (the sadly late) Richard Lainhart playing the same piece for Buchla 200e and Haken Continuum.
That’s all I can squeeze out of my brain right now, but thanks for the opportunity to share!