I do. But it’s really dependent on the context of the sounds, the transient content, etc. Usually it’s either very gentle (-1 to -2 dB) leveling compression, or parallel compression (mixing in a bit of a super-squashed signal) to gain volume as needed.


The Jon Ville guide is gold. Appreciated the no-bullshit approach.


I think it all depends on what the effect sounds like in the context of the mix. Just yesterday I applied EQ and compression to a return track in Ableton because parts of the delay were sticking out too much in the mix and at other times it could hardly be heard. Sometimes the rest of your mix might swallow up most of a reverb tail but you might want that audible for a longer period of time. Compression can help you here too. So I guess my answer is “it all depends”. Listen to your effects in context, think about what you want it to sound like, and then ask yourself if compression is the proper tool to achieve that goal.

A tip I was once given was to solo your return tracks and mix all of your effects separately from your dry track. I’ve tried it a few times and I ended up with much cleaner mixes this way.


Thanks! I agree that it depends.

My primary interest is in capturing—recording—the modular system as an instrument-in-itself. So many musicians now include either effects modules, e.g., Echophon, or external pedals, e.g., El Capistan, that constitute an integral part of one’s sound. In the recording or mixing process, I’m hesitant to decouple these elements from the sound (to be applied later when mixing) just because it’s the conventional way of working with “effects.” I understand applying a bit of delay or reverb (after compression) to a mix of modular synth if it’s there to give the instrument some space, but this doesn’t make sense if these sonic elements are being performed—or manipulated—in real-time as part of a patch. Perhaps it also doesn’t make sense to then add compression?


In this case you could choose compression (or any other effect) as either an artistic choice (e.g. to emphasize certain aspects of the performance in agreement with the performer) or in order to provide by emphasis the sense of emotion the performance possessed in context - perhaps there was some sort of room response or audience energy that didn’t translate well to tape. Much like colour work done on a film, a lot of times you’re trying to help the captured medium translate emotion equally, rather than translate sonic waveforms equally. There can be a significant difference and it’s the mix engineer’s job to balance between sonically faithful and emotionally faithful reproduction.


Continuing the discussion from Mannequins W/ (with):

:slight_smile: i believe there’s actually quite a bit of knowledge on this forum…

it’s an organic process, a little like cooking, or making kimchi!

so many options, from 'do nothing, it sounds great!

to just give her a ring https://www.theguardian.com/music/2017/feb/16/sound-engineer-mandy-parnell-bjork-aphex-twin-brian-eno

and even then, she may not be the person you want
ie. find someone who can make it sound like you hear it
the best mastering cats know their gear, their rooms, their ears
and can make the program material sound as good as it can, to/for them

online options are available: https://www.abbeyroad.com/online-mastering

as well as the 'do-it-yourself rabbit hole…
a place to start is audacity
fade in/out heads and tails, apply compression (many options)
and 'you’re done!

:slight_smile: you know how at 'some coffee houses they have an espresso machine, and even some coffee beans that may/might be ok for espresso, maybe…but we almost can’t get an espresso out of there because of everything that happens after we order it (ie. push the button, it’s not their fault)

in my humble opinion, I’ve come to understand 'mastering
as a marker that helps the 'artist let go of the material
and out into the world…

Improving the signal chain

I’ve noticed that the really tricky part about discussing mastering is that it involves a ton of really squishy subjective judgement. There’s just no way around the need to use your ears and your brain and your preferences and your experience.


maybe it’s just because i’ve never learned the basics of recording beyond fiddling around with audacity and ableton lite, but i feel like the recordings i put together from my modular always lack the magic that they have in the monitors and they certainly don’t sound as “good” as the recordings in the Latest Tracks thread.

for the track last night in the w/ thread, i squished around with EQ, reverb, compression, etc. for a while until it sounded OK but ultimately i don’t know what i don’t know. any suggested resources for better recording and mastering would be very appreciated! i suppose i’ll start in the search tab here on lines.

on a related note [in that i don’t know what i don’t know] my assumption has been that there’s some way to make it work with my very minimal recording setup - a 1-in-1-out stereo interface and ableton lite.


Here’s a good place to start (in this order):

  • Start with a Mid/Side EQ. This will allow you to EQ the center and the sides separately. Ableton has mid/side options in the EQ8, but I would highly recommend getting the Brainworx BX Digital V3. This should bring a little more clarity to your mix: on the side channel, roll off the low end and shelf the high end. On the mid channel, shelf the low end but cut some low-mids. You also want all frequencies under 250 Hz to be mono; anything above that can start to get wider.

  • Next, add a bus compressor. I always use the Waves SSL compressor but Ableton’s Glue works too. Set a very slow attack and a fast release. Use no more than 2 dB of gain reduction. This will “glue” your mix together nicely.

  • Use tape and tube emulation. I use the Waves Kramer Tape and the SPL Twin Tube. This adds an analog warmth and the saturation helps to further glue your mix together.

  • This is kind of a secret weapon plugin: the SPL Vitalizer MK2T. It has the best stereo widening I’ve heard in a plugin and it also works as an amazing exciter. This will beef up a mix nicely if you use it properly.

  • Limiter. The Waves L2 or the iZotope Maximizer are two of the best. Do not use more than 2-3 dB of gain reduction here.

The key to using every one of these plugins is subtlety. You can destroy a mix by overusing them. Use small boosts and cuts on the EQ, and never use more than 2-3 dB of gain reduction on the compressor or limiter.
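For anyone curious what the mid/side split in the first bullet actually is: it’s just a sum/difference transform, and any M/S EQ decodes back to stereo afterwards. A minimal numpy sketch (assuming float stereo buffers; this is the general technique, not any particular plugin’s implementation):

```python
import numpy as np

def ms_encode(left, right):
    """Split stereo L/R into mid (sum) and side (difference) channels."""
    return (left + right) / 2.0, (left - right) / 2.0

def ms_decode(mid, side):
    """Recombine mid/side into L/R; the exact inverse of ms_encode."""
    return mid + side, mid - side

# Identical content in both channels is all mid; hard-panned or
# out-of-phase content shows up in the side channel.
left = np.array([1.0, 0.5])
right = np.array([1.0, -0.5])
mid, side = ms_encode(left, right)   # mid = [1.0, 0.0], side = [0.0, 0.5]
# EQ mid and side independently here, then decode back to stereo:
left2, right2 = ms_decode(mid, side)
```

This is also why “under 250 Hz mono” works: high-passing the side channel removes all stereo difference down low, leaving only the mid.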


exploring this really opened up control over placement in my own mixes. a clutch tool.


I do 4 things while mastering (I usually tweak these settings as I listen on different systems). I feel like I don’t really know what I’m doing, but this seems to work okay.

  • compression to get the RMS of the mix somewhere between -6 and -12 dB, usually matched to a reference I want it to sound something like. I usually do parallel compression with Ableton’s Glue compressor at the beginning of the master chain and a limiter at the end. I treat this more like an art than a science.
  • tons of notch EQs to pull down frequency peaks that stick out by more than 6 dB. I also listen on various systems (laptop speakers, home stereo 3-ways, near fields, headphones, earbuds), looking for the buzzy frequencies to pull down.
  • spatial enhancement using M/S eq and very light reverb. I like Ableton’s warm wide master chain’s eq as a starting point. I like the Valhalla plate with short wide settings very low in the mix.
  • export at 44.1 kHz with dithering (I usually work at 96 kHz with no dither until I get to the mastering stage).
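Measuring RMS like in the first bullet is simple enough to do by hand; a rough numpy sketch (plain RMS in dBFS, not LUFS, which adds weighting and gating):

```python
import numpy as np

def rms_dbfs(signal, eps=1e-12):
    """RMS level of a float signal (samples in -1..1) in dB full scale."""
    rms = np.sqrt(np.mean(np.square(signal)) + eps)
    return 20.0 * np.log10(rms)

# A full-scale sine has an RMS of 1/sqrt(2), i.e. about -3 dBFS, so a
# mix sitting between -6 and -12 dB RMS still has real headroom.
t = np.linspace(0.0, 1.0, 48000, endpoint=False)
sine = np.sin(2 * np.pi * 440.0 * t)
level = rms_dbfs(sine)  # about -3.01
```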

I would like to learn more about phase monitoring and issue fixing with my next release.

EDIT: Oh, also, I stopped letting Ableton normalize my stuff on export. It seemed to cause clipping in some situations, even though it technically shouldn’t. I just pull things up with the limiter at its standard threshold of -0.30 dB.


If it happens, then it is because it is technically possible. Probably something to do with inter-sample peaks (any resource about “true-peak” metering will explain it better than me).
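A quick illustration of those inter-sample peaks, using FFT upsampling as a rough stand-in for a real true-peak meter (ITU-R BS.1770 specifies a particular oversampling filter, which this sketch does not implement):

```python
import numpy as np

def true_peak_estimate(signal, oversample=4):
    """Rough inter-sample peak estimate via FFT upsampling."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    # Zero-pad the spectrum, then inverse-transform at a higher rate to
    # interpolate between the original samples.
    padded = np.zeros(n * oversample // 2 + 1, dtype=complex)
    padded[: len(spectrum)] = spectrum
    upsampled = np.fft.irfft(padded, n * oversample) * oversample
    return np.max(np.abs(upsampled))

# A sine at a quarter of the sample rate, sampled 45 degrees off-peak:
# every sample lands at ~0.707 of the true amplitude, so the waveform
# peaks *between* the samples. That reconstructed peak is what can clip
# on the analog side even though no sample exceeds full scale.
t = np.arange(1024)
sig = 0.9 * np.sin(2 * np.pi * 0.25 * t + np.pi / 4)
sample_peak = np.max(np.abs(sig))            # about 0.64
inter_sample_peak = true_peak_estimate(sig)  # about 0.90
```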

On the mastering topic:
in my day job we receive hundreds of CDs every year, ranging from self-produced EPs to world-class engineered albums. I’d say roughly 80% of them are way over-limited: difficult to listen through the entire thing without ear fatigue. Clipped. Rectangles of a mass akin to white noise.
Mind you, as a broadcast radio station, the tracks we choose to enlighten our listeners with are aired through “some more” processing (a legal requirement of maximum FM deviation, cohesive level riding, a bit of coloring vanity, and a tad of further clipping to bring the program level to a reasonable loudness relative to our hertzian neighbors). You can guess which tracks sound the best on the air, between those with an initial dynamic range of 1 dB and those that still dare to bet that the volume knob on your device exists.

tl;dr: with dynamic-range reduction tools, please be mindful that most of them will easily destroy your music.
Imho, there is more energy and subjective impact when the listener feels that elements have room to expand than when listening to squashed everything.

In my own practice, i can’t say i master anything, most of the time i add some multiband compression and fine eq corrections and that’s it. Then again my stance is that in electroacoustic music the relative levels, colors, and timbres of sounds are a major part of the artistic discourse, so “mixing” does not really happen either; it’s more that you work and rework until it sounds like you want it to, be it cohesive or totally disjointed.
Using the -23 LUFS standard as a loudness target level while working ensures that you almost never worry about a sound being too dynamic to fit in; you’re not fighting against the 0dBFS wall when trying to bring up a phrase here or there, you just do it and still have room for a bit more gain. The EBU R128 has been a very liberating thing for me.

Also, emulating the “old times analog” chain helps to avoid wild dynamics if unwanted: think of each stage of a chain (slightly overdriven preamp, tape machine, slow responding amplifiers, etc) as adding a little compression/peak limiting to the signal.


Thank you @ermina, these comments resonate with me greatly.


My current perspective on mastering is that it should feel “correct” to the listener in as many situations as possible. In other words, not excessively louder or quieter than other similarly-sounding things. Not sound awful on less capable systems. When those requirements are met, I feel like it helps me be more confident sharing my music, because I feel like what someone else hears is more or less what I’m hearing, and things don’t get lost in some translation that I can’t see or control.

I also tend to feel that over time, the music I’m making becomes more sophisticated as I learn more, and the same thing holds true for this part of the process. I try not to get bogged down by doing something wrong or “destroying” things, as ultimately, I know that by just letting the thing out there, I will have completed a process in which I learned things that I can apply to destroy things less in the future.

Using the -23 LUFS standard as a loudness target level while working ensures that you almost never worry

How does this work in your practice? I’m not quite sure I understand what you mean by “loudness target level”


How does this work in your practice? I’m not quite sure I understand what you mean by “loudness target level”

-23 LUFS is common in media where -10 dBFS is the absolute ceiling allowed (e.g. like how TV was). If you think of a loudness target as a glorified integrated RMS level, that basically means an RMS of about -13 dB relative to the maximum (0 = -10 dB full scale). Gating and weighting apply in the case of LUFS/LKFS/R128 to make it work for film/TV (very quiet moments), not so much for music (the gate would almost never close).

One thing I’ve found useful for music is Bob Katz’s K-12 scale, which gives a fairly pleasant sound, and is actually not so trivial to meet. K-14 leaves room for some more bass but the overall level is quieter.

Another overly simplified thing I do is calibrating a pair of VU meters so that -9 dBFS = my preferred target level (0 VU). -6 to +2 VU sounds healthy. Sadly, a lot of music on Spotify would max them out past +3.
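That calibration is just a fixed offset, which makes the arithmetic easy to sanity-check. A trivial sketch (the -9 dBFS reference is this post’s preference, not a standard):

```python
def dbfs_to_vu(level_dbfs, reference_dbfs=-9.0):
    """Read a dBFS level on a VU scale calibrated so that
    reference_dbfs (here -9 dBFS, as in the post) sits at 0 VU."""
    return level_dbfs - reference_dbfs

# A mix averaging -15 dBFS reads -6 VU (the "healthy" zone above),
# while a loudness-war master averaging -5 dBFS pins the needle at +4.
```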


I seem to always struggle to get my different modular voices to play well together and fit simultaneously in the mix. I’m looking for any tips or thoughts of what you all do to make each of your sounds fit together nicely.

A bit of background: I don’t use a computer or an external mixer. I have a xaoc praga which allows me to dial the levels of 4 inputs including using two effects sends. I understand this … limits … me somewhat, but I’d like to be able to get better at rudimentary mixing.

When I compose, I try to create a bass voice, a mid voice, usually a high arpeggiated voice, some vocals or otherwise using morphagene, and maybe some type of rhythm using rings. Despite this, I seem to still muddy everything up and it all globs together into something I’m not 100% happy with.

If I were to critique myself, I’d say I have the following problems: I use too much reverb which bleeds everything together. I trigger notes on the same steps. I tend to want everything to be heard at once rather than the subtle shifting of things to the foreground and background. I use similar (or close) octaves for the bass/mid voices.

Any ideas or recommendations? Tips/tricks? Stories where you overcame some of the problems I’m facing right now? Any insight you guys have I’d greatly appreciate!


One thing that will help a lot would be to avoid sending any low to mid-low frequencies into effects such as delay and reverb. Mult the tracks you want to put these effects on and keep one signal dry and the other filtered before sending to an effect. Then you will have control over the wet/dry and your wet will have most of the low end filtered out. This prevents a build up of mud in your mix.
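The mult-then-filter idea above translates directly to software too. A sketch using a crude one-pole high-pass on the send path only (the 250 Hz cutoff is a common rule of thumb, and the `reverb` call is a hypothetical placeholder for whatever effect you use):

```python
import numpy as np

def one_pole_highpass(signal, cutoff_hz, sample_rate=48000.0):
    """Crude one-pole high-pass: subtract a one-pole low-pass from the input."""
    coeff = np.exp(-2.0 * np.pi * cutoff_hz / sample_rate)
    out = np.zeros_like(signal, dtype=float)
    lp = 0.0
    for i, s in enumerate(signal):
        lp = (1.0 - coeff) * s + coeff * lp  # low-pass state
        out[i] = s - lp                      # high-pass = input minus low-pass
    return out

# "Mult" the track: the dry path stays full-range, only the send is filtered.
rng = np.random.default_rng(0)
dry = rng.standard_normal(48000) * 0.1
send = one_pole_highpass(dry, cutoff_hz=250.0)  # low end kept out of the send
# wet = reverb(send)  # hypothetical effect; mix wet back in with dry
```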


Whoa, what a topic! I could type forever, but here are some suggestions that don’t include buying new gear:

  • Try turning everything down. Some modules react differently (poorly) when they don’t have much headroom.
  • Try opening up your filters more - if everything is through LPFs it’ll sound muddy.
  • Try turning off your reverb until you’ve got the patch nearly there, then bring it in until it hits the spot.


@Simeon feel free to type forever! Great suggestions and all good points. As @photofractal mentioned mid-low frequencies sent through delay/reverb (which I do and need to get away from!) will make it sound muddy as well as too much LPFs. All things I didn’t think about, thanks for the good ideas.


Something else that I really want to try is using some kind of side-chain compression or ducking within the modular. The Intellijel Jellysquasher has a side-chain input, but you can also achieve a similar effect by sending your signal through an envelope follower, then an inverter, and then to a VCA. Now one signal will attenuate another, creating more space in a mix!
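That envelope follower → inverter → VCA chain is easy to prototype in software before patching it. A sketch of the same signal flow (attack/release times and depth are arbitrary illustrative values):

```python
import numpy as np

def envelope_follower(signal, sample_rate=48000.0, attack_ms=5.0, release_ms=100.0):
    """Track the amplitude envelope of a signal, like a hardware follower."""
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = np.zeros(len(signal))
    level = 0.0
    for i, s in enumerate(np.abs(signal)):
        coeff = attack if s > level else release
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level
    return env

def duck(signal, sidechain, depth=0.8, **follower_args):
    """Envelope follower -> inverter -> VCA: the louder the sidechain,
    the more `signal` is attenuated."""
    env = envelope_follower(sidechain, **follower_args)
    env = env / (env.max() + 1e-12)  # normalize to 0..1
    gain = 1.0 - depth * env         # the "inverter" stage
    return signal * gain             # the "VCA" stage

# e.g. duck a pad under a kick: pad_out = duck(pad, kick, depth=0.8)
```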