Maybe this is a general question, but I’m also curious how it applies specifically to the kind of things we’re talking about here (textural, reductive and experimental work). The talk about stepped pots in the 500 series thread also had me thinking about this in the context of stereo and mastering.

Someone please correct me, but I thought part of the function of stepped pots on things like mastering gear was a mix of reproducibility (it’s a lot easier to write down 3 or 4 numbers in a notebook) and being able to set both stereo channels identically. While mastering, say, a Trente Oiseaux type album is very different from mastering a rock band, how common is it to actually use identical settings on both channels? Can it potentially damage and flatten what was a very good or interesting stereo mix, or does it instead provide more balance and cohesion? And while it may not be so common, I’m also curious how a mono mix would influence someone’s approach to that.

While I’ve always been ‘mastering curious’ I haven’t really done much of it myself. I tend to build ‘mixing’ into my actual compositional/recording process, and usually I had such a specific sound in mind that I didn’t want anyone messing with it too much. The mastering I’ve had other people do for me was usually more clean-up: level matching and getting things ready for the plant, combined with getting a second set of ears (and some different speakers) on something, making sure I don’t have the bass totally out of control, things like that.

I’ve also worked primarily in mono for over 10 years now, since for me a lot of the action needed to (and still does) take place in the acoustic space: room reflections shifting overtones, head position, and so on. It was probably also a backlash against studying electronic music and finding everything too stereo-whooshy. But as my sounds have changed a bit, and trying new things is always nice, I’m wondering whether techniques like still mixing in mono but mastering in slightly tweaked stereo could offer more possibilities for creating something interesting, or just more sweet spots for home listening. And since more and more people listen on headphones these days, which can kind of kill a lot of the material I was making for a while, I’m wondering how mastering mono to stereo could help re-create a bit of that acoustic-space action that might otherwise get lost.

1 Like

For me, whilst both of these are important points, neither of them is the main reason I prefer switches to pots when mastering. The main reason is working fast. When you have limited choices, you just pick the one that works, fast. No second guessing. It’s far easier to hear/think, “Right, this is the best choice of the 11 or 22 positions I have” and move on, than with a pot, where you can easily get stuck in an infinitely descending loop of second-guessing and too much choice whilst twiddling… It’s why I love the TDR plugins too; I have them all in switched/stepped mode. I work much faster this way. If you have a few albums to master in a week, it becomes the primary consideration.

Reproducibility is great for recalls, but the longer I do it, the less of those I get. Was maybe more of a consideration for me a decade ago.

Matched stereo is for sure important. I have some gear that is not switched, and it is a pain running pink noise test signals through the chain with an analyser at the end, every time, to make sure the matching is within a fraction of a dB across the frequency range, but I still do it. :slight_smile: I’m thinking of sending one of my EQs back to the maker to have them swap out the pots for matched Elma switches; I think it would be worth it for me. I like things to be as equal as possible in stereo, but there is also room for the fact that slight variances L to R can make for a larger and more interesting stereo image, and I do know a few people who deliberately set the L and R EQ differently for that exact reason (although I don’t do it myself).
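
If anyone wants to automate that kind of check, here is a rough Python sketch of the idea, assuming you have recorded the same pink-noise burst through the left and right channels of the chain (the file names and the 8192-sample window are just placeholders):

```python
import numpy as np
import soundfile as sf
from scipy.signal import welch

# Hypothetical captures of the same pink-noise burst through each channel of the chain
left, rate = sf.read("chain_left.wav")
right, _ = sf.read("chain_right.wav")

# Per-band power spectra, then the L/R level difference in dB
f, p_l = welch(left, rate, nperseg=8192)
_, p_r = welch(right, rate, nperseg=8192)
diff_db = 10 * np.log10(p_l / p_r)

audible = (f > 20) & (f < 20000)
print(f"max L/R deviation: {np.max(np.abs(diff_db[audible])):.2f} dB")
```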

That is the very definition of mastering, for me, rather than the “two buss processing” it has seemingly come to mean on the internet over the last few years. Many can make a single track sound louder and better with a variety of techniques, but ensuring good translation on a wide variety of systems, and balancing an entire album timbrally, dynamically, and with the correct spacings, to enhance the emotional impact the artist intended, is an art form where the 10,000 hours rule applies, IMHO.

Mastering is a totally different mindset and technique from recording and mixing. I record and mix one or two tracks a year, but I’ve been mastering most weekdays for the last decade.

As for mono, I recommend you start experimenting with stereo again; it’s a wonderful-sounding array that has stood the test of time, and can sound incredible in a nice room with a good system. There’s very little that can be done at the mastering stage to make a mono track sound more stereo without it completely falling to pieces. I’ve done the “stereo shuffling” EQ technique on mono masters for various clients in the past, and they have consistently said “Nah, we’ll stick with mono”. There’s that, or just slapping some stereo reverb on, but I’d recommend thinking in stereo from the outset if you want to achieve a nice stereo master. Maybe worth reading up on the L-C-R technique and how things were done in the early 60s, and trying to implement it in your own work, for an easy starter.

6 Likes

Revisiting an archive of live concert recordings (of various quality), and thinking about preparing some of them to be released. Any tips, or even recommendations for professional mastering?

1 Like

I do a bunch of my own live recordings and the things I’ve learned, which are probably not a revelation to anyone but I’ll put here anyway:

  1. Do whatever repair work you need to do first—coughs, chair squeaks, heating/AC units, etc. iZotope RX is your friend.

  2. If you have multiple mics, do not be afraid of abandoning some if they don’t work. On my latest live album (Upstairs at Viteks, on Bandcamp) I had a stereo recorder, but someone sat right next to it, and on that side it was going to be a nightmare to soap out the sound of them breathing/moving, etc. Mono recordings sound great too! :slight_smile: You can use some tricks, stereo verbs, etc., to make it more dimensional if you like.

  3. The order of songs on the album does not have to be the same order as in the performance. Pick the order for the album that works for the entire flow of the album listening experience.

  4. Make a decision about whether you are creating an “accurate” document of the live event (which might negate the item above) or an idealized version of a live show or if you’re trying to hide the fact that it’s live and pretending it’s a studio album. These will have an impact on how much of the audience noises you leave in, whether you bring in creative processing and reverbs, edits etc. Be mindful of the source recording when you make this decision: transforming a live recording with lots of audience noise into a studio recording is laborious.

10 Likes

I mastered some live recordings for a member here recently, he was happy. :slight_smile:

3 Likes

hello everybody here! first post. just wanted to chime in regarding my own approach to mixing and mastering… when it comes to my own work, I do it myself. basically, in my experience everything starts with arrangement/composition, and then mixing, which is 50% of the job done :slight_smile:
I am no expert, but I have been doing this for some time and have learned to just trust my instincts when it comes to mixing and mastering. for the past two years I have been using an analogue mixing desk, which has hugely improved things and opened new ways for music production and mixing, most of which I now do on the console.
for mastering I use WaveLab with its own plug-ins - I don’t use anything third party, except for a Waves reverb.
I’d say decent monitoring is essential, in the sense that you know very well how it translates in your studio space, plus a good pair of headphones is a must! I had a guy measure my room acoustics, so I will be installing some treatment hopefully soon :slight_smile:

1 Like

I have some questions about mid-side… I have been using txn “nearness” and x2 cold macs to do some mid-side processing, but then recently had the idea of using txn as a mid-side mixer and then using only 1 cold mac to “decode” to L-R… for example:

We will call one side of TXN “Mid” and the other side “Side” instead of L & R (if you’re not familiar, the nature of the module is that each input is loudest at the output it sits closest to and quietest at the one furthest away, but every input is audible to some degree at both outputs; a fixed-pan mixer, basically):

Bass drum to mid-most input
pinged filter to center input
sample to side-most input.
Side Out goes to a delay then attenuator then FADE input on Cold Mac (present at left out, inverted at right out)
Mid Out goes to attenuator then OFFSET input on Cold Mac (present at both left and right outputs).

Now, this feels stereo to me, but I’m not entirely sure how, or whether it’s some sort of aural trickery (certain phase things with three sisters have felt stereo to me when I was actually listening to mono signals; I believe that had to do with mixing in the phase-inverted high out, though apologies, I can’t recall the patch off the top of my head). The addition of a delay on different inputs at different levels might be giving the illusion of space, with some phase inversion helping out, I suppose. The sample and pinged filter are present in both ears, and there’s nothing obviously panning them, so I’m not sure why this would work, but it feels like there’s stereo movement… I’m not quite sure how to analyze the audio, and I’m not sure I would understand it if I could.

The main question I come away with is: decoding mid/side from L/R gives L+R as mid and L-R as side. When re-encoding mid/side back to L/R, how does it differentiate which side is left and which is right? This seems like a silly question, I’m sure, but I just can’t quite grasp how Cold Mac’s circuitry could distribute some information to left and some to right just based on its phase… This doesn’t have to be about Cold Mac, it’s just the tool I happen to be using, but I assume that if there’s a simple way to do it, then computers and other audio hardware are probably treating mid/side similarly…

2 Likes

It’s just the math. Converting from MS to LR is the same operation as LR to MS (*):

M = L + R
S = L - R

L = M + S
R = M - S

So for a simple example, let’s say you have a 100 Hz sine panned hard right, with nothing on the left.

Mid = original sine
Side = inverted sine

If you don’t do any further processing and convert it back to LR, you have:

L = sine + inverted sine = sums to 0
R = sine - inverted sine = twice the original sine (just the original again once the usual ½ scaling is applied)

Does that make sense?

(*) (This assumes there’s no chance of clipping. Some converters will scale the values to prevent clipping, so MS-LR is a bit different than LR-MS.)
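
If it’s easier to see as code, here is a minimal numpy sketch of that round trip, with the ×0.5 scaling mentioned in the footnote applied on the way back (the exact scaling convention is an assumption; converters differ):

```python
import numpy as np

def lr_to_ms(left, right):
    """Encode left/right into mid/side (sum and difference)."""
    return left + right, left - right

def ms_to_lr(mid, side, scale=0.5):
    """Decode mid/side back to left/right; scale=0.5 undoes the doubling."""
    return (mid + side) * scale, (mid - side) * scale

t = np.arange(480) / 48000
sine = np.sin(2 * np.pi * 100 * t)        # 100 Hz sine
left, right = np.zeros_like(sine), sine   # panned hard right

mid, side = lr_to_ms(left, right)         # mid = sine, side = inverted sine
l2, r2 = ms_to_lr(mid, side)

print(np.max(np.abs(l2)), np.allclose(r2, sine))  # 0.0 True
```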

5 Likes

For sure! That part makes sense, thank you :slight_smile: I should clarify that what I’m more unsure of is what happens when I start with a mix that is not originally left and right, but is simply two channels of overlapping sounds processed separately. Perhaps I need to simply think this through mathematically:

okay, these percentages reflect the panning from TXN, and the italicized group represents the signal processed through the delay.

L = (.9a + .5b + .1c) + *(.9c + .5b + .1a)*
R = (.9a + .5b + .1c) - *(.9c + .5b + .1a)*

with raw signals only you would get:
L = 1a + 1b + 1c
R = .8a + 0b - .8c?

obviously the delay changes certain aspects of this but… am I tripping? is that correct? it doesn’t sound louder in one ear than the other and the middle input on TXN doesn’t disappear completely in the one ear… is that all delay coloration? maybe i need to do another test without any delay. sorry if i’m being thick…
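
A quick numpy check of the raw-signal arithmetic above (delay ignored, pan weights taken straight from the .9/.5/.1 figures):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 1000))   # three arbitrary "raw" signals

mid = 0.9 * a + 0.5 * b + 0.1 * c
side = 0.9 * c + 0.5 * b + 0.1 * a

left = mid + side                          # -> 1.0a + 1.0b + 1.0c
right = mid - side                         # -> 0.8a + 0.0b - 0.8c

print(np.allclose(left, a + b + c), np.allclose(right, 0.8 * a - 0.8 * c))  # True True
```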

2 Likes

I think in your math there you’re combining dry and wet signals in ways that aren’t quite valid.

If you’re taking a signal as mid, processing it and using that for the side, if the processing involves any sort of filtering or delay you almost have to consider it as a completely different signal.

Let’s take an extremely simple example: a 25ms beep, which you use as the mid signal. You delay that by 50ms and use the delayed version as the side signal.

Converting that to L/R:

L = (original) + (delayed)
R = (original) - (delayed)

So the original beep comes through just fine, sounding like it’s mono. Then the delayed beep comes through, but it is 180 degrees out of phase between the two channels, which kind of sounds stereo-ish but will cancel if you sum to mono, and is generally bad practice.
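
Here is a small numpy sketch of that beep example, just to make the mono cancellation concrete (sample rate, beep frequency and buffer length are arbitrary choices; the 25 ms / 50 ms figures come from the example above):

```python
import numpy as np

sr = 48000
n_beep = int(0.025 * sr)                      # 25 ms beep
beep = np.zeros(sr // 10)                     # 100 ms buffer
beep[:n_beep] = np.sin(2 * np.pi * 1000 * np.arange(n_beep) / sr)

mid = beep
side = np.roll(beep, int(0.050 * sr))         # 50 ms delayed copy used as the side

left = mid + side
right = mid - side
mono = 0.5 * (left + right)                   # fold-down keeps only the mid

print(np.allclose(mono, mid))                 # True: the delayed beep cancels entirely
```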

More complex signals might be another story though. A longer tone that mixes with its delayed version might correlate a little better and not be so bad. Or it might not, depending on the timing of that delay and how that impacts the phase.

I actually use mid/side this way all the time, even if it’s “illegal” :wink: Two different detuned oscillators as mid/side, or dry and reverbed, or VCA and LPG, filtered vs. unfiltered, or two different points tapped from a feedback loop.

But I check it in the DAW with Voxengo Correlometer, and use mid/side EQ to try to prevent any nasty phase problems while still having an illusion of a stereo field. Sometimes just a peaking filter cutting a couple of dB off the side and a matching one adding it to the mid will take care of things. Often highpassing the sides is a good idea.
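
If you don’t have a correlation meter to hand, a rough stand-in is the normalized correlation between L and R over short windows; this is only a sketch of the general idea, not what Correlometer actually computes:

```python
import numpy as np

def lr_correlation(left, right, sr=48000, win_ms=50):
    """Windowed L/R correlation: +1 = mono, 0 = decorrelated, -1 = out of phase."""
    n = int(sr * win_ms / 1000)
    out = []
    for i in range(0, min(len(left), len(right)) - n, n):
        l, r = left[i:i + n], right[i:i + n]
        denom = np.sqrt(np.sum(l * l) * np.sum(r * r))
        out.append(np.sum(l * r) / denom if denom > 0 else 0.0)
    return np.array(out)   # values sitting near -1 are a mono-compatibility risk
```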

4 Likes

ah okay, summing to mono made my sides completely disappear, this makes sense thanks for explaining and the tips :slight_smile: i’ll continue legally decoding L/R to M/S before going back to L/R at least until I master my illegal mixing techniques >:) then they’ll never catch me (nor hear my music, as it will all be perfectly out of phase…)

7 Likes

@renegog and @Starthief, glad to see others so fond of the careful abuse of M/S.

1 Like

I’m looking for some advice on what the ceiling level should be for mastered tracks for all things digital. Is it -0.5 or -1 dB, or something else? Any help or resources would be very much appreciated, as I’m working on my own music and trying to learn more about the mastering side.

I do -1 or -2 dB true peak. This is because the streaming conversions on Bandcamp (really the only service I upload to or care about) sometimes fail when I go any closer to zero. There’s material you can find about intersample peaks that relates to this. Ian Shepherd has a number of free PDFs that can help.

For LUFS I tend to mix to around -14 med/long and then bring it up to around -12 med/long in mastering. Still getting comfortable using LUFS but liking it so far.

Most of my music has a very wide dynamic range and I’m working to preserve that. Your music/goals may be different and require different target numbers.
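
If you want to sanity-check those numbers offline rather than staring at a meter, a short Python sketch along these lines works; it assumes the pyloudnorm and soundfile packages and a hypothetical file called master.wav:

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("master.wav")       # hypothetical file name
meter = pyln.Meter(rate)                 # BS.1770 loudness meter
lufs = meter.integrated_loudness(data)   # integrated loudness of the whole file

print(f"integrated loudness: {lufs:.1f} LUFS")
# Note: this is loudness only; sample peak is not true peak, so intersample
# peaks still need a separate (oversampled) check.
```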

1 Like

Obsidian Sound went with -0.5 dBTP on the couple of tracks I sent about a year back. It should be safe for most codecs. (Bluetooth aptX with the WAV not so much, but I haven’t had any trouble with Bandcamp’s MP3 conversion.)

I’m in the process of mastering my next release now, and aiming generally for around -12.5 to -10 LUFS and around -0.65 dBTP. Before mastering, I generally keep the peak around -3 dB.

1 Like

So I just finished and released an EP, and due to the pandemic, I’ve been forced to work a bit differently on this one. So here’s how I went about it -

I used the 1010 Blackbox to record all my samples, loops and stuff. Any field recordings I used, also went into the Blackbox. No processing or anything, it all went clean into the box.

Once in there, I wrote all the tracks in the 1010 Blackbox. Since it has some clever routing, I applied a lot of resampling through a set of Chase Bliss pedals for character, and I used the Blackbox’s own pan and volume for mix and balance, and its filter more as a one-knob EQ than as a filter. I did all this on a set of AKG 712s, which I’ve had for seven years. They’ve been in and out of the repair shop lately, so they’re giving up on me, but I’ll hold on to them for as long as I can.

Once it sounded all right in the cans, I double-checked the mix on my Genelec 8020Ds. In my kitchen. Yeah, I know. I positioned them in the exact same spot every time I checked the mix, cross-checked with tracks I knew had a good mix, and eventually I learned how they behaved in that space. I adjusted the mix in the Blackbox, and once it sounded good on the 8020s as well, I did final checks with a pair of Marshall headphones, iPhone earbuds, a portable Bose Bluetooth speaker, and a render which I sent to friends just to ask what it sounded like on their systems. Some final adjustments were made after that, and when all tracks were done and ready through this process, I repeated it, but this time through an SSL SiX.

The Blackbox allows for three stereo outputs, so I divided all its samples up across the three outputs, being very careful which ones I put on ch1 and ch2, since the SiX has a drum compressor and two-band EQ on those channels. I then summed it all in the SiX, repeated the kitchen process all the way through, and recorded the tracks when they sounded good enough to me.

After that, I labeled the tracks and uploaded them to Bandcamp, which is the only part a computer played in this whole process, by the way.

This shuffling around of equipment was frustrating, and the space was challenging to say the least, but it’s interesting to try and mix something when your kids are home and about, social life happens around you and you’re doing chores in your home while the music just keeps on playing. It’s not the ideal context for balancing a mix, but it is closer to the context where the result will eventually be enjoyed, and I actually found that it contributed to the workflow in some way.

8 Likes

Most streaming services prefer lossless submissions with a -1.0 dBFS true peak ceiling. This makes it far less likely to clip when encoded to lossy formats for streaming.

Personally, I’ve been using -0.3 dB with an ISP-aware limiter for over a decade, across thousands of tracks, and never had anyone complain about it. If a lower ceiling is a strict requirement (as with Apple Digital Masters, formerly Mastered for iTunes), I will of course use it.

It’s silly, IMO, to give recommended Integrated LUFS figures for average loudness, because every genre, artist, project, album and track are different. Get the track, or album, to where it sounds good, and don’t worry about the numbers.
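
For the intersample-peak side of this, a rough true-peak estimate is just a peak reading taken after oversampling. Here is a sketch using scipy; the 4× factor is an assumption, and a proper ISP-aware limiter does more than this:

```python
import numpy as np
import soundfile as sf
from scipy.signal import resample_poly

data, rate = sf.read("master.wav")                 # hypothetical file name
sample_peak = np.max(np.abs(data))
oversampled = resample_poly(data, 4, 1, axis=0)    # 4x oversampling
true_peak = np.max(np.abs(oversampled))            # rough intersample peak estimate

print(f"sample peak:      {20 * np.log10(sample_peak):.2f} dBFS")
print(f"approx true peak: {20 * np.log10(true_peak):.2f} dBTP")
```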

6 Likes

Thank you @Starthief, @Gahlord and @Gregg. I will go for -1 as the ceiling. I do have a follow-up question… are there any recommendations for a LUFS meter?

1 Like

I use this one:

Also, Sound Forge Pro 13 has a Statistics item on the menu that doesn’t have to “listen” in real time.

I alternate between mastering in Bitwig with a chain of plugins, and checking/editing in Sound Forge.

6 Likes

I use this one. It’s free, which is nice, and it also has a handy feature for auto-compensating to a given LUFS target. This isn’t always correct, as its analysis window isn’t per track but per window of time (which can be arbitrarily sized), but it might give you a good tool for negotiating between your ears, the math and your brain.

If you’re at all command-line competent, ffmpeg can now do loudness correction, and it gives you a wide range of options for your constraints in terms of range, peak and average. I made a command-line tool for doing this in Nim.
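
If Python is more your speed than the command line, the same kind of correction can be sketched with the pyloudnorm package (the -14 LUFS target and file names are just examples):

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("mix.wav")                             # hypothetical input file
measured = pyln.Meter(rate).integrated_loudness(data)
corrected = pyln.normalize.loudness(data, measured, -14.0)  # plain gain to hit -14 LUFS

sf.write("mix_-14_lufs.wav", corrected, rate)
# This is a pure gain change, so it can push peaks over 0 dBFS; check the
# true peak afterwards and pull the target down (or limit) if needed.
```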

4 Likes