Give Me Some

So the general advice seems to be: master below 0dB, either to leave headroom for the ME to work with or to avoid inter-sample peaks.

This has never made sense to me. Don’t MEs have the ability to reduce a 0dB track down to -1dB (or whatever they want), or is this kind of processing beyond them?

And the thing is, the tracks on every CD I ever ripped are mastered to 0dB, so what goes on?

As I see it, either the professional MEs who master these commercial CDs know jack shit about mastering, or else the advice to master below 0dB is…shall we say, misplaced?

I have never mastered to anything but 0dB and I haven’t suffered as a result (as far as I know), so what are you guys doing, and why, and can you give examples where it has made a difference?

…and if mastering below 0dB is the way to go, why are CDs mastered to 0dB?


I’m anxious to see some answers to this thread. Great question. Also, 2 points to @AJ113 for the clever title. Hahaha!

I’m pretty sure it’s a case of if you are a mastering engineer and somebody comes to you with that question, you know right away that they don’t even have a basic understanding of digital audio. After your 12,000th time of getting tracks that are exported heavily clipped because the guy who exported it knew nothing about anything, you have to start telling people to stay far away from the ceiling.

If you tell people to peak at -12dB, you are giving yourself a buffer against people not knowing what they are doing. You need 6dB buffer just to account for the fact that people will be peaking 6dB higher than they think they will, and another 6dB just for safety.

I would bet that if you took 100 mixing engineers and told them to export a song peaking at -12 dB, you would get a surprisingly high number of mixes that hit very close to 0.


OK, fair enough, so in your view it’s a headroom safety net for people who are sending their mixes to MEs, but what about us self-masterers? I think most of us here master our own recordings, so why would we want to master lower than 0dB when it’s clearly not the practice of the professionals?

@Danny_Danzi, I’d be curious to get your input on this too!

Beats me. I do know that Apple tries to push the whole Mastered for iTunes thing, which does involve not pushing your raw mix all the way up to 0dB, amongst other things. I don’t know how many mastering engineers bother with that. It’s been a long time since I’ve actually downloaded a song from iTunes and analyzed it.

I’d be curious to know if full-time mastering engineers bother with this at all or if it’s just part of the standard procedure.


OK, I’ll give you my take for what it’s worth.

First, check the files you are ripping. Once in MP3 format the levels chirp a different song, and the file attributes change as well. You need a wave file to determine what you’re really getting.

A ripped MP3 (as well as an MP3 you encode of your own stuff) will ALWAYS give you different numbers than the original. You can’t ever go by them. Remember…0 is clipping in the digital realm.

Now, realistically speaking, we have -0, which is right at the cusp of clipping, and then of course +0, which is clipping. There is no need to go that high and risk the clip. These days we try to stay at a final -0.3dB so we never run into an issue. That’s your best bet for your final master.
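To put those ceilings in concrete terms, here’s a quick sketch of the dBFS-to-linear-amplitude math in plain Python (purely illustrative, nothing DAW-specific):

```python
def dbfs_to_linear(db):
    """Convert a peak level in dBFS to linear amplitude (1.0 = digital full scale)."""
    return 10 ** (db / 20)

# A -0.3 dBFS ceiling shaves only a sliver off full scale,
# while a -3 dB mix level leaves roughly 30% headroom:
print(round(dbfs_to_linear(-0.3), 4))  # ~0.9661
print(round(dbfs_to_linear(-3.0), 4))  # ~0.7079
```

So the -0.3dB ceiling is a tiny amplitude concession; the -3dB mix level is a much bigger one, which is why it’s asked of the mix rather than the final master.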

As far as mastering guys reducing the volume of your material, no, that is destructive and is not the way to do it. You should mix out at no hotter than -3dB at all times. We can increase volume, but decreasing it means that if there are errors, all we did was reduce volume; we didn’t fix errors that may come from overs at 0dB.

You also have to leave us room to process, and it needs to be at a safe level. External compressors and limiters used by clients are not needed and are frowned upon by me. If you want to clutter your music with that crap, you master it, because you’re tying my hands and restricting what I can do. I can’t uncompress or unlimit.

Most people don’t know what bus glue is. They don’t even use the right compressor or know which to use in a 2-bus glue situation. I’ve walked away from more jobs than you can imagine due to crap like that. It’s glue that should be used, not freaking epoxy loaded with pumping and compression/limiting artifacts leaving me no room to master at all. Sometimes you can’t hear it…you’re too close to the material. But I hear it. Trust in your mastering guy, find another you can trust or do it yourself.

My rule is “-3dB or don’t send it to me.”


OK, that’s fair enough, so the main reason you want the recordings at -3dB is to ensure that the mixer hasn’t sabotaged his own recording by clipping it. I wonder how many mixers mix down to 0dB and then knock it down by 3dB before sending it to you… :slight_smile:

Are you saying that MEs master to -0.3dB as a rule? Because I don’t see any evidence of that. Although to be fair it’s a while since I got something contemporary. Or are you just advising us home recordists to do it?

I mean, if you’re mastering to a specific LUFS level then mastering to -0.3dB means losing 0.3dB of your dynamic range, and I’d prefer not to do that.

One thing I still can’t wrap my head around fully: I master at 0dB. While I’m fully aware of the potential issues of 0dB, I’ve never actually heard anything to make me think that something is amiss. So I’m always thinking: OK, I understand the technical reasons, but does it really make any difference, or is it just one of those things that has been handed down over the years but is in fact more of an ‘emperor’s new clothes’ thing?

I don’t think it’s to do with making sure the mix engineer hasn’t clipped it.

If you look at most MEs’ process chains, GAIN is the last thing in the chain:
EQ cuts, compress, EQ boosts, stereo enhance and then GAIN.

My question would be the reverse: what harm does it do to offer a mastering engineer the headroom he says he needs to work with? It doesn’t degrade the mix, it doesn’t harm it, so…why not?
I don’t think I’m adding much here.

Intersample clipping is one of those issues that I don’t think is an issue at all. In fact, by definition, any distortion that takes place in between samples is going to have all its harmonics above 22kHz anyway, unless your DAC isn’t upsampling, and I don’t think non-upsampling DACs even exist anymore.

Now, conversion to mp3 is a different story, and it’s easy to see how much a song changes by converting it to mp3. All you have to do is set your limiter to -6dB ceiling or whatever, export it as mp3, then re-import it and see how your peak levels change.

Then you can put a hard clipper at -6dB (or whatever your original ceiling was) and A/B it with the clipper on and off to see if it’s causing any damage. At least that way you can get a more objective answer.
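For anyone who wants to try that A/B offline, the clipper half is trivial. A minimal sketch in plain Python (the overshoot numbers below are made up for illustration, not measured from any real mp3):

```python
def hard_clip(samples, ceiling):
    """Clamp every sample to +/- ceiling (linear amplitude)."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def peak(samples):
    """Absolute peak of a block of samples."""
    return max(abs(s) for s in samples)

# Pretend these peaks crept above a 0.5 ceiling (about -6 dBFS)
# after an mp3 round-trip:
decoded = [0.31, 0.55, -0.52, 0.49]
print(peak(decoded))                  # 0.55 -- the mp3 overshoot
print(peak(hard_clip(decoded, 0.5)))  # 0.5  -- clipped back under the ceiling
```

The interesting part of the experiment is then listening to the clipped version against the unclipped one, as described above.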

Personally, I think it depends a lot on the style of music. Rock and anything else where distortion is a huge part of the sound can get away pretty much unscathed.


It is? I think it varies from engineer to engineer and song to song. But if applying negative gain is required to get the song at a level where they like their gear working, I’d expect any mastering engineer to be able to apply that gain in about 1 second. This should be the most trivial thing a mastering engineer can do.

Oh I laughed :slight_smile:

Except that there is an ME here telling us that’s exactly what it is.

Most mastering guys master to the level they feel is best for the material. I do that as well, but 7 times out of 10, -0.3dB is right where I end up…

Are you really concerned with -0.3dB of resolution loss? You won’t notice it, nor is it messing with any resolution or major volume. You’re taking 3/10 of a dB. What you’re doing by staying there is also allowing the math that everyone is always so concerned with to stay sane. That’s a whole other discussion if you want to go there…and that is, what all the numbers mean in a file, especially when you hit 0dB.

All of the above said, there are other things to keep in mind. If you are recording at 24/48 and master at 0dB, when you convert to 16/44 how do you personally control clipping?

At the end of the day, if you want to master at 0dB and you see no reason not to and are happy with your work, no explanation from anyone is necessary. There are so many other factors I can add in here…but in actuality, what you believe and what works for you is what is important.

When I master something for someone…the math is correct, I use my ears, but I also do my best to never degrade the audio while maintaining consistency. Loud masters cap off. Masters done right go up and up. This is another reason to keep the numbers sane. Once you reach a max sample value of 32767, you’re capped and clipped even if you don’t hear it. There’s just no reason for it to be like that. Hence why we have so many ruined masters. Do what works for you. That’s what’s important.
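To put a number on that 32767 point: in 16-bit audio the largest positive sample value is 32767, so peaks at 0dBFS sit right on the cap, while even a small ceiling keeps them off it. A rough sketch (the function name is my own, just for illustration):

```python
def peak_sample_16bit(ceiling_dbfs):
    """Largest positive 16-bit sample value a given peak ceiling produces."""
    return int(32767 * 10 ** (ceiling_dbfs / 20))

print(peak_sample_16bit(0.0))   # 32767 -- pinned at the 16-bit cap
print(peak_sample_16bit(-0.3))  # 31654 -- safely below the cap
```

That gap of a thousand-odd sample values is the “sane math” being described: peaks never pile up against the format’s hard limit.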

OK, this is attributed to Stephen Massey (Massey Plugins). I’ve no reason to believe it is not by SM, but I can’t seem to locate the original. Regardless, it effectively trashes the -0.3dB philosophy:

There’s this theory in mastering that you should leave a tiny bit of digital headroom in your brickwall limiter’s output. For example, if you’re bouncing down a mix using the L2007 last in the chain, the theory says you should drop the max. output control a fraction of a dB. The precise value varies by opinion: -.3 dB, -.1dB, -.5dB, etc.

The reasoning is that the analog signal might clip at its loudest peaks, once reconstructed at the output by the digital-to-analog converters in the listener’s playback device. That is, if the analog circuitry hasn’t been well designed. This phenomenon has been labeled “inter-sample clipping.” It’s a reasonable idea based on sound electrical engineering analysis. It probably happens in rare cases. There was a paper published a while back demonstrating it was possible in real-life CD players.

The company I once worked for, TL Labs, designed a metering plugin to model this process. You can read more about inter-sample clipping in the user guide: TL Labs Plugins Guide.

But, personally, I’ve never bought into this story in its entirety. (Maybe you can tell from my other postings, but I suffer from chronic skepticism of any and all dogma.)

This theory begs deeper questions, such as: Is this clipping at all audible above the massive distortion already done in by limiting in the first place? My guess is: no. Significant oversampled clipping goes hand-in-hand with substantial brickwall limiting levels. After blowing out the music with limiter distortion, it’s a little too late to start fretting like an audiophile.

Very few folks even use a CD player anymore, which is the original foundation of this theory. If someone’s still using a CD player, they’re probably an audiophile and own a well-designed model.
Or, they’re living in the past, don’t care much about sonic quality and won’t be buying your modern, smashed CD anyway. If the listener is using an MP3 player, computer, or other media player, then the gain scaling for the volume control sometimes happens in the digital domain well before reaching the digital-to-analog convertors. This means the output is nowhere near the power “rails” of the analog circuitry. (Furthermore, what impact does MP3 compression generally have on peak levels?)

Most perplexing is the promotion of such a minuscule “headroom” value of -0.3 dB, etc. This isn’t going to get you any audible decrease in distortion in the event of actual clipping. You’re not going to hear the tiny 0.3 dB tip of the sound wave lost. But, you don’t have to take my word: insert a gain plugin on your master fader last in the chain. Go for broke: set it to +0.5 dB and listen to your mix. How far can you push it?

If you’re genuinely concerned about fidelity, then I say do something more substantive and give us 2 or 3 dB of headroom. Otherwise, you’re just playing a psychological game, buying some emotional comfort from the self-deluding marketplace of audio engineering groupthink. The TL Labs meter was always a curious irony to me. Winning the loudness war is mutually exclusive of achieving fidelity, but here was a gadget trying to sell us both.

Anyhow, if you click the “max. output” label on the purchased version of the L2007, a text entry box will appear and you can punch in an exact value. A little secret: -0.5 is usually a little bit grainy and digital-sounding, but -0.6 can be rapturously warm and fuzzy. But, you’ll have to upgrade from the demo version to find out!


Yeah, but then again, you could just as easily ask: are you really going to hear an improvement if you only drop by 0.3dB? My max ISPs are often at 1.5dB, sometimes 2dB. If I was going to do something about them, a drop of 0.3dB wouldn’t be the solution.

That’s easy: I record at 44.1kHz :slight_smile:

Yes, of course, but that doesn’t mean I don’t want to learn, to understand. I have a curious mind, and if I find a better way of working as a result of my curiosity I will use it. I am not burdened by pre-conceived philosophies and dogmas.

Actually Boz, you are correct. It depends on the ME. I’m a “gain last” mastering guy. That said, it’s not something I do in “one second”. It happens over a few steps. Just to boost the gain in one shot would be a little too abrasive. What I like to do is work my way up a little at a time, first by hand-editing every peak and then automating to make the audio consistent, keeping it at around -3dB. I will most likely end up at -2dB or around there by the time I am done processing with compression/limiting or whatever else needs to be done. CSR, dithering to 16 bit and limiting are last on my list as the final polish. When this is done, the only thing I may need to do is remove a very small DC offset of, say, 0.03% max. 3/100ths of a percent is not even worth fixing…but at this point, if everything sounds good to me…I like to keep the math numbers correct. Why, and does it matter?

Yeah…because Bob Ludwig and Bob Katz always have great numbers, and I have noticed that when everything is aligned just the way it should be, it makes for the best work I have ever done. Does any of it make sense if you are not an ME? I’d say no…just use your ears and be done. But I know that when I review another ME’s work, if it sounds like ass and the numbers are all off…that usually tells the story as part of the problem.

The numbers are like you designing a plugin with a GUI that looks like an incredible piece of 3D hardware, and me designing one that looks like a first-grader drew stick figures of something. Or…you wearing a perfectly pressed suit and me wearing one that was stuffed in a drawer. The numbers help the sound to be more correct, but they are also an “image” showing that someone cared enough to pay attention to them. They are not the be-all and end-all of fixing audio…but I can look at the numbers in a mix and know what went wrong or right.

For anyone else reading: Sometimes the numbers are right and the master is still not good. That’s because someone didn’t do the other stuff that needs to be done. They all walk hand in hand. When we look at a Bob Katz or Ludwig master by the numbers, they are just about always consistent. Left side and right side min and max sample values are either spot on or so close, it’s not worth noting. DC offsets are completely removed, and you’ll NEVER see a 32767 sample value (which you will get by mixing out at 0dB) in their work. Why? I don’t know…someone should write to them and ask. I learned from Katz and did what I was told. Sometimes I questioned things…other times I just did the stuff and listened for the results.

It’s like dithering…I’m still not convinced we need it. I don’t care what the math numbers come out to. I can mix a tune and CSR and go to 16 bit without dithering and not hear a difference in my audio. But…someone insisted that we should do that so the numbers stay correct, and the entire industry is doing it. So hey…I do it too even though I can’t hear a difference, yet I can hear two gnats getting it on in the other room with cans on and my U-87 in line for me to sing.
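For what it’s worth, the textbook version of plain dither (without the noise shaping that some limiters add, which is what’s being described as audible) is TPDF: add roughly one LSB of triangular-distribution noise before rounding to 16 bit, which decorrelates the quantization error from the signal. A bare-bones sketch, purely illustrative:

```python
import random

def dither_to_16bit(sample, rng=random.random):
    """Quantize a float sample in [-1.0, 1.0] to a 16-bit integer
    using plain TPDF dither (no noise shaping)."""
    scaled = sample * 32767
    # The difference of two uniform randoms has a triangular PDF over +/- 1 LSB
    tpdf = rng() - rng()
    q = round(scaled + tpdf)
    return max(-32768, min(32767, q))
```

Stub the random source out (`rng=lambda: 0.0`) and this collapses to plain rounding; the dither only shows up statistically, on very low-level signals, which is exactly why it’s so hard to hear on a full mix.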

All said and done…MEs like to work a certain way. Sometimes it’s justified, other times not. I want to work on something and make it sound wonderful…fully dynamic, nowhere near clipping, and I never want the mix to cap off due to excessive volume levels or hyper-compression/limiting. Keeping these numbers a certain way helps me achieve that without error and without question or a guessing game. It makes all my jobs seamless and transparent when need be…aggressive and face-melting when need be.

Not having the room to do this makes what I do restricted. Being restricted or having my hands tied due to someone that thinks they know post processing better than I means they should have never sent it to me. Think about it. That’s not a statement that says I know more than anyone else…it says that if you don’t know what an ME does or how he or she makes music sound the way it does…don’t add things that would hinder their work. Mixes too hot, limiters and compression overdone…you’re attempting to do my job…and quite poorly most of the time. At least I’d save people the money and tell them “you’d need to do this that and this if you want me to master this” over just taking their money and making a turd sound more like a turd. :wink:


It’s not the improvement. You are pushing things so far, there is nowhere left to go. Ever turn your mix up? How loud can you go before you get distortion? Try the same thing using my method: -3dB, master as you would…end up with a final limiter setting of -0.3dB. See if you hear a difference. If not, you and I will talk about limiters and the proper settings for this. It’s HUGE to choose the right limiter as well as the right settings.

And you’re worried about losing dynamics and resolution? Hahaha…ah, AJ my friend…switch over to the dark side and record at 24/48. You’ll gain headroom, you’ll gain quality and you’ll be sorry you ever recorded at 16/44. Will it be a huge difference? It depends on your recording abilities as well as how good your interface and converters are. Consumer interfaces show a big difference in bit and sample rates because…well, they don’t have really good converters and they want you to feel like you are actually hearing a difference just from changing the sample clock. With real interfaces, changing the clock source will do nothing until you actually record a project. As you add instruments…this is where the real savings, quality and headroom come in.

The best way I can describe 24/48 compared to 16/44 is…24/48 is wide, open, has room for anything…I can’t mess a mix up with frequency masking. Even if something is too low in a mix, you will still hear it completely. 16/44 sounds like I’ve lost some spaciousness…less room in the mix, and it’s easier to lose instruments due to frequency masking.

Keep in mind…when you are working in the 16/44 realm, some of your processing may alter things a bit. This isn’t always the case when working with 16/44…but there are definite issues that can arise that may degrade your audio. There are some plugins that just sound and behave like crap. When you process at 24/48…the plugins and all your processing stay true, and when you bounce down to 16/44 you lose nothing.

Completely understandable. Some things I can elaborate on; others…I will not sit here and tell you I know everything. Some of it is just “stuff you do because the industry is doing it.” Like I mentioned in the other post about dithering…other than some noise shapers in limiters reacting too harshly, I can’t tell a dithered mix from an undithered mix. The noise shaping is what alters the sound to my ears…if I just convert sample rate (CSR) and go down to 16 bit…all the holes they claim remain in the math numbers…aren’t heard by me. Some of it is “industry says so and does so”; other things I believe are hype and marketing. :wink:


Well, I didn’t say anything about resolution. I record at 24 bits like you. I haven’t heard any reasonable argument to record at 48kHz. I don’t believe that anyone can actually hear the difference, but I can certainly hear the difference when 48kHz is converted to 44.1kHz (i.e. it’s poorer quality), and that’s the main reason I record at 44.1kHz.

Just to clarify: I work in 24/44.1, but I render to 16/44.1

As you say, -0.3dB is probably not going to change the world in terms of dynamic range, but like you, I like to see the numbers right and I want to maintain standards. If I’m mastering to -10 LUFS then it’s -10 LUFS, not -9.9 or -10.1. So if I master to -10 LUFS with the fader down by 0.3dB I’ll feel cheated out of 0.3dB of dynamic range. But more importantly, I try to compete with commercial recordings; that’s the standard I’m aiming for, and I don’t see any commercial recordings that are mastered to -0.3dB. Maybe I just haven’t analysed enough of them?

I’m afraid I don’t really understand this. I’m not really into the technicalities of the digital realm, can you spell it out in ‘mastering for dummies’ language?

I beg to differ. As I said above, you definitely lose something when you convert 48kHz to 44.1kHz. It’s quite audible.

Is there a plug-in that simply turns down each (potential) instance of an inter-sample peak? If not, why not? Wouldn’t that be the solution? Boz?
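Tools that do this exist under the name “true-peak limiters”, and the detection half is essentially oversampled reconstruction. Here’s a rough pure-Python sketch of the idea: estimate the reconstructed waveform between samples with a Hann-windowed sinc and track the maximum (the oversample/taps parameters are arbitrary choices for illustration, not taken from any particular plugin):

```python
import math

def intersample_peak(samples, oversample=4, taps=16):
    """Estimate the true (reconstructed) peak of a sampled signal by
    evaluating a windowed-sinc interpolation between the samples."""
    n = len(samples)
    peak = max(abs(s) for s in samples)  # ordinary sample peak
    for i in range(n * oversample):
        if i % oversample == 0:
            continue  # original sample instants are already covered above
        t = i / oversample  # fractional sample position
        base = int(t)
        acc = 0.0
        for k in range(base - taps, base + taps + 1):
            if not 0 <= k < n:
                continue
            x = t - k  # distance from this tap, in samples
            if abs(x) >= taps:
                continue
            # Hann-windowed sinc reconstruction kernel
            w = 0.5 * (1.0 + math.cos(math.pi * x / taps))
            acc += samples[k] * math.sin(math.pi * x) / (math.pi * x) * w
        peak = max(peak, abs(acc))
    return peak

# A full-scale sine at fs/4, phased so every sample lands at ~0.707:
sine = [math.sin(2 * math.pi * 0.25 * i + math.pi / 4) for i in range(64)]
print(round(max(abs(s) for s in sine), 3))  # 0.707 -- what a sample-peak meter reports
# intersample_peak(sine) comes out close to 1.0 -- roughly 3 dB higher
```

A true-peak limiter runs this kind of detector and then applies gain reduction against the reconstructed peak instead of the raw sample peak, which is why it can catch overs that an ordinary limiter passes straight through.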