I haven’t come across this discussion anywhere, so let’s have it here.
Assuming you measure the loudness of your masters, what happens if a recording has a fade? I usually apply fades in mastering software after the mix because there's more flexibility there, plus I don't send everything through the two-bus in the DAW, so a fade applied at that stage wouldn't work anyway.
So I got my track to the required loudness, then applied the fade. Then I thought “hmmm I wonder…”. So I re-measured the newly faded and saved file and found to my horror that it was now 0.2 LUFS quieter.
On reflection, no great surprise there, because the fade had affected the loudness, but here’s the nitty gritty: in terms of the real world, what is the correct loudness?
Should I re-master to an extra 0.2 LUFS to compensate for the fade that will come later? Or should I leave it as it is, using the logic that most of the song is at the ‘correct’ loudness?
Are any of you doing your fade-outs in the DAW? What if you are getting an incorrect loudness reading due to the fade-outs? Are you worried? Do you feel cheap and dirty?
Yup, this is the NEW loudness war - tricking the normalisation. This is why quiet breakdowns in the middle of songs are so popular. This is something I figured out when I first got my LUFS meter - The quieter your quiet sections are, the louder you can push your final choruses without exceeding your target LUFS… Shhhh… just don’t tell anyone!
Actually, there’s even an art to the quiet sections. The BS.1770 algorithm is now at version three: BS.1770-3. The European chaps updated it and the US adopted it, so it’s the version used universally — well, make sure your LUFS meter is using it, because version 3 adds a gate that ignores material below a certain threshold. This was added after a US advert was found to sound very loud, appearing to break the CALM Act, while still measuring as compliant: the voice-over could be very loud because it was balanced by an equal amount of very quiet material.
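For anyone who wants to see what that gate actually does, here's a rough sketch of the two-stage gating in a v3-style meter. To keep it short I've assumed mono input, skipped the K-weighting filter entirely, and used non-overlapping 400 ms blocks (the spec uses 75% overlap), so treat it as illustrative only, not a spec-accurate meter:

```python
import math

def block_loudness_db(block):
    """Loudness of one 400 ms block from its mean-square power
    (the BS.1770 formula, minus the K-weighting stage omitted here)."""
    ms = sum(s * s for s in block) / len(block)
    if ms == 0:
        return float("-inf")
    return -0.691 + 10.0 * math.log10(ms)

def integrated_loudness(samples, rate):
    """Sketch of BS.1770-3 gated integrated loudness (simplified)."""
    n = int(0.4 * rate)  # 400 ms block length in samples
    blocks = [samples[i:i + n] for i in range(0, len(samples) - n + 1, n)]
    louds = [block_loudness_db(b) for b in blocks]

    # Stage 1: absolute gate at -70 LUFS drops outright silence.
    stage1 = [lb for lb in louds if lb > -70.0]
    if not stage1:
        return float("-inf")

    def power_mean(ls):
        """Average block loudnesses in the power domain, back to dB."""
        return -0.691 + 10.0 * math.log10(
            sum(10 ** ((lb + 0.691) / 10.0) for lb in ls) / len(ls))

    # Stage 2: the v3 addition — a relative gate 10 LU below the
    # stage-1 average, discarding the very quiet passages.
    threshold = power_mean(stage1) - 10.0
    gated = [lb for lb in stage1 if lb > threshold]
    return power_mean(gated)
```

Run a loud signal with a long quiet tail through it and you can watch the relative gate throw the quiet half away, which is exactly why quiet sections only "help" you down to a point.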
This is why a fade-out may behave differently to quiet parts in the middle of a track. A fade-out can be extra long, and obviously a large part of it will sit above the threshold, thus beating even BS.1770-3.
Meanwhile, despite the facetiousness, the question I posed in the OP is meant seriously: should you measure including, or excluding, the fade? Technically you should measure including the fade, because that is what loudness normalisation algorithms will do, but in terms of keeping your album consistent, is it better to measure without the fade?
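You can see the direction of the effect with a toy calculation — a crude, un-gated mean-square "loudness" of a constant signal, with and without a long linear fade-out tacked on. This isn't a real meter, just arithmetic to show why the faded file reads quieter:

```python
import math

def crude_loudness_db(samples):
    """Un-gated mean-square power in dB (BS.1770-style offset).
    Deliberately simplistic — no filtering, no gating."""
    ms = sum(s * s for s in samples) / len(samples)
    return -0.691 + 10.0 * math.log10(ms)

body = [0.5] * 8000                                 # the song proper
fade = [0.5 * (1 - i / 4000) for i in range(4000)]  # long linear fade-out

without_fade = crude_loudness_db(body)
with_fade = crude_loudness_db(body + fade)          # fade included
```

With these numbers the faded version measures roughly 1 dB lower — much bigger than my 0.2 LUFS because this toy fade is a third of the "song", but the direction is the point: include the fade and the integrated figure drops.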
Interesting! I knew it wouldn’t be long before someone worked it out and exploited it…
If you’re mastering for an album, then I would think appropriate perceived loudness from track to track would be the arbiter - and for the most part, I imagine that would be an artistic decision dictated by the “flow” of the album. On the other hand, if it’s just a single, then get the most out of the algorithm.
During my evening runs I listen to podcasts, and some of the mastering engineers interviewed mention that they now provide multiple masters of tracks for clients for this very reason.
Have you ever had any call to master for vinyl? I seem to recall you mentioning selling vinyl copies of your own band’s album. Apparently mastering for vinyl has its own unique set of parameters/restrictions… Care to elaborate?
As far as I know Andrew, my original CD masters have been used on all of our vinyl.
The main issue with vinyl is bass: too much of it, or a L/R imbalance, and the needle will jump out of the groove. It’s also recommended to check the mix in mono, because big L/R differences can likewise make the needle jump. Although personally I never bother with this namby-pamby mono nonsense…
You may remember some stuff I did for a client, Jackson D; he released one of those tracks on a 7-inch single. I was aware he was going to do this, so I advised from the outset that we had to be careful with the bottom end. That’s about the nearest I’ve come to specifically mastering for vinyl.
But it’s a good point, vinyl is on the rise and mixing/mastering for it may become a separate skill in itself.
Yep, even bog standard stereo effects can be an issue.
Having said that, I’d like to believe that if any of my masters were an issue for vinyl, the record company would at least give me first option on re-mastering. In the absence of that, I’ve got to assume that my recordings are within acceptable parameters up to this point, and they have plenty of stereo effects, and no mono checking.
Is only one of your ears certified? What happened to the other one?
Wow, none of this had ever occurred to me… just from the standpoint of a one-nerd operation, I would omit the fade to estimate the perceived loudness simply because the bulk of the song is what one is typically attempting to understand. When it is fading out, it’s obviously getting quieter. I would not want to confuse my understanding of the loudness level in that way.
But in the commercial world? Obviously much different considerations in play…
On a serious note. I detest fades and use them as little as possible. But you raise a very valid point.
But I always do my loudness check when I master, not at the mix stage.
So if I create a fade while mastering (like you, I do my fades at that stage), I then check the loudness levels.
IE. My loudness check is the final check on masters.
Does that make sense, and does it affect your theory?
There ya go.

[quote="Coquet-Shack, post:15, topic:1501"]
Does that make sense, and does it affect your theory?
[/quote]
Well, it’s certainly how I discovered the apparent anomaly. The question is: should you be checking and adjusting the loudness before applying the fade, and then ignoring whatever loudness level it ends up at?
Great question. I’ve never measured including the fade. The reason: most radio and TV will never get to the fade, or you’ve done an edited version for radio or TV. In an album situation, I measure the loudest point of the song and leave it. I think it’s overly anal to include the fade, though I totally understand the curiosity, or why someone might decide to include it. It makes total sense across the board… I just personally don’t see the need, and have never had anyone mention anything to me.
Yes, I used to do that in the ‘good old days’ when I used a TT meter. I think maybe the world has moved on a bit now though. For example, if you know that YT, Spotify etc. are specifically targeting a loudness of -14 LUFS then I think it’s a good practice to be fully aware of the integrated loudness of your recordings so that you can adjust accordingly.
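To put numbers on that "adjust accordingly": the platform's turn-down is just the difference between its target and your integrated measurement. Here's a two-line sketch with a hypothetical master measuring -11.2 LUFS (both figures are made up for illustration; targets also shift over time, so check the platform's current docs):

```python
measured = -11.2   # hypothetical integrated loudness of a finished master
target = -14.0     # approximate normalisation target used by Spotify/YouTube

gain_db = target - measured          # negative: the platform turns you down
gain_linear = 10 ** (gain_db / 20.0) # same adjustment as a linear factor
```

So a -11.2 LUFS master gets pulled down about 2.8 dB on playback, which is why knowing your real integrated figure (fade and all) matters.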
Yes, certainly from a personal perspective it’s over the top, and probably serves no purpose, but we know that loudness normalisation algos will always include the fade, so I think it’s important at least to be aware of that fact.