I’m glad you explained this, based on Adrian’s postulation. It has always concerned me that with a Master buss limiter (or compressor, for that matter) you are having to use one attack and release setting to cover the entire frequency range, as well as your point about the limiting being the same amount (or at least generic). Different frequency ranges respond differently to attack and release (owing to longer or shorter wavelengths), or certainly can in many cases, so treating things more individually could potentially give you more control with attack and release on a specific sound.
Certain side-chain filters have been a workaround for getting different frequencies attenuated by the same amount in compressors and limiters, including on the Master buss; otherwise the frequency balance could shift. It seems to me you could somewhat solve or avoid that compromise with your technique? At least with the Master buss.
But why would you use a master limiter when mixing? I personally think that if you feel the need to use anything that deals with dynamics or frequencies on the mix bus, you have done something wrong in the mix and you should find the cause and fix it at the source rather than patching it at the end of the chain.
If the mix is done right, mastering can be done with harmless limiting, compression and EQ on the whole mix. It doesn’t have to be detrimental to the sound because it can be done in light touches. You shouldn’t have to make drastic moves like fast release times, more than 2 dB of boost/cut on an EQ, narrow Qs, etc. while mastering (unless you’re salvaging a bad mix).
It’s a good point, and I think it just became a trend and a workflow in the modern era. Many point to the invention of the brickwall digital limiter in 1999 (IIRC), which changed the way mixing and mastering was done, and brought about that famous audio war which shall remain unnamed. Efforts in recent years to undo that damage have had some success, and then I guess we can ask “where do we go from here?”
Your approach seems like a very classic one, which is refreshing. It does cause me to reflect on the reasons behind things. The quest for loudness, at least commercially, isn’t that new though. The famous mastering engineer Bob Ludwig got his big break when he gambled on a brand new and expensive Neumann cutting lathe (for vinyl) that could print the audio 6 dB louder than before. He took all the business from everyone else for a while (the 1960s or 1970s, I would guess).
Yes, I would say that’s the classic standard. Though things have changed over time. When cutting vinyl, they had to do some pretty drastic EQ to the low end for the vinyl standard IIRC. And cutting the disc was part of the mastering process back then. Digital changed everything (or many things), and opened up a whole new world of possibilities. Things got wild and crazy, and probably out of control just a bit. With a renewed focus on recordings with actual “dynamics”, it may get back to that standard. I would remark though, that the explosion in plugin designers and technology probably doesn’t help. They have to create a need to sell their merchandise, and they’re not required to attach warning labels for its use.
That’s very true, the offer is incredibly extensive and often quite cheap. It’s very easy to buy a plugin because of a trend, and designers have understood and taken advantage of the fact that we’re attracted to shiny things. Scrolling through the Facebook groups about audio engineering is quite scary in this regard, there are thousands of “producers” and “mixers” who think the quality of their work depends primarily on their DAW and plugins…
I have been a victim of this shiny objects syndrome myself when I started out. Fortunately I’ve matured and now I only buy something when I need it, or when something new brings a significant improvement to something similar I already have. And the biggest mindset change I made when I started doing this for a living is thinking a lot more in terms of ergonomics and time efficiency.
On the Audio Mix Club website, we had a mix challenge last month called “Loudness without Mastering”, where we were encouraged to put into practice the techniques outlined here, but to submit a mix that has no mastering limiting on it. I decided to submit a track that I had already posted a mastered version of here, but without the mastering processing.
Here is that track:
Here is what it returns in “Loudness Penalty” for streaming.
So, as I said above, whether you are aiming for a loud mix or not, using this method means you can get away with far less master buss limiting (and maybe even none) on your tracks.
Another advantage of controlling overly dynamic peaks in your individual tracks is that the simple act of balancing your elements becomes much, much easier. Trying to balance tracks that are constantly changing their internal dynamics in relation to one another is like herding cats - it just leads to frustration and an impossible spaghetti of automation. When you keep those dynamic relationships controlled, you are in a far better position to purposefully affect the song’s dynamics for a musically beneficial goal, not just as remedial “repair” work.
All in all a cool thread with some interesting discussion.
Just to pick up on Lophophora’s question about why you’d put something controlling dynamics on the master bus… It’s very common for mixers to do it. It’s also common for mixers not to do it. It’s easy to think of them as something that is a necessary evil to make a track loud, but it’s just as valid to think of them as something that gives the track a particular feel, or unity, which can enhance the emotion present in the song and the performances. That’s why I put a compressor on the mixbus, anyway. And actually, I think that compression can limit the future loudness potential of the mix as much as enhance it in some situations - so it’s definitely not a compromise to a volume war etc.
About limiting in stages, I absolutely do this. The first time it hit me that, instead of having a limiter on the master bus to catch the occasional transient spike (which almost always comes from the snare), I could put it on the drum bus to do the same job without crunching the rest of the mix in those places, it was a bit of a revelation.
I tend to do some things to limit transients early - the aforementioned drum bus limiter once I’ve got the basic fader balance, and a tape sim on the master bus (I love a particular setting of “Tape 99”). Then once the mix is a lot more finished and starting to come together, I think about which elements could benefit from some form of distortion - limiting, clipping, or some kind of soft saturation like a transformer, preamp emulation etc.
I definitely prefer doing this when the mix is further along, because in context it’s amazing how some elements can be pushed into outright distortion without sounding bad at all, instead gaining a load of extra energy and vibe. OTOH, other things want nothing at all, or only the very lightest touch.
Where it gets really magical is something like a kick, where you can often limit it a surprising amount and watch the peak level reducing, even on occasion to the point of being a brick wall 6dB or more below the un-limited transient peak, without disappearing into the mix - what is lost in amplitude is added in harmonic energy. And if you get it right, it sounds better than it did before!
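The kick example above can be sketched numerically. This is a toy demo, not any particular plugin: it uses a plain tanh soft clipper as a stand-in for a limiter, and shows that the peak level drops far more than the overall (RMS) energy, since the lost amplitude is converted into harmonics.

```python
import numpy as np

# Toy demo of the effect described above: driving a kick-like
# transient into a soft clipper reduces its peak far more than its
# overall energy. tanh is a stand-in for a limiter/clipper stage.

sr = 48000
t = np.arange(int(0.1 * sr)) / sr
# a kick-like transient: 60 Hz sine with a fast exponential decay
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-30 * t)

drive = 4.0
# roughly unity gain for small signals, saturation for large ones
clipped = np.tanh(drive * kick) / drive

def db(x):
    return 20 * np.log10(x)

def rms(x):
    return np.sqrt(np.mean(x ** 2))

peak_drop_db = db(np.abs(kick).max()) - db(np.abs(clipped).max())
rms_drop_db = db(rms(kick)) - db(rms(clipped))

print(f"peak reduced by {peak_drop_db:.1f} dB, RMS by only {rms_drop_db:.1f} dB")
```

With these (arbitrary) settings the peak drops by well over 6 dB while the RMS drops noticeably less, which is the “brick wall below the un-limited transient peak” behaviour described above.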
Yes, tape sims, transformer emulations, console emulations, are all great, subtle “transient tamers”.
The humble limiter can actually be a very interesting tone shaping tool. I know of a few big time mixers who often use (abuse) a limiter on a bass track to get it to generate distortion that can make it translate on smaller speakers. The first person I saw do that was Ken Lewis (of Mixing Knight fame, and prior to that Ken Lewis’ Audio School).
Some people are putting a lot of effort into artificially making audio louder, while a few mastering engineers go in the opposite direction and focus on audio quality and preservation of dynamics. The intricate tricks and techniques described above look like a disproportionate amount of hassle to achieve something that serves a questionable purpose, in my opinion.
Dynamics are an essential component of music.
When you learn to play an instrument or to sing, one of the most important things that you start practicing early and never cease to improve is nuances, introducing life and movement into your interpretation. Dynamics.
Manufacturers have deployed boundless ingenuity to achieve impressive dynamic ranges in their AD converters. Developers have enabled DAWs to operate in 32-bit floating point, with even greater dynamic range.
Why someone at the end of the production chain would spend so much effort crushing these hard-won dynamics is beyond me.
You don’t have to agree with me of course, I know I am kind of travelling upwind here, when a majority of mastering engineers still deliver music at high LUFS and low dynamic range. But maybe looking at what someone notable does will make you reconsider?
I have been fortunate enough to exchange directly with Bob Katz a few times in the past months. I think you’ll agree that the guy knows a thing or two about mastering. Yesterday he was mentioning one of his past works, the album 3 Cities by Bombay Dub Orchestra. So I recorded one song of this album, Strange Constellations (https://open.spotify.com/track/7pdfCkCQsxrJru6cMdvD6w?si=6364277d8c9f43d4) into my DAW, from Spotify set on high quality (AAC 320 kbps) with normalization turned off, and ran it through the meters.
Bob Katz mastered this song at -16.6 LUFS, with dynamics averaging at 16.5 LU (peak to loudness ratio). You can easily check for yourself by doing the same analysis I made. If you play it on Spotify, how does it sound to you? Why would someone with that kind of experience and expertise choose to preserve the dynamics over loudness? I think it is worth reflecting on these questions.
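The two figures quoted above pin down the peak level as well, since peak-to-loudness ratio is simply peak level minus integrated loudness. A back-of-the-envelope check (ignoring the small difference between sample peak and true peak):

```python
# PLR = peak level - integrated loudness, so the quoted numbers
# imply true peaks just below full scale.
integrated_lufs = -16.6  # integrated loudness quoted above
plr_lu = 16.5            # peak-to-loudness ratio quoted above

peak_dbfs = integrated_lufs + plr_lu
print(f"implied peak level: {peak_dbfs:.1f} dBTP")  # about -0.1
```

In other words, the master still uses the full scale of the medium; it is only the average loudness that sits low, leaving 16+ dB of room for the transients.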
By the way, if you’re interested in some insider info: the AES is in the process of ratifying a new recommendation about streaming loudness, which should be published in the next few weeks. They met with Spotify and other platforms last week and Spotify has agreed to follow the upcoming recommendations. It looks like they are going to reduce the target normalization loudness to -16 LUFS instead of the current -14 LUFS. And they had just switched from ReplayGain to LUFS a few months back. Another change that has actually already started is the use of album/playlist normalization as standard. SoundCloud was there too so that may be the sign that they are finally going to make the normalization move after talking about it for so many years. Things are changing quickly and certainly not in favor of low dynamic range content.
Without meaning to offend, you sound a little dogmatic - I understand not using compression or limiting for artistic purpose, but you don’t seem to understand the reverse.
Perhaps this will help: an acoustic drum kit is very dynamic. It can go from whisper quiet to literally painfully loud. A recording of it will mostly be heard in environments where the smaller details intrinsic to the performance would be lost in the ambient noise of casual listening situations. Compression or dynamic control can therefore give a listener at home, in a car, in the office, or on earbuds on the train an experience closer to how it felt to be in the room while the performance occurred, by allowing the details (the soft hits, the room ambience, the sympathetic resonance of the skins) to be heard by the end listener.
Or you could preserve the entire dynamic range of the performance on the end product. In that case, the listener would need to turn the recording up very loud to appreciate the small details. And they would quickly run into distortion and bandwidth limitations in most consumer playback systems.
And in controlling the dynamic range of the drums in that situation, there’s no need to compromise the dynamic flow of the entire performance. It is literally a way to present the end listener with an experience more akin to the original sonic event, through a medium that is naturally going to be one step removed from it.
I wonder what makes you think that? I never said or implied that I was against compression or limiting. I don’t like extreme dynamic range reduction, especially when the main goal is to achieve a loud mix, which is going to end up drastically turned down by normalization algorithms anyway.
I’m not sure why you are explaining the basics of compression to me here. Compression is a fantastic tool; it’s no wonder that it was one of the very first audio processors to be invented. Having recorded, mixed and produced music professionally for close to 30 years, I have had many occasions to use compression in all kinds of situations and for all kinds of different purposes. It is very rare for me to not use compression while mixing.
I am expressing my opinion here, again you don’t have to agree with me. I am just suggesting that maybe there is something wrong when people keep reducing the dynamic range to extreme values in an attempt to end up louder when in the majority of cases, the opposite will happen (since most music is streamed normalized now, and the trend isn’t reversing anytime soon, quite the opposite).
There was some kind of logic to this when music was played on CDs. Now that most of the music is normalized, why not take advantage of the improvements audio engineering has made in recent years, particularly the extension of dynamic range and resolution?
And by the way, I do believe that some of the music is actually benefitting from more DR reduction. Every song is different and should be treated as such. But systematically aiming at the loudest possible levels doesn’t make sense nowadays, and there are still many mastering engineers who work like this.
What target loudness are you talking about? If you are referring to the target values that streaming platforms are using, this is achieved automatically. Your song will end up at their target loudness regardless of whatever loudness you set at the mixing or mastering stage.
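The arithmetic behind that is simple. Here is a minimal sketch, assuming a static-gain model with the -14 LUFS target mentioned earlier (the function name is mine, and real services differ in how they treat tracks quieter than the target):

```python
# Simplified model of playback normalisation (assumption: the platform
# applies a static gain of target minus measured integrated loudness).
def normalisation_gain_db(measured_lufs, target_lufs=-14.0):
    return target_lufs - measured_lufs

# A crushed -8 LUFS master and a dynamic -14 LUFS master end up at the
# same playback loudness; the loud one is simply turned down.
print(normalisation_gain_db(-8.0))   # -6.0
print(normalisation_gain_db(-14.0))  # 0.0
```

Under this model, any loudness you squeeze out above the target is given straight back as negative gain at playback, which is the point being made above.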
The only thing you need when mixing is to avoid unintentional clipping and too much dynamic range reduction that would make the mastering engineer’s job impossible or very hard. But if your gain staging is correct in the first place that shouldn’t be a problem. Unless you mess up your mix with a badly set master limiter of course.
The one I set. If you’re mixing a song that is part of an album, each track needs a similar loudness, otherwise it will be a poor experience for the listener.
Correct, but not everything is loudness normalised. CDs for instance.
This is a home recording forum; very few members send their mixes to mastering engineers. They either master the recording themselves, or ‘pretend’ master on the fly, in which case you need a limiter to control peaks and loudness.
Personally I think ‘pretend mastering’ is an unnecessarily derogatory term. Mastering on the fly is a skill in itself, and the one huge bonus is that whatever you hear coming out of the monitors is the actual finished product - no nasty surprises after the separate stage of mastering.
Ah, ok I’ve gone back up through the thread. I think I hadn’t appreciated that the conversation had meandered into a question of loudness-war style sonic destruction, and that you were talking about something other than the original topic of the thread, so I was replying to you as if you were. But having seen your post quoted here, I better understand what’s happening - my apologies.
I don’t believe each track in an album should have a similar loudness, quite the opposite. Unless your album only has songs that are all very similar to one another, but that would be weird and very unusual. Soft/slow songs shouldn’t sound as loud as upbeat or more aggressive songs.
Fortunately for us, Spotify has an album normalisation feature that allows the albums to be streamed with these loudness differences as the artist/producer intended. Every mainstream pop/rock album I can think of has loudness changes between songs. I suppose there are music genres where it is more common to have loudness consistency across the album, like maybe the heavier subgenres of metal or EDM, but I don’t listen to these so I wouldn’t know.
There are some people who listen to CDs, vinyl or cassette tapes for sure. But they are such a small proportion of the listeners… it doesn’t make sense to target a tiny fraction of your audience. Besides, to experience a loudness difference between two CDs, cassettes or records, you have to switch them first, which takes some time anyway. So if you have to adjust the volume after having ejected and inserted the next album, it really isn’t a big deal.
Last I checked, stats were showing that more than 85% of the music that is listened to is normalised.
However, even though the original topic isn’t explicitly about loudness war, it still is about maximizing loudness in a mix, which isn’t radically different…
Anyway, I enjoyed learning about Andrew’s process as it is something I had absolutely no experience with and I love learning new stuff. The technical side of it is interesting to me, it is just the ultimate goal that doesn’t really resonate with me.
We’ll have to agree to disagree. I say it’s the norm. You’re not going to find an operatic aria on a death metal band’s album or vice versa. Even if the tracks are of differing styles, they still need a similar loudness, otherwise the listener has to keep turning the volume up and down. It’s exactly the reason the CALM Act of 2012 was introduced.
Depends on your definition of ‘tiny’. CDs and vinyl combined still have 20% of the market. After all, you can’t sell a download or a stream on a merch table - and you certainly can’t autograph one.
In any case, I don’t specifically target only non-loudness normalised environments, I try to cover all eventualities so that my masters will sound good on any platform.
If you have to turn the volume up and down between tracks while listening to a CD, do you think that is a big deal?
I rarely listen to music for recreational purposes, but my band’s sales are 90% physical copies, mostly CDs.
Now some questions for you:
Do you think it’s OK to mix 10 album tracks without paying any attention to their individual loudness?
Do you think it’s good practice to leave loudness to chance in the hope that streaming services will normalise your work for you?
If you are working for a client, what do you think the artist is going to say when they receive their recordings, play them back on Windows Media Player or any other non-loudness-normalised playback environment, and they are all at different loudness levels?
Well, I’ll have to insist.
I have been studying loudness and normalisation for the past 2 years, analyzing hundreds of songs played from Spotify and Tidal in different genres. It is easy to check for yourself: just do it if you don’t believe me.
Sorry but no, it isn’t. The CALM act was specifically directed at TV broadcasters and targets the loudness differences between commercials and other programmes. It doesn’t apply to audio streaming services and has nothing to do with loudness differences between the songs in an album.
There certainly is a big vinyl comeback indeed. But I very much doubt it is ever going to catch up with streaming. RIAA says physical sales are 7% of the market in 2020 (source).
On that we can definitely agree. One master to rule them all!
Why would you? CDs are not normalised so each track is played back at its intended loudness.
Absolutely not. Have I said something that would make you think otherwise? All I am saying is that not all songs in an album have the same structure, dynamic content, emotional content, and therefore ideal loudness. Again, don’t take my word for it: do verify this yourself.
Good or bad, it’s what you have to make do with. We have no influence on the loudness targets that they use. However, we do have an influence on the relative levels between several tracks in an album. That is why Spotify has a specific feature that turns the normalisation off when you are listening to an album, so that the original loudness differences from one track to the next are respected as the artist intended. (source)
Well I do this all day long so I can safely say that provided you are setting the loudness according to your client’s vision and your professional input, all is fine.
I do master for CD every once in a while. I did one album 2 weeks ago. There are other things you need to pay attention to as well, for which we have lost the habit, like setting the right amount of silence between two tracks for instance.
By the way, Bob Katz was talking about this very feature yesterday night. Here’s an extract from the livestream:
Is it bad that I actually prefer the new version, even though I liked the old one too?
Nice work Andrew. Well polished and clean without too much shredding on the track!