Controlling Transients for a Punchy, Loud Mix (without excessive final limiting)

Ok then, I just finished reading the whole thread and I wish to thank you Andrew for sharing this. I’ll try this soon to see how it works. And I hope I don’t mess everything up like I usually do…

But the initial paragraph that started this topic was

Which means that you mixed one way before, then changed something, and now you’re happy with the result.
Controlling peaks isn’t the main topic as I understood it, or maybe I missed something :confused:

The manual says:

Saturation level
Adjusts the amount of harmonic distortion added to the original signal.
Range: 0 to 100%
Saturation type
• Selects between Odd and Even harmonics. The impact of odd vs. even harmonics on a signal is very
content-dependent.
• In contrast to Even and Odd, Heavy is less about adding harmonics and is more of a traditional clipper. It has
a custom response to give you a different sound than most clippers, allowing you to shape the sound in ways a
simple clipper can’t.

So it looks like it’s kind of going from soft to hard clipping as you turn the knob, but it also says that it’s not about adding harmonics. Yet I always thought soft clipping added even-order harmonics and hard clipping added odd-order harmonics, so it’s a bit confusing. Fortunately, we have our ears to tell us which one sounds good when we use it :wink:
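
If you ever want to check for yourself which harmonics a given clipping curve actually adds, it’s easy enough to run a sine through it and look at the spectrum. Here’s a minimal NumPy sketch - the three curves are generic stand-ins I made up, not the plugin’s actual ones:

```python
import numpy as np

fs = 48000
f0 = 1000.0
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * f0 * t)                # clean 1 kHz test tone

def harmonic_levels(y, n=5):
    """Levels (dB relative to the fundamental) of harmonics 1..n of f0."""
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    bins = [int(round(k * f0 * len(y) / fs)) for k in range(1, n + 1)]
    ref = spec[bins[0]]
    return [20 * np.log10(spec[b] / ref + 1e-12) for b in bins]

soft = np.tanh(3 * x)                          # symmetric soft clip
hard = np.clip(3 * x, -1.0, 1.0)               # symmetric hard clip
asym = np.tanh(3 * x + 0.5) - np.tanh(0.5)     # asymmetric soft clip

for name, y in [("soft", soft), ("hard", hard), ("asym", asym)]:
    print(name, ["%+.1f dB" % h for h in harmonic_levels(y)])
```

Running it, both symmetric curves put their added energy into the odd harmonics (hard clipping just adds more of them, higher up), and even harmonics only appear once the curve is made asymmetric - so soft vs. hard and odd vs. even are really two separate questions.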

1 Like

I was with you all the way Andrew, until…

Clippers - OK, as long as it’s inaudible, clipping is worth looking at. But surely if you use limiters all the way through the chain you’re committing the same crime as slapping a limiter on the 2-bus, just in a different place?

Personally I don’t think there is anything better than manual automation, but if you can’t be bothered (like me :slight_smile: ) I suggest something like ERA Vocal Leveler. I use Drum Leveler on pretty much everything that needs leveling out. Not only does it reduce the transients, but it boosts the low-level stuff.

1 Like

No, Andrew is discussing the finer points of leveling out inherently spiky signals such as snare, vocals etc. In other words, he is suggesting alternatives to compression.

I would like to add Torque to the pile of punch-related plugins already listed. Torque works really well at isolating the energy of the punch, and the pitch, and it can work on a variety of tracks.

Thanks for posting the text quote, my PDF manual doesn’t seem to have any of that language, or at least I can’t find it through a text search. And it looks like they haven’t updated the manual since I downloaded it (41 pages).

Yes, I got that. I would still call it gain staging though, since it presumably relates to “headroom”. As I understand it, since the analog days the definition of gain staging had to do with 1. Noise Floor (or signal-to-noise ratio), and 2. Headroom. With analog they had to work harder at these things. With digital, a lot of the noise floor problem was solved (unless you do something wrong), and the headroom was potentially significantly increased. So presumably you don’t have those headaches any more. Then the Loudness Wars happened (sorry Andrew for bringing it up :face_with_hand_over_mouth:). Everybody got creative with digital techniques and plugins to push the envelope. I understand and like the idea Andrew has described, I’m just framing it in the wider context of “mixing theory” or what have you. As long as it sounds good and achieves your goal, there are “no rules”, as they say. “If it sounds good, it is good.” As you pointed out, does shifting the load from master buss limiting to the tracks sound better or work better? Andrew seems to think so. We just have to experiment ourselves, if we want to.

2 Likes

As I was watching that Scheps Omni Channel "In-Depth" video, at around 21:23 (see quote below), he describes inserting an additional plugin. The SOC will let you insert one Waves plugin in the chain. He demonstrates with the Torque plugin. So that speaks both to your point and to potentially using that technique (along with Andrew’s) inside SOC, if anyone is interested in it. :slightly_smiling_face: Regardless, if you use SOC then this video is extremely helpful. And if you open it in YouTube and look in the video description, there is a helpful time-stamp index to all the topics!

21:23 – How to insert additional plugins into the Scheps Omni Channel


2 Likes

Aha! I’m glad you asked this, because this is at the heart of why this method works so well… No it’s definitely not the same… and here is why…

Take a snare drum - you can clip a snare drum much deeper than you can, say, a vocal, or some cymbals, or some other musical instrument like a piano. By clipping or limiting the individual sources, you are tailoring the type and the amount of clipping and/or limiting to suit. You have control over it individually.

When you slam all your stuff into a limiter on the 2 buss, it all gets limited by the same amount. Some of that will be favourable to some of the programme material, while at the same time being devastating to other sources fed into it. The only option in that case is to reduce the amount of limiting until the devastation is minimised. Taking care of the most impactful sources “upstream” minimises what you have to do at the end, and the net result is that it can sound better.

It’s the same principle as why we use multiple stages of compression. Each earlier stage lessens the work needed by the processors in the next stage. So if your goal is to not “hear” compression working, then condition the material going into the compressor.

There is also the fact that transients in heavily rhythmic music compound, because they tend to all happen at once. If you tame each individual transient at the source, the compounding effect is lessened.
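
Just to put rough numbers on that compounding effect, here’s a back-of-the-envelope sketch (made-up levels, worst case where the peaks land in phase on the same beat):

```python
import numpy as np

def db_to_lin(db):
    return 10 ** (db / 20)

def lin_to_db(x):
    return 20 * np.log10(x)

# Three rhythmic elements whose transients all land on the same beat.
# Worst case, their peaks add up linearly on the mix buss.
track_peaks_db = [-6.0, -6.0, -6.0]
raw = sum(db_to_lin(p) for p in track_peaks_db)
print("untamed buss peak (worst case): %+.1f dBFS" % lin_to_db(raw))    # about +3.5 dBFS

# Same tracks, each clipped/limited ~4 dB lower at the source first.
tamed_peaks_db = [-10.0, -10.0, -10.0]
tamed = sum(db_to_lin(p) for p in tamed_peaks_db)
print("tamed buss peak (worst case):   %+.1f dBFS" % lin_to_db(tamed))  # about -0.5 dBFS
```

In the first case a 2 buss limiter aiming for 0 dBFS has to eat roughly 3.5 dB of coincident spike; in the second it has nothing left to do.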

committing the same crime

A “crime” is a good analogy that actually illustrates the point:

Who is likely to be the more successful, wealthy and uncaptured criminal? The guy who walks in the front door of the bank with a balaclava and a sawn-off shotgun demanding money; or the quiet, nerdy computer geek who comes up with a way to silently skim a few dollars each from millions of different bank accounts?

Little gains add up to a lot, and can be pretty much penalty-free.

…I dunno, Adrian, I could talk all day trying to prove to you why it works. Ultimately, that is meaningless. If you care, try it; if you don’t, ignore it and carry on as you were.

3 Likes

I’m glad you explained this, based on Adrian’s postulation. It has always concerned me that with a Master buss limiter (or compressor, for that matter) you are having to use one attack and release setting to cover the whole frequency range, as well as your point about the limiting being the same amount (or at least generic). Different frequency ranges respond differently to attack and release (owing to longer or shorter cycle lengths), or certainly can in many cases, so treating things more individually could potentially give you more control with attack and release on a specific sound.

Certain side-chain filters have been a workaround for getting frequencies to actually be attenuated the same amount in compressors and limiters, including the Master buss. Otherwise they could be imbalanced. It seems to me you could somewhat solve or avoid that compromise with your technique? At least with the Master buss.
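
For what it’s worth, here’s a toy sketch of that side-chain filter workaround - a bare-bones compressor where the detector can listen to a high-passed copy of the signal, so the low end doesn’t dominate the gain reduction while the gain is still applied to the full-range signal. None of this is any particular plugin’s algorithm, just the general idea:

```python
import numpy as np
from scipy.signal import butter, lfilter

def compress(x, fs, thresh_db=-18.0, ratio=4.0, attack_ms=5.0,
             release_ms=80.0, sidechain_hpf_hz=None):
    """Bare-bones feed-forward compressor with an optional side-chain high-pass."""
    det = x
    if sidechain_hpf_hz is not None:
        # The detector hears a high-passed copy; the gain is still applied to x itself.
        b, a = butter(2, sidechain_hpf_hz / (fs / 2), btype="high")
        det = lfilter(b, a, x)

    # One-pole peak envelope follower with separate attack and release times
    a_att = np.exp(-1.0 / (attack_ms * 1e-3 * fs))
    a_rel = np.exp(-1.0 / (release_ms * 1e-3 * fs))
    env = np.zeros_like(x)
    level = 0.0
    for i, s in enumerate(np.abs(det)):
        coeff = a_att if s > level else a_rel
        level = coeff * level + (1.0 - coeff) * s
        env[i] = level

    # Downward compression above the threshold
    level_db = 20 * np.log10(env + 1e-9)
    over_db = np.maximum(level_db - thresh_db, 0.0)
    gain_db = -over_db * (1.0 - 1.0 / ratio)
    return x * 10 ** (gain_db / 20)
```

Run the same material through it with and without sidechain_hpf_hz set to something like 120 Hz and you can hear how much the low end drives the gain reduction when the detector is left full-range.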

2 Likes

But why would you use a master limiter when mixing? I personally think that if you feel the need to use anything that deals with dynamics or frequencies on the mix bus, you have done something wrong in the mix and you should find the cause and fix it at the source rather than patching it at the end of the chain.

If the mix is done right, mastering can be done with harmless limiting, compression and EQ on the whole mix. It doesn’t have to be detrimental to the sound because it can be done in light touches. You shouldn’t have to make aggressive moves like fast release times, >2 dB of boost/cut on an EQ, narrow Qs, etc. while mastering (unless you’re salvaging a bad mix).

It’s a good point, and I think it just became a trend and a workflow in the modern era. Many point to the invention of the brickwall digital limiter in 1999 (IIRC), which changed the way mixing and mastering was done, and brought about that famous audio war which shall remain unnamed. :wink: Efforts in recent years to undo that damage have had some success, and then I guess we can ask “where do we go from here?”

Your approach seems like a very classic one, which is refreshing. It does cause me to reflect on the reasons behind things. The quest for loudness, at least commercially, isn’t that new though. The famous mastering engineer Bob Ludwig got his big break when he gambled on a brand new and expensive Neumann cutting lathe (for vinyl) that could cut the audio 6 dB louder than before. He took all the business from everyone else for a while (1960s or 1970s, I would guess).

Yes, I would say that’s the classic standard. Though things have changed over time. When cutting vinyl, they had to do some pretty drastic EQ to the low end for the vinyl standard IIRC. And cutting the disc was part of the mastering process back then. Digital changed everything (or many things), and opened up a whole new world of possibilities. Things got wild and crazy, and probably out of control just a bit. With a renewed focus on recordings with actual “dynamics”, it may get back to that standard. I would remark though, that the explosion in plugin designers and technology probably doesn’t help. They have to create a need to sell their merchandise, and they’re not required to attach warning labels for its use. :smirk:

That’s very true, the range of products on offer is incredibly extensive and often quite cheap. It’s very easy to buy a plugin because of a trend, and designers have understood and taken advantage of the fact that we’re attracted to shiny things. Scrolling through the Facebook groups about audio engineering is quite scary in this regard; there are thousands of “producers” and “mixers” who think the quality of their work depends primarily on their DAW and plugins…

I was a victim of this shiny object syndrome myself when I started out. Fortunately I’ve matured and now I only buy something when I need it, or when something new brings a significant improvement over something similar I already have. And the biggest mindset change I made when I started doing this for a living was thinking a lot more in terms of ergonomics and time efficiency.

2 Likes

Because it’s the most effective way of hitting the target loudness.

Just a little addendum to this subject.

On the Audio Mix Club website, we had a mix challenge last month called “Loudness without Mastering”, where we were encouraged to put into practice the techniques outlined here, but to submit a mix that has no mastering limiting on it. I decided to submit a track that I had already posted a mastered version of here, but without the mastering processing.

Here is that track:

Here is what it returns in “Loudness Penalty” for streaming.

So, as I said above, whether you are aiming for a loud mix or not, using this method means you can get away with far less master buss limiting (and maybe even none) on your tracks.
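
For anyone who wants to run a rough version of that check on their own mixes without the website, it only takes a few lines of Python. This is just a sketch - the file name is a placeholder, and the -14 LUFS figure is only an example of a streaming target:

```python
import soundfile as sf          # pip install soundfile pyloudnorm
import pyloudnorm as pyln

# Measure integrated loudness and see how far the mix sits from a streaming target.
data, rate = sf.read("my_unmastered_mix.wav")   # placeholder file name
meter = pyln.Meter(rate)                        # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)      # LUFS

target = -14.0                                  # example streaming target, LUFS
offset = target - loudness                      # negative = the platform will turn it down
print(f"Integrated loudness: {loudness:.1f} LUFS")
print(f"Approx. normalization offset vs a {target} LUFS target: {offset:+.1f} dB")
```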

There is another advantage to controlling overly dynamic peaks in your individual tracks: the simple act of balancing your elements becomes much, much easier. Trying to balance tracks that are constantly changing their internal dynamics in relation to one another is like herding cats - it just leads to frustration and an impossible spaghetti of automation. When you keep those dynamic relationships controlled, you are in a far better position to purposefully affect the song’s dynamics for a musically beneficial goal, not just as remedial “repair” work.

2 Likes

A brand new video on the exact subject.

1 Like

All in all a cool thread with some interesting discussion.

Just to pick up on Lophophora’s question about why you’d put something controlling dynamics on the master bus… It’s very common for mixers to do it. It’s also common for mixers not to do it. It’s easy to think of these processors as a necessary evil to make a track loud, but it’s just as valid to think of them as something that gives the track a particular feel, or unity, which can enhance the emotion present in the song and the performances. That’s why I put a compressor on the mix bus, anyway. And actually, I think that compression can limit the future loudness potential of the mix as much as enhance it in some situations - so it’s definitely not a concession to the volume war, etc.

About limiting in stages, I absolutely do this. When it first hit me that, instead of having a limiter on the master bus to catch the occasional transient spike (which almost always comes from the snare), I could put it on the drum bus to do the same job and not crunch the rest of the mix in those places, it was a bit of a revelation.

I tend to do some things to limit transients early - the aforementioned drum bus limiter once I’ve got the basic fader balance, and a tape sim on the master bus (I love a particular setting of “Tape 99”). Then once the mix is a lot more finished and starting to come together, I think about which elements could benefit from some form of distortion - limiting, clipping, or some kind of soft saturation like a transformer, preamp emulation etc.

I definitely prefer doing this when the mix is further along, because in context it’s amazing how some elements can be pushed into outright distortion without sounding bad at all, while gaining a load of extra energy and vibe. OTOH, other things want nothing at all, or only a very light touch.

Where it gets really magical is something like a kick, where you can often limit it a surprising amount and watch the peak level reducing, on occasion even to the point of being brick-walled 6 dB or more below the un-limited transient peak, without it disappearing into the mix - what is lost in amplitude is added in harmonic energy. And if you get it right, it sounds better than it did before!
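
To put toy numbers on that, here’s a sketch using a synthetic kick (not a real sample) and a plain hard clip standing in for the limiter. Clipping 6 dB below the transient peak takes the full 6 dB off the peak, while the RMS - a rough proxy for perceived weight - drops far less:

```python
import numpy as np

fs = 48000
t = np.arange(fs // 5) / fs                       # 200 ms

# A stand-in "kick": a decaying sine sweeping from about 200 Hz down towards 50 Hz
freq = 150 * np.exp(-t * 18) + 50
kick = np.exp(-t * 12) * np.sin(2 * np.pi * np.cumsum(freq) / fs)

def peak_db(x): return 20 * np.log10(np.max(np.abs(x)))
def rms_db(x):  return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# Hard clip 6 dB below the original transient peak
ceiling = 10 ** ((peak_db(kick) - 6.0) / 20)
clipped = np.clip(kick, -ceiling, ceiling)

print("peak: %6.1f -> %6.1f dBFS" % (peak_db(kick), peak_db(clipped)))
print("rms:  %6.1f -> %6.1f dBFS" % (rms_db(kick), rms_db(clipped)))
```

The energy that comes off the top of the waveform reappears as added harmonics further up the spectrum, which is exactly the trade you’re describing.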

1 Like

Yes, tape sims, transformer emulations, console emulations, are all great, subtle “transient tamers”.

The humble limiter can actually be a very interesting tone shaping tool. I know of a few big time mixers who often use (abuse) a limiter on a bass track to get it to generate distortion that can make it translate on smaller speakers. The first person I saw do that was Ken Lewis (of Mixing Knight fame, and prior to that Ken Lewis’ Audio School).

Some people are putting a lot of effort into artificially making audio louder, while a few mastering engineers go in the opposite direction and focus on audio quality and preservation of dynamics. The intricate tricks and techniques described above look like a disproportionate amount of hassle to achieve something that serves a questionable purpose, in my opinion.

Dynamics are an essential component of music.

When you learn to play an instrument or to sing, one of the most important things that you start practicing early and never cease to improve is nuances, introducing life and movement into your interpretation. Dynamics.

Manufacturers have deployed boundless ingenuity to achieve impressive dynamic ranges in their AD converters. Developers have allowed DAWs to operate in 32-bit floating point, with even greater dynamic range.

Why someone at the end of the production chain would spend so much effort crushing these hard-won dynamics is beyond me.

You don’t have to agree with me of course, I know I am kind of travelling upwind here, when a majority of mastering engineers still deliver music at high LUFS and low dynamic range. But maybe looking at what someone notable does will make you reconsider?

I have been fortunate enough to correspond directly with Bob Katz a few times in the past few months. I think you’ll agree that the guy knows a thing or two about mastering. Yesterday he mentioned one of his past works, the album 3 Cities by Bombay Dub Orchestra. So I recorded one song from this album, Strange Constellations (https://open.spotify.com/track/7pdfCkCQsxrJru6cMdvD6w?si=6364277d8c9f43d4), into my DAW from Spotify set to high quality (AAC 320 kbps) with normalization turned off, and ran it through the meters.

Bob Katz mastered this song at -16.6 LUFS, with dynamics averaging at 16.5 LU (peak to loudness ratio). You can easily check for yourself by doing the same analysis I made. If you play it on Spotify, how does it sound to you? Why would someone with that kind of experience and expertise choose to preserve the dynamics over loudness? I think it is worth reflecting on these questions.
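
If anyone wants to repeat that measurement, here’s a minimal sketch with pyloudnorm. The file name is just a placeholder for whatever you captured into your DAW, and I’m using sample peak as a stand-in for true peak (a true-peak meter oversamples first, so it may read a few tenths of a dB higher):

```python
import numpy as np
import soundfile as sf          # pip install soundfile pyloudnorm
import pyloudnorm as pyln

data, rate = sf.read("strange_constellations_capture.wav")   # placeholder file name

meter = pyln.Meter(rate)                          # ITU-R BS.1770 loudness meter
lufs = meter.integrated_loudness(data)            # integrated loudness, LUFS

peak_dbfs = 20 * np.log10(np.max(np.abs(data)))   # sample peak, dBFS
plr = peak_dbfs - lufs                            # peak-to-loudness ratio, LU

print(f"Integrated: {lufs:.1f} LUFS   Peak: {peak_dbfs:.1f} dBFS   PLR: {plr:.1f} LU")
```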

By the way, if you’re interested in some insider info: the AES is in the process of ratifying a new recommendation about streaming loudness, which should be published in the next few weeks. They met with Spotify and other platforms last week and Spotify has agreed to follow the upcoming recommendations. It looks like they are going to reduce the target normalization loudness to -16 LUFS instead of the current -14 LUFS. And they had just switched from ReplayGain to LUFS a few months back. Another change that has actually already started is the use of album/playlist normalization as standard. SoundCloud was there too so that may be the sign that they are finally going to make the normalization move after talking about it for so many years. Things are changing quickly and certainly not in favor of low dynamic range content.

Without meaning to offend, you sound a little dogmatic - I understand not using compression or limiting for artistic purpose, but you don’t seem to understand the reverse.

Perhaps this will help: an acoustic drum kit is very dynamic. It can go from whisper quiet to literally painfully loud. If you make a recording of that, it’s mostly going to be heard in environments where the smaller details that are intrinsic to the performance would be lost in the ambient noise of most casual listening situations. Compression or dynamic control could, therefore, be a way of presenting a listener at home, in a car, in the office, or on earbuds on the train with an experience that is closer to how it felt to be in the room while the performance occurred - by allowing the details (the soft hits, the room ambience, the sympathetic resonance of the skins) to be heard by the end listener.

Or you could preserve the entire dynamic range of the performance on the end product. In that case, the listener would need to turn the recording up very loud to appreciate the small details. And they would quickly run into distortion and bandwidth limitations in most consumer playback systems.

And in controlling the dynamic range of the drums in that situation, there’s no need to compromise the dynamic flow of the entire performance. It is literally a way to present the end listener with an experience more akin to the original sonic event, through a medium that is naturally going to be one step removed from it.

1 Like

I wonder what makes you think that? I never said or implied that I was against compression or limiting. I don’t like extreme dynamic range reduction, especially when the main goal is to achieve a loud mix, which is going to end up drastically turned down by normalization algorithms anyway.

I’m not sure why you are explaining the basics of compression to me here. Compression is a fantastic tool; it’s no wonder that it was one of the very first audio processors to be invented. Having recorded, mixed and produced music professionally for close to 30 years, I have had many occasions to use compression in all kinds of situations and for all kinds of different purposes. It is very rare for me not to use compression while mixing.

I am expressing my opinion here, again you don’t have to agree with me. I am just suggesting that maybe there is something wrong when people keep reducing the dynamic range to extreme values in an attempt to end up louder when in the majority of cases, the opposite will happen (since most music is streamed normalized now, and the trend isn’t reversing anytime soon, quite the opposite).

There was some kind of logic to this when music was played on CDs. Now that most music is normalized, why not take advantage of the improvements audio engineering has benefited from in recent years in terms of audio quality, particularly the extension of dynamic range and resolution?

And by the way, I do believe that some music actually benefits from more DR reduction. Every song is different and should be treated as such. But systematically aiming for the loudest possible levels doesn’t make sense nowadays, and there are still many mastering engineers who work like this.