I know this has probably been discussed a million times, but is there a “better” order for placing processing items, such as EQ first, compression second, reverb third, and so on? The plugins I currently have are:
You can really do things any way you want, but generally your spectral (EQ) and dynamics (compressor/limiter/gate/expander) processing will come first in terms of shaping the signal. And it is usually recommended to do subtractive EQ (cuts) before compression, and additive EQ (boosts) after compression - but again, it’s up to you as to what works best for what you’re trying to do.
The time-based effects - like reverb, delay, chorus, flanging, phase-shifting - are usually last, kind of an “icing on the cake” that you already baked with EQ/compression.
It can also be helpful to refer to how analog processing used to be done, on consoles etc. EQ and compression were typically “insert” effects, put on individual channels for sculpting and gain-staging. The time-based effects were more often done through send/return or “Aux” functions, where they might be used on multiple channels. To some degree, this was driven by the signal flow of analog consoles. With digital and especially “in the box” mixing, it’s easier to put what used to be an Aux effect directly on one channel (“insert”) if that’s all you need it for, and use the “Mix” knob to emulate a send fader level. Reaper has the Mix knob (dry/wet) on every FX. But for FX like reverb that you might use on multiple channels/tracks, it’s still good to put them on their own Aux track and send to them. Using a shared sense of space on multiple tracks can help glue the mix together.
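To make the insert-vs-send distinction concrete: a Mix knob is a dry/wet crossfade inside one channel, while a send adds a separate wet path on top of the dry signal. Here’s a minimal numpy sketch of the two signal flows (the function names and the linear crossfade law are just illustrative; real plugins may use different mix laws):

```python
import numpy as np

def insert_with_mix(dry: np.ndarray, wet: np.ndarray, mix: float) -> np.ndarray:
    # Insert FX with a Mix knob: crossfade between the unprocessed
    # (dry) signal and the fully processed (wet) signal.
    return (1.0 - mix) * dry + mix * wet

def send_return(dry: np.ndarray, wet: np.ndarray, send_level: float) -> np.ndarray:
    # Send/return ("Aux") routing: dry passes at unity and the wet
    # return is added on top, scaled by the send fader level.
    return dry + send_level * wet
```

For a 100%-wet effect like a reverb return, the two end up sounding similar; the send version just keeps the dry level independent of how much effect you add.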
There’s a lot of leeway here! And keep in mind that there’s no rule saying you can’t have multiple EQ or compression plugins on the same track.
One thing that’s been clicking for me lately is that if you put some EQ on your track before compressing it, the compressor can do its job better. Think of it like this: if you have a vocal track and you know you don’t need anything below 80 Hz because it’s just useless room noise, cutting that out first with an EQ means the compressor isn’t being triggered (and pulling the whole signal down) just because the singer hit the mic stand with their foot. That rumble has already been cut out.
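Here’s a toy numpy/scipy sketch of that effect, with one sine standing in for the vocal and a 40 Hz sine standing in for the mic-stand rumble (all signal values and the 80 Hz cutoff are just illustrative):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
t = np.arange(fs) / fs
vocal = 0.3 * np.sin(2 * np.pi * 220 * t)   # stand-in for the singer
rumble = 0.5 * np.sin(2 * np.pi * 40 * t)   # stand-in for mic-stand thumps
track = vocal + rumble

# 80 Hz high-pass before the compressor ever sees the signal
sos = butter(4, 80, btype="highpass", fs=fs, output="sos")
filtered = sosfilt(sos, track)

# Peak level a compressor's detector would react to:
print(np.abs(track).max())     # ~0.8: the rumble would trigger gain reduction
print(np.abs(filtered).max())  # ~0.3: just the vocal is left to compress
```

The compressor after the filter now reacts only to the part of the signal you actually want to control.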
I tend to think about EQ twice for each track, as needed. The first EQ is to shape the sound, mostly to get rid of unnecessary stuff, something like an HPF on a vocal. I won’t always do this part, especially if the sound is from a VI (virtual instrument). Then I look at EQ from the perspective of the whole song. With the whole song in mind, it’s sometimes useful to even put an EQ after a reverb or other effect in order to carve out space for other parts.
Both of these are excellent pieces of advice! However, there’s a tendency to interpret this incorrectly. The key is to think in two stages (I call them corrective vs. tone shaping) instead of thinking in terms of a fixed plugin order. So this:
-any corrective processing
-any tone-shaping processing

NOT this:

-EQ
-Compression
-EQ
What’s the difference? The key is to remember that EQs and filters are not your only corrective tools. Compressors, multiband compressors, de-essers, gates, phase-alignment tools, and distortions can all serve a corrective purpose. Check this out:
-Filter unwanted noise (surgical EQ)
-EQ out rogue frequencies and offending tones (surgical EQ)
-Fast transparent compression to control dynamics (surgical compressor)
-EQ to add warmth and character (tone shaping EQ)
-Slow and colored compression to add glue (tone shaping compressor)
Here’s a real-life chain I might use on a vocal. I organize my recallable preset chains in “blocks”, which I can right-click and recall on inserts to get to them quicker.
Block 1:
-Filter unwanted noise (surgical EQ)
-EQ out unwanted frequencies (surgical EQ)
-Add console saturation (sonic enhancement positioned at the beginning)
-multiband compression (surgical compression to control frequencies)
Block 2:
-glue compression (maybe an 1176 or Distressor or LA2A)
-EQ (maybe a Helios, Curve Bender, or Massive Passive that makes broad, sweeping, color-changing moves with subtle lifts and broad bell curves)
Not that I’m arguing with anything already said, but here’s a possible exception to the “corrective stuff first” idea:
Ringing frequencies in vocals and guitars. I like to cut them after compression, because compression tends to bring them out. Say you’ve got a vocalist with a big 3.7 kHz whistling tone when they really belt sustained vowels. Quite often I find that if I EQ that out first, I need to go back and deepen the notch once there’s 15 trillion dB of compression going on, whereas if I cut it after compression, it takes a shallower cut. Makes sense, I guess: leaving it to go through the compression means the compressor’s helping clamp it down as and when it would be annoying.
So I guess my proviso would be that sometimes what you’re correcting is something that’s being exacerbated by prior processing - you sometimes need to take the rough with the smooth!
And on guitars: if all the guitars in a song were recorded through a cab that happened to have the usual 2.7 kHz 12" speaker resonance, a 3 dB narrow cut last on the guitar bus can be quicker and more effective than applying that cut across however many channels of guitar (bonus points if you use a dynamic EQ).
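Since dynamic EQ came up: the behavior is “cut this band only while it’s hot”, which is exactly what you want for a resonance that compression keeps bringing out. Here’s a toy numpy/scipy sketch of the idea (the subtract-the-band trick and all parameter values are my own illustration; a real dynamic EQ uses a proper time-varying bell filter, so this is only an approximation):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def dynamic_notch(x, fs, freq=2700.0, bw=300.0, thresh=0.1, max_cut=0.7):
    # Isolate the offending band (e.g. the 2.7 kHz speaker resonance).
    sos = butter(2, [freq - bw / 2, freq + bw / 2],
                 btype="bandpass", fs=fs, output="sos")
    band = sosfilt(sos, x)

    # Crude envelope follower: instant attack, ~10 ms smoothed release.
    release = np.exp(-1.0 / (0.010 * fs))
    env = np.empty_like(band)
    e = 0.0
    for i, v in enumerate(np.abs(band)):
        e = v if v > e else release * e
        env[i] = e

    # Reduction ramps up only while the band's envelope exceeds the
    # threshold, so quiet passages are left completely untouched.
    over = np.clip((env - thresh) / thresh, 0.0, 1.0)
    return x - band * (over * max_cut)
```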
Well, if it’s constant noise where you’d use “noise reduction”, and you’re using Reaper (I think you said you switched?), you can use the ReaFIR plugin. Open it up on your track, choose “Subtract mode” from the drop-down box, and check the “Automatically build noise profile (enable during noise)” box. Play only a segment of the audio where there’s just the noise and no performance. For example, say you recorded a vocal take which is good, but there was an air conditioner or fan running in the background. The noise has to stay constant in the background for this to work. You build the noise profile from a short noise-only sample (like the beginning or end of the track), then uncheck the noise-profile box and the plugin does the rest. Voilà! The noise is gone and the clean audio track is left.
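For the curious, that “build a noise profile, then subtract it” workflow is classic spectral subtraction. Here’s a minimal numpy sketch of the concept (this is the textbook algorithm, not ReaFIR’s actual code; the function and parameter names are mine):

```python
import numpy as np

def spectral_subtract(x, noise_clip, frame=1024, hop=512):
    win = np.hanning(frame)

    def frames(sig):
        n = (len(sig) - frame) // hop + 1
        return np.stack([sig[i * hop:i * hop + frame] * win for i in range(n)])

    # 1) Noise profile: average magnitude spectrum of the noise-only clip.
    profile = np.abs(np.fft.rfft(frames(noise_clip), axis=1)).mean(axis=0)

    # 2) Subtract the profile from every frame's magnitude, floor at zero,
    #    and keep each frame's original phase.
    X = np.fft.rfft(frames(x), axis=1)
    mag = np.maximum(np.abs(X) - profile, 0.0)
    Y = mag * np.exp(1j * np.angle(X))

    # 3) Overlap-add resynthesis (output level is off by a constant
    #    window-overlap gain, which is fine for a sketch).
    out = np.zeros(len(x))
    for i, f in enumerate(np.fft.irfft(Y, n=frame, axis=1)):
        out[i * hop:i * hop + frame] += f * win
    return out
```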
I prefer to use a dynamic EQ to clamp down on sudden unwanted bursts of a certain frequency. And since that’s corrective, any overall compression comes afterwards.
I guess using a multiband for surgical compression is similar to using a dynamic EQ?
I prefer the latter because it’s more accurate, but I’m sure there are reasons for preferring a multiband (like owning one that works outside the box? A dynamic EQ is only available inside the box, as far as I know).
I want to say thank you to everyone who has pitched in and helped me understand how to order the signal chain. I have been taking notes and trying to grasp as much as I can as a newbie to the mixing process.
I have a second HPF at the end because sparkly stuff tends to add back in some of the crap that is attenuated by the first HPF.
I have the EQ before the compression because I don’t see the point in making the compression go crazy working on frequencies that are going to be removed by the EQ.
I would add that the processing chain is secondary to:
a) Micro-editing the track.
b) Automating the levels to even out the signal.
If you get a) and b) right it’ll sound great before you even start on any processing.
Damn, you guys are so conscientious. I do (very roughly):
Tilt EQ (to get in the ballpark)
Vintage Pultec-style EQ (optional)
5-band semi-parametric EQ (with 1 dB click-stops)
Saturation 1 (subtle but good)
Reverb (very subtle due to subsequent compression, or I might have to push it further down the order)
Saturation 2 (maybe, a different flavor of distortion/compression)
Modern Pultec-style EQ (rarely/sometimes)
2-knob comp/limiter (also optional)
This shit may vary quite a lot, and editing is pretty minimal, done only if really necessary after this little lot.
Anyway, it’s nothing like textbook, but whatever, it’s pretty cool for the likes of me.