"EQ before or after compression? I say: before AND after applying compression." These were the words of my producing tutor, anyway.
But he was speaking in general terms, and reducing specific contexts to such a simple formula does not do justice to the craft of producing (or to my tutor, for that matter).
You might think I am just being nostalgic, but that isn't the case here. I just want to arrive at a point, having explored the fundamentals, common settings and common situations first.
The "Back in the Day" Chronicles
Back in the age of analog equipment, there was no such question as "so, do we connect the EQ before or after the compressor?". It was understood that the EQ came first.
The Reasons
Well, to begin with: sound quality and tonal clarity.
Applying compression makes sense only once the fuzziness and/or muddiness (especially the low-end thumps) has been taken care of. Now, imagine a session where a source recording is of lower quality, and add some signal noise on top of that...
In such a situation, compressing before applying the EQ would be like cutting off the branch you're sitting on. The noise and mud get squeezed right into the signal, the dynamic range appears more flexible than it actually is, and the tone shaping that follows turns out very suboptimal.
Then, when you try to boost a certain frequency range with the EQ - disaster strikes: the track starts sounding both noisy and muddy. In fact, much worse than it sounded before the compressor and the equalizer were applied. And you can't fix this afterwards; literally nothing helps here!
Resources and Goals Today
Nowadays, the answer to "EQ before or after compression?" ultimately depends on what you want to achieve.
The audio resources are top notch, the plug-in settings are defined up front, the mix is stable to begin with, the signal chain is clearer than a newborn's eyes, and the master bus actively evolves into a master Airbus... Well, not really...
A ton of work remains to be done in today's music production, but the workflow is, for all intents and purposes, a lot more "plastic", i.e. malleable. And in a good way!
The compressor threshold, for example, is both a number and a knob, allowing very nuanced control over the dynamic range of each channel.
Likewise, EQs come in every shape and form imaginable, along with innumerable presets. And you can create new ones, adjust the existing ones and save them. You can literally experiment with your equalizer plug-ins to oblivion.
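To see why the threshold being "a number" matters, here is a minimal, purely illustrative Python sketch of the static gain curve a basic downward compressor applies; the -18 dB threshold, the 4:1 ratio and the function name are my own example values, not recommendations or any plug-in's API.

```python
import numpy as np

def compressor_gain_db(level_db: np.ndarray, threshold_db: float = -18.0,
                       ratio: float = 4.0) -> np.ndarray:
    """Static gain curve of a basic downward compressor (no attack/release).

    Levels below the threshold pass unchanged; every `ratio` dB of input
    above the threshold yields only 1 dB of output above it.
    """
    over = np.maximum(level_db - threshold_db, 0.0)   # dB above the threshold
    return -over * (1.0 - 1.0 / ratio)                # gain reduction in dB

# A peak 12 dB over an -18 dB threshold at 4:1 gets 12 * (1 - 1/4) = 9 dB of reduction.
print(compressor_gain_db(np.array([-30.0, -18.0, -6.0])))   # -> [ 0.  0. -9.]
```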
The Options and Scenarios
Well, either the compressor or the EQ will be first; there's no way around that!
But: what would your decision depend on? Well, let's take a look at a few standard scenarios.
A Thick Low End
Depending on the texture, it might be challenging to mix a single kick drum with the bass guitar or double bass. Many such situations will require some side-chaining.
Now, side-chaining is based on compression, but the "EQ before or after compression" question concerns the mastering phase primarily. So a side-chain compressor sitting first in the channel does not mean that compression in general should go before the EQ, at least not always.
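To make the side-chaining idea itself concrete, here is a bare-bones, purely illustrative Python sketch of kick-over-bass ducking; the function name and the depth and release values are assumptions made for the sketch, not any plug-in's parameters.

```python
import numpy as np

def sidechain_duck(bass: np.ndarray, kick: np.ndarray, sr: int = 44100,
                   depth: float = 0.6, release_ms: float = 120.0) -> np.ndarray:
    """Turn the bass down whenever the kick hits (a toy side-chain ducker).

    `bass` and `kick` are mono float arrays of equal length.
    """
    # Follow the kick's amplitude envelope: jump up instantly, decay smoothly.
    coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.empty(len(kick))
    state = 0.0
    for i, sample in enumerate(np.abs(kick)):
        state = max(sample, state * coeff)
        env[i] = state
    # Normalise the envelope and use it to pull the bass down by up to `depth`.
    env /= env.max() + 1e-12
    return bass * (1.0 - depth * env)
```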
The general rule here is: the thicker the low end, the more it will affect the overall sound and tone - a lot more, in fact, than a thick high end would. The high end is thick by default anyway, of course, since most of the overtones sit in that register.
Because of this, even the initial stages of mixing simply have to begin with equalization here. Eliminate the mud and fog and create the necessary space; compress and adjust the dynamics later.
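As a rough illustration of that order, here is a toy EQ-then-compress chain in Python using scipy; the 35 Hz high-pass, -18 dB threshold and 3:1 ratio are arbitrary assumptions for the sketch, not mixing advice.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def clean_then_compress(x: np.ndarray, sr: int = 44100, highpass_hz: float = 35.0,
                        threshold_db: float = -18.0, ratio: float = 3.0) -> np.ndarray:
    """EQ first, compression second: clear the low-end mud, then tame the dynamics."""
    # 1. EQ stage: a gentle high-pass removes sub-bass rumble before the
    #    compressor ever "sees" (and reacts to) it.
    sos = butter(2, highpass_hz, btype="highpass", fs=sr, output="sos")
    x = sosfilt(sos, x)

    # 2. Compression stage: static per-sample gain, no attack/release smoothing.
    level_db = 20.0 * np.log10(np.abs(x) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)
    return x * 10.0 ** (gain_db / 20.0)
```

Swap the two stages around and the compressor ends up reacting to the very rumble you were about to cut - exactly the branch-cutting problem described earlier.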
You can't really be "creative" everywhere. For example, you might need to take care of the timpani, the orchestral bass drum, the double bass section and the bass guitar all at once... I've done that, and it's a nightmare if you don't know what you're doing. Trust me - at least try not to experiment here.
The Spatial Texture
From the sustained chords of a meditation track all the way to the virtuosic repertoire of a classical chamber ensemble (say, a string quartet or a woodwind quintet), the sound is quite spacious.
This means that clarity is a given. You hear each note distinctly, and that naturally gives you greater control and more options within the mix.
This is a situation where you can, and should, be creative. The mixing controls will shape the dynamics, with the compressor almost becoming an instrument in its own right.
Putting the compressor first is a standard option, potentially leaving out the EQ altogether, depending on the context. However, note that being creative comes at a price.
Avoid getting creative out of context... if you're called to mix a professional string quartet recording, don't get wild with the compressor just because you'd barely be using an EQ.
Wide Dynamic Ranges
Dynamics are a major factor in every genre. It might sound counter-intuitive, but when tracks have wide dynamic ranges, the processing needs to be stricter and tighter. In other words, the compressor threshold will need to be set quite high.
A high threshold prevents what would otherwise end up as "flat" dynamics: the compressor should not be allowed to affect the tracks by indiscriminately boosting the quiet elements and turning down the loud ones.
When the huge disparity in loudness is intentional, stick to the EQ and maybe enhance the balance just a tiny bit with the compressor. Notice: this is the opposite of the spatial-texture situation described above.
How About Spatial Textures With Wide Dynamic Ranges?
Compressors are still barely needed; the wide dynamic range takes precedence. In such a situation, the EQ is what does most of the work anyway.
So, Did The Rules Change?
Hard and fast rules are less and less hard and fast these days. The signal sounds smoother today, and the overall sound tighter.
In other words, apply compression and EQ in the order the signal requires. You have the plug-ins and you pretty much know what to expect.
Remember that, as producers, we're after the balance and the overall tone of a track; we do not compress, equalize, mix, limit, bypass or build buses for their own sake...
Signal, EQ, Compressor, Control, Course - The Conclusion
What comes first between the compressor and the EQ should depend on which of them is needed more, i.e. on what the signal being processed requires.
Technically, you can do what you want, but practically speaking, the flow will depend on what the mixes call for. The tone we're after dictates the necessity, order and amount of EQing and compressing.
Simply Put:
Whatever is less needed will affect the sound even less if it comes later. So, if the signal requires less compression and more EQing, the EQ should come first and the compressor second. Conversely, when the EQ is less relevant, the compressor comes first.
A single signal can at times play a key role in the course a song takes; the very same signal can trigger a drop or a development within a track.
But a single signal is still just that: a single signal! Unless you're after extravagant sounds, the instruments, the vocals and every other signal in the track should speak for themselves too!