Artificially generated reverb is one of the most important tools at a mix engineer’s disposal. It’s also one of the most misused and misunderstood. All too often the complaint about reverb is that it tends to ‘swamp’ recordings, turning them into an inarticulate mess. But that’s a lazy description of what’s going on. Reverbs don’t swamp anything; you do! Here are a couple of ways to clean up your ‘mess.’
All too often, engineers send a dry signal to a reverb, return it on another channel and expect the result to be almost perfect right out of the gate. When it’s not, their next effort is to turn the reverb up or down. That’s some fancy technique right there!
The problem with this approach – if indeed you can call it that – is that it treats reverb as though it were entirely a matter of quantity: how much or how little does this sound need?
Mistake Numero Uno
But there are far more pieces to the reverb puzzle than that. Quite apart from the quality of the reverb itself – whether it's generated by a hardware unit or a software plug-in – the size of the space you're sending the dry signal into, its shape, its reflectivity and so on are all crucial to making a reverb work in each specific context. You can't just bung a Hall reverb on a channel and expect it to work. Doing that is a thoughtless, lazy and frankly shambolic way to approach reverb in a mix. Occasionally you'll get lucky, but mostly you won't.
And right here is where the problem begins. The perennial issue of reverb turning your mix into a 'soup' starts before you even switch the reverb on! Unless YOU switch on first, you'll invariably find yourself awash in a large hall reverb that's nearly always inappropriate for the circumstances. From there – and particularly if this happens to you regularly – you'll be fighting an uphill battle to establish a setting that's right for the song.
Hold Your Fire!
First and foremost, you’ve got to switch on. Even if you’re not the most skilful engineer when it comes to reverb parameters (the architectural blueprints of any reverb setting), before you hear any reverb at all, ask yourself some basic questions about what you’re hoping to achieve, like: ‘What size do I imagine the space I’m conjuring up to be?’ ‘How big is the overall soundstage?’ and ‘Where is my sound source in this picture?’
Once you can at least answer these basic questions you should then be able to choose a reverb that’s closer to the mark.
But if your answer to the question: ‘What is my reverb’s size?’ is simply ‘Big!’ (i.e., lots of reverb on it) ask yourself why, and be sure the answer isn’t just a reflex action. If you’re an engineer who often finds yourself awash with too much reverb at the end of a mix, ‘big’ might be the wrong answer! The quickest way to minimise the damage of a washy reverb is to shorten it.
Put another way, have you ever heard someone complain that their ‘short room’ reverb swamped their song in a cloudy maelstrom of echoes? The problem invariably starts with a large space, like a Hall. It’s no coincidence that nearly every hardware and software device ever made has a Large Hall as its very first preset: they’re glamorous and attractive, make the manufacturer look special… and regularly wreck the mixes of engineers who reach aimlessly for Preset 1.
But let’s just say you do want a Large Hall reverb on your vocal – you wouldn’t be alone in that choice. You’re certain that shortening it is only going to compromise the vision you have for the sound of the song.
At this point then, we need to find a new way to achieve the big glamorous space that you imagine for the vocal, without ending up where you always do: in a giant, flat, inarticulate mess of reverb that prevents the vocal from projecting outward and remaining clear and compelling.
Your Activation Key
Okay, so this is the plan. We send our vocal to the reverb, doing our best to make conscious choices about pre-delays, early reflections and echoes until we’re happy with the sound. In fact, it’s awesome now because we finally bothered to get in there and tweak some parameters, possibly even discovering how some of them work along the way!
But this time, before we simply return the reverb to our mix, we add a compressor after the reverb plug-in (directly below it on most DAWs). If your signal chain is hardware-based, the same rationale applies. Don’t worry yet about the compressor’s settings… we’ll sort them out in a moment.
The next thing we need to do is key the compressor off the vocal signal so that every time we hear the voice, it activates this compressor.
Stay with me now.
‘Keying’ may be a term you’ve not heard before, and no, it doesn’t involve running your keys down the side of someone’s car. If you don’t know what ‘keying’ is in an audio engineering context, that’s cool. I’d simply urge you to do some extra reading and YouTubing to clarify this process in your own head, because though slightly more involved, it’s a great technique for problem solving issues like spill and bleed on the one hand, and creative sound effects like pumping and unnatural dynamics on the other.
Basically, keying an input in this manner is a process by which a signal that is NOT going through an audio device nevertheless controls its behaviour. In this example, our compressor is under the spell of an external influence: the vocal. The audio signal travelling through the unit itself – our reverb – is no longer dictating terms about how the compressor behaves. It's kind of like using The Force on it, manipulating its behaviour from afar.
So, now our vocal reverb is dynamically diminishing in level whenever the vocal is present, in some respects contrary to how it normally behaves. The compressor is after the reverb, don’t forget, so when the singer sings, the compressor turns the reverb down. Then, as if by magic, the instant the vocal’s gone (depending on the speed of the release setting on the compressor) up comes our big lush vocal reverb!
Depending on how big a contrast you want here, you can make the compressor react a little or a lot. Typically, when I perform this manoeuvre the volume shift is around 2 – 4dB… not huge. But, that’s entirely up to you.
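For the programmatically inclined, the whole keyed-ducking idea can be sketched in a few lines of code. This is a deliberately simplified model – a per-sample envelope follower on the vocal 'key' signal, switching the reverb return between full level and a ducked level – and every function name, threshold and coefficient in it is illustrative, not lifted from any particular DAW or plug-in:

```python
def db_to_gain(db):
    """Convert a dB value to a linear gain factor."""
    return 10 ** (db / 20.0)

def duck_reverb(reverb, vocal_key, threshold=0.1, duck_db=-3.0,
                attack=0.2, release=0.05):
    """Duck the reverb return whenever the vocal key signal is present.

    reverb, vocal_key : lists of samples (floats in the -1.0..1.0 range)
    duck_db           : gain reduction applied while the vocal is active
                        (the article suggests roughly 2 - 4dB)
    attack, release   : per-sample smoothing coefficients (0..1);
                        higher values react faster
    """
    duck_gain = db_to_gain(duck_db)
    env = 0.0  # envelope follower tracking the key (vocal) level
    out = []
    for r, v in zip(reverb, vocal_key):
        level = abs(v)
        # Smooth the key level: fast attack, slower release.
        coeff = attack if level > env else release
        env += coeff * (level - env)
        # While the vocal is audible, pull the reverb down; otherwise
        # let it return to full level.
        gain = duck_gain if env > threshold else 1.0
        out.append(r * gain)
    return out
```

Here the attack coefficient is fast, so the reverb ducks almost as soon as the singer sings, while the slower release lets the tail bloom back in gradually – the same give-and-take you'd dial in with a real compressor's attack and release controls.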
Secret Weapon Revealed
So now we have a weapon at our disposal that, in some respects, gives us our cake and lets us eat it too. We get projection from the vocal thanks to a dynamically diminishing reverb. The voice sounds more detailed and focussed, appearing to push forward whenever the singer sings. Then, when the vocal stops, our big space expands to fill the newly created void. What's doubly cool is that, to the untrained ear, the amount of reverb we hear between vocal phrases appears to have been there all along!
This awesome ducking trick can also be used on naturally recorded ambient tracks, of course. The compressor is simply applied to the ambient channels and keyed off the source in the same way.
Let’s say you’ve recorded an awesome ambient component to an acoustic guitar in a large country hall. If you like the amount of ambience trailing off the end of the guitar phrases, but that same quantity tends to overwhelm the instrument while it’s playing, ducking the ambience down a little lets you satisfy both requirements.
No more messy reverbs swamping your music!
Once you learn this trick of putting things before, or after, another device to influence its behaviour in different ways, a whole new world of control opens up. Keying sounds via others can be a miracle cure for things like bleed on drums, for example, not only because you’re using the dynamics of the instrument as a natural automation pass, but also because you’re potentially saving huge amounts of time along the way.
Another cool trick is to delay the send signals before they reach a reverb, with a different delay time for each instrument in the mix. So, for example, you could have just one hall reverb for your whole song – as was often the case back in the day – with a pre-delay setting of zero, or close to it. The instruments themselves feed the reverb unit their own pre-delay settings via an aux send (along with EQ, flange, distortion or whatever!). This contributes enormously to the sense of where in the space each instrument appears to reside.
If the space you’re generating is trying to sound big, any instruments that send a pre-delay value of zero will tend to sound like they’re standing up the back. Instruments with modest amounts will appear in the middle distance, and heavily pre-delayed sounds will be more up front. To reinforce this illusion, make sure the loudest, most up-front sounds have the longest pre-delays, and the quietest background sounds none at all.
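This positioning trick can be sketched the same way. Assume one shared reverb bus fed by several aux sends, each padded with its own pre-delay before the signals are summed together – again, the function names and delay values here are purely illustrative:

```python
def predelay(signal, delay_samples):
    """Delay a send by padding silence in front of it (the 'pre-delay')."""
    return [0.0] * delay_samples + list(signal)

def mix_sends(sends):
    """Sum several (signal, delay_samples) sends into one shared
    reverb input, each send arriving after its own pre-delay."""
    delayed = [predelay(sig, d) for sig, d in sends]
    length = max(len(s) for s in delayed)
    return [sum(s[i] if i < len(s) else 0.0 for s in delayed)
            for i in range(length)]
```

An up-front vocal would be sent with a long pre-delay, while a background pad would be sent with none at all – so the pad's reverb smears in immediately (placing it 'up the back' of the space) while the vocal's reverb hangs back behind the dry signal, keeping it forward and clear.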
It’s a big wide world out there… don’t let it swamp you.
Andy Stewart owns and operates The Mill studio in Victoria, a world-class production, mixing and mastering facility. He’s happy to respond to any pleas for pro audio help… contact him at: email@example.com or visit: www.themillstudio.com.au
Published monthly since 1991, our famous AV industry magazine is free for download or pay for print. Subscribers also receive CX News, our free weekly email with the latest industry news and jobs.