Bruce Kaphan - Prepare Your Mix

Excerpt by Bruce Kaphan from Recording Magazine.com

Intro

Before I go deep, I feel obligated to try to make it clear that my process is my process. I've arrived at my process after many years of plying my craft. I have no doubt that a good deal of how I go about mixing would resonate with many other mix engineers, but maybe some of how I do what I do, not so much. Over the course of my career as a musician, I've been fortunate to find myself in situations where I was involved with productions on which some really amazing engineers have mixed, including Bob Clearmountain, Jack Joseph Puig, Mark Needham, Jim Scott, and many others. Being a fly on the wall in their sessions has been remarkably informative; I've absorbed many techniques I learned from these masters, but they mix how they mix, and I mix how I mix. And you'll mix how you mix.

One size does not fit all

There's one especially good reason why relatively inexperienced engineers have a tough time wrapping their minds around mixing: it's really complicated! It's art meets science meets infinity. I'm guessing that, given the same set of tracks to mix, it's more likely that two engineers could enter the same lottery and both win jackpots than that, given identical tools, they'd craft identical mixes.

I was 24 years old when I began my professional engineering career. It wasn't until I was in my late 30s that I finally felt confident in my ability, through mixing, to evolve a group of tracks into a whole that's greater than the sum of its parts.

You may be disappointed to learn that I don't have, and can't pass along, a routine approach to mixing, not even for myself. I haven't come up with a concise start-to-finish routine that I follow mix after mix; in fact, I think trying to follow one would be counterproductive to inspired mixing. So what good could this article be?

Ever since our Editor gave me the green light to write this article, I've been very deliberate about staying conscious of my decision-making process while mixing, so as to inform my writing, and I've taken notes. What I've noticed, and what I think you might find valuable, is that while I don't have a routine, linear, systematic overall approach to mixing, I've come up with many subroutines (ways of dealing with smaller, more manageable tasks) such that by the time the multitude of smaller tasks has been addressed, the meta-task of finally addressing the picture from the widest angle becomes manageable. And there's an added benefit to focusing on detail before you focus on the big picture: you will have heard every inch of every track, close up, so you'll know where all the tastiest bits are hiding, and this is where inspired mixing begins (at least for me).

An excellent metaphor for mixing can be found in house painting. For a paint job to look good and last a long time, more time will likely have to be spent preparing to paint than actually painting. Such is the case with mixing: if all the tracks receive focused attention to detail on a one-by-one basis, then the minutiae within them can be addressed with precision and focus, so that when finally bringing them all together, the depth of care and detail that's gone into the minutiae informs the greater whole.

A contemporary way of looking at this would be to think of the effort put into preparing each track as building "metadata" into that track, and that the metadata enriches the overall effectiveness of the mix, because no matter where your ear focuses, there's something of interest to find there.

One important concept to understand, though, is that it ain't over 'til it's over: I may spend an ample amount of time building metadata into each track, but that doesn't mean that after I've "finished" preparing each track, I won't have to circle back later to make adjustments to the work I've already done.

Learn the rules in two easy steps

I've long held the belief that in digital audio there are only two rules:

  1. Unless you like the sound of digital clipping, avoid those red channel peak lights!
  2. More relevant/more important in the context of this article: everything you change changes everything.

One of the reasons I think it's pointless to try to adhere to a routine method for mixing is embodied in Rule #2 above: a routine method suggests finalizing one aspect of the mix before moving on to the next. In my experience, since every move you make affects all the other moves you've already made and will make, until I've made my final adjustment I'm constantly revising and tweaking aspects of the mix I've already worked on; it's a process of successive approximation. Hopefully, as the process evolves, the adjustments become finer and finer, until a tenth of a dB (the smallest increment of level adjustment in Pro Tools) becomes the limiting factor on whether there are any further adjustments to be made.

So with the infinite nature of the task at hand in mind, following are a few relatively finite tasks to get the ball rolling. Please note that I work primarily in Pro Tools and as such will be using some Pro Tools vernacular throughout this piece. I apologize to those of you who use other DAWs; the ideas are there, and it shouldn't be hard to translate specific terminology where needed.

A short list of mix preparation processes

Here are the subroutines I perform to prepare tracks for mixing:

  1. preparing the work area for maximum efficiency
  2. creating a panning scheme
  3. cleaning clips: muting/in and out fades and crossfading
  4. dealing with noise problems
  5. scrutinizing and adjusting for phase issues within groups of simultaneously recorded (or combining re-amped with source) tracks
  6. modifying the essential nature of a track through signal processing
  7. leveling a track or group of tracks
  8. selecting a palette of processor voices (reverbs, delays, pitch shifters, etc.)

At first blush, this list may make it appear that there's not much to mixing. But when you start to peel the onion, it's amazing how much effort is required to perform this list of tasks – generally I find that about 2/3 of the time I spend mixing a track is taken up with these (pre-mix) tasks!

Depending on track count, the quality of tools and techniques used to create the multitrack recording, the level of detail to which you aspire, or (in the case of a professional charging to mix) what you can afford, the range of time you can expect a mix to take could literally be minutes to days. In my experience, for a track count under about 3, if my client can afford it I usually like to spend approximately 5 to 8 hours on a mix. I can mix in less time, but that usually means shortcutting processes that I think are important in getting a mix to really sing. And now, on to the subroutines!

Preparing the work area for maximum efficiency

During the analog era, limitations on track count and cost of tape necessitated making many "destructive" decisions: you can't unsplice a tape. In the DAW era, data storage is ridiculously inexpensive and very few edit/mix decisions can't be undone. That means that during production, I can afford to keep all options open as long as possible. This means putting up as many microphones as resources, imagination, and time/budget will allow, and keeping every take of audio or MIDI recorded along the way. I can't count the number of times this practice has been rewarded! Sometimes, repeated listening has revealed flaws (or at least tangents) in a take, eroding a previously "good enough" moment to the point of utter distraction. I call this phenomenon "snagging". This is where post-production fishing through outtakes has come to the rescue for me.

But keeping every bit of data ever associated with a session can lead to a messy and inefficient workspace. When it comes time for mixing, when suddenly I want to optimize available DSP and make my workspace as streamlined as it can be (both to facilitate maximum mixing power and faster navigation through the edit and mix windows), I want only the tracks that have made the final, final cut to populate the session.

I first do a "save as" (giving myself a breadcrumb trail to retreat along if necessary, always a good idea any time you're about to make a substantive change to your session), saving the session with a new name so that I can always backtrack to where things stood at the end of production. Then, in my newly saved version of the session, I delete from the track list the tracks that haven't been an active part of the session for a long time. Don't worry: by saving the session with a unique new name, if I make a mistake here, I can always revert to the earlier version of the session by performing an Import Session Data command.

Once I've deleted all the tracks that haven't been in use for a long time, before I take the next step, I once again do a "save as," giving the session a new unique name. At this point, if there are any tracks that are still active in the session but are either muted or have their fader all the way down, I make these tracks inactive, then hide them, since I'll probably continue not using them. This process can be evolutionary as the mix develops: if I've used multiple microphones to cover a particular instrument (let's use drum kit as an example), I may decide that the way the production evolved, I may not want to use all the mics originally used to cover the kit. Since I frequently go for both close miking and one or more sets of distant mics, I may choose to abandon one or more of these perspectives, for a more or less focused overall perspective on the drums. By the time mixing begins, this might already be apparent, so I can lose one or more channels of drums at the outset. Or not; this will become more apparent as the mix evolves.

To make future decision making faster, I go through my track list and remove all tracks that are no longer needed, remembering of course that I can always re-import tracks I regret deleting. I do all of this in case further thinning of tracks is something I want to try as the mix evolves: once I've removed all the chaff, the temporarily inactive or hidden tracks become easier to find and reactivate if their trial removal ends up being reconsidered.

Creating a panning scheme

I don't really view panning as a mix preparation task per se, but you have to start somewhere, and since pan plays such an elemental role in a mix, it makes sense to at least commit to some kind of cursory panning scheme early on. Doing so will at least help define a point of departure and allow you to develop some momentum. Panning choices have such a profound effect on how a mix comes across, I often find myself experimenting with panning schemes up until the very final stages of a mix.

Getting pan to fit together well is one of the more esoteric aspects of the mixer's artistry. An often-useful construct, regardless of how tracks might have been recorded, is to try to emulate how instruments and voices might be placed on a stage in a performance space. On the other hand, it's just as valid to approach panning entirely abstractly (imagine if Picasso had been a mixer!). Whether or not the individual tracks were recorded (or synthesized) in mono or stereo will help guide you on how to build the panning scheme. There really are no rules; there can be as many constructs/schemes as you can imagine.

When coming up with a panning scheme, low frequency sounds deserve two special considerations. First, it's commonly accepted that frequencies below 80 Hz are perceived as more omnidirectional than frequencies above 80 Hz. Second, the laws of physics dictate that for equal loudness, more power is required to reproduce lower frequencies than higher frequencies. Taking both of these concepts into consideration suggests, and the vast majority of albums produced throughout the history of stereo recording confirms, that it makes good sense to pan low frequencies at or near center. These two phenomena are worthy of further exploration, but are out of the scope of this article. Suffice it to say that generally speaking, when I mix, the kick drum and bass, or whatever low frequency elements there are, are usually panned center.

If a vocal arrangement includes just one lead vocal, then panning that vocal center is kind of a no-brainer. Of course there's no law that says you can't pan a solo vocal hard left or right or somewhere between... When more vocals are added to an arrangement, panning bears further consideration.

Drum kit panning can be approached in so many different ways. Are you presenting the pan from the drummer's perspective or from the audience's perspective? How wide should the drums be panned? Do you want the high tom coming out one speaker and the low tom coming out the opposite? Should the right crash come out of only the right speaker, the left crash only out of the left speaker? There's no one right answer to any of these questions.

I do have a personal preference on drums: audience perspective (for a close-miked right-handed drummer), with kick and snare center, hat approximately 30-40% right, and floor tom opposite, toward the left.

As a general rule, I like to use "opposition," balancing sounds of similar nature or function, when placing sounds in the stereo field. If I'm mixing a band with two of any instrument (e.g. two guitars), I'll try to put one leftish and the other rightish. If I only have one example of a particular instrument or function in the mix, I'll often pan it either leftish or rightish (hard left and/or hard right included), then send this track to a mono delay return that's panned opposite of the source, with a delay time of approximately 23 milliseconds, so that the return acts as a "bounce" off the opposite wall. This technique allows the dry track to maintain its pan identity but also gives the sense of a "space" in which the track was performed.
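
As a rough illustration of that bounce trick, here's a minimal numpy sketch; the sample rate, pan position, bounce gain, and function name are my own assumptions for the example, not anything prescribed in the article:

    import numpy as np

    SR = 48000                              # assumed sample rate
    DELAY_MS = 23                           # the "bounce" delay time discussed above
    DELAY_SMP = int(SR * DELAY_MS / 1000)

    def bounce_pan(dry_mono, pan=-0.7, bounce_gain=0.5):
        """Pan a dry mono track to one side and return a delayed 'bounce'
        of it on the opposite side. Returns an (N, 2) stereo array."""
        dry = np.concatenate([dry_mono, np.zeros(DELAY_SMP)])
        wet = np.concatenate([np.zeros(DELAY_SMP), dry_mono]) * bounce_gain
        # simple linear pan: -1 = hard left, +1 = hard right
        left = dry * (1 - pan) / 2 + wet * (1 + pan) / 2
        right = dry * (1 + pan) / 2 + wet * (1 - pan) / 2
        return np.stack([left, right], axis=1)

With pan at -0.7, the dry track sits well left of center while its 23 ms echo answers from the right, which is the "opposite wall" effect described above.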

Ultimately, panning is one of those attributes of a mix that helps define how infinite the process can be. As I wrote earlier, even when a panning scheme seems kind of obvious to me from the outset, I like to continue to explore alternative panning schemes until one kind of "snaps" into place. I always know it when I find that magic final pan, and I don't stop searching for it until I've found it.

Cleaning clips: mutes, fades, and crossfades

Once the workspace includes only the tracks I'm pretty sure will be final and I've at least initially placed them on the soundstage, I start to focus on "cleaning" the clips. The definition of "cleaning" varies from project to project, but for a studio production where noise, noodling, banter, or count-offs aren't desired as part of the final mix, I'll go through each track, first trimming entrances and exits, then creating in and out fades at the beginnings and ends of all regions.

This is where having a solid understanding of the different applications of different fade shapes is critical. There is no one-size-fits-all in/out/crossfade shape or length. Use your eyes and ears to determine the length and shape of all fades: look at and listen to how the instrument whose track you're (cross)fading fades naturally on its own, and emulate that sound/waveform shape. This is also the time to decide whether or not to remove "dead air" (space where a player or singer "sat out") in any of the clips. Generally I remove "dead air," but only if changing the noise floor doesn't bring attention to itself in a distracting way.
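
For readers who like to see the curves, here's a small numpy sketch of two common fade shapes; the 10 ms length and the function name are illustrative assumptions only:

    import numpy as np

    n = 480                                   # 10 ms at 48 kHz (arbitrary example)
    t = np.linspace(0.0, 1.0, n)

    linear_out, linear_in = 1 - t, t          # equal-gain fades (gains sum to 1.0)
    power_out = np.cos(t * np.pi / 2)         # equal-power fades (powers sum to 1.0)
    power_in = np.sin(t * np.pi / 2)

    def crossfade(a_tail, b_head, fade_out, fade_in):
        """Overlap the end of clip A with the start of clip B."""
        return a_tail * fade_out + b_head * fade_in

Equal-gain fades tend to suit edits where the material on both sides is essentially the same waveform, while equal-power fades tend to keep the perceived level steadier across dissimilar material; either way, let the source dictate the shape, as described above.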

Wherever possible, use crossfades on every edit, and if you batch fade, be conscientious and visually inspect every crossfade to make sure that no unintended audio from before or after the fade has been inadvertently revealed. Very short crossfades are advisable for making sure edits are clean, while not inviting any unwanted artifacts. When I first began using Pro Tools back in the 1990s, butt edits (sharp cuts without crossfading) sometimes produced clicks. I'm happy to report that in this regard, Pro Tools has improved vastly over the years, and so have many other DAWs as well. Still, better safe than sorry!

Especially when I'm mixing tracks I didn't record or haven't previously worked with, I like cleaning the clips at this early stage. The process makes me scrutinize, for the first time, each and every track. It's usually at this point that I'll begin to recognize if there are any tracks suffering from noise (or other) problems, and this is also a great way to become aware of the musical contents of each track.

Dealing with noise problems

As was the case for me when I was first learning to engineer, I often find that newbie engineers' tracks aren't as free from noise problems as I would expect a professional engineer's tracks to be. (But let's be real: noise problems don't just happen to newbie engineers, they affect professionals too!)

Ground loops and/or broadband noise, whether caused by poorly designed gear, gear in need of repair, a lack of understanding of gain staging, or just predicaments where the noise has been minimized as best it can be but is still on the verge of unacceptable, can leave tracks with a very problematic noise floor. Similarly, improperly set up DAWs can suffer digital timing errors that manifest as "digital clicks," nasty little spikes that can destroy the integrity of a track.

Thankfully there are now numerous software tools available for repairing these kinds of problems, and remarkably, they can do so with very limited degradation of the sonic integrity of the part of the audio you want to preserve. I don't have experience with all of the different brands available, but my preferred DAW/software vendor (Cutting Edge Audio Group in San Francisco, CA) recommended iZotope RX for this type of work. Since purchasing it, I've repeatedly been absolutely blown away by how effective this software is at removing hum, hiss, and clicks.

Now, every time I prepare tracks for mixing, I carefully consider whether or not to use this software suite. Noise is relative, so my decision to use or not use RX always boils down to whether the noise in question is distracting enough in the context of the mix to warrant asking my client to pay for the few minutes of time it will take to deal with the problem. Also, I always consider the small, but still perceivable, audible effect/side-effect that using such noise reduction software will have.

RX's click and noise removal plug-ins allow you to monitor either the "inside" or the "outside" sounds: if you're using the Declick module, you can preview or render the audio without the clicks, or if you're interested in just how bad the click problem really is, you can monitor just the clicks! Though I've never been asked for it, I imagine sending a client a clicks-only version of their track would settle any question about whether or not the declicking time I had to spend was worthwhile...

If any of the tracks need this kind of attention, it's best to get it over with early in the mix process because, as I indicated above, there's a reasonable chance that this type of operation will have at least a subtle effect on the timbre of the track it's used on. You'll want to do your subsequent processing on this "repaired" (altered) track, so that you build on the repair as you go and don't have to rebuild your processing a second time after backtracking to deal with noise issues.

Again, I highly advise leaving yourself a breadcrumb trail, by first duplicating each track before doing noise work, and by saving (as a preset) the noise reduction setup that you eventually choose. You never know-as you work with the track you may come to realize that you need to refine the noise reduction work you did earlier, or you may come upon another clip that needs the same processing.

That's four of our eight steps taken care of. Next up: phase issues and using signal processing to refine the character of each track.

Dealing with phase issues

Now's the time to make sure related tracks are all in phase. Phase is easily worthy of an article (a book?) all by itself, but in the context of this article, it's important to remember that the phrase "out of phase" is used to describe at least two very different conditions. One is electronic phase (actually polarity); the other is phase differential caused by differences in time alignment.

Polarity changes can happen for a variety of reasons: the channel on the mixer or interface has its "phase" button pressed, a piece of gear is improperly designed, a cable is miswired, or there's a piece of gear in the recording chain that follows the opposite wiring convention for its balanced input or output. Most audio gear built these days follows the standard of "Pin 2 Hot" (Pin 1 is the ground, Pin 2 carries the audio signal ("hot"), Pin 3 carries an inverted copy of the signal ("cold") to cancel out radio interference). However, a lot of vintage gear follows the opposite rule and has Pin 1 Ground, Pin 2 Cold, and Pin 3 Hot. (How will you know? Read The Freaking Manual!) When any of these things happens, the signal arrives at your DAW inverted from its actual polarity... and if it's part of a group of tracks recorded at the same time, it will trash your audio. You can easily hear this for yourself by taking any mono track in your DAW, duplicating it, flipping the polarity of the copy, panning them both to the center, then comparing them by turning the level of the copy up and down. With the two tracks at equal level, they cancel each other out completely and you have silence; at unequal levels, you'll hear a hollow, phasey sound that lacks power and depth.
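
If you'd rather see the arithmetic than set up a session, here's a minimal numpy sketch of that same experiment; the sine wave is just a stand-in for any mono track:

    import numpy as np

    sr = 48000
    t = np.arange(sr) / sr
    sig = np.sin(2 * np.pi * 110 * t)       # stand-in for any mono track
    flipped = -sig                          # polarity-inverted duplicate

    equal_sum = sig + flipped               # equal levels: complete cancellation
    unequal_sum = sig + 0.5 * flipped       # unequal levels: partial cancellation

    print(np.max(np.abs(equal_sum)))        # ~0.0 (silence)
    print(np.max(np.abs(unequal_sum)))      # ~0.5 (what's left after cancellation)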

Spotting polarity-inverted tracks can be as simple as a visual inspection of the waveform. Tracks recorded at the same time should all have their waves moving up at the same time, then down at the same time; a track that's inverted will look the opposite. Try inverting its polarity and see if that makes the sound stronger and more robust. Time alignment is a little more complicated. Sound travels quickly (about 1126 feet per second), but not instantaneously, so two mics picking up the same sound at different distances will be out of phase to some degree or another; when the sound wave hitting one mic is at a peak or trough, the same wave hitting a different mic an instant later may be near zero. This is a particular problem when close miking a drum kit. When you're doing minimalist miking (two overhead mics and nothing else, or a 3-mic Glyn Johns setup), you can usually minimize phase issues, but what about close miking?
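
To put a number on it, here's the back-of-the-envelope arithmetic using the 1126 ft/s figure above; the mic distances are invented for the example:

    SPEED_OF_SOUND_FT_S = 1126.0

    def arrival_delay_ms(near_ft, far_ft):
        """Extra time (in ms) the sound takes to reach the farther mic."""
        return (far_ft - near_ft) / SPEED_OF_SOUND_FT_S * 1000.0

    # e.g. a snare close mic at 0.5 ft vs. an overhead at 3.5 ft:
    print(round(arrival_delay_ms(0.5, 3.5), 2))   # ~2.66 ms later at the overhead

At a 48 kHz sample rate that's roughly 128 samples of offset between the two mics, which is plenty to thin out the combined sound when they're summed.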

In mixing, after visually inspecting the tracks to make sure the waveforms all appear to be in phase (as described above), methodically make sure tracks in a group of simultaneously recorded tracks are in phase by doing the following test. Start with all tracks panned to exactly the same position (choosing center is a good idea for this test), with all tracks as close to the same output level as possible, with delay compensation engaged and all plug-ins turned off.

Choose one channel, listen to it, and pay attention to the peak level reading on the master fader. Then add one channel at a time, switching its phase negative, then positive. Leave the phase switch in the position that results in the most robust response.

If there seems to be no difference between the two positions, the default assumption should be to leave that channel "in phase." If you end up choosing to flip the phase on most of the channels, then start over with phase reversed on the first channel and try again. The idea is that the majority of your tracks were hopefully recorded "in phase," so only the minority should need to be phase inverted.

A trickier method is to actually choose one track as a fixed reference and slide the other tracks backward and forward in time to align them to its waveform. This can work very well (and can even be automated to an extent with plug-ins like Sound Radix Auto-Align), but move tracks with care and always listen to make sure you're making the sound better, not worse.
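
Here's a very rough sketch of how that slide-to-the-reference idea can be estimated numerically. It's an illustrative brute-force correlation search with an arbitrary search window, not a description of how Auto-Align actually works, and it assumes both clips are the same length:

    import numpy as np

    def estimate_offset(reference, other, max_shift=2000):
        """Brute-force search for the shift (in samples) of `other` that best
        matches `reference`. A positive result means `other` arrives late and
        should be slid earlier."""
        best_shift, best_score = 0, -np.inf
        ref_window = reference[max_shift:len(reference) - max_shift]
        for shift in range(-max_shift, max_shift + 1):
            candidate = other[max_shift + shift:len(other) - max_shift + shift]
            score = np.dot(ref_window, candidate)
            if score > best_score:
                best_shift, best_score = shift, score
        return best_shift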

Using signal processing to shape the tone and timbre of tracks

I've separated this section from Step 4, because I believe that noise should be dealt with before any tone shaping is done. Start clean!

When it comes to tone shaping during mixing, EQ is one of the most powerful tools in the mixer's toolkit. I believe there are two distinct reasons for using EQ: one is (for lack of better descriptives) reparative and the other is creative. Generally, as I prepare tracks for mixing, I approach EQ adjustment reparatively. What do I mean by this? At its essence, reparative EQ is a process that minimizes aspects of a track's frequency response that will have a negative impact when the track is combined with other tracks. Subjective? Yes. But that's how I see (and hear) it.

Hopefully the following example will help: if you've ever been to a soundcheck at a live venue, you'll have heard both the Front Of House and monitor engineers "ring out" their systems. In this process, the engineers find "problem spots" in the response of their systems, where the gear that's in use, its placement in the room, room acoustics, and gear settings all conspire to make certain frequencies resonate (and in extreme cases feed back). The live sound engineers' solution to this problem can include changing out and/or moving mics and monitors onstage to achieve a semi-acoustic solution, but more likely, they'll go about fixing the problem by identifying and selecting the resonant frequencies and turning them down via EQ.

Identifying and minimizing resonant frequencies on a per-track basis is a big part of what I consider to be reparative EQ. Of course, since this resonance isn't going to be as obvious in the studio as it is in the live venue (where feedback is the telltale sign), you'll have to work a little harder to find it. Everything contributes to ugly frequencies: the nature and quality of the instrument being played (even really good instruments may have wolf tones or dead spots in certain keys), room acoustics, mic choice and placement, preamp, signal processing chain... simple, right?

Generally speaking, I'll EQ every track in a mix, with my first attention going to identifying and minimizing resonance in the lower mids, then identifying and minimizing other obtrusive resonant frequencies, going after the most egregious ones first.

I use a fully parametric EQ for this process, and I find and kill these resonances using the "boost then cut" method. Here's how it works. First, turn down your monitors a bit so you don't hurt them (or your ears). Then select a frequency band on your parametric (starting with the low mids), tighten the Q as narrow as it will go, and turn the gain up as high as it will go, producing a narrow frequency spike. Then slowly sweep the band's frequency across its range and listen for something ugly: a particular frequency that honks out at you when you emphasize it. (It's easier to hear than describe.) You're looking for frequencies that make the mix feel obviously congested, irritating, or distracting. Once you find such a frequency, turn the gain down past zero so the boost becomes a cut, and gently broaden the Q until you've made that honk go away, leaving a clearer-sounding tonal blend. Repeat this process until all the nasty resonances are tamed, first in the low mids, then the mids, and maybe in the lows if needed.
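
As a rough illustration of that sweep-and-cut move in code, here's a sketch using the standard audio-EQ-cookbook peaking filter; the 320 Hz center frequency, Q values, and gain amounts are illustrative assumptions, since in practice you find the offending frequency by ear:

    import numpy as np
    from scipy.signal import lfilter

    def peaking_eq(fs, f0, q, gain_db):
        """Biquad coefficients (b, a) for a parametric peaking boost/cut
        (standard audio-EQ-cookbook formula)."""
        A = 10 ** (gain_db / 40.0)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return b / a[0], a / a[0]

    fs = 48000
    track = np.random.randn(fs)                    # stand-in for a real track

    # Step 1: a narrow, loud boost, swept by ear, exposes the resonance...
    b, a = peaking_eq(fs, f0=320, q=12, gain_db=15)
    boosted_preview = lfilter(b, a, track)

    # Step 2: ...then the same band becomes a gentler, broader cut.
    b, a = peaking_eq(fs, f0=320, q=3, gain_db=-4)
    repaired = lfilter(b, a, track)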

At this time I'll also use a highpass filter to eliminate or minimize frequencies below the lowest pitch in the functional range of the instrument or voice on the track. I use the term "functional range" so as not to exclude sounds that happen to be below the pitch range of the instrument but are still useful. One great example is a guitarist thumping the body of his or her guitar for percussive effect while playing; you don't want to filter out everything below 82.4 Hz (low E), because the thumps will lose power.
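
A minimal sketch of that kind of highpass, assuming a 48 kHz session and a corner placed at 60 Hz specifically so it sits below low E and leaves some body-thump energy intact (the corner frequency and slope are my assumptions, not a recommendation from the article):

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 48000
    guitar = np.random.randn(fs)          # stand-in for a real guitar track

    # gentle 2nd-order highpass at 60 Hz, below the 82.4 Hz low E
    sos = butter(2, 60, btype="highpass", fs=fs, output="sos")
    filtered = sosfilt(sos, guitar)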

I keep a very handy book at my mixing station: Music, Physics, and Engineering, by Harry Olson. The most dog-eared page in my copy is where Olson offers a table converting musical pitch to frequency. He also lays out the frequency ranges of symphonic instruments. This kind of information is extremely useful in reparative EQing.

If you don't already know the musical key of the song you're about to mix, figure it out! Then, depending on the track you're about to EQ, figure out the lowest note/frequency that the instrument or voice you're focusing on is capable of producing. Often I find that the ugliest/strongest resonant frequencies (especially within the lower mids band) are within the first few harmonics in the overtone series: the octave above the fundamental, the fifth above that, the second octave, then the third, etc. In my opinion, it's these overtones that have a tendency to obscure the focus of the lower mids in the overall mix, and there are a number of good reasons for this.
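
If you don't have Olson's table handy, equal temperament gives you the same numbers directly; this little sketch (the function name is my own) computes the guitar's low E and its first few harmonics, which line up with the overtone series just described:

    def note_to_hz(midi_note, a4=440.0):
        """Equal-tempered frequency of a MIDI note number (A4 = note 69)."""
        return a4 * 2 ** ((midi_note - 69) / 12)

    low_e = note_to_hz(40)                        # guitar low E, ~82.4 Hz
    harmonics = [round(low_e * n, 1) for n in range(1, 5)]
    print(harmonics)   # [82.4, 164.8, 247.2, 329.6]: fundamental, octave, fifth above, second octave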

Many instruments and voices inhabit the lower mids and midrange; there's always a lot of competition in this frequency space, so at least in my approach to mixing, I find it important to try to manage overtones from instruments in this range. That lets me try to "clarify" more of the fundamental frequencies each instrument produces. This is a good place to start in my first (reparative) stage of EQ, even though I'll probably revisit it later when I get to creative EQ!

The last stage of reparative EQ deals with upper mids and highs. Depending on the sound of the track, sometimes I'll leave these two bands flat. Other times, I will use those bands to cut ugly frequencies, and still other times I'll use the upper mid controls to find a "sweet spot" and boost it a little.

Many parametric EQs have an option, both for the lows and highs, to select shelving instead of parametric control. Often I'll set the high control for shelving, and especially if the track was recorded using a ribbon mic or is otherwise a bit on the dark side, I'll add some high frequency at this point, just to give the track some "air". (I also use low shelving when I don't need to chase resonances down there, to add weight or clarify a track to make room for others.)

A versatile parametric EQ offers up billions of possible combinations of settings. It's no wonder that learning to EQ effectively is worthy of a book all on its own. So before you get frustrated, let me give you one core element of wisdom on EQ: the only thing that really matters in the end is the sound of the track and how it works in the mix. I've listed some of the ways in which I think about EQ to try to explain why I approach EQing the way I do; intellectual understanding is helpful, but it only goes so far when it comes time to set an EQ. You have to please your ears.

Certainly, there are many other signal processors available to the mixer: delays, reverbs, pitch shifters, harmonic distortion, etc. For the sake of this article, I'm considering these to be mix tools, not pre-mix tools, though it could easily be argued that any type of signal processing could be considered a pre-mix building block for a later mix.

Leveling tracks and groups of tracks

In my view, there is no more powerful tool for creating a robust and concise mix than leveling. And done the way I most prefer to do it, there is also no more time-consuming process! It could easily be argued that leveling is in fact mixing, but I've really come to view it as a pre-mix process. I consider adjusting one track (and/or a group of related tracks) to be leveling, whereas I consider adjusting all the tracks' levels together to be mixing. As I stated earlier, I view a pre-leveled track as a track that has the power to free my creative mind later on. When I'm actually mixing and trying to keep my eye on a bigger picture, it's easier to do that with tracks in which structural minutiae have already been dealt with.

Leveling can be done manually, by riding faders or drawing level automation curves (what I call breakpoint editing). You can also do it automatically, with compressors and/or limiters, or software gain controllers like Quiet Art Ltd.'s WaveRider. Even if I know that I'm going to absolutely crush a lead vocal with compression when I mix a heavy rock track, I still pre-level the vocal track by hand with breakpoint editing; I know from experience that the compression I apply later will work more predictably and reliably on a track that's been leveled in advance.

I do have an automated-fader control surface, and I've done a lot of work on analog mixers. But given a choice, I find that adjusting levels via breakpoint editing is a lot more precise than working with faders; you may have different results. Hand-drawn breakpoint leveling is another one of those techniques that's much easier to demonstrate with teacher and student in the same room at the same time, listening to and watching the process as it unfolds, but I'll do my best here to explain how I do it.

I perform leveling at a moderate control room level, loud enough that I can feel sound pressure on my body but not so loud as to be in danger of ear fatigue. At this point in the process, the mix certainly won't be complete (it will be under construction), so I just rough out a mix where things are feeling reasonably representative of where I'm headed. At this point in time, I will already have instantiated an EQ and done the reparative EQing described earlier. I'll probably also want to "place" the vocal in a room or have a bit of reverb on it. It's worth spending at least a little time at this point to dial in something that feels appropriate to the track. I recommend mocking up this "space" so that the spatial response of the vocal, as it will eventually appear, is at least approximated. Then I dive in.

I select the volume automation view and loop playback, then I highlight a phrase, generally 2 to 4 bars long, and hit Play. I then highlight the first phrase, word, syllable, or sound that grabs my attention, so I can change its volume. At least in Pro Tools, moving a segment of the automation line requires creating 4 breakpoints: the first "breaks" the line, the second sets the line to a new level, the third breaks the new line, and the fourth returns the line back to where it began. Even at a tenth of a dB of adjustment, I can hear abrupt changes to the volume automation line as unnatural distractions, either in the noise floor or in the program, so I never allow sharp (vertical) changes in volume automation unless they occur in dead space between clips. Unfortunately, the default operation of highlighting a segment of volume automation and moving it will create vertical jumps between breakpoints 1 & 2 and between 3 & 4. My workaround is to create two breakpoints slightly within the edges of my highlight (these will become breakpoints 2 & 3 by default). Then when I move the line, the third and fourth breakpoints are created by default at the edges of my highlighted region, with an angled slope rather than a vertical jump at either end.
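
Here's a tiny illustrative model of that four-breakpoint move; the ramp length and the function itself are my own assumptions, meant only to show that every level change gets a slope rather than a vertical jump:

    def raise_segment(base_db, start_s, end_s, delta_db, ramp_s=0.01):
        """Return (time_sec, level_dB) breakpoints that lift one phrase
        by delta_db and return it to the original level afterwards."""
        return [
            (start_s,          base_db),             # break the existing line
            (start_s + ramp_s, base_db + delta_db),  # ramp up to the new level
            (end_s - ramp_s,   base_db + delta_db),  # hold until just before the end
            (end_s,            base_db),             # ramp back to where it began
        ]

    # e.g. lift the word at 12.50-13.75 s by 1.5 dB on a track sitting at -3 dB:
    print(raise_segment(-3.0, 12.50, 13.75, +1.5))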

I still prefer to manually de-ess a vocal track. Yeah, call me crazy, but unless the sound of compression is the sound the client wants to hear, I prefer to have control over levels myself, because I hear a difference and know that I prefer the sound of the manual method. So usually, in the case of a vocal track, one of the first things I tend to level is sibilance. From there I usually knock down the loudest bits, then I go after boosting the quiet bits, which might include the valleys between particularly loud bits, or the ends of lines where the singer might be either running out of breath or trailing off, where I know this kind of stylized information will just end up getting buried in the tracks unless I pull it out.

Once I get my looped phrase feeling/sounding robust, where nothing is jumping out of the track in a distracting way, and I can hear all the detail and nuance even in the quietest bits of the performance, I leave that phrase and loop the next phrase, and so on and so forth until I've made my way through the whole track. I judge my work solely with my ears; whenever I watch the faders moving, either in the Pro Tools mixer or on the Avid Artist Control, I start second-guessing myself... it just doesn't look right at all. But I trust my ears!

While leveling, I think I'm actually "sensing" sound pressure with my body as much as I'm hearing with my ears; this is where having your control room level set appropriately is key. And this is why leveling in headphones leaves a little bit to be desired, though in a pinch, with good phones and good judgment, it's not impossible. Ultimately, if my client wants the best work I can do, I'll go through each and every track/group of tracks in the session doing this hand leveling.

Depending on the length of the song and the quantity of lyrics, a lead vocal can take me up to an hour and a half; the average is probably somewhere around 45 minutes. Instrument tracks are usually quicker, but tedious and time-consuming as it is, to bring out all the tasty bits it's great to focus on each instrument track one at a time, still in the context of a reasonable facsimile of a mix, even if the track I'm focusing on is set a bit hotter than it will be in the final mix, or even if I mute some other tracks just to be able to better focus on the one I'm leveling.

It's very likely that, as the mix tightens, the adjustments I make at this point will have to be revised later. But by scouring all the tracks now, learning where all the tasty bits are (and where the not-so-tasty bits may be), and inserting breakpoints and raising them by some amount, all of that tedious and distracting discovery work will already be done once I'm actually into mixing, leaving me much better able to focus on the bigger picture.

When leveling a lead vocal, once I'm done hand leveling, more often than not I'll want to use some compression to shape the overall timbre of the vocal. I don't want the compressor to work on the vocal pre-fader, or I lose the benefit of all those breakpoint edits. So I'll either reassign the channel's output to a bus feeding a new aux return in the mixer, where I've instantiated a new plug-in compressor, or send the channel through a hardware output to an outboard compressor and back to a new audio channel in the mixer.

The last type of gain riding I'd like to discuss is software-controlled; various software companies are now offering gain riding plug-ins. I've been using Quiet Art's WaveRider for a couple of years and find that in those circumstances where a client isn't willing to pay for the time involved in hand leveling, this is the next best approach.

With the precision available in a typical DAW, using only compression and/or limiting to level a track is my least favorite approach. As far as I'm concerned, even lookahead limiting is just a big dumb hammer; certainly a useful big dumb hammer in the right circumstances. But at this point in the history of audio tools, I don't look at compression as a leveling tool per se. I love compression. But I look at it as a tone-shaping tool that controls timbre through level control. I only use it in cases where my clients can't afford hand leveling or even software leveling. That's when I go totally old-school and smash the hell out of things.

Building a palette of processor voices

I'd like to end with a quick return to the project that started this article: my assisting Greg Lisher with his newest album. Because we're working on his DAW rather than mine, I'm not as familiar with the plug-ins we'll be using to mix. To speed up finding the right sounds, I had Greg (who's an experienced guitarist with a good understanding of signal processing) perform an exercise that I think you'll find useful as well, as a final step before beginning the actual mix.

For each song on the album you're going to mix, choose a reverb and a delay that you think will work for that song's sound. Then assign them to a track where you think they'll be used prominently, loop a phrase of the music, and start auditioning presets by stepping through them. Every time you hear a preset that seems to work well, save it to a new user bank of presets named for the song. Once you've collected a bunch of good presets, do the whole process again and select only the best presets from that bunch, then tweak the results until they're perfect. In this way, you build up a ready library of signal processor settings that will speed your mix along. You can set up multiple returns with these presets in place for when you enter mix mode.

Conclusion

I know, that's a huge amount of thinking to go through before you even get to the actual mix! It seems like a lot of work, and it is... but if you use these processes diligently and tweak them to your own working style, they'll become second nature, and you'll find in the long run that with this sort of preparation under your belt, you won't be daunted by even the most complicated mixes.