THE COMPLETE GUIDE TO MASTERING

Whether you are new to mastering or an experienced engineer, the following few pages could change your life. Here is THE complete guide to a process that will transform your sound.

Any article that claims to be ‘The Complete Guide…’ is worth starting with a bit of a history lesson, so let’s look back at the early days of this ‘dark art’. Mastering began as a means of transferring recordings from magnetic tape to various media, such as vinyl, while maintaining as much of the audio fidelity as possible. Vinyl, for example, cannot handle much bass without the risk of skipping, while the high end is susceptible to hiss. Mastering engineers would therefore use an RIAA pre-emphasis EQ curve when cutting lacquers, reducing the bass on the vinyl and boosting the highs. The bass-light vinyl would then be compensated for on the playback system with an RIAA de-emphasis EQ curve, which boosted the lows and cut the highs. The net effect was a flat frequency response, but without the skipping caused by excess bass and with far less hiss.
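
For the technically minded, the RIAA curve is easy to verify numerically. This short Python sketch computes the playback (de-emphasis) response from the standard published time constants – the script itself is purely illustrative, not anything a mastering engineer would have used:

    import numpy as np

    # standard RIAA time constants: 3180µs, 318µs and 75µs
    T1, T2, T3 = 3180e-6, 318e-6, 75e-6

    def de_emphasis(f):
        """RIAA playback (de-emphasis) response at frequency f, as a complex gain."""
        s = 2j * np.pi * f
        return (1 + s * T2) / ((1 + s * T1) * (1 + s * T3))

    ref = de_emphasis(1000.0)  # the curve is conventionally quoted relative to 1kHz
    for f in (20, 100, 1000, 10000, 20000):
        db = 20 * np.log10(abs(de_emphasis(f) / ref))
        print(f"{f:>5} Hz: {db:+6.1f} dB")  # roughly +19dB at 20Hz, -20dB at 20kHz
    # the cutting pre-emphasis is the exact inverse, so disc plus playback comes out flat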

Cutting the lacquer itself required further attention and skill on the part of the mastering engineer. The running time played an important part in what could be achieved, as longer playing times meant that the grooves had to be closer together. Even with the RIAA pre-emphasis, the mastering engineer still had to pay close attention to the amount of low-end information, using elliptical filters if necessary to mono the bass below certain frequencies. Further constraints of the medium relating to overall signal level, phase correlation and sibilance also limited what was achievable, but the overall goal was simply to overcome these technical hurdles and create a master as close to the original mix as possible. Thankfully, the process would soon become more creative…
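
Incidentally, the usual digital equivalent of an elliptical filter is to encode the signal to mid/side and high-pass the side channel, which collapses everything below the corner frequency to mono. Here is a minimal Python sketch of that idea – the 150Hz corner and filter order are illustrative, not a fixed standard:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def elliptical_filter(left, right, fs, fc=150.0):
        """Mono the bass below fc by stripping low frequencies from the side channel."""
        mid, side = (left + right) / 2, (left - right) / 2
        sos = butter(2, fc, btype='highpass', fs=fs, output='sos')
        side = sosfilt(sos, side)  # below fc, only the mono (mid) component survives
        return mid + side, mid - side  # decode back to left/right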

It all changed in the late 70s, when mastering engineers such as Bernie Grundman, Bob Ludwig and Doug Sax made a name for themselves by ‘improving’ the quality of mixes that came through their studios. To these engineers, mastering became a creative process. Instead of focusing on making a mix translate to whichever format it was being released on, mastering became about polishing: creating louder, wider masters and subtly shaping their tone, putting more power in the bass and more gloss in the top end.

Into the groove

Even back in the 70s, artists and their labels wanted their mixes to stand out. Mastering engineers were being pushed to produce louder and louder masters, but this was again limited by the physical medium. The grooves on a vinyl record risk overlapping and creating skips if the signal is too loud. This can be compensated for by spacing the grooves wider apart, but this, in turn, reduces the amount of playing time on a record. And if the signal is too loud the energy can burn out the voice-coil in the cutting head, even with liquid or helium cooling mechanisms in place.

To overcome these problems, limiters can be employed. These reduce the peak levels of the audio, enabling the overall level to come up without any overly loud sections causing skips or overheating the cutting head. When used with care, limiters can be extremely transparent, having far less sonic impact than compressors. When pushed hard, however, the distortion a limiter introduces becomes audible, and it was this trade-off that kept signal levels from getting out of hand.

With the switch to digital audio and the rise of the CD, a new world of mastering – and further technical restrictions – came into play. The RIAA curves were no longer necessary, as neither bass energy nor hiss was a problem: the bass could carry as much stereo information as you wanted, the high end was free of hiss and mixes could be wider than ever. And since the data on a CD is burnt by a laser, there is no risk of it burning out like a cutting head.

Instead, CD imposed a strict ceiling on level at 0dBFS, which could not be exceeded under any circumstances; any attempt to do so would simply create digital clipping distortion.

Initially, CDs were treated as an inferior medium. Only a few people had the means to play them, so CD masters were often made from the same signal as the vinyl masters, but it didn’t take long for engineers to realise the potential of the CD format, and growing interest among consumers ensured its rise. Even so, the processing done to CD masters was generally kept in the analogue domain, with engineers preferring to use their now highly specialised mastering consoles to fine-tune the audio for the CD pressing.

It wasn’t until the advent of digital brickwall limiters that things really started to change. Analogue limiters had long been used to increase the average level of program material, reducing fast transients in order to create more headroom. However, digital brickwall limiters took this to a new level. Even the fastest analogue limiter takes some time to respond to a transient, so the gain reduction kicks in after the signal has passed the threshold. This means that some of the transients will have been missed by the gain reduction, even at the fastest attack settings.

Digital limiters are able to look ahead, seeing transients coming and anticipating what gain reduction will be needed, and with an ∞:1 gain reduction ratio they ensure that absolutely no signal exceeds the threshold level. The result isn’t simply to chop off the tops of the waveform but to lower the gain of the entire wave until it fits within the threshold. This means that brickwall limiters can be incredibly transparent while increasing the overall program loudness.
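
To make the lookahead idea concrete, here is a heavily simplified Python limiter sketch. The ceiling, window length and release constant are illustrative, and a real design would ramp the gain smoothly across the lookahead window rather than stepping it down:

    import numpy as np

    def brickwall_limit(x, ceiling=0.9, lookahead=64, release=0.9995):
        """Lookahead limiter: the gain comes down before a peak arrives, so the
        waveform is scaled down rather than having its top chopped off."""
        # the gain each sample would need in order to sit at or under the ceiling
        needed = np.minimum(1.0, ceiling / np.maximum(np.abs(x), 1e-12))
        y, env = np.empty_like(x), 1.0
        for n in range(len(x)):
            target = needed[n:n + lookahead + 1].min()  # worst case in the near future
            if target < env:
                env = target  # duck ahead of the transient
            else:
                env = env * release + target * (1.0 - release)  # smooth recovery
            y[n] = x[n] * env
        return y

Because the gain at any sample already accounts for the biggest peak in the upcoming window, nothing ever exceeds the ceiling – exactly the ∞:1 behaviour described above.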

This all enabled mastering engineers to make even louder masters, utilising tools such as the Waves L1 Ultramaximizer to achieve this loudness without running the risk of incurring audible distortion.

Mastering today

Mastering has come a long way from its very technical beginnings. The process no longer simply involves creating a copy of the mix that can be transferred to vinyl; instead, it is a creative process aimed at getting more energy into a mix and creating consistency across tracks. It should ensure that, whatever environment the music is played back in, the experience remains as good as it can be. In short, the mastering engineer’s role is to ensure that the final product is the best possible release it can be.

The tonal balance, loudness, punch, width, even subtle colouring and sometimes even reverb can all come into the realm of mastering, so a range of tools are needed. All-in-one mastering suites such as iZotope’s Ozone or IK Multimedia’s T-RackS can provide many of these tools, and if you’re looking to master your own music they can be excellent starting points. However, if you are looking to get deeper into the world of mastering it is a good idea to have a wider selection of tools to choose from. Mix and match some of your EQs, compressors and limiters, and if you’ve got the budget, don’t be afraid of sprinkling some analogue hardware in with the plug-ins. Breaking out of the box can be very inspiring and lead to a wider understanding of the tools you’re using and their impact on the music.

Monitor issues

Before we look at what sound-processing tools you need, it is important to consider the monitoring chain itself. Before you can make changes of 0.5dB or less to an EQ curve, or millisecond adjustments to a compressor’s attack or release envelope, you have to know that the environment you are listening in will actually let you hear those changes. This is important at the mix stage, but even more so in mastering.

Room-correction systems such as IK Multimedia’s ARC or KRK’s Ergo will help to balance the sound of your monitors in the room, but they cannot correct for reverb tails, so it’s a good idea to invest in some acoustic treatment for your room if possible. A good mixture of absorption and diffusion can transform a room, and is well worth the investment for both mixing and mastering. We’ve covered acoustic treatment several times in the past, even going into detail on how to build your own acoustic treatment in Issue 122 (May 2013). In this same issue we also looked at a selection of the best studio monitors available for your budget, which is the next most important thing to consider after your room acoustics.

When considering monitors, bear in mind that you want a very neutral-sounding pair with as much low-end extension as possible. A subwoofer can help to bring up the low end, but it can also create as many problems as it fixes. Getting the sub positioned correctly, in phase, is hard enough, but you must then also set the crossover frequency and the level of the sub such that you get a linear frequency response across your mains and the subwoofer. There are tools to help with this, and room-correction tools will also help a great deal, especially if they manage the crossover (as KRK’s Ergo system does).

A good pair of headphones can be a great alternative to monitors in imperfect rooms as they remove the potential for reflections, directing the sound straight into the ear canal. Again, however, it is important to find a neutral-sounding set with as much low-end extension as possible. As good as some headphones may look, hyped bass response is the last thing you want when mastering. Open-backed headphones such as the Beyerdynamic DT 990 Pro or Sennheiser HD650 are preferable to closed-back or in-ear styles.

The final link in the chain is the audio interface itself. It can be easy to overlook, but high-quality I/O can make a surprising difference to what you are able to hear inside the mix. Entry-level interfaces have improved dramatically over the past few years, but better clocking and more transparent circuitry can open up the mix, enabling you to hear details that might otherwise be overlooked. Thankfully, unless you’re using outboard equipment to master you won’t need more than two channels of output, and even if you are using outboard, four channels of I/O is probably enough, so you can invest in quality without also having to invest in quantity.

Tone quest

The mastering chain itself can comprise any number of processors, but the most common are EQ, compression and limiting. The brickwall limiter will always come last in the chain; the order of the EQ and compressor, however, can vary depending on the needs of the track. Often, mastering engineers will use a variety of different EQs at different points in the signal chain to achieve the curve they want, rather than relying on one EQ in a fixed position.

The reasons for using one EQ over another for any given frequency band are largely down to taste. An engineer may prefer the sound of a Pultec-style EQ for the top end but a more flexible digital EQ for fine-tuning the bass frequencies. Similarly, some engineers will prefer a linear-phase EQ for certain tasks. Because it leaves the phase relationships across the spectrum untouched, linear-phase EQ is incredibly transparent, which allows you to get away with bigger cuts or boosts without sounding unnatural. The disadvantage of linear-phase EQ is that it introduces latency into the signal path, which is why it isn’t very common in mixing. Latency is rarely a problem in mastering, however, so it can be used quite freely.
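
The latency point is easy to demonstrate. This scipy sketch designs a linear-phase FIR ‘EQ’ with a gentle (and entirely arbitrary) high lift; the group delay of (numtaps − 1)/2 samples is inherent to any symmetric FIR, whatever curve it implements:

    import numpy as np
    from scipy.signal import firwin2, lfilter

    fs = 44100
    numtaps = 2047  # odd-length symmetric FIR: exactly linear phase
    freqs = [0, 6000, 8000, fs / 2]  # illustrative target: ~1.5dB lift above 8kHz
    gains = [1.0, 1.0, 10 ** (1.5 / 20), 10 ** (1.5 / 20)]
    taps = firwin2(numtaps, freqs, gains, fs=fs)

    latency = (numtaps - 1) // 2  # constant group delay, the same for every frequency
    print(f"latency: {1000 * latency / fs:.1f} ms")  # ~23ms: fine when mastering

    y = lfilter(taps, 1.0, np.random.randn(fs))  # stand-in for programme material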

Unlike the choice of EQ, the decision to boost or cut at any given frequency before or after the compressor is subject to more basic rules. With the caveat that all rules are there to be broken, a quick rule of thumb is that subtractive EQ should be applied before the compressor, with boosts kept until after it. This prevents the compressor from responding to frequencies you are cutting out of the mix. Boosts to the very top end can often be placed before the compressor, as this region usually carries far less energy than the bass and midrange and can therefore be boosted without the risk of triggering the compressor. In fact, as some compression will reduce the high-end energy, a boost at the top is often useful, and placing it before the compressor means that any noise the compressor introduces isn’t then boosted by an EQ after it.

The exact approach to the EQ is going to change with every mix that you work on: while one track may need more energy at 40Hz to bring out the sub frequencies of the kick, another may require the opposite. Some tracks may require a huge amount of low end alongside glossy, bright highs, while others may have little to no sub frequencies at all and a steep roll-off in the highs, and there is little point approaching a scratchy folk guitar recording in the same way as you would a dancefloor smash. However, while the approach may change, the goal is almost always the same: you want to represent the content of the music as accurately as possible; tuck away any frequencies that might be poking through the mix; bring separation to any elements that seem to be overlapping; and ensure that the low end isn’t going to completely disappear on smaller systems.

The best judge is your ears, so compare the song with other, similar-sounding songs. Spectrum analysers can be useful and provide a visual aid, but don’t lean too heavily on them. Assuming that the mix has arrived for mastering close to how the artist wants it to sound, there shouldn’t be any need for heavy EQ changes, and it’s hard to see where small boosts or cuts are needed on a frequency graph. It is extremely important to A/B any changes you make to the EQ curve, preferably while also compensating for any gain difference, so that you aren’t simply hearing the louder version and opting for that. We usually find it useful to go away and come back once we’ve finished EQ’ing, giving our ears an opportunity to relax before hearing the changes we’ve made. A quick cup of tea or a flick through MusicTech should be enough to give your ears a break. Upon returning, we often find that we end up reducing the gain on any EQ decisions we’ve made, especially in the top end.
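
Level-matching an A/B can be as simple as an RMS correction before you listen – a fuller approach would use a proper LUFS loudness meter, but this minimal numpy sketch illustrates the idea:

    import numpy as np

    def match_rms(processed, reference):
        """Scale the processed version to the reference's RMS, so the A/B
        compares tone rather than loudness."""
        rms = lambda s: np.sqrt(np.mean(np.square(s)))
        return processed * rms(reference) / max(rms(processed), 1e-12)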

Another important equalisation tool that less experienced engineers often forget about during mastering is the filter. High- and low-pass filters can be incredibly useful in a mastering context, just as they can be in a mix. A high-pass filter set below the fundamental frequency of the kick drum or the lowest bass note will do two things. First, it will clean up any redundant energy that may have crept into the mix; low-frequency rumble might be almost inaudible, but it can still eat into your headroom, costing you valuable clean level. Secondly, the filter will almost certainly exhibit a slight frequency boost just before the cut. If you’ve set the filter just below the fundamental, this bump can add a little extra energy in just the right place. Similarly, a low-pass filter can be used to create a bump in the high frequencies, just before you filter them out: set at around 18kHz, it will give a slight lift just below that frequency and a roll-off above it. Some mastering engineers will exploit this feature at even higher frequencies – the Manley Massive Passive Mastering Version, for example, has low-pass filter settings at 15kHz, 20kHz, 27kHz, 40kHz and 52kHz. Some may argue that setting the filter as high as 52kHz is somewhat pointless, given that even 96kHz digital audio can carry information only up to around 48kHz; anything above this is already going to be sharply filtered out by the Nyquist filters in the digital converters. However, the value of these filters lies not only in what they remove but also in what they boost, as well as any phase colouration they impart.
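
That resonant bump is easy to verify. The sketch below uses a standard RBJ-cookbook biquad high-pass; the 35Hz corner (sitting just under a ~40Hz kick fundamental) and the Q values are chosen purely as illustrations:

    import numpy as np
    from scipy.signal import freqz

    def rbj_highpass(f0, fs, q):
        """Second-order high-pass from the RBJ Audio EQ Cookbook."""
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([(1 + np.cos(w0)) / 2, -(1 + np.cos(w0)), (1 + np.cos(w0)) / 2])
        a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
        return b / a[0], a / a[0]

    fs = 44100
    for q in (0.707, 1.2):  # Butterworth-flat versus gently resonant
        b, a = rbj_highpass(35.0, fs, q)
        w, h = freqz(b, a, worN=8192, fs=fs)
        print(f"Q={q}: peak of {20 * np.log10(np.abs(h).max()):+.2f} dB near the cutoff")

At Q=0.707 the response is maximally flat; raise the Q a little and a bump of a couple of dB appears just above the cutoff.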

Buss compression

Mix-buss compression is a tricky area – get this right and your mix can really come together. The instruments will all punch as one unit, and the gentle boost will bring up the quieter sounds, thickening the overall sound. However, get it wrong and you can suck all the life out of a mix.

Mastering compression is very similar to mix-buss compression. Both are two-channel compression stages applied to the stereo mix, and many mix engineers prefer to leave compression off their mix buss entirely, hoping that the mastering engineer either has better tools or more experience with the task. This is often a mistake. When used from the start of a mix, compression influences the mix itself: fader rides change, and the very balance of the song alters. Applying a compressor only at the last stage of a mix will therefore change your mix, and this is where mixes most often fall apart. In mastering there is no option to have the compressor on the mix from the start, so you have to be all the more careful to get the settings right.

The first decision to make when compressing is what compressor (or compressors) to use. Do you want something with a bit of its own colour, or do you want a cleaner signal? The safest bet is usually to go with a cleaner compressor, as colouration can be added in many other ways. However, take a look around most professional mastering rooms and you’ll usually see at least one analogue compressor famous for its unique character. The most common compressors in mastering are vari-mu or optical designs such as the Fairchild 670 (vari-mu) or Maselec MLA-2 (optical). Contrast this with the most common mix-buss compressors, such as the SSL G-Buss and API 2500, which are VCA-style compressors. However, it is worth noting that this is not a strict rule and VCA designs do also make their way into mastering studios. Vari-mu and optical compressors tend to have slower responses than VCA compressors and some mastering engineers will even use two compressors together so they can have one fast compressor and one slower compressor. The Shadow Hills Mastering Compressor combines both an optical and VCA compressor in one unit for exactly this reason.

Slow compressors will allow transients to pass through untouched, working instead on the body of sounds and enabling you to gently bring up the level without reducing the impact of the drums. However, with slower attack and release times you might get audible pumping as sounds duck just after being played, before slowly coming back up. This is especially noticeable on longer, sustained notes.

Fast compressors can grab hold of transients and reduce them quickly, and with a fast release you shouldn’t get any audible pumping. However, this can reduce the impact of the sounds, and so a good blend of both fast and slow compression characteristics is often useful.

Remember, though, that a brickwall limiter applied at the end of the chain will act like a very fast compressor, so don’t compress too heavily with faster compressors at this stage. Some software compressors, such as FabFilter’s Pro-C, provide powerful visual feedback of any gain reduction taking place, which can be incredibly useful for seeing how the compressor is responding to the transients in your audio. More commonly, however, you will simply see an LED or VU meter displaying the amount of gain reduction, and it isn’t as easy to spot the subtle attack and release characteristics with these.

We usually start experimenting with a slow optical or vari-mu-style compressor at low ratios, perhaps 2:1 or even lower (sometimes as low as 1.2:1). Using fairly long attack and release times (around 50–100ms), adjust the threshold to a point at which the loudest part of the signal is being compressed by no more than 2–3dB. This is a good starting point; some tracks may require more, others less. Again, be sure to A/B the compressed mix against an uncompressed version, making level adjustments to ensure that you are comparing like with like. Not all mixes need additional compression at all – don’t get sucked into thinking that because you are mastering something you have to make changes; the best mixes will need no alterations. It can be just as rewarding to hand back a master with little to no changes made as it can be to transform a track with drastic ones.
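
If you want to internalise what those settings are doing, this minimal feed-forward compressor sketch in Python mirrors them. All the numbers are the illustrative starting points from above, and the sidechain parameter will come in handy in a moment:

    import numpy as np

    def compress(x, fs, threshold_db=-18.0, ratio=2.0,
                 attack_ms=75.0, release_ms=75.0, sidechain=None):
        """Minimal feed-forward compressor: log-domain peak detector with
        one-pole attack/release smoothing. All values are illustrative."""
        det = x if sidechain is None else sidechain  # what the detector listens to
        a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
        a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
        env_db, y = -120.0, np.empty_like(x)
        for n in range(len(x)):
            level_db = 20.0 * np.log10(max(abs(det[n]), 1e-9))
            coeff = a_att if level_db > env_db else a_rel
            env_db = coeff * env_db + (1.0 - coeff) * level_db
            over_db = max(env_db - threshold_db, 0.0)  # 2:1 halves anything over
            y[n] = x[n] * 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
        return y

With the slow 75ms attack, the envelope simply can’t rise fast enough to catch a drum hit – which is precisely why a slow compressor leaves transients intact.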

Another tool worth experimenting with is the compressor sidechain. By filtering the low-frequency content out of the compressor’s sidechain, you can stop the high-energy LF information from triggering gain reduction and focus the compressor’s attention on peaks in the mid-band and HF content. This can reduce any pumping effects that might otherwise undermine a compressor, especially when using slow attack and release parameters. Remember that, unlike with a multiband compressor, the whole spectrum is still being compressed, including the low end; it is simply that the compressor’s threshold does not respond to the low-end information.
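
In terms of the compress() sketch above, the sidechain filter amounts to nothing more than feeding the detector a bass-less copy of the signal. Assuming mix is your audio array and fs its sample rate, and with a purely illustrative 120Hz corner:

    from scipy.signal import butter, sosfilt

    # the detector hears a high-passed copy; the gain still acts on the full signal
    sos = butter(2, 120.0, btype='highpass', fs=fs, output='sos')
    mastered = compress(mix, fs, sidechain=sosfilt(sos, mix))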

If we are going to use a second compressor we will usually put it in the chain immediately after the first, using much faster attack and release settings, perhaps 10–20ms. This compressor is then used simply to shave off the tops of transients before they hit the limiter. Again, we’re likely to use a low ratio, although perhaps not quite as low as on the slow compressor – something around 2:1 or 4:1 will do the job, depending on how big the transients in the mix are. As we’ve explained, pulling down the transients can take the impact off the percussive elements in a mix, and the brickwall limiter will be doing more of this later in the chain, so don’t go crazy with the fast compressor. As little as 1dB of gain reduction may be enough to remove the spikiness from an overly transient mix.
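
Expressed with the same compress() sketch, the two-stage approach might look like this, with the ratios and time constants taken from the starting points described above:

    # slow, gentle stage works on the body of the sound
    stage1 = compress(mix, fs, ratio=1.5, attack_ms=75.0, release_ms=75.0)
    # fast stage shaves the remaining transient peaks before the limiter
    stage2 = compress(stage1, fs, ratio=4.0, attack_ms=10.0, release_ms=20.0)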

When using two compressors it is also important to think about how the characters of each will affect each other. We usually prefer to have a slow, characterful compressor paired with a transparent, fast compressor. So a tube vari-mu compressor (like the Thermionic Culture Phoenix) can be paired with a second software VCA-style compressor such as the UAD Precision Buss Compressor.

Multiband compressors

Sometimes a mix comes in where even multiple compressors in series and a whole host of EQs before and after them isn’t enough. For these situations multiband compression becomes a very useful tool. However, where single-band compression has the capacity to spoil a mix, leaving it lifeless or pumping, multiband compression can not only cause all the same problems but can ruin the tonal balance, too. Careful adjustment and regular A/B’ing is vital throughout.

The hardest parameters to set on a multiband compressor are the crossover frequencies. Get this right and you are on the way to clean, transparent compression. Most multiband compressors will let you audition the band being compressed. Use this to hear what the compressor will be responding to and compressing. If there are any problem frequencies in your mix this is the perfect opportunity to hear and respond to them.
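
If you want to experiment with the band-splitting itself, here is a minimal sketch using 4th-order Linkwitz-Riley-style crossovers (two cascaded 2nd-order Butterworth filters per edge); the crossover frequencies are illustrative. Soloing one band’s output is exactly the auditioning trick described above:

    from scipy.signal import butter, sosfilt

    def lr4(x, fs, fc, kind):
        """One Linkwitz-Riley 4th-order edge: two cascaded Butterworth sections."""
        sos = butter(2, fc, btype=kind, fs=fs, output='sos')
        return sosfilt(sos, sosfilt(sos, x))

    def split_bands(x, fs, f1=120.0, f2=2000.0):
        low = lr4(x, fs, f1, 'lowpass')
        mid = lr4(lr4(x, fs, f1, 'highpass'), fs, f2, 'lowpass')
        high = lr4(x, fs, f2, 'highpass')
        return low, mid, high  # compress each band, then sum; solo one to audition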

Take it to the limit

While your full mastering chain may include any number of additional processors – from harmonic exciters to saturation effects and even reverbs – the final stage of mastering is almost always limiting.

As a rule of thumb we will always place a brickwall limiter as the final element in our mastering chain, setting the ceiling to -0.3dBFS and using the threshold to bring up the level of the track. If more than 1–2dB of gain reduction is required at this stage we will usually precede the brickwall limiter with another form of limiting known as soft-clipping. Soft-clippers such as the T-RackS Soft Clipper emulate a trick some mastering engineers discovered they could pull off by driving high-end converters: instead of hard-clipping the signal, the converters would soft-clip it, letting engineers gain a couple more dBs of level cleanly before going into the brickwall limiter. For extremely loud masters, this trick of balancing the gain reduction across a range of different processors is incredibly important. While it is possible to throw a single brickwall limiter across your master buss and crank the threshold until you’re getting 7–8dB of gain reduction, the resulting distortion will be huge compared to an equally loud master achieved with three or four well-configured processors.
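
The behaviour of a soft-clipper is easy to sketch – the tanh curve below is a generic illustration, not the algorithm of any particular product:

    import numpy as np

    def soft_clip(x, drive_db=2.0):
        """Round peaks off gently rather than flat-topping them (hard clipping)."""
        g = 10.0 ** (drive_db / 20.0)
        return np.tanh(g * x) / np.tanh(g)  # normalised so full-scale input maps to 1.0

Run ahead of the brickwall limiter, a dB or two of this kind of rounding means the limiter itself has far less work to do.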
