Music production and sound engineering are two of the most valuable skills out there. Music production involves recording, mixing and mastering.
This post will focus on mastering. I will do my best to provide some career advice for people looking to learn mastering, and more specifically for those who aren't sure how long it's going to take them to become good at it.
With that said, let’s get into the meat of this post.
How long does it take to get good at mastering?
It’s going to take you about 6 months to a year to learn the basics of mastering but it will take you 4 to 7 years to actually get good at it and have the ability to master music to a commercial standard. The challenge is always in understanding and applying the fundamentals of mastering to different mixes that are all distinct in their nature.
Your first year, and your first few hundred hours of hands-on work, will be spent learning the basic fundamentals and getting acquainted with a Digital Audio Workstation of your choice.
The years that follow will be spent on ear training and gathering the technical experience needed to become proficient at mastering.
Tips for learning mastering
Get on YouTube
YouTube has one of the largest collections of tutorial videos in just about every industry, career and niche.
There are a lot of videos teaching mastering, and you should watch as many as you can.
Online courses and Resources
You can also register and subscribe to online mixing and mastering courses that can supply you with the much-needed fundamentals.
Keep practicing every day.
Below are the fundamental things that you’ll need to learn in order to become proficient at mastering.
Monitoring
Monitoring is a term that is used commonly in audio production.
It simply refers to listening to and analyzing the musical and technical aspects of the sound being created.
In this part of the learning process you'll learn how to listen to music closely and actively.
It takes years to learn how to look for the right things in a record and know exactly what to do to get the best possible master.
Audio file Editing and noise reduction
To get clean records, you need to know how to edit recordings as well as carry out fundamental noise reduction techniques.
This involves learning how to handle noise reduction systems and tools, plus knowing exactly what should be part of an audio signal.
Editing also involves choosing the best recording takes and cleaning them up.
Noise reduction is essential. Without it, we lose the most important quality of every master: clarity.
Equalisation
Equalisation (EQ) is a core part of the mastering process, and it is essential for achieving balance and unity among mix elements.
EQ also ensures that we place elements in their rightful frequency range so that they don’t get in the way of other elements.
In this part of mastering you’ll learn the fundamentals of frequencies with regard to various audio elements such as individual instruments and vocals.
These things are important when you carry out equalisation because you'll need to know the frequency relationships among mix elements to get a good mix and master.
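To make the frequency-shaping idea concrete, here is a minimal sketch of a single peaking EQ band, implemented as a standard biquad filter (coefficients from the well-known RBJ audio EQ cookbook). The function name and parameters are my own illustration, not tied to any particular plugin:

```python
import numpy as np

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    """Apply one peaking EQ band (RBJ biquad) to a mono float signal."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    # biquad coefficients, normalized by a0
    b0, b1, b2 = 1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0
    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for n in range(len(x)):  # direct form I difference equation
        y[n] = b0 * x[n] + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2 = x[n], x1
        y1, y2 = y[n], y1
    return y
```

A +6 dB boost at 1 kHz roughly doubles the amplitude of a 1 kHz tone while leaving a 100 Hz tone nearly untouched, which is exactly the "place each element in its frequency range" idea.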
Dynamics processing
Dynamics processing is another fundamental part of mastering.
Dynamics processing is simply the process of altering the dynamic range of an audio source in order to make it easier to place in the overall mix. Common types of dynamics processors include compressors, limiters, noise gates and expanders.
A compressor is a kind of amplifier in which gain depends on the signal level passing through it.
A limiter is similar to a compressor; it simply caps the upper dynamic range of a signal at a specific threshold.
Expanders do the opposite of compressors: they increase the dynamic range of a signal, typically by attenuating it further once it falls below a threshold.
A noise gate helps to reduce unwanted sounds by only allowing the signal to be heard once it has exceeded a certain amplitude.
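The compressor's behaviour can be sketched with its static gain curve alone. This is an illustration, not any specific product's algorithm; a real compressor smooths the gain changes with attack and release times, which are omitted here for clarity:

```python
import numpy as np

def compressor_gain(x, threshold_db=-20.0, ratio=4.0):
    """Static downward compression: level above the threshold is
    reduced by the ratio (no attack/release smoothing)."""
    level_db = 20 * np.log10(np.maximum(np.abs(x), 1e-9))
    over_db = np.maximum(level_db - threshold_db, 0.0)   # dB above threshold
    gain_db = -over_db * (1.0 - 1.0 / ratio)             # 4:1 keeps 1/4 of the overshoot
    return x * 10 ** (gain_db / 20)
```

For example, a sample at 0 dBFS is 20 dB over a -20 dB threshold; at 4:1 only 5 dB of that overshoot survives, so the compressor applies 15 dB of gain reduction, while a sample below the threshold passes through untouched.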
Serial, parallel and multiband processing
Serial processing is simply running a signal through multiple processors or plugins in a chain, one feeding into the next.
Parallel processing is simply a technique of processing a copy of a track in a mix and then combining it with the original.
Multiband processing splits a signal into multiple frequency bands like lows, lower mids, upper mids, and highs and then processes each band individually. It provides the user more control over frequencies.
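The two ideas can be sketched in a few lines. This is an illustration with names of my own choosing: real multiband processors use crossover filters (e.g. Linkwitz-Riley) rather than the brickwall FFT masks used here, but the key property, that the bands sum back to the original signal, is the same:

```python
import numpy as np

def split_bands(x, fs, edges=(250.0, 2500.0)):
    """Split a signal into lows / mids / highs using brickwall FFT masks."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    bands, lo = [], 0.0
    for hi in list(edges) + [fs]:          # last band runs up to Nyquist
        mask = (freqs >= lo) & (freqs < hi)
        bands.append(np.fft.irfft(spectrum * mask, n=len(x)))
        lo = hi
    return bands

def parallel_blend(dry, wet, mix=0.5):
    """Parallel processing: sum the untouched signal with a processed copy."""
    return dry + mix * wet
```

Each band from `split_bands` can be compressed or equalised on its own and then summed; `parallel_blend` is the essence of "New York" style parallel compression.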
There are a number of compressor types out there, such as tube, VCA, FET and optical compressors.
All these compressors have different characteristics and you'll need to learn which one to reach for depending on the situation at hand.
All these compressors are available in digital and analog format and you’ll need to learn how to work with both.
Control and balance of stereo image
The stereo image is simply a sonic field on which all elements of a mix are arranged.
Mastering involves the use of various tools to impact the stereo image of a mix.
The stereo image of a song is important because it shapes the listener's fundamental sense of width and space.
Stereo L-R vs M-S processing
This simply refers to processing the stereo left and right channels directly, versus processing the mid (sum) and side (difference) components.
Both are effective tools in mastering.
Use of reverb and harmonic enhancers in mastering
Reverb is an effect that is used to add reverberation to a signal.
Its most important application is in creating space around mix elements as well as reducing their dryness.
Harmonic enhancers on the other hand are responsible for generating new harmonic content. They can make a sound clearer and brighter than it originally was.
Both these tools are important facets in mastering fundamentals.
Limiters and levels for CD and streaming platforms
As I discussed earlier, limiters are basically specialized compressors that are used to keep a mix from clipping while driving its level close to 0 dB.
Most streaming services these days publish loudness and peak specifications that you have to follow when preparing music for their platforms.
There are also rules to follow when mastering for CD and mastering for Vinyl.
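As a minimal sketch of respecting a delivery ceiling, here is a function that scales a signal down so its peak sits at a chosen level. Note one simplification: streaming specs are usually stated as *true peak* (measured on an oversampled signal), while this uses the plain sample peak, so treat it as an illustration of the idea rather than a compliant measurement:

```python
import numpy as np

def apply_ceiling(x, ceiling_db=-1.0):
    """Scale a signal down so its sample peak sits at the given ceiling.
    (Real true-peak measurement requires oversampling; omitted here.)"""
    ceiling = 10 ** (ceiling_db / 20)
    peak = np.max(np.abs(x))
    return x * (ceiling / peak) if peak > ceiling else x
```

A signal that already sits below the ceiling passes through unchanged; one that overshoots is scaled so its loudest sample lands exactly at the ceiling.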
Loudness, LUFS and dBs
LUFS (Loudness Units relative to Full Scale) is a newer standard for measuring loudness and is considered the most accurate. In practical terms, one loudness unit (LU) corresponds to one decibel.
LUFS basically measures the average loudness of a piece of audio over a specified period of time, and it also takes perceived loudness into account.
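The averaging idea behind LUFS can be sketched in one line. Important caveat: the real BS.1770 measurement applies K-weighting filters (the "perceived loudness" part) and gating before averaging; both are omitted here, so this is only an illustration of mean-square loudness on a dB scale, not a true LUFS meter:

```python
import numpy as np

def loudness_db(x):
    """Mean-square level on a dB scale, in the spirit of integrated LUFS.
    Real BS.1770 adds K-weighting and gating, omitted in this sketch."""
    return -0.691 + 10 * np.log10(np.mean(x ** 2))
```

One property survives the simplification: doubling a signal's amplitude raises this reading by about 6 dB, which illustrates why one LU corresponds to one decibel in practice.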
Dither
Dither is an intentionally applied form of noise that is employed as a means of randomizing quantization error. Dither is routinely used in the processing of both digital audio and video data, and is often one of the last stages of mastering audio to a CD.
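The mechanics are easy to show. Below is a minimal sketch of quantizing a float signal to a fixed bit depth with TPDF (triangular probability density function) dither, a common choice for the final 16-bit CD stage; the function name and defaults are my own illustration:

```python
import numpy as np

def quantize(x, bits=16, dither=True, rng=None):
    """Quantize a float signal in [-1, 1] to a fixed bit depth,
    optionally adding TPDF dither (one LSB peak) before rounding."""
    rng = np.random.default_rng(0) if rng is None else rng
    step = 1.0 / (2 ** (bits - 1))  # size of one least-significant bit
    # TPDF noise = difference of two uniform noises, spanning +/- one LSB
    noise = (rng.random(x.shape) - rng.random(x.shape)) * step if dither else 0.0
    return np.round((x + noise) / step) * step
```

The dither adds a small, constant noise floor, but in exchange the rounding error is decorrelated from the signal, which is why quiet fades sound smooth instead of grainy after truncation to 16 bits.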