
What Automated Mastering Services Can’t Do For You


If you’re expecting an article that critiques and dismisses the intrinsic sound qualities of what people are now calling “automated mastering” or “AI mastering,” this is not that type of article.

Instead, this article focuses on facts: a few things that so-called “automated mastering” simply cannot do for you, and why that matters, or can even be detrimental, to your project. I’m a firm believer that a project isn’t fully mastered until it’s 100% ready for distribution and production. Mastering is more than just stereo bus processing.

If these services called themselves “automated stereo bus processing” services, I’d have no problem with that. What I find disingenuous is that calling them “mastering services” gives users, particularly less informed users, a false sense that they’re receiving the same treatment a human mastering engineer would provide.

I’ve purposely avoided commenting on the actual sound of these AI services because that’s subjective. What I didn’t expect to become subjective is the definition of mastering itself, but it has, thanks to these AI services attempting to dumb down what the process entails.

Mastering isn’t processing; mastering is a process. One component of that process is stereo bus processing. Among others, LANDR, ARIA, eMastered, and now Plugin Alliance/Brainworx are comfortable dumbing down and redefining what the mastering process is.

The “be better than the AI and you have nothing to worry about” comments are really short-sighted. How do you even have a chance to be better than the AI if companies are falsely redefining mastering for current and future generations? I have plenty of new and recurring clients each month, and that number has been increasing for years, so I don’t see things like this as a direct threat.

I’m more concerned about the big picture and how these services give people a false sense of getting all the things that the mastering process entails, not just the stereo bus processing. A human mastering engineer does FAR more each and every day than just stereo bus processing.

If I just ran songs through my processing chain without speakers and/or headphones, just looked at the meters and numbers, and did NOTHING else, that would not be mastering and I would have very few (if any) satisfied clients. That would be considered “educated guess stereo bus processing.”

Stereo bus processing is just one component of the mastering process, not ALL of it. I get that AI stereo bus processing can be a tool for mix engineers and producers at times, to see how their mixes react to being pushed louder or to help get mix approvals, but that’s exactly why it should be called “Automated Stereo Bus Processing” and not “Mastering.” The current name gives people the impression that uploading their songs to an AI mastering service is equal to hiring a human mastering engineer. I can throw a burrito in the microwave, but that doesn’t make me a chef who can prepare an entire meal that will please all my dinner guests.

The true test is this: run a song through an automated stereo bus processing service, then send the result back through again. If a human mastering engineer with some intuition and sensibility were asked to master a song they had already mastered, they would do no additional processing, because they already did exactly what the song or project needed the first time, right? Nothing more, nothing less. An AI mastering service would apply some processing again because it isn’t capable of realizing what’s going on; it’s just guessing that something needs to happen. This alone should tell you something.

I don’t think any pro mastering engineer is worried about these AI services stealing their jobs because as mentioned, it’s only doing one aspect of the mastering process, not all of it. The problem is with how it’s being presented and sold as real mastering.

How Important is Mastering Anyway?

Mastering is not going to make a bad song great. I think sometimes people unknowingly put too much faith into the mastering process. If it’s not a great song, performed well, produced well and mixed well, the mastering isn’t going to save it.


Mastering is a strange art where some clients expect you to make drastic changes (usually attempts to improve on subpar recordings and mixes), and other clients expect you to change very little and be as gentle as possible because it already sounds great and/or the way they want it. It’s not always easy to know what a client is expecting without some discussion and, in some cases, a test master of a song or two before getting started. LANDR can actually give you a rough idea of what your mix will sound like after it’s been mastered to a louder level, which could be useful as you get close to finalizing your mix.

I had heard about how affordable LANDR was, how it was devaluing the art of mastering, and how it was ruining the business of human mastering. I don’t think that’s the case. The basic monthly plan gets you unlimited 192kbps mp3 files, but you still have to pay for 16-bit/44.1k WAV masters on a per-track basis, and pay even more for what they call “HD” WAV masters. This can add up fairly quickly for a full album project. It’s still likely cheaper than hiring an experienced human mastering engineer, but maybe not by much if you decide to get both the standard WAV files and the “HD” version. I actually searched my account and the LANDR app quite thoroughly and didn’t see any sign of being able to buy the 24-bit “HD” version; maybe this is a new feature still being rolled out and not fully implemented. Not to mention, having these WAV files doesn’t necessarily mean your master is ready for distribution and production.

I’m quite certain that some people are only getting the mp3 versions from LANDR and then uploading those to SoundCloud as the master source, and/or converting those mp3 files to WAV as the master source for other digital distribution services, which is a very bad practice. A well-encoded mp3 can actually sound pretty decent, but where you will get burned is converting the mp3 to WAV and submitting it for digital distribution. This is sure to sound bad for the end user, because the audio gets transcoded when the retailer converts it to a lossy format a second time. You’re basically distributing an mp3 of an mp3, which can be hard to listen to. I’m not saying this practice is the norm by any means, but I have no doubt that people are using the mp3 versions from LANDR as their main distribution master, which is asking for trouble and, without proper education, could become an unfortunate trend.
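To make that chain concrete, here’s a minimal Python sketch of the scenario using pydub (which needs ffmpeg installed). The file names are hypothetical; the point is that decoding an mp3 to WAV doesn’t restore anything the first lossy encode threw away, so the retailer’s encode becomes a second lossy generation.

```python
# Hypothetical illustration of the "mp3 of an mp3" chain described above.
# Requires pydub and an ffmpeg install; file names are made up.
from pydub import AudioSegment

# First lossy generation: the mp3 delivered by the automated service.
landr_mp3 = AudioSegment.from_mp3("song_landr_master.mp3")

# Converting it to WAV changes the container, not the already-lost audio data.
landr_mp3.export("song_fake_master.wav", format="wav")

# What a retailer effectively does next: a second lossy encode on top of the first.
AudioSegment.from_wav("song_fake_master.wav").export(
    "song_as_streamed.mp3", format="mp3", bitrate="192k"
)
```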

LANDR vs. Human

The weakness of AI services is not in their sonic algorithms and processing. They actually seem to have some intelligent and surprisingly musical processing going on at times. Where automated mastering services fall short is in delivering full album and EP masters ready for all forms of digital distribution, as well as CD or vinyl production. Basically, anything beyond a single song released in a “digital only” format has the potential to get fouled up if the user needs to do any additional editing to the files to make them work as an album master.

Production/Distribution Master vs. Raw WAV Files

If you’re doing a CD release, a digital release, and maybe even vinyl, your mastering work is not actually complete when you receive the tracks from an AI service. The album usually still needs to be sequenced and output to the correct formats for production and distribution, which is where things start to get more complicated.

I do a lot of work for a CD manufacturing broker, and my job for them is simply to take “mastered” WAV files from their clients’ “mastering engineer,” assemble them in my mastering software, sequence the WAV files in order, add track marker IDs, and add CD-Text as well as ISRC codes and a UPC when necessary. From there I create the DDP for CD production, which needs to be approved by the client. Sometimes after hearing the DDP, clients ask me to add additional space between the songs because it’s not already built into the WAV files. Sometimes I need to offset a track ID marker from the start of a given WAV file to prevent the start of a track from sounding too abrupt. This is easy to do, but now the CD master can have different timing between songs than the WAV files they may have already submitted for digital distribution. If they export WAV files of each track from the DDP player I provide for approval, they will have 16-bit/44.1k WAV files that match the CD master regarding the gaps between songs, but what about the 24-bit/high sample rate masters that most digital distributors accept now? As I said, it can get messy, and I haven’t even mentioned that it’s quite common these days to make a special vinyl pre-master with little to no limiting, a lower RMS level, more dynamic range, and attention to vocal sibilance and other things that make it more vinyl friendly.

Details Matter

Let’s say you are happy with the audio processing an AI service has done and each song sounds great, but when you play the files in sequence in consumer software such as iTunes or Windows Media Player, the spacing between some songs is not ideal, or you need to fade out the ending of a song, or at least fade the final sustain of an instrument to shorten it a bit or remove some extra dead space at the end. Doing this in any modern DAW will instantly change the audio stream to 32-bit float, which requires you to dither again to 16-bit when you render the final DDP for CD production, or to 24- or 16-bit when you create the master WAV files for digital distribution. The same is true if you change the level of a song by even a fraction of a decibel: the audio stream becomes 32-bit float and, following best practices, should be dithered again to 24- or 16-bit, a step many people overlook. Having to dither twice is the worst-case scenario, though; the best case is that you never alter the audio after it has been reduced to 24- or 16-bit.
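If you’re curious what that final reduction step actually involves, here’s a minimal numpy sketch of a generic float-to-16-bit conversion with TPDF dither. The signal and gain value are made up for illustration; real mastering tools handle this internally.

```python
# Minimal sketch: reducing floating-point audio to 16-bit PCM with TPDF dither.
# Signal and gain are illustrative only.
import numpy as np

def to_int16_with_tpdf_dither(audio_float: np.ndarray) -> np.ndarray:
    """Quantize float audio in the -1.0..1.0 range to 16-bit with triangular dither."""
    scale = 32767.0
    # TPDF dither: the sum of two independent uniform noises, about +/- 1 LSB wide.
    dither = (np.random.uniform(-0.5, 0.5, audio_float.shape)
              + np.random.uniform(-0.5, 0.5, audio_float.shape))
    quantized = np.round(audio_float * scale + dither)
    return np.clip(quantized, -32768, 32767).astype(np.int16)

# Any fade or gain change puts the audio back into floating point,
# so a final reduction like this has to happen again as the very last step.
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)      # one second of 440 Hz
faded = tone * np.linspace(1.0, 0.0, sr)                  # a simple fade-out edit
master_16bit = to_int16_with_tpdf_dither(faded * 0.5)     # 0.5 = example level change
```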

AI services might do an OK job of taking a group of songs and getting them all to the same loudness measurements and a similar tonal balance (EQ), but that doesn’t always translate to our ears and brains as being correct. For example, you might have an album of 11 rock songs and one acoustic song. If an AI service sets the level of the acoustic song to the same average loudness as the rest of the songs, the softer acoustic song is very likely to sound unnaturally loud in the context of the album, and it could also make the rock songs sound weaker than expected after listening to the acoustic track. So why not just turn down the AI version of the acoustic song, you may ask? Well, again, if you’re doing this to 24-bit or 16-bit audio, you will need to re-dither or, worse yet, truncate those additional bits when you render or save a new file as 24- or 16-bit after your changes.
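If you want to sanity-check how a service has leveled your tracks, a rough sketch like the one below, using the soundfile and pyloudnorm libraries, will report integrated loudness per song. The file names are hypothetical, and matching numbers still don’t guarantee the album feels balanced to the ear.

```python
# Sketch: measuring integrated loudness (LUFS) per track.
# File names are hypothetical; equal readings don't mean the sequence feels right.
import soundfile as sf
import pyloudnorm as pyln

tracks = ["01_rock_song.wav", "12_acoustic_song.wav"]
for path in tracks:
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)                    # ITU-R BS.1770 loudness meter
    print(f"{path}: {meter.integrated_loudness(data):.1f} LUFS")
```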

Dithering is something that should ideally be done only once in the mastering process, and as the very last processing step, no exceptions. By dithering more than once, or failing to do it at all, you are damaging your audio in a way that will only get worse down the line when your masters are encoded to lossy formats. Distortion and artifacts can magnify quickly downstream in digital audio if you’re not careful.


Bring The Noise

The other thing that AI services are just not set up to do is address noises, clicks, pops, plosives, and other unwanted distortions in your music. I can’t tell you how many of these types of things I hear in unmastered mixes these days. I actually do a dedicated listening pass just to scan for and remove/minimize any unintended noises in the material using iZotope RX, a spectral editor.

Oftentimes these things go undetected before a mix is sent off to mastering, due to less-than-optimal listening conditions, inattentive listening, or a combination of the two. They can also become more noticeable after the mastering processing is done, when everything is usually louder, clearer, and more compressed with less dynamic range. A stray pop from a bad edit or a plugin glitch could become much more noticeable after mastering. Mouth sounds and clicks are notorious for becoming more noticeable and unnaturally loud after mastering. When a client hears a stray noise that they insist wasn’t in their original mix, it’s usually actually there if they listen closely. Either way, it’s not a hard problem to solve, but it does take some extra time, plus human skill and intuition.


There are some great tools out there now for fixing issues like this. iZotope RX is a lifesaver for me. The thing is that you can’t just apply the De-Noiser, De-Clicker, or Mouth De-Click across an entire song and call it good. These tools are not 100% transparent, but if they are used skillfully and tastefully, and applied only to the small sections that need the repair work, they can be 99.9% transparent, improving the musicality of the song and removing distractions from the music rather than doing damage.

In other words, applying iZotope RX De-Click to an entire song can damage the transient peaks of percussive instruments, but if you home in on just the problem sections and process them one at a time, no discernible damage is done. I’m talking about fractions of a second in most cases. The same is true for noise reduction. A song might have some excessive noise at the start or end, or maybe in a quiet breakdown section. With a sample of just the noise, often found at the start or end of the song, to use as a fingerprint for the noise reduction software, you can do some very transparent noise reduction. In most cases, though, the song reaches a point where the noise is no longer an issue because the musical elements bury the noise floor. Instead of processing the entire song with noise reduction, I often process only the intro and/or the final sustain if that’s where it’s noisy. This keeps the core of the song free of any noise reduction that might harm the higher frequencies and make the song feel a bit dull or swirly in the high end. It’s a bit of a trade-off that requires some human intuition and emotion to determine what is too much and what is just right.
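As a rough illustration of that spot-treatment idea, here’s a sketch using the open-source noisereduce library rather than RX. The file name, the noise-fingerprint length, and the point where the band comes in are all assumptions, and a mono file is assumed for simplicity.

```python
# Sketch: noise reduction applied to the intro only, using a noise fingerprint
# taken from the dead air before the song starts. All positions are assumptions.
import soundfile as sf
import noisereduce as nr

audio, rate = sf.read("mix.wav")              # hypothetical mono mix
noise_print = audio[: int(0.5 * rate)]        # assume the first 0.5 s is just noise floor
intro_end = int(8.0 * rate)                   # assume the full band enters around 8 s

audio[:intro_end] = nr.reduce_noise(y=audio[:intro_end], sr=rate, y_noise=noise_print)
sf.write("mix_intro_denoised.wav", audio, rate)   # the rest of the song stays untouched
```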

The other thing AI services simply can’t do is the art of sequencing an album and providing all the master files you might need for production and distribution, such as DDP for CD production, WAV files for basic digital distribution, and 24-bit/high sample rate WAV for sites like Bandcamp, SoundCloud, and the growing number of digital distributors that now accept 24-bit/high sample rate masters.

If you care about your album as a whole, you should also care that the exact same spacing exists between songs on the CD version and any digital release versions. If you’ve got some audio software skills, it’s not too hard to take the processed WAV files from an AI service and use any of the DDP creation tools out there, but then what about your high resolution/HD masters and reference mp3s? Also, if you are working only with the dithered versions from an AI service and need to do any fades or level changes, you run into the issue of needing to dither again, or risk truncating your audio from 32-bit float back to 24- or 16-bit.

There is a free plugin called “Bitter” from Stillwell that you can insert on your master section and watch a 24- or 16-bit audio file turn into 32-bit or even 64-bit floating point audio with even the simplest gain change or during any fades. It’s important to be aware of this.
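Here’s a small numpy sketch, with made-up sample values, of what Bitter is showing you: one fractional-dB gain change and the samples no longer land on the 16-bit grid, so they have to be dithered or truncated on the way back down.

```python
# Illustrative only: a tiny gain change pushes fixed-point samples off the 16-bit grid.
import numpy as np

samples_16bit = np.array([1000, -2500, 32000], dtype=np.int16)
as_float = samples_16bit.astype(np.float64) / 32768.0   # DAWs work in floating point internally
gain = 10 ** (-0.1 / 20)                                 # a -0.1 dB level change

off_grid = as_float * gain * 32768.0
print(off_grid)   # ~[988.55, -2471.38, 31633.70] -- no longer whole 16-bit values
```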

As you can see, it’s easy to get into a mess if you are using an AI service to master an entire album. For single songs, you could argue that some of this is not an issue, but for projects of more than one song, where any processing at all needs to happen after the AI service does its thing, using the dithered 24- or 16-bit WAVs as the source can lead to issues.

In the end, I compare AI services to going through the drive-thru at McDonald’s rather than eating a gourmet meal. There are times when going through the drive-thru is called for, but for the scope of an entire album, I think there are many details to consider that are best left to a human mastering professional.

A Careful Listen Goes A Long Way (Quality Control)

Quality control is a HUGE part of the mastering process. Some of the larger mastering studios have assistants or staff who only do quality control. This means they sit and listen to every second of every master file before it goes out to make sure it’s free from any unwanted noises, sounds, or glitches. It’s an often overlooked but very important part of the process.

I personally take great care to make sure there are no unwanted noises before, during, or at the end of your songs, which are often easily missed in the mixing stage because there is so much more to focus on. I also take care to make sure the spacing of your songs is 100% consistent between the DDP master for CD production, the 16-bit/44.1k WAV masters, the 24-bit/96k WAV masters, the reference mp3 or AAC files I make for clients, and the vinyl or cassette master if those formats are being made. For those still making CDs: regardless of where you put your track ID, the marker placement will be quantized to the nearest CD frame when you make a CD or DDP. In some cases this shift is a millisecond or less, in others more. The point is that if you’re not careful, your track times can vary from a little to a lot if your album is not carefully sequenced by a skilled mastering engineer. It gets especially tricky if you have songs that overlap, or a live album where seamless audio is needed.
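For the curious, here’s a tiny sketch of that frame quantization, assuming 44.1 kHz audio and the Red Book figure of 75 frames per second; the marker position is hypothetical.

```python
# Sketch: snapping a track ID to the nearest CD frame (75 frames/second at 44.1 kHz).
SAMPLES_PER_FRAME = 44100 // 75          # 588 samples per CD frame (~13.3 ms)

def snap_to_cd_frame(marker_sample: int) -> int:
    """Quantize a track marker position (in samples) to the nearest frame boundary."""
    return round(marker_sample / SAMPLES_PER_FRAME) * SAMPLES_PER_FRAME

marker = 1_234_567                        # hypothetical track ID position in samples
snapped = snap_to_cd_frame(marker)
drift_ms = (snapped - marker) / 44100 * 1000
print(snapped, f"{drift_ms:+.2f} ms")     # with rounding, drift is up to ~6.7 ms either way
```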

Another increasingly important aspect of mastering is metadata and CD-Text. Whether I’m sending my clients a DDP, WAV files, or reference mp3 files, I make sure that all CD-Text and metadata is as correct and complete as can be, along with ISRC codes and a UPC as needed.

The importance of metadata and CD-Text can vary from project to project, and at this time the major independent digital distributors do not read or use any metadata embedded in the master WAV files, but I’m a believer in adding any and all metadata you have for the sake of future-proofing the files, and there are other uses for master WAV files besides digital distribution. One big one is music licensing, perhaps the main way independent bands and artists are making any money from their music these days.
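As a simple illustration of tagging a reference mp3, here’s a sketch using the mutagen library; the file name and every tag value, including the ISRC, are placeholders.

```python
# Sketch: writing basic metadata into a reference mp3 with mutagen.
# File name and all values are placeholders.
from mutagen.easyid3 import EasyID3
from mutagen.id3 import ID3NoHeaderError

path = "01_song_reference.mp3"
try:
    tags = EasyID3(path)
except ID3NoHeaderError:
    tags = EasyID3()                 # the file had no ID3 tag yet; start a fresh one

tags["title"] = "Song Title"
tags["artist"] = "Artist Name"
tags["album"] = "Album Title"
tags["tracknumber"] = "1"
tags["isrc"] = "USXXX2500001"        # placeholder ISRC
tags.save(path)
```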

So, to summarize: just be aware of what you’re getting from an AI service. These services provide only one aspect of the detailed and nuanced process we traditionally call mastering, and they factually cannot and do not do a number of the things you get when hiring a human mastering engineer for your project.

Justin Perkins

Justin is a mastering engineer from Milwaukee, WI. More at mysteryroommastering.com and justincarlperkins.com.