Bit Depth 101 — What is it?

Transcript:

Hi, guys. Matthew Weiss here — www.weiss-sound.com, www.theproaudiofiles.com

In five minutes, I’m going to attempt to clear up what has been argued for thousands of pages on the internet, and that is the debate involving bit depth and sample rate.

Alright, so what are bit depth and sample rate? Well, bit depth is the resolution (and I’m going to explain that term in a moment) of a signal in amplitude.

Sample rate is that same kind of resolution, but in time.

Why is that important? Well, when you have a waveform, it is continuous. That is how analog sound works. That is the definition of analog. Going into the digital realm, we are breaking it into information that is discontinuous. That is the definition of digital.

So, we have to plot data points that we can use to accurately reconstruct our wave. Now, in terms of fidelity, bit depth and sample rate have very little to do with that. It’s not like we’re taking actual slices and chunks of the audio away, like the shutter on a film camera. That’s not what it is. It’s not fidelity. It’s information resolution, which has a very different effect in the real world.

So with bit depth, the way it works is we have millions of points of amplitude that we are recording. The difference between amplitude point one million and amplitude point one million and one is very, very slight.

But in a continuous waveform, if you were to grab a point, it would naturally fall somewhere in between one million and one million and one. It might be something like 1,000,000.2576, in which case we have to decide if it’s going to one million or one million and one.

Well, we round it down, and that’s called quantization. Of course, quantization is constantly happening: no matter what data points we’re grabbing, they’re never going to land exactly on a single point, so there will always be quantization error.
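
To make that concrete, here’s a minimal sketch in Python (my own toy example, not anything from the video) of rounding a continuous sample value to the nearest step a given bit depth can store:

```python
# A toy sketch of quantization: round a sample in the -1.0 to 1.0 range
# to the nearest step that a given bit depth can represent. The function
# name and numbers here are mine, purely for illustration.

def quantize(sample, bits=16):
    steps = 2 ** (bits - 1)              # 16-bit: 32,768 steps per polarity
    return round(sample * steps) / steps

original = 0.123456789
stored = quantize(original, bits=16)
print(stored)                            # 0.123443603515625
print(original - stored)                 # the tiny leftover is the quantization error
```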

Quantization error is a very, very quiet noise that is produced. It is the digital noise floor. The difference really comes into play when we are talking about the very, very tiniest points of audio.

Point number one, two, three, four. Because if you have an amplitude point that would naturally come in at 0.644 and we round it up to 1 as opposed to 0, that’s a fairly noticeable difference. That’s a big error. We’re not getting an accurate picture down there. The quantization error is only going to distort the very bottom: things like the very tail end of a reverb might get a little scrambled, or the very first 1/100th of a millisecond of the attack of something, or the very end of a sustain in the ambient noise. Those really quiet things. That’s where bit depth makes a little bit of a difference.
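
To put a number on that, here’s the same kind of toy arithmetic (again, my own illustration echoing the 0.644 example above), comparing a healthy-level sample against one sitting near the lowest 16-bit steps:

```python
# At a healthy level the rounding error is a vanishing fraction of the
# signal; down at the bottom steps it can be more than half the signal.
# All numbers are illustrative.

steps = 2 ** 15                          # 16-bit: 32,768 steps per polarity
loud = 0.5644                            # a sample at a healthy level
quiet = 0.644 / steps                    # a sample near the lowest step

for s in (loud, quiet):
    err = abs(s - round(s * steps) / steps)
    print(f"sample={s:.10f}  relative error={err / s:.4%}")
# the loud sample is off by about 0.001%; the quiet one by over 50%
```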

Now, is there a noticeable difference between 16 and 24 bit?

Yeah, a little. But it’s pretty darn slight. However, what’s also slight is the actual difference in CPU usage between 16-bit and 24-bit. That’s so slight that it’s just worth grabbing the higher resolution of data for that tiny extra bit of reverb tail in the quietest spots, or whatever you have.
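
For a rough sense of what the extra bits buy, here’s the standard back-of-envelope figure; the 6 dB-per-bit rule is textbook signal theory, not something stated in the video:

```python
# Each extra bit doubles the number of amplitude steps, which is about
# 6.02 dB (20 * log10(2)) of extra range above the quantization noise.

import math

for bits in (16, 24):
    dr = 20 * math.log10(2 ** bits)
    print(f"{bits}-bit: noise floor ≈ {dr:.0f} dB below full scale")
# 16-bit ≈ 96 dB, 24-bit ≈ 144 dB
```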

Now, here’s the myth: that you need to capture the audio as close to the digital ceiling as possible to get the most accurate resolution. Well, here is where the word resolution has been conflated with fidelity.

It’s true that the higher you are in amplitude, the more data resolution you’re going to get, because the quantization error becomes a smaller fraction of the signal. However, the actual fidelity difference doesn’t exist. It does not matter. In 16-bit or 24-bit, you could easily give yourself 20 decibels of headroom at your peak points without losing a drop of fidelity. No one is going to be disrupted by the quantization error during their listening experience. It’s just not going to happen.
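
Here’s that headroom argument as plain arithmetic, a sketch using the same rule-of-thumb figures as above rather than anything measured:

```python
# Even with peaks 20 dB under the digital ceiling, the quantization
# noise stays far below the music.

import math

bits = 24
floor_db = 20 * math.log10(2 ** bits)    # ≈ 144 dB below full scale
headroom_db = 20                         # peaks sitting 20 dB under the ceiling
print(f"noise floor ≈ {floor_db - headroom_db:.0f} dB below your peaks")
# ≈ 124 dB down: nowhere near audible in a normal listening situation
```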

Okay, what about 32-bit float? The math on that is a little bit complicated, but basically 32-bit float is saying, “instead of having a fixed point of amplitude data in our mix, we’re going to have it scalable.”

So, you know, something like zooming in and out of a picture. We’re going to have a scalable point data representation, which means there’s no longer a headroom ceiling. You can keep turning things up and up and up and up.

Just keep in mind that when you do go back to 24-bit or 16-bit, which is eventually how things have to be printed, it’s going to be printed in fixed point, not a scalable point. If you are way over where your headroom should be, you are going to clip your signal, so be careful.
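
Here’s a quick sketch of why that bites (a toy conversion of my own, not any particular DAW’s behavior): float audio happily stores values past full scale, but a fixed-point format has to clamp them:

```python
# Toy float-to-fixed conversion. Values above full scale survive in
# 32-bit float, but converting to 16-bit fixed point clamps them,
# which is clipping. The function and numbers are illustrative.

def float_to_16bit(sample):
    clamped = max(-1.0, min(1.0, sample))    # anything past the ceiling is lost
    return round(clamped * 32767)

print(float_to_16bit(0.5))   # 16384: fine
print(float_to_16bit(2.7))   # 32767: was fine in float, clipped in fixed point
```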

Remember, in terms of where your levels actually lie, you can be pretty darn well below the digital ceiling and be fine.

That’s why they call it the green: green means go, yellow means slow down, red means too much. You can be in the green on every channel and you are fine.

Matthew Weiss

Matthew Weiss is a Grammy nominated and Spellemann Award winning audio engineer from Philadelphia. Matthew has mixed songs for Snoop, Sonny Digital, Gorilla Zoe, Uri Caine, Dizzee Rascal, Arrested Development, 9th Wonder, !llmind & more. Get in touch: Weiss-Sound.com.