
Re: LF: 16 bit vs 24 bit ADC?

To: [email protected]
Subject: Re: LF: 16 bit vs 24 bit ADC?
From: Steve Dove <[email protected]>
Date: Fri, 02 Dec 2011 17:43:14 +0000
In-reply-to: <[email protected]>
References: <[email protected]> <[email protected]>
Reply-to: [email protected]
Sender: [email protected]
User-agent: Mozilla Thunderbird 1.0.6 (Windows/20050716)

Hi Bill,

Audio aficionados are high on opinion but notoriously hard put to show statistically significant discrimination in true ABX or ABC scenarios; I wouldn't be using them as any sort of criterion. That said, there are strong reasons for using 24-bit S/D audio convertors, some of them quality-based, involving enhanced dynamic range, but mostly that it is quite difficult to find anything else nowadays! (Except perhaps in the icky depths of soundcard-land.)

But '24 bits' ain't what it seems - even the *best* integrated audio convertors, *well implemented*, are 'only' capable of 120 dB dynamic range measured honestly. Even though the internal architecture/filters etc. may be 24 bits wide, it's in effect a 20-bit convertor with 4 'marketing bits'. With luck the noise is incoherent enough to use as free dither, but that isn't guaranteed. Lesser parts, the lower-cost ones more commonly found in consumer-world rather than pro-audio, tend to come out shy of 100 dB, at best about the same as (non-existent) 'perfect' 16-bit convertors.
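
For a sense of scale, the textbook figure for an ideal N-bit converter with a full-scale sine input is roughly 6.02N + 1.76 dB of SNR; a minimal sketch of that rule of thumb:

    def ideal_snr_db(bits: int) -> float:
        # Quantization-noise-limited SNR of an ideal converter, full-scale sine input.
        return 6.02 * bits + 1.76

    for bits in (16, 20, 24):
        print(f"{bits}-bit ideal: {ideal_snr_db(bits):.1f} dB")

    # 16-bit ideal:  98.1 dB  -> the "perfect 16-bit" reference point
    # 20-bit ideal: 122.2 dB  -> about what the best real 24-bit parts deliver
    # 24-bit ideal: 146.2 dB  -> never reached in practice; the "marketing bits"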

It is a long, long time since I've been able to see the lowest bit of a convertor toggle with applied signal (and turn into a pseudo-PWM!) - they're ALWAYS noise-bound nowadays. The question turns into whether the device's noise is going to dominate, or yours (the application's - which is the usual case).

So, in audio (or as used baseband in SDR) we're stuck with a mere 120dB dynamic range in a 20kHz bandwidth. Which actually isn't shabby at all, giving 'pure analogue' a run for its money.
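
A minimal sketch of how that full-band figure trades against detection bandwidth, assuming the converter's noise is spread evenly across the 20 kHz (illustrative numbers only):

    import math

    FULL_BW_HZ = 20e3      # audio bandwidth the 120 dB figure refers to
    FULL_BW_DR_DB = 120.0

    def dr_in_bandwidth(bw_hz: float) -> float:
        # Narrowing the detection bandwidth buys back 10*log10(20 kHz / BW) dB.
        return FULL_BW_DR_DB + 10 * math.log10(FULL_BW_HZ / bw_hz)

    for bw in (2500.0, 100.0, 1.0, 0.01):
        print(f"{bw:>8} Hz bandwidth: ~{dr_in_bandwidth(bw):.0f} dB")

    # A 1 Hz bin sees ~163 dB, a 10 mHz bin ~183 dB: against the converter's own
    # (white) noise, the ultra-narrow bandwidths used on LF recover a great deal.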

As for the way-more-serious ultra-fast convertors used in direct-SDR, it is to be presumed that even the latest generation of 16-bit monsters is similarly noise-bound.

It's common practice (at least during development) to deliberately run the convertors at a very low level and listen to / measure the output amplified back up, to see what the noise floor sounds/looks like. The better the implementation, the less coherent crap and the noisier the noise. Cheaper parts are often riddled with internal squirglies, the good ones less so. THIS - the nature of the noise floor - could have a far greater bearing on useful dynamic range in a radio application than the mere number of bits.
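
A minimal sketch of that kind of check, assuming a block of samples captured from the card at a deliberately low level (the names and the synthetic test data are illustrative): average a few FFTs and see whether anything coherent stands above an otherwise flat floor.

    import numpy as np

    def noise_floor_spectrum(samples: np.ndarray, fs: float, nfft: int = 8192):
        # Averaged power spectrum of a low-level capture, in dB (arbitrary reference).
        # Coherent junk shows up as stable spikes; a clean part shows only noise.
        window = np.hanning(nfft)
        segs = len(samples) // nfft
        psd = np.zeros(nfft // 2 + 1)
        for k in range(segs):
            seg = samples[k * nfft:(k + 1) * nfft] * window
            psd += np.abs(np.fft.rfft(seg)) ** 2
        psd /= max(segs, 1)
        freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
        return freqs, 10 * np.log10(psd + 1e-30)

    fs = 48000.0
    capture = 1e-5 * np.random.randn(int(fs) * 10)   # stand-in for a real capture
    freqs, spectrum_db = noise_floor_spectrum(capture, fs)
    print("median floor:", round(float(np.median(spectrum_db)), 1), "dB")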

But you raise a very valuable question that transcends what is possible: how much dynamic range do we really NEED?


        73,

                Steve




Bill de Carle wrote:
We have seen a tendency to use ultra-narrow bandwidths to coax out weak signals not otherwise discernible. This calls for enhanced frequency stability at both ends, but that is not the only factor involved. At some stage we use an ADC to convert the analog signal to digital, thereafter working only with numerical approximations of instantaneous voltages: that's where information can be lost.

Traditionally, 16-bit ADCs have served well - they're readily available and inexpensive. At some point, though, the signal we are looking for becomes just too small to register on a 16-bit ADC. It seems intuitive that in a pristine noiseless environment, if a feeble sinewave has insufficient amplitude to toggle the LSB of our ADC, it might as well not exist at all. Clearly, if a signal produces no effect on a radio receiver (where "receiver" includes the ADC looking at its output), the end result is the same as if that signal wasn't there. No amount of post-processing or extended sampling time is ever going to detect it.
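
A minimal sketch of that point, with made-up numbers rather than any particular converter: a tone a tenth of an LSB in amplitude, quantized with no noise present at all, leaves no trace whatsoever in the output codes.

    import numpy as np

    def quantize(x: np.ndarray, lsb: float) -> np.ndarray:
        # Ideal mid-tread quantizer returning integer output codes.
        return np.round(x / lsb).astype(int)

    fs = 8000.0
    t = np.arange(int(fs * 10)) / fs                   # 10 s of samples
    lsb = 2.0 / 2**16                                  # 16 bits across +/-1 full scale
    tone = 0.1 * lsb * np.sin(2 * np.pi * 437.0 * t)   # sine at a tenth of an LSB

    print(np.unique(quantize(tone, lsb)))              # -> [0], a single constant code

    # The tone never changes a single output code, so the samples carry no trace
    # of it and no amount of post-processing or averaging can bring it back.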

One might argue that adding a little noise, so that the sum of noise + weak signal becomes enough to change the ADC output, will solve the problem. Unfortunately things aren't that easy. Let's remember that an ADC *approximates* some true value; the quality of such an approximation isn't entirely specified by the number of bits used to represent the ADC's output - there are other factors, in particular the "correctness" of the result. For example, say an 8-bit ADC is supposed to resolve 7 amplitude bits plus a sign. If the full-scale input is specified to be 1 volt, the least significant bit represents 7.8125 millivolts. Assuming all the other bits are zeros, for the LSB to be honest it should say "1" if the input voltage is greater than or equal to 3.90625 millivolts (i.e. half of 7.8125 mV), and "0" otherwise. ADC manufacturers will guarantee monotonicity, but that's much easier to achieve than the absolute honesty we'd like.

We sometimes take many consecutive readings of a parameter and then average them to get a better idea of the true value. If we take hundreds of measurements known to be correct to one decimal place, we might have some confidence in expressing the average to 2 decimal places. That would be justified if 1) the parameter being measured doesn't change significantly over the sampling period (or it changes in a constrained way, e.g. in an FFT) and 2) the ADC is honest. Any intrinsic error in our ADC (say the one above has a comparator offset and reports 4.0000 millivolts as "0" instead of "1") cannot be "corrected" by long-term averaging. Once information is lost it is unrecoverable. Even with "dithering" (adding some white noise to jostle the LSB), our long-term average will still be limited by the honesty of the ADC.
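
A minimal sketch of both halves of that argument, reusing the 8-bit, 1 V full-scale numbers from above (the 1 mV offset is purely illustrative): dither plus averaging does recover a sub-LSB input, but a fixed internal error biases every reading the same way and never averages out.

    import numpy as np

    rng = np.random.default_rng(1)
    lsb = 7.8125e-3                    # the 8-bit, 1 V full-scale example above
    v_in = 4.0e-3                      # DC input just over half an LSB

    def adc(x, offset=0.0):
        # Ideal mid-tread rounding; 'offset' models a shifted comparator threshold.
        return np.round((x + offset) / lsb) * lsb

    dither = rng.normal(0.0, lsb, 200_000)   # added noise, roughly one LSB rms

    print(adc(v_in))                                  # 0.0078125: honest ADC says "1"
    print(adc(v_in, offset=-1e-3))                    # 0.0: the offset ADC says "0"
    print(adc(v_in + dither).mean())                  # ~0.0040: dither + averaging works
    print(adc(v_in + dither, offset=-1e-3).mean())    # ~0.0030: still off by the 1 mV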

It has been said that weak signal reception is all about the signal-to-noise ratio, not about the strength of the signal itself. I think in today's digital world with limited dynamic range that old adage needs to be rethought. Remember that truncating a numerical approximation to any number of bits less than infinity is equivalent to adding noise.
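
A minimal sketch of that equivalence, again with synthetic numbers: the residue thrown away by rounding to N bits behaves like an added noise of roughly one LSB divided by the square root of twelve, rms - which is where the 6.02N + 1.76 dB figure comes from.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 1_000_000)      # a busy, full-scale "signal"

    for bits in (16, 24):
        lsb = 2.0 / 2**bits
        err = np.round(x / lsb) * lsb - x       # what the rounding threw away
        print(f"{bits}-bit: error rms = {err.std():.2e}, LSB/sqrt(12) = {lsb / 12**0.5:.2e}")

    # The discarded residue is statistically just like an added noise of about
    # LSB/sqrt(12) rms, i.e. truncation really is equivalent to adding noise.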

Since manufacturers usually guarantee monotonicity, there is reason to believe a 24-bit ADC would be more "honest" than a 16-bit ADC, i.e. would give us numbers closer to the true underlying values. Audio aficionados sometimes claim they can hear the difference, saying a 24-bit ADC sounds better to them, often using a lot of non-scientific words to describe the improvement. My question is this: in a practical situation with low QRN/QRM, would going to a 24-bit ADC soundcard result in an ability to detect a weak LF signal that would not show up on a Spectrum Lab display computed from 16-bit values?

Bill VE2IQ









