Re: LF: 16 bit vs 24 bit ADC?

To: [email protected]
Subject: Re: LF: 16 bit vs 24 bit ADC?
From: Andy Talbot <[email protected]>
Date: Fri, 2 Dec 2011 16:18:44 +0000
Bill -

It's not that bad.
Consider a weak signal with a peak-to-peak amplitude of less than one
quantisation step, in the presence of larger-amplitude noise that
toggles the first few bits of the A/D.   Over many samples a
statistical weighting builds up.  When the weak signal is negative, it
pulls the noise samples down slightly; when it goes positive, the
average of the noise values is increased.

When the signal then passes through a decimation process, i.e. is
downconverted, filtered and reduced in sampling rate, the averaging
lifts it out of the noise and the signal pops up.   It is easy to see
this in action with any narrow-band signal on an FFT, as an FFT is, in
effect, a decimation process.
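A minimal numerical sketch of the effect (assuming numpy is to hand;
the rates and amplitudes are arbitrary illustration values, not taken
from any real system):

    # A sine of 0.3 LSB peak amplitude, buried in noise that toggles the
    # low bits of a 1-LSB quantiser, still pops out of an FFT, the FFT
    # acting as the narrow-band filter / decimator.
    import numpy as np

    fs = 48000                                  # sample rate, Hz
    n = 1 << 20                                 # about 22 s of data
    t = np.arange(n) / fs
    sig = 0.3 * np.sin(2 * np.pi * 1000.0 * t)  # peak < 1 quantisation step
    adc = np.round(sig + 1.5 * np.random.randn(n))  # noisy 1-LSB A/D model

    spectrum = np.abs(np.fft.rfft(adc * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    print(freqs[1 + np.argmax(spectrum[1:])])   # ~1000 Hz, well above the floor

With 0.046 Hz bins the FFT gives roughly 57 dB of processing gain,
which is what lifts the sub-LSB tone clear of the noise.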

In fact, any sigma-delta converter is doing exactly this.  And all
codecs in soundcards are SD type.  A sigma-delta converter in its
simplest form is a one-bit A/D running at a very fast sampling rate.
It doesn't rely on noise added to the signal, but instead generates a
feedback signal to optimise the successive signal estimation, but it
is still a one-bit converter which, after decimation (slowing the
sampling rate down to 48kHz or whatever), gives the 16 or 24 bits of
result.
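A minimal first-order modulator sketch (textbook structure, assuming
numpy; no resemblance claimed to any particular codec):

    # First-order sigma-delta modulator: a 1-bit quantiser inside a
    # feedback loop, run at a highly oversampled rate, then averaged
    # back down to a multi-bit result.
    import numpy as np

    def sigma_delta(x):
        bits = np.empty_like(x)
        acc, fb = 0.0, 0.0
        for i, sample in enumerate(x):
            acc += sample - fb               # integrate the error
            fb = 1.0 if acc >= 0 else -1.0   # 1-bit quantiser, fed back
            bits[i] = fb
        return bits

    osr = 256                                # oversampling ratio
    fs = 48000 * osr                         # one-bit rate, ~12.3 MHz
    t = np.arange(int(fs * 0.01)) / fs       # 10 ms of test signal
    bits = sigma_delta(0.5 * np.sin(2 * np.pi * 440.0 * t))

    # Decimation back to 48 kHz: a crude boxcar average of each block of
    # 256 one-bit values (real converters use proper sinc filters).
    audio = bits.reshape(-1, osr).mean(axis=1)

The boxcar average is the crudest possible decimation filter, but it
already shows multi-bit resolution emerging from a one-bit stream.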

As a more practical example, your own design of 8-bit A/D used for
Coherent many years ago, and my PIC equivalent that did the same job,
could be used for very narrow-band and weak-signal work, and was, in
the days before soundcards made that sort of converter obsolete.   An
8-bit A/D with 10kHz sampling (in my case) should only offer about
48dB (6dB per bit) of dynamic range, and you'd think it would be
incapable of quantising at levels where you could still hear a signal.
But that was certainly not the case.   After two or three stages of
decimation it would easily show QRSS signals that were way below the
quantisation level.    The DSP used in those days didn't permit very
long FFTs, so it was necessary to first reduce the sampling rate by
mixing down, filtering and decimating, then do a short FFT.
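The same chain, sketched with numpy (the tone frequency, amplitudes
and decimation factors here are made-up illustration values):

    # Mix down, filter, decimate in stages, then a short FFT, modelled
    # on a coarse A/D running at 10 kHz sampling.
    import numpy as np

    fs = 10000
    t = np.arange(fs * 60) / fs                     # one minute of samples
    weak = 0.2 * np.sin(2 * np.pi * 2500.3 * t)     # well below one step
    adc = np.round(weak + np.random.randn(len(t)))  # noisy coarse A/D

    bb = adc * np.exp(-2j * np.pi * 2500.0 * t)     # mix 2.5 kHz to near DC
    for factor in (10, 10, 5):           # 10 kHz -> 1 kHz -> 100 Hz -> 20 Hz
        lp = np.ones(factor) / factor    # crude boxcar low-pass per stage
        bb = np.convolve(bb, lp, mode="same")[::factor]

    # 1200 points at 20 Hz sampling: a short FFT now has ~0.017 Hz bins,
    # and the 0.3 Hz offset tone shows up even though it never toggled
    # the LSB on its own.
    spec = np.abs(np.fft.fft(bb * np.hanning(len(bb))))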

All of which can now be done directly with a soundcard, and with long FFTs.

So, unless there are 'other' artefacts, like linearity or monotonicity
errors, that a cheap A/D might introduce, then after decimation /
filtering / FFT processing there is not going to be any difference
between the result seen from a 16-bit and a 24-bit A/D.   Especially
as they are both sigma-delta types anyway.

Andy
www.g4jnt.com



On 2 December 2011 15:53, Bill de Carle <[email protected]> wrote:
> We have seen a tendency to use ultra narrow bandwidths to coax out weak
> signals not otherwise discernible.  This calls for enhanced frequency
> stability at both ends, but that is not the only factor involved.  At some
> stage we use an ADC to convert the analog signal to digital, thereafter
> working only with numerical approximations of instantaneous voltages: that's
> where information can be lost.
>
> Traditionally, 16-bit ADC's have served well - they're readily available and
> inexpensive.  At some point though, the signal we are looking for becomes
> just too small to register on a 16-bit ADC.  It seems intuitive that in a
> pristine noiseless environment, if a feeble sinewave has insufficient
> amplitude to toggle the LSB of our ADC it might as well not exist at all.
>  Clearly, if a signal produces no effect on a radio receiver (where
> "receiver" includes the ADC looking at its output) - the end result is the
> same as if that signal wasn't there.  No amount of post-processing or
> extended sampling time is ever going to detect it.
>
> One might argue that adding a little noise such that the sum of noise + weak
> signal will then be enough to change the ADC output will solve the problem.
>  Unfortunately things aren't that easy.  Let's remember that an ADC
> *approximates* some true value; the quality of such an approximation isn't
> entirely specified by the number of bits used to represent the ADC's output,
> there are other factors, in particular the "correctness" of the result.  For
> example, say an 8-bit ADC is supposed to resolve 7 amplitude bits plus a
> sign.  If the full-scale input is specified to be 1 volt, the least
> significant bit represents 7.8125 millivolts.  Assuming all the other bits
> are zeros, for the LSB to be honest, it should say "1" if the input voltage
> is greater than or equal to 3.90625 millivolts (i.e. half of 7.8125 mV),
> and "0" otherwise.  ADC manufacturers will guarantee monotonicity, but
> that's much easier to achieve than the absolute honesty we'd like.
>
> We sometimes take many consecutive readings of a parameter then average them
> to get a better idea of the true value.  If we take hundreds of measurements
> known to be correct to one decimal place, we might have some confidence in
> expressing the average to 2 decimal places.  It would be justified if 1) the
> parameter being measured doesn't change significantly over the sampling
> period (or it changes in a constrained way e.g. in an FFT) and 2) the ADC is
> honest.  Any intrinsic error in our ADC (say the one above has a comparator
> offset and reports 4.0000 millivolts as "0" instead of "1") cannot be
> "corrected" by long term averaging.  Once information is lost it is
> unrecoverable.  Even with "dithering" (adding some white noise to jostle the
> LSB), our long term average will still be limited by the honesty of the ADC.
>
> It has been said that weak signal reception is all about the signal-to-noise
> ratio, not about the strength of the signal itself. I think in today's
> digital world with limited dynamic range that old adage needs to be
> rethought.  Remember that truncating a numerical approximation to any number
> of bits less than infinity is equivalent to adding noise.
>
> Since manufacturers usually guarantee monotonicity, there is reason to
> believe a 24-bit ADC would be more "honest" than a 16-bit ADC, i.e. will
> give us numbers which are closer to the true underlying values.  Audio
> aficionados sometimes claim they can hear the difference, saying a 24-bit
> ADC sounds better to them, often using a lot of non-scientific words to
> describe the improvement.  My question is this: In a practical situation
> with low QRN/QRM, would going to a 24-bit ADC soundcard result in an ability
> to detect a weak LF signal that would not show up on a Spectrum Lab display
> computed from 16-bit values?
>
> Bill VE2IQ

