We have seen a tendency to use ultra-narrow bandwidths to coax out
weak signals not otherwise discernible. This calls for enhanced
frequency stability at both ends, but that is not the only factor
involved. At some stage we use an ADC to convert the analog signal
to digital, thereafter working only with numerical approximations of
instantaneous voltages: that's where information can be lost.
Traditionally, 16-bit ADCs have served well - they're readily
available and inexpensive. At some point though, the signal we are
looking for becomes just too small to register on a 16-bit ADC. It
seems intuitive that in a pristine noiseless environment, if a feeble
sinewave has insufficient amplitude to toggle the LSB of our ADC it
might as well not exist at all. Clearly, if a signal produces no
effect on a radio receiver (where "receiver" includes the ADC looking
at its output) - the end result is the same as if that signal wasn't
there. No amount of post-processing or extended sampling time is
ever going to detect it.
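
To make this concrete, here is a minimal sketch of my own (Python,
assuming a +/-1 V full scale and an arbitrary sample rate): a
sinewave whose amplitude stays below half an LSB, digitized by an
ideal rounding 16-bit converter with no noise present, produces
nothing but zero codes, so no post-processing can bring it back.

import numpy as np

bits = 16
full_scale = 1.0                        # assumed +/-1 V full scale
lsb = 2 * full_scale / 2**bits          # LSB of a bipolar 16-bit ADC

fs = 8000.0                             # assumed sample rate, Hz
t = np.arange(0, 10.0, 1 / fs)          # 10 seconds of samples
weak = 0.4 * lsb * np.sin(2 * np.pi * 1000.0 * t)  # amplitude < LSB/2

codes = np.round(weak / lsb)            # ideal (honest) rounding quantizer
print(codes.any())                      # False: every output code is zero
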
One might argue that adding a little noise, so that the sum of noise
plus weak signal is enough to change the ADC output, would solve the
problem. Unfortunately, things aren't that easy. Let's remember that
an ADC *approximates* some true value; the quality of such an
approximation isn't entirely specified by the number of bits used to
represent the ADC's output. There are other factors, in particular
the "correctness" of the result. For example, say an 8-bit ADC is
supposed to resolve 7 amplitude bits plus a sign. If the full-scale
input is specified to be 1 volt, the least significant bit represents
7.8125 millivolts. Assuming all the other bits are zeros, for the
LSB to be honest, it should say "1" if the input voltage is greater
than or equal to 3.90625 millivolts (i.e. half of 7.8125 mV), and
"0" otherwise. ADC manufacturers will guarantee monotonicity, but
that's much easier to achieve than the absolute honesty we'd like.
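
As a quick check of the numbers in that example (my own sketch,
assuming an ideal rounding quantizer):

full_scale_v = 1.0
amplitude_bits = 7
lsb_v = full_scale_v / 2**amplitude_bits  # 0.0078125 V  = 7.8125 mV
threshold_v = lsb_v / 2                   # 0.00390625 V = 3.90625 mV
print(lsb_v * 1e3, threshold_v * 1e3)     # 7.8125  3.90625
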
We sometimes take many consecutive readings of a parameter, then
average them to get a better idea of the true value. If we take
hundreds of measurements known to be correct to one decimal place, we
might have some confidence in expressing the average to two decimal
places. That would be justified if 1) the parameter being measured
doesn't change significantly over the sampling period (or changes
only in a constrained way, e.g. within an FFT) and 2) the ADC is
honest. Any
intrinsic error in our ADC (say the one above has a comparator offset
and reports 4.0000 millivolts as "0" instead of "1") cannot be
"corrected" by long-term averaging. Once information is lost, it is
unrecoverable. Even with "dithering" (adding some white noise to
jostle the LSB), our long-term average will still be limited by the
honesty of the ADC.
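
Here is a small simulation of my own that models this argument: with
dither, long-term averaging does recover a DC level well below one
LSB from an honest ADC, but a shifted comparator threshold (the
"4 mV reported as 0" case above) leaves a bias that no amount of
averaging removes. The 16-bit resolution, dither width and 0.2-LSB
threshold error are illustrative assumptions, not measured values.

import numpy as np

bits = 16
lsb = 2.0 / 2**bits                     # +/-1 V full scale assumed

def faulty_quantize(v, lsb, thresh_err):
    codes = np.round(v / lsb)           # start from an ideal rounding ADC
    # the comparator for the 0 -> +1 decision sits too high by
    # thresh_err, so inputs just above LSB/2 still read back as code 0
    stuck = (v >= 0.5 * lsb) & (v < 0.5 * lsb + thresh_err)
    codes[stuck] = 0.0
    return codes

rng = np.random.default_rng(1)
n = 200_000
true_v = 0.3 * lsb                      # DC level below the LSB/2 threshold
dither = rng.uniform(-lsb / 2, lsb / 2, n)

honest = np.round((true_v + dither) / lsb) * lsb
faulty = faulty_quantize(true_v + dither, lsb, 0.2 * lsb) * lsb

print(true_v / lsb, honest.mean() / lsb, faulty.mean() / lsb)
# roughly 0.3, ~0.30, ~0.10 -- the faulty ADC's bias never averages out
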
It has been said that weak signal reception is all about the
signal-to-noise ratio, not about the strength of the signal itself. I
think that in today's digital world, with its limited dynamic range,
that old
adage needs to be rethought. Remember that truncating a numerical
approximation to any number of bits less than infinity is equivalent
to adding noise.
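
To put a rough number on that equivalence: for an ideal N-bit
converter the quantization error behaves like added noise of power
LSB^2/12, which for a full-scale sinewave gives the familiar SNR of
about 6.02*N + 1.76 dB. A quick comparison (ideal converters only,
real hardware will be worse):

for bits in (16, 24):
    snr_db = 6.02 * bits + 1.76         # ideal quantizer SNR, in dB
    print(f"{bits}-bit: about {snr_db:.0f} dB SNR relative to full scale")
# prints roughly 98 dB for 16 bits and 146 dB for 24 bits
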
Since manufacturers usually guarantee monotonicity, there is reason
to believe a 24-bit ADC would be more "honest" than a 16-bit ADC,
i.e. that it will give us numbers closer to the true underlying
values. Audio aficionados sometimes claim they can hear the
difference, saying a 24-bit ADC sounds better to them, often using a
lot of non-scientific words to describe the improvement. My question
is this: In a practical situation with low QRN/QRM, would going to a
24-bit ADC soundcard result in an ability to detect a weak LF signal
that would not show up on a Spectrum Lab display computed from 16-bit values?
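
One way to explore the question numerically (a rough sketch only; the
signal level, noise level, sample rate and FFT length below are
arbitrary assumptions, not measurements): quantize the same weak tone
plus a little band noise at 16 and at 24 bits, take a long FFT much
as Spectrum Lab would, and compare how far the tone bin stands above
the surrounding noise floor in each case.

import numpy as np

fs = 8000.0
n = 2**20                               # about 131 s of data, long FFT
t = np.arange(n) / fs
rng = np.random.default_rng(2)

lsb16 = 2.0 / 2**16
tone = 0.2 * lsb16 * np.sin(2 * np.pi * 1370.5 * t)  # below one 16-bit LSB
noise = 1.5 * lsb16 * rng.standard_normal(n)         # "low QRN" assumption
analog = tone + noise

for bits in (16, 24):
    lsb = 2.0 / 2**bits
    digitized = np.round(analog / lsb) * lsb         # ideal N-bit quantizer
    spec = np.abs(np.fft.rfft(digitized * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    k = np.argmin(np.abs(freqs - 1370.5))            # bin nearest the tone
    floor = np.median(spec[k - 200:k + 200])         # local noise floor
    print(f"{bits}-bit: tone bin is "
          f"{20 * np.log10(spec[k] / floor):.1f} dB above the local floor")
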
Bill VE2IQ