Hello Bill,
if you reduce the data from 16 bit to 12 bit the way you did, you actually
reduce sensitivity by 24dB (you lose all sample values smaller than
16/65536 of full scale) and reduce the dynamic range from 96dB to 72dB.
So as long as the signal you want to catch is strong enough I don't expect to
see any significant difference.
But for a weak signal 12 vs 16 bit can make a big difference.
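(As a quick check of those figures, not part of Rik's mail: the usual
rule of thumb gives about 6 dB of dynamic range per bit, e.g. in Python:)

    import math

    def dynamic_range_db(bits):
        # Full-scale to one-LSB ratio in dB: 20*log10(2**bits), ~6.02 dB per bit.
        return 20 * math.log10(2 ** bits)

    print(dynamic_range_db(16))                         # ~96.3 dB
    print(dynamic_range_db(12))                         # ~72.2 dB
    print(dynamic_range_db(16) - dynamic_range_db(12))  # ~24.1 dB lost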
73, Rik ON7YD - OR7T
________________________________________
From: [email protected] [[email protected]]
on behalf of Bill de Carle [[email protected]]
Sent: Saturday, December 3, 2011 6:57
To: [email protected]
Subject: Re: LF: 16 bit vs 24 bit ADC?
First of all, thanks to everyone for the great comments, much appreciated.
I don't have a 24-bit ADC here to run Jim's test but tried another
experiment with interesting results. Tonight Stefan, DK7FC is
transmitting his callsign using DFCW on 136.172 kHz. Watching the
screen, I noticed Stefan's signal was at times marginal here in
Ontario Canada so I decided to record it for an hour, change the
recorded data from 16-bit samples to 12-bit samples, then process the
file using Spectrum Lab with exactly the same settings. Basically I
threw away the least significant 4 bits of each signed 16-bit value
from the ADC by AND-ing all the samples with a mask of FFF0.
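(For illustration, one way to do that chopping step offline; this is a
sketch rather than Bill's actual tool, and it assumes a 16-bit PCM WAV
recording with the hypothetical file names in.wav and out12.wav:)

    import wave
    import numpy as np

    with wave.open("in.wav", "rb") as src:        # original 16-bit recording
        params = src.getparams()
        data = np.frombuffer(src.readframes(src.getnframes()), dtype=np.int16)

    chopped = data & np.int16(-16)                # -16 == 0xFFF0: clear the 4 LSBs

    with wave.open("out12.wav", "wb") as dst:     # same format, 12 significant bits
        dst.setparams(params)
        dst.writeframes(chopped.tobytes())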
My recording covers the period from 0048z thru 0148z on December 3rd,
2011 (some 329.6 MB). The sample rate was 48000 Hz and I injected an
accurate 10 kHz audio reference tone through the ADC so SL could use
it to correct for sound card sample rate drift. Wolf's sample-rate
correction algorithm seems to work very well for both live ADC data
and played back data. I was using a 4096-point FFT with 512x
decimation to give a bin width of 22.8882 mHz, an equivalent noise
bandwidth of 34.3323 mHz, and a screen scroll rate of one pixel every
10 seconds.
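(For anyone checking those figures: the decimated rate is 48000/512 =
93.75 Hz, and the quoted ENBW is 1.5 bins, which would correspond to a
Hann window; the window choice is my assumption, not stated in the mail.)

    fs = 48000          # sound card sample rate, Hz
    decim = 512         # Spectrum Lab decimation factor
    fft_len = 4096

    bin_width = fs / decim / fft_len   # 0.0228882 Hz  -> 22.8882 mHz
    enbw = 1.5 * bin_width             # 0.0343323 Hz  -> 34.3323 mHz (Hann assumed)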
The SL noiseblanker was enabled for both real-time and off-line
processing. I really expected to see significant degradation of this
already marginal signal when using the 12-bit data, but the results
surprised me. In the attached image
16_vs_12.jpg you can see the two traces. The top image is the one
taken from a screen capture as the live data was coming in, using
16-bit ADC data; the bottom image was captured (at the same jpg
quality) from a second instance of Spectrum Lab running on another
computer but processing the data chopped to 12 bits. Not only was
there no obvious degradation, in some respects there seemed to be an
actual improvement! In no case was any portion of the trace
discernible in the top image but not in the bottom. The background
of the 12-bit data seems to be a little more obvious (visually loud?)
but the traces of Stefan's signal are brighter too. I actually find
the 12-bit bottom image easier to read.
73,
Bill VE2IQ
At 12:02 PM 12/2/2011, Jim Moritz M0BMU wrote:
>Dear Bill, Andy, LF Group,
>
>It seems to me it should be quite easy to subject this to practical
>testing. First, using SpecLab etc., measure the SNR of a weak signal
>in the presence of noise, with the FFT parameters chosen so that the
>signal is near the quantisation noise level. Then attenuate the
>signal and noise at the ADC input by, say, 24dB. This would
>effectively reduce the resolution/accuracy of the A-D conversion by
>4 bits (I guess you would want to choose the signal, noise and
>attenuation levels so that the external noise was well above the
>sound card or other ADC noise floor, with and without the
>attenuation). Then see if the SNR in the FFT output has changed.
>
>Cheers, Jim Moritz
>73 de M0BMU
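(Jim's test can also be mimicked in software. The following is a rough
simulation of the idea, with arbitrary signal and noise levels chosen by
me, not something from the thread: as long as the attenuated external
noise still sits well above one quantisation step, the measured SNR
should hardly move.)

    import numpy as np

    fs, n = 48000, 1 << 20
    rng = np.random.default_rng(0)
    k = 200000                                  # FFT bin carrying the test tone
    tone = 1e-3 * np.sin(2 * np.pi * k * np.arange(n) / n)
    noise = 1e-2 * rng.standard_normal(n)       # "external" noise, hundreds of LSBs

    def snr_db(x, atten_db):
        y = x * 10 ** (-atten_db / 20)          # attenuate at the "ADC input"
        q = np.round(y * 32767) / 32767         # ideal 16-bit quantiser
        spec = np.abs(np.fft.rfft(q * np.hanning(n))) ** 2
        sig_p = spec[k - 2:k + 3].sum()         # tone energy, spread over a few bins
        noise_p = 5 * np.median(spec)           # rough noise estimate over same width
        return 10 * np.log10(sig_p / noise_p)

    print(snr_db(tone + noise, 0), snr_db(tone + noise, 24))
    # Both SNR figures come out nearly the same, as Jim predicts.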