rsgb_lf_group
Re: LF: 16 bit vs 24 bit ADC?

To: [email protected]
Subject: Re: LF: 16 bit vs 24 bit ADC?
From: Andy Talbot <[email protected]>
Date: Sat, 3 Dec 2011 10:23:10 +0000
In-reply-to: <[email protected]>
References: <[email protected]> <[email protected]> <CAA8k23RcqVfXC9MWoTWm9yESwuSG5ZP6fFFZA2y2j-C4f0DPHw@mail.gmail.com> <80A163C9035C44FBA23C7501118A6FDF@JimPC> <[email protected]>
Reply-to: [email protected]
Sender: [email protected]
Which is exactly the point I was making earlier.  When a signal is
buried in noise, and subject to post-processing that decimates the
sampling rate, the initial quantisation noise becomes irrelevant.

Since my earlier posting, I've had a thought about the matter and
reckon an approximation can be made along these lines:

Assume the quantisation noise is equal to one bit RMS (a bit of a
crude assumption, but stay with it for now) and call it Nq.

Place your receiver noise so that it covers the first four bits of
quantisation (levels 0 - 15) and take that as the RMS value (ignoring
the crest factor is probably the same bodge as the one-bit Nq assumed
before), then add the two.  It's clear that the input noise will be
greater than Nq by 20.log(16) = 24dB.  When Nq is added to the Rx
noise, it will increase the overall noise power in the ratio
 (1 + 16^2) / 16^2 = 257 / 256, or by about 0.017dB.
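
A quick back-of-envelope check of that arithmetic (a minimal sketch in
Python, using the same crude one-bit Nq and 16-LSB Rx noise assumptions;
the powers of the two uncorrelated noise sources simply add):

import math

nq = 1.0    # quantisation noise, 1 LSB RMS (the crude assumption above)
rx = 16.0   # receiver noise, 16 LSB RMS (covers the bottom four bits)

ratio_db = 20 * math.log10(rx / nq)          # Rx noise above Nq
total = math.sqrt(rx**2 + nq**2)             # RMS sum of uncorrelated noise
increase_db = 20 * math.log10(total / rx)    # rise in the overall noise floor

print(f"Rx noise above Nq: {ratio_db:.1f} dB")     # ~24.1 dB
print(f"Noise floor rise : {increase_db:.3f} dB")  # ~0.017 dB, i.e. 257/256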

So, provided your input noise is comfortably above the quantising
threshold, it really doesn't matter as far as weak signals are
concerned.

Where greater sampling resolution really scores is in dynamic range.
If all you are doing is quantising the audio output from an SSB filter
then you are probably limited by the total dynamic range of the Rx,
which is only going to be 60dB or so, and any old A/D of 12 bits or
more will work.

If you are downconverting directly with no AGC and a straight linear
mixing process, then the 120dB range of a 24-bit soundcard is probably
going to score when impulsive noise and strong signals come along to
drive it into saturation.  But only assuming you make the most of the
dynamic range in each case by putting a smaller mean level into the
higher-resolution A/D.
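
For a feel of the numbers, the ideal SNR of an N-bit converter
(full-scale sine against quantisation noise) is roughly 6.02N + 1.76 dB.
A small sketch, assuming ideal converters; real 24-bit soundcards are
held to roughly the 110 - 120dB region by their analogue stages:

# Ideal full-scale SNR of an N-bit ADC: approximately 6.02*N + 1.76 dB
for bits in (12, 16, 24):
    print(f"{bits:2d} bits: {6.02 * bits + 1.76:6.1f} dB")
# 12 bits:   74.0 dB  (comfortably above a ~60dB SSB receiver chain)
# 16 bits:   98.1 dB
# 24 bits:  146.2 dB  (theoretical only; the analogue stages limit this)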

Andy
G4JNT



On 3 December 2011 05:57, Bill de Carle <[email protected]> wrote:
> First of all, thanks to everyone for the great comments, much appreciated.
>
> I don't have a 24-bit ADC here to run Jim's test but tried another
> experiment with interesting results.  Tonight Stefan, DK7FC is transmitting
> his callsign using DFCW on 136.172 kHz.  Watching the screen, I noticed
> Stefan's signal was at times marginal here in Ontario Canada so I decided to
> record it for an hour, change the recorded data from 16-bit samples to
> 12-bit samples, then process the file using Spectrum Lab with exactly the
> same settings.  Basically I threw away the least significant 4 bits of each
> signed 16-bit value from the ADC by AND-ing all the samples with a mask of
> FFF0.   My recording covers the period from 0048z thru 0148z on December
> 3rd, 2011 (some 329.6 MB).  The sample rate was 48000 Hz and I injected an
> accurate 10 kHz audio reference tone through the ADC so SL could use it to
> correct for sound card sample rate drift.  Wolf's sample-rate correction
> algorithm seems to work very well for both live ADC data and played back
> data.  I was using a 4096-point FFT with 512 X decimation to give a bin
> width of 22.8882 mHz, equivalent noise bandwidth of 34.3323 mHz and screen
> scroll rate was one pixel every 10 seconds.  The SL noiseblanker was enabled
> for both real-time and off-line processing.  I really expected to see
> significant degradation of this already marginal signal when using the
> 12-bit data but the results surprised me.  In the attached image
> 16_vs_12.jpg you can see the two traces.  The top image is the one taken
> from a screen capture as the live data was coming in, using 16-bit ADC data;
> the bottom image was captured (at the same jpg quality) from a second
> instance of Spectrum Lab running on another computer but processing the data
> chopped to 12 bits.  Not only was there no obvious degradation, in some
> respects there seemed to be an actual improvement!  In no case was any
> portion of the trace discernible in the top image but not in the bottom.
>  The background of the 12-bit data seems to be a little more obvious
> (visually loud?) but the traces of Stefan's signal are brighter too.  I
> actually find the 12-bit bottom image easier to read.
> 73,
> Bill VE2IQ
>
> At 12:02 PM 12/2/2011, Jim Moritz M0BMU wrote:
>>
>> Dear Bill, Andy, LF Group,
>>
>> It seems to me it should be quite easy to subject this to practical
>> testing. First, using SpecLab etc., measure the SNR of a weak signal in the
>> presence of noise, with the FFT parameters chosen so that the signal is near
>> the quantisation noise level. Then attenuate the signal and noise at the
>> ADC input by, say, 24dB. This would effectively reduce the
>> resolution/accuracy of the A-D conversion by 4 bits (I guess you would want
>> to choose the signal, noise and attenuation levels so that the external noise
>> was well above the sound card or other ADC noise floor, with and without the
>> attenuation). Then see if the SNR in the FFT output has changed.
>>
>> Cheers, Jim Moritz
>> 73 de M0BMU
>
>
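
For anyone wanting to repeat Bill's 16-to-12-bit comparison, the
truncation he describes amounts to clearing the four least significant
bits of each signed 16-bit sample (AND with FFF0).  A minimal sketch in
Python; the file names and the numpy/scipy route are my own assumptions,
not Bill's actual tools:

import numpy as np
from scipy.io import wavfile

# Read 16-bit PCM, clear the 4 LSBs of every sample, write it back out.
rate, samples = wavfile.read("capture_16bit.wav")   # int16 samples expected
mask = np.int16(-16)                   # -16 is 0xFFF0 in two's complement
wavfile.write("capture_12bit.wav", rate, samples & mask)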

