Your thoughts about an undersampling LF receiver are very interesting!
The "tunable crystal filter" solves the problem of multiple alias passbands
associated with undersampling, but there are other problems that may
be harder to solve: sampling aperture time and aperture jitter.
An undersampling system needs some kind of sample & hold circuit. IIRC from
digital audio, the aperture time should be less than T/10 (T = sampling
period) in order to keep the high frequency rolloff below 1 dB. So if we
assume a hypothetical 300 kHz sampling rate (for 0 .. 140 kHz BW), the
acquisition time should be less than 300 ns, which should not be too hard to
implement :-). Since the hold time is about 170 us (6 kHz), we are talking
about a 1:500 acquisition/hold time ratio, which can be a problem due to hold
time leakage.
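
For what it's worth, here is that budget as a small Python sketch (only the
300 kHz, T/10 and 6 kHz figures above go in, nothing else):

# Rough acquisition/hold budget for the S&H, using the figures above
# (300 kHz hypothetical Nyquist rate, 6 kHz actual undersampling rate).
fs_nyquist = 300e3                     # hypothetical Nyquist-rate sampling, Hz
t_acq_max = (1.0 / fs_nyquist) / 10.0  # aperture < T/10 rule of thumb
fs_under = 6e3                         # actual undersampling rate, Hz
t_hold = 1.0 / fs_under                # hold period between samples

print("max acquisition time: %.0f ns" % (t_acq_max * 1e9))     # ~330 ns
print("hold time: %.0f us" % (t_hold * 1e6))                    # ~170 us
print("acquisition/hold ratio 1:%.0f" % (t_hold / t_acq_max))   # ~1:500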
The highest slew rate of a 135 kHz sine wave corresponds to about half the
peak-to-peak (full scale) value in just over 1 us. In a 16 bit system, this
corresponds to 32000 LSB in 1 us; thus, the sample time jitter should be less
than 30 ps in order to keep the peak amplitude error smaller than 1 LSB (the
average error is much smaller than this). I guess that crystal oscillators
built around badly bypassed CMOS gates should be avoided :-). With a good
quality primary reference crystal oscillator and a simple frequency divider
chain (with plenty of bypass capacitors), adequate jitter levels should be
obtainable.
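
Again as a quick sanity check in Python (assuming a full-scale 135 kHz sine
and a 16 bit scale, as above):

import math

f_in = 135e3                          # highest input frequency, Hz
full_scale_lsb = 2 ** 16              # peak-to-peak range of a 16 bit converter
# Maximum slew rate of a full-scale sine (Vpp/2)*sin(2*pi*f*t), in LSB/s:
slew_lsb_per_s = math.pi * f_in * full_scale_lsb
# Jitter that moves the sampled value by 1 LSB at the steepest point:
t_jitter = 1.0 / slew_lsb_per_s
print("max slew: %.0f LSB/us" % (slew_lsb_per_s / 1e6))         # ~27800 LSB/us
print("jitter for <1 LSB error: %.0f ps" % (t_jitter * 1e12))   # ~36 ps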
Perhaps quantisation noise will be a problem too (due to the low Fs), I don't
know...
At least in the old days of digital audio, when 14 and 16 bit ADCs were
used, a low amplitude (1 LSB) dithering noise was added to the audio signal
prior to entering the ADC, so that the quantisation noise was no longer
correlated with the desired audio signal. By adding dithering noise, low and
medium frequency tones could be reproduced well below -100 dB, even though the
theoretical limit for a 16 bit system would be -96 dB. When modern 20 .. 24
bit ADCs are used, the microphone and preamplifier noise is usually
sufficient to convert any quantisation noise into more or less white noise.
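
If it helps, here is a tiny Python/NumPy sketch of the dither idea; the tone
level and dither amplitude are arbitrary assumptions, just chosen so the
effect is easy to see in the error spectra:

import numpy as np

fs = 48000.0
n = 1 << 15
t = np.arange(n) / fs
lsb = 2.0 / 2 ** 16                            # 16 bit step on a +/-1 V range
x = 3 * lsb * np.sin(2 * np.pi * 1000.0 * t)   # tone only a few LSB strong

def quantise(sig, dither_lsbs=0.0):
    # Add uniform dither of the given amplitude (in LSB) before rounding
    d = dither_lsbs * lsb * (np.random.rand(len(sig)) - 0.5)
    return np.round((sig + d) / lsb) * lsb

def err_spectrum(y):
    return 20 * np.log10(np.abs(np.fft.rfft((y - x) * np.hanning(n))) + 1e-20)

plain = err_spectrum(quantise(x))          # shows harmonics of the 1 kHz tone
dithered = err_spectrum(quantise(x, 1.0))  # shows a flat, noise-like floor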
If some super-selective front end is used (say 250 Hz bandwidth) in the LF
receiver, the digital signal may contain signal-related quantisation noise,
so some form of dither noise may be required. However, if the analog
bandwidth is larger, I guess that the band noise may serve as dither noise.
Since the bandwidth after the sample & hold is only a few kHz, 16 to 24 bit
ADCs are readily available, and since these usually run at 48 to 192 kHz
sampling rates, one or two extra bits could be obtained when downsampling
these signals, provided that some kind of dithering noise is present. So I do
not think the quantisation noise is a big problem.
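
The "one or two extra bits" follow from the usual rule of thumb that, with
dithered (noise-like) quantisation noise, every factor-of-four reduction in
bandwidth buys one extra bit; a quick Python check with some assumed rates:

import math

def extra_bits(fs_adc, bw_out):
    # With noise-like quantisation noise, filtering the output down to
    # bw_out removes the noise outside that band: the SNR improves by
    # 10*log10((fs/2)/bw) dB, i.e. about one bit per factor of four.
    return 10 * math.log10((fs_adc / 2) / bw_out) / 6.02

print("48 kHz ADC, 3 kHz out:  %.1f extra bits" % extra_bits(48e3, 3e3))
print("192 kHz ADC, 3 kHz out: %.1f extra bits" % extra_bits(192e3, 3e3))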
In the previous message I forgot to mention why I suggested the 6 kHz
sampling rate for receiving the 135.7 to 137.9 kHz LF band. The harmonics of
the sampling frequency fall on 132 and 138 kHz, and if the input signal
is band limited to that band (or a small fraction of it), the alias against
the 138 kHz harmonic falls from 2.3 kHz down to 0.1 kHz (inverted spectrum).
However, if, say, a 5 kHz sampling rate is used, the harmonics would be 135
and 140 kHz; thus, frequencies around 137.5 kHz would alias against both of
these harmonics, making that section of the band more or less useless.
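
The folding arithmetic can be checked with a few lines of Python (only the
band edges and the two candidate sampling rates from above go in):

def alias(f, fs):
    # Frequency onto which f folds when sampled at fs
    f = f % fs
    return min(f, fs - f)

band = (135.7e3, 136.5e3, 137.5e3, 137.9e3)    # spot frequencies in the band
for fs in (6e3, 5e3):
    folded = ", ".join("%.1f" % (alias(f, fs) / 1e3) for f in band)
    print("fs = %.0f kHz -> aliases at %s kHz" % (fs / 1e3, folded))
# fs = 6 kHz folds the band onto 0.1 .. 2.3 kHz (spectrum inverted);
# fs = 5 kHz folds frequencies on either side of 137.5 kHz onto the same
# output frequencies (images from the 135 and 140 kHz harmonics overlap).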
Of course, sampling at or above 300 kHz (to satisfy the Nyquist criterion)
is out of the question, due to the poor availability of ADCs with such a high
sampling rate and the huge dynamic range required, and also the huge
processing power required.
Not necessarily out of the question... Hardware decimation chips *are*
available.
One example is the Intersil (formerly Harris) HSP50016, which has a 16-bit
parallel input capable of >50 MSPS.
When I return home from my vacation, I have to dig up the old Harris data
book, which might also contain a description of this chip. From your
description, it appears that it requires an external 16 bit ADC running at
the full sampling rate, and all this chip does is reduce the data rate.
The primary problem of converting the entire 0 to 150 kHz band with
sufficient accuracy still remains.
Using some kind of spectral density calculation, the total power of a 1 Hz
bandwidth desired signal (e.g. QRSS) is very small compared to the total power
in the whole 150 kHz band, even if that band is only occupied by white noise
with a spectral density only slightly lower than that of the desired signal.
If 16 bits are used to represent the whole bandwidth, then only 6 or 7 bits
are available to represent the 1 Hz BW desired signal. This assumes that the
unwanted signals are white noise, but in practice this is far from reality
on LF, so even fewer bits would be available to represent the desired signal.
Apparently the other unwanted signals would have some dithering effect,
shifting the threshold levels, but since these are not noise, there might be
some intermodulation products with the desired signal due to quantisation.
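
Putting rough numbers on that (a Python sketch, assuming the 150 kHz band is
filled with white noise of roughly the same spectral density as the desired
signal):

import math

band_bw = 150e3      # total converted bandwidth, Hz
signal_bw = 1.0      # QRSS-style desired signal bandwidth, Hz
adc_bits = 16

# How far a 1 Hz wide signal of comparable spectral density sits below the
# total (full-scale) band power, and how many bits that power ratio eats:
ratio_db = 10 * math.log10(band_bw / signal_bw)   # ~51.8 dB
bits_for_band = ratio_db / 6.02                   # ~8.6 bits
print("bits left for the 1 Hz signal: ~%.1f" % (adc_bits - bits_for_band))
# ~7.4 bits with white noise; real LF interference leaves even fewer.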
A suitable ADC for this kind of RX is the Analog Devices AD9260. This is a
16-bit ADC with an input BW of 1 MHz. It is "oversampling" at 20 MSPS and
contains an Fs/8 decimation filter on-chip. With a 20 MHz clock, the output Fs
is 2.5 MHz, which can be used as the clock for the HSP50016.
From this description, it appears as if this device is actually a 13 bit
flash ADC and the three extra bits are generated by the Fs/8 downsampling
:-). This assumes the existence of some dithering signal and that the
decision levels of the 13 bit ADC are ideal (no linearity error).
When looking at specifications for devices that appear to be far better than
ordinary contemporary devices, it is a good idea to look very carefully at how
these figures have been obtained. Usually these devices are designed for
some specific application, compromising some less critical specifications.
These devices might not work as expected when used in a completely
different environment.
Paul OH3LWR