I did ask for this some time ago, and Richard sent me the sources for an older
version of the code, explaining that it was actually a hard problem, but that I
was welcome to have a go if I wanted, etc...
The issue is that we have to move from 32-bit to 64-bit arithmetic in the most
heavily optimised section of the code. I did take a quick look at the code in
the FFT butterfly, and it looked quite hard.
The improvement in bandwidth would also be of use for Doppler propagation
studies, as per the RADcom article of about two years ago.
Regards
Stewart
Andy Talbot wrote:
>I don't know what "dwell" really means in this topic, but I
>think it is the time of data collection and folding/averaging
>in the background. What does "smoothing" mean? In the time or
>frequency domain? If it means "data windowing", the WELCH
>algorithm outperforms the others (Rectangular, Gaussian, Parzen,
>Hanning, etc.) since it does not "smear" too much,
>especially in the case of a LORAN-C background.
My understanding of the GRAM software is that the 'dwell' time is just a
period of wasted time idling between performing an FFT on each set of data
rather than increasing the FFT length. If this is the case then there is no
advantage at all in going to longer dot lengths USING THE GRAM SOFTWARE as
there is no way of further narrowing the receive bandwidth with a narrower
FFT bin. In which case all the power being transmitted during each dwell
period is going to waste. I seem to recall that the author of the
programme quickly included the dwell option at the request of LF operators,
but I suspect that, not being particularly interested in the radio-related
aspects of that software, he did not try to take it any further. I may be
wrong, but the dwell option appeared so quickly that a simple delay is the
most likely route for generating it.
The lowest bandwidth possible is therefore from 8000 Hz sampling with a
16384-point FFT, giving a bin size of 0.49 Hz and therefore a bandwidth of
close to 1 Hz using a Hamming window. So, with 1 Hz BW, the 3-second
dots everyone seems to have settled on as being optimum will fill three
FFT frames and show up clearly as a line of three pixels or more, depending
on what overlap has been incorporated in the data sampling. But there will
NEVER be any advantage in going to longer dots unless the sampling rate is
reduced (will GRAM allow 5513 Hz sampling?) or the FFT size can be
increased above 16384.
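As a quick check on that arithmetic, a few lines of Python (the 1.36-bin
figure is just the standard equivalent noise bandwidth of a Hamming window,
nothing specific to GRAM):

    import numpy as np

    fs = 8000.0      # sound card sample rate, Hz
    n_fft = 16384    # FFT length used above

    bin_hz = fs / n_fft            # FFT bin spacing, ~0.488 Hz
    enbw_hz = 1.36 * bin_hz        # Hamming noise bandwidth, ~0.66 Hz
    fft_s = n_fft / fs             # time spanned by one FFT, ~2.05 s
    frames_per_dot = 3.0 / fft_s   # ~1.5 frames with no overlap, ~3 at 50% overlap

    print(f"bin {bin_hz:.3f} Hz, ENBW {enbw_hz:.2f} Hz, "
          f"frames per 3 s dot: {frames_per_dot:.2f}")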
I use the DSP EVM board to mix the audio down to DC and then data reduce /
filter to arbitrarily low bandwidths to give the full advantage of going to
longer dot periods with no wasted dwell time. The current limit is in the
region of a reciprocal-minute bandwidth, around 10 mHz, meaning dot lengths
of 40-60 seconds could be used, and it would be a straightforward job to
increase the decimation further.
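For anyone curious, the mix-down-and-decimate idea is easy to sketch in
Python/scipy. This is not the EVM code; the centre frequency and decimation
factors below are purely illustrative:

    import numpy as np
    from scipy import signal

    fs = 8000.0   # codec sample rate
    f_c = 800.0   # audio centre frequency of the wanted signal (illustrative)

    def mix_down_and_decimate(audio):
        """Complex mix to DC, then filter and decimate to a 10 Hz output rate."""
        n = np.arange(len(audio))
        baseband = audio * np.exp(-2j * np.pi * f_c * n / fs)  # mix to 0 Hz
        for q in (10, 10, 8):            # decimate in stages, 800 overall
            baseband = signal.decimate(baseband, q, ftype="fir")
        return baseband                  # complex samples at 10 Hz

    # A 1024-point FFT on the 10 Hz output gives ~9.8 mHz bins, i.e. the
    # "around 10 mHz" reciprocal-minute region mentioned above.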
GRAM does not use the mix down and decimate technique, but it just may be
possible for the author to include a longer FFT routine as an option if
asked. I have seen million point FFTs processed on Pentium PCs in less than
one second, so it is certainly possible! It is also possible to ignore
alternate samples of the input data, giving an effectively reduced input
sampling rate - that is what decimation is all about - but the audio must
then be pre-filtered to prevent any aliasing terms being generated, as no
filtering is being done in software.
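A toy example of what goes wrong without that pre-filtering (frequencies
chosen purely for illustration): a 2.5 kHz tone, sampled at 8 kHz and then
"decimated" by simply dropping alternate samples, reappears at 1.5 kHz.

    import numpy as np

    fs = 8000.0
    t = np.arange(int(fs)) / fs             # one second of audio
    tone = np.sin(2 * np.pi * 2500.0 * t)   # 2.5 kHz, above the new Nyquist limit

    dropped = tone[::2]                     # ignore alternate samples, no filtering
    fs_new = fs / 2                         # effective sample rate is now 4 kHz
    spectrum = np.abs(np.fft.rfft(dropped))
    peak_hz = np.argmax(spectrum) * fs_new / len(dropped)
    print(f"peak at {peak_hz:.0f} Hz")      # 1500 Hz: the tone has aliased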
However, if a CW receiver with a narrow filter giving an audio output in the
region of, say, 500-900 Hz is used, there is no reason why the 8 kHz sampling
can't be decimated by four, giving an effective sampling rate of 2 kHz and a
quarter of the resolution bandwidth (four times the resolution) with the same
16K FFT. That is an immediate 6 dB S/N improvement for a software mod as
straightforward as including a dwell time.
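The 6 dB figure is just the factor-of-four reduction in noise bandwidth
expressed in dB:

    import numpy as np

    bin_before = 8000.0 / 16384   # ~0.49 Hz with 8 kHz sampling
    bin_after = 2000.0 / 16384    # ~0.12 Hz after decimating by four
    print(f"S/N gain: {10 * np.log10(bin_before / bin_after):.1f} dB")   # 6.0 dB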
6 dB of capability is being wasted anyway, using these non-coherent energy
detection techniques, but that's another story...
Andy G4JNT