Dear LF Group,
At 09:41 05/12/2001 +0000, Laurie wrote:
...but it would seem sensible to have defined time and
frequency "slots" into which the signal could drop,so that all the available
energy could be used rather than it be spread out in timeand frequency.
I think we have been here before ... If you choose the optimum resolution
for the speed of QRSS you are using, you will get the best signal-to-noise
ratio between "dot" and "no dot" when all the samples containing a signal
are used for the FFT algorithm that calculates the spectrum of the signal.
This suggests you would get the brightest dot on the screen if the start of
the FFT is made to coincide with the start of the dot. But in fact this
more or less happens already; suppose you have 30 s dots and have set up
the FFT to use 30 s worth of samples. Normally, the spectrogram software
will perform an FFT at least once every few seconds, let's say 3 seconds,
using the received signal from the previous 30 s. Even if the relative
timing of the dots and FFT is totally un-synchronised, at least one FFT
will start within 3 seconds of the start of the dot being transmitted, and
so be very close to optimum. The FFTs performed on data from before and
after the optimum time will contain less signal, so the dot displayed on
the screen will fade in and out, reaching a peak of intensity in the
middle. The only effect of synchronising the start of the FFT with the dots
will be to eliminate the sub-optimum FFTs, and retain the optimum one where
the timings coincided - the effect would be the same as superimposing an
opaque mask on the screen with slots coinciding with the timing of the
transmitted dots. I suppose this would not be hard to do, but would it be
an advantage? You would get rid of some "clutter" on the display, but it
would be harder to tell when bursts of noise and so on had occurred. The
display would not actually contain any more signal information, though.
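
Just to put some numbers on the fade-in/fade-out, here is a rough
Python/numpy sketch of my own - the sample rate, signal frequency and
timings are arbitrary, and it is not how any particular spectrogram program
is written - showing 30 s FFTs started every 3 s sliding over a single 30 s
dot buried in noise:

import numpy as np

fs = 100.0                        # sample rate in Hz (arbitrary, kept low)
dot_len = 30.0                    # dot length in seconds
fft_len = int(dot_len * fs)       # 30 s worth of samples per FFT
step = int(3.0 * fs)              # start a new FFT every 3 s
f_sig = 10.0                      # carrier frequency in Hz (arbitrary)

# 120 s of receiver noise with one weak 30 s dot keyed from t = 45 s to 75 s
t = np.arange(int(120.0 * fs)) / fs
key = (t >= 45.0) & (t < 75.0)
rx = 0.2 * np.sin(2 * np.pi * f_sig * t) * key + np.random.randn(t.size)

# With the FFT length matched to the dot, the bin width is 1/dot_len Hz,
# so the carrier falls in bin number f_sig * dot_len
bin_idx = int(round(f_sig * dot_len))
for start in range(0, t.size - fft_len + 1, step):
    spec = np.fft.rfft(rx[start:start + fft_len])
    print(f"FFT starting at t = {start / fs:5.1f} s: "
          f"carrier bin magnitude = {abs(spec[bin_idx]):.0f}")

The carrier-bin magnitude ramps up as the window slides into the dot, peaks
for the window that starts at t = 45 s, and ramps down again - the same
brightening and fading of the dot that you see on the screen.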
I think Rik's idea of displaying 2 tones differentially ought to work, but
it would place quite stringent demands on frequency stability. The current
spectrograms are not too fussy about exact frequency, so long as the drift
is smaller than the FFT resolution during one dot period, and the signal
stays on the screen. But if we were to compare two tones, the error in the
frequency shift would have to be smaller than the resolution of the FFT - a
few millihertz with the longer dot lengths. Judging from last year's
experience with Wolf, it is quite hard to set up and maintain this kind of
accuracy throughout the transmit/receive system when the equipment being
used includes amateur-type rigs, sound cards etc.
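
To see the sort of figures involved, assuming the FFT length is matched to
the dot length, the bin width is simply 1 / (dot length), so a quick sketch
in the same Python style as above shows how small the tolerable
frequency-shift error becomes at the slower speeds:

# FFT resolution for an FFT window matched to the dot length
for dot_len in (3, 10, 30, 60, 120):          # dot lengths in seconds
    print(f"{dot_len:4d} s dots -> FFT resolution about "
          f"{1000.0 / dot_len:6.1f} mHz")

With 120 s dots this comes to roughly 8 mHz, so the transmitted shift and
the receiver tuning together would have to be held to a few millihertz for
the two tones to land in the intended bins.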
BTW: I hope to get my DFCW modulator finished as soon as I get a bit of
time to do it - should be a real advantage for someone whose callsign is
mainly dashes!
Cheers, Jim Moritz
73 de M0BMU