Hello Laurie & group,
I have thought about synchronising the Tx and Rx using some form of
time signal, but 1) this might be thought of as cheating and 2) the path
variations, phase changes etc. would be a problem. So my point is: WHY
would it not still be possible to read the message without these gaps
(which are hardly there anyway)?
My suggestion is to use 'time synchronized' QRSS or DFCW. With 'time
synchronized' I mean that any dot/dash will start at a known time (e.g. if
you use 60 second dots, a dot/dash will always start at the full minute;
for 30 second dots it would be at the full minute or the half minute).
This requires sufficiently accurate timing at the TX and RX side. But at
dot lengths of 30 seconds upwards (as used in TA tests) a 1 second accuracy
would be sufficient, and this can be achieved by setting the PC clock
manually with a radio-controlled clock (e.g. DCF77) as reference.
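As a rough sketch of the timing idea above: once the PC clock is set accurately, TX and RX only need to agree on the dot length, and every symbol boundary falls on a multiple of the dot length. Here the grid is counted from the Unix epoch (which itself starts on a full minute), so 60 second dots align to full minutes and 30 second dots to full and half minutes; the function name is my own, not from any existing software.

```python
import time

def next_symbol_boundary(dot_len_s, now=None):
    """Return the epoch time at which the next dot/dash may start,
    assuming symbols are aligned to a grid of dot_len_s seconds
    counted from the Unix epoch (an illustrative convention)."""
    if now is None:
        now = time.time()
    # Floor to the current grid slot, then step to the next boundary.
    return (now // dot_len_s + 1) * dot_len_s
```

A transmitter would simply sleep until `next_symbol_boundary(...)` before keying each element, and a receiver uses the same grid to place its analysis windows.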
What would be the advantages:
1. No more need for gaps; even the 5, H, 0 etc. can be copied without doubt.
2. No more need for 'over-FFT-ing' in the RX software:
Software such as ARGO takes FFTs at a rate of several per second, while in
principle only 1 FFT per dot is sufficient, provided the FFT window contains
exactly 1 dot (or space). Maybe the 'calculating power' of the PC can be used
to analyze this one FFT instead of making many FFTs of the same dot.
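To illustrate the 'one FFT per dot' idea: if the window covers exactly one dot, a single test on the bin at the signal frequency can decide key-down versus key-up. This is only a sketch; the 10x-above-median threshold and the function name are my assumptions, not how ARGO actually works.

```python
import numpy as np

def detect_symbol(samples, fs, signal_freq):
    """One FFT over exactly one dot period of audio samples.
    The bin nearest signal_freq is compared against the median bin
    power, used here as a crude noise-floor estimate."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - signal_freq))
    noise = np.median(spectrum)
    # 'key down' if the signal bin stands well above the noise floor
    return spectrum[k] > 10.0 * noise
```

With time-synchronized symbols the RX knows exactly which samples belong to each dot, so one such call per dot replaces the many overlapping FFTs.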
I refer to DF4YHF's software (Spectrum Lab), where one can manually change
the 'reference level' and 'dynamic range'. I found that this can be very
useful to 'dig into the noise'. Based on this experience I believe that some
kind of 'intelligent AGC' (that controls reference level and dynamic range)
could do the same automatically.
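One simple way such an 'intelligent AGC' could pick the display settings from the data itself, sketched below: put the floor just under the typical (median) bin power and cap the span. The margin and span values are arbitrary assumptions for illustration and the names do not correspond to Spectrum Lab's internals.

```python
import numpy as np

def auto_display_range(power_db, margin_db=3.0, span_db=30.0):
    """Choose a display floor ('reference level') and ceiling from an
    array of spectrum bin powers in dB.  The floor sits margin_db below
    the median bin power; the ceiling is limited to span_db above it."""
    floor = np.median(power_db) - margin_db
    ceiling = min(np.max(power_db), floor + span_db)
    return floor, ceiling
```

Recomputing this every few seconds would track slow changes in band noise, much like re-adjusting the sliders by hand.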
Regarding 'cheating': making skeds with very accurate arrangements of the
transmitting periods is common practice for EME, MS etc., so I don't
think that this will be a problem on LF.
73, Rik ON7YD