When receiving a DX signal, many people set Argo to a faster speed than
the transmitted dot length, for instance using the 60s dot setting to
receive a QRSS120 signal. But this does not seem to be common practice
for more local contacts at faster speeds.
Is this because Argo does not work well at very slow speeds? Or is it
something to do with DX propagation? Is it that Argo has to compromise
between the best S/N on a dash and on a dot, so the dot loses out
(which would affect DFCW worst)? Do other people see the same effect
on other spectrograms (e.g. SpecLab)?
I am not attempting to rubbish any software. I am just trying to
understand why this deliberate mismatch of Tx and Rx speeds is done
in practice when theory suggests that it should make things worse.
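
To put a rough number on what theory says, here is a small Python
sketch I put together. It is purely my own illustration (the sample
rate, tone frequency and signal level are made up, and it says nothing
about Argo's internals): it assumes a drift-free carrier in white
noise and treats each screen column as a coherent FFT integration, so
the noise in one bin falls in proportion to the integration time and a
120 s integration should show a dot about 3 dB better than a 60 s one.

import numpy as np

# Illustrative (made-up) parameters: low sample rate so it runs fast,
# a weak drift-free tone buried in unit-variance white Gaussian noise.
fs = 100.0        # sample rate, Hz
f0 = 12.5         # tone frequency, Hz (sits exactly on an FFT bin)
dot_len = 120.0   # QRSS120 dot length, seconds
amp = 0.05        # carrier amplitude, well below the noise

rng = np.random.default_rng(0)

def peak_bin_snr_db(T, trials=20):
    """Average S/N (dB) of the FFT bin nearest f0 when one dot is
    integrated coherently over T seconds (T <= dot_len here)."""
    n = int(T * fs)
    t = np.arange(n) / fs
    ratios = []
    for _ in range(trials):
        carrier = amp * np.cos(2 * np.pi * f0 * t) * (t < dot_len)
        x = carrier + rng.standard_normal(n)
        spec = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(n, 1 / fs)
        k = int(np.argmin(np.abs(freqs - f0)))
        noise = np.median(np.delete(spec, k))  # rough noise floor
        ratios.append(spec[k] / noise)
    return 10 * np.log10(np.mean(ratios))

for T in (60.0, 120.0):   # "60s dot" screen vs one matched to the dot
    print(f"{T:5.0f} s integration -> peak-bin S/N "
          f"about {peak_bin_snr_db(T):4.1f} dB")

Run as written it should print close to a 3 dB advantage for the 120 s
integration over the 60 s one, which is the theoretical penalty of the
mismatch that I am asking about.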
Mike, G3XDV