At 09:57 PM 9/10/2009, W1TAG wrote:
J.B.,

For work at 500 kHz, I'd be tempted to go no longer than 16
frames. I'm beginning to think that the fast fades and poorer
phase stability hurt the build-up of copy. While you may get your
first clear copy on the 31st line, I'll bet that you would have
gotten it quicker by flushing things out earlier.

That said, 8 lines appears to be too short for any reasonable copy
of M0BMU here. I used 12 earlier in the week with decent results,
and have gone back to it.

WOLF was originally set up as a batch mode, but the GUI adaptation
made it convenient to run it continuously. However, some of the
batch-mode philosophy still comes into play because the
"integration time" builds up gradually: it starts with one frame,
builds up to a resultant over n frames, then dumps everything and
starts all over again at one frame, etc. That isn't particularly
compatible with continuous operation.

It would probably be better to do it the way we used to in the
COHERENT/AFRICA BPSK software, i.e. to keep a "running" average.
COHERENT/AFRICA had a parameter called GRAB, which specified the
number (n) of consecutive frames to be averaged together. On
startup, the algorithm worked just like WOLF: it built up
gradually, using first one frame, then two, etc., until n frames
were in the average. Thereafter it continued to use only the
*last* n frames. It was done by keeping a set of accumulators and
a large historical database of past samples. Each time a new frame
was received, its values were added into the accumulators and
simultaneously the oldest values (from n frames back) were
subtracted off. That way we had a sliding window n frames wide,
moving continuously forward through time.
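
Something like this, say (only a sketch, not the actual
COHERENT/AFRICA code; GRAB = 12 and the names are just for
illustration):

#define GRAB     12             /* n: frames in the sliding window */
#define FRAMELEN 960            /* values per frame (cf. WOLF's
                                   960-bit blocks)                  */

static double acc[FRAMELEN];          /* running sums               */
static double hist[GRAB][FRAMELEN];   /* last n frames, ring buffer */
static int    count = 0;              /* frames received so far     */

/* Fold one newly received frame into the running average.
   Returns how many frames are currently in the average: it grows
   1..GRAB on startup (just like WOLF builds up), then stays at
   GRAB as the window slides forward in time. */
int add_frame(const double *frame)
{
    int slot = count % GRAB;          /* the oldest frame is here   */
    for (int i = 0; i < FRAMELEN; i++) {
        if (count >= GRAB)
            acc[i] -= hist[slot][i];  /* subtract the oldest values */
        acc[i] += frame[i];           /* add in the new ones        */
        hist[slot][i] = frame[i];     /* and remember them          */
    }
    count++;
    return count < GRAB ? count : GRAB;
}

The average itself is then just acc[i] divided by whatever
add_frame() returned.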

One advantage was that we were working with much shorter frames
(e.g. 16 or 32 bits vs 960 bits for WOLF blocks), so the amount of
memory and computation required was considerably less. Obviously
we cannot afford to be compute-bound at the end of each frame for
more than the time it takes to receive the next frame. On the
other hand, we have much faster processors these days and memory
isn't an issue.

COHERENT/AFRICA could integrate in two ways: by sample or by
decoded bit. Integrating by sample gives better results, but it is
impractical for long GRABs unless the timing is locked to GPS at
both ends: once the accumulated phase error from the frequency
offset reaches half a cycle, adding in more samples actually hurts
more than it helps because destructive interference sets in.
Integrating after bit decoding allowed us to run for much longer
times, because it still pays off until the offset reaches half a
bit-time (50 msec vs 62.5 usec).
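
The arithmetic behind that limit is simple enough; a few
back-of-the-envelope numbers (mine, not taken from either
program):

#include <stdio.h>

/* With a frequency offset df between the two ends, the phase
   error grows by df cycles every second, so sample-level
   integration stops paying off once the run exceeds roughly
   0.5/df seconds -- the point where destructive interference
   sets in. */
int main(void)
{
    const double offsets[] = { 0.5, 0.1, 0.01 };  /* Hz, examples */
    for (int i = 0; i < 3; i++)
        printf("df = %5.2f Hz -> useful coherent time ~ %5.1f s\n",
               offsets[i], 0.5 / offsets[i]);
    return 0;
}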

WOLF also does a frequency search at each update, which multiplies
the amount of compute time, whereas COHERENT/AFRICA assumed the
frequency was a known parameter. We did eventually incorporate a
tracking filter, which was an attempt to follow a slowly-changing
frequency, but that never proved to be very helpful.
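
To show what I mean about the cost of that search, a brute-force
version might look like this (again only a sketch; NSAMP and
sync_metric() are invented placeholders, not WOLF's actual code):

#include <math.h>

#define NSAMP 8192    /* samples per update -- an invented figure */

/* Placeholder for the real test, which would correlate the
   derotated signal against the known PRN sync pattern. */
static double sync_metric(const double *si, const double *sq)
{
    double sum = 0.0;
    for (int n = 0; n < NSAMP; n++)
        sum += si[n];             /* trivial stand-in only        */
    (void)sq;
    return sum;
}

/* Derotate the whole buffer at each trial offset and keep the one
   that matches the sync pattern best.  Every trial costs a full
   pass over the data, which is why the search multiplies the
   compute time. */
double best_offset(const double *in_i, const double *in_q,
                   double fs, double span, double step)
{
    static double ri[NSAMP], rq[NSAMP];
    double best = -1e300, best_df = 0.0;

    for (double df = -span; df <= span; df += step) {
        for (int n = 0; n < NSAMP; n++) {
            double ph = -6.2831853 * df * n / fs;  /* -2*pi*df*t  */
            ri[n] = in_i[n] * cos(ph) - in_q[n] * sin(ph);
            rq[n] = in_i[n] * sin(ph) + in_q[n] * cos(ph);
        }
        double m = sync_metric(ri, rq);
        if (m > best) { best = m; best_df = df; }
    }
    return best_df;               /* frequency estimate, in Hz    */
}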

The optimum value of n can be expected to vary over time depending
on current propagation conditions. Perhaps some kind of adaptive
filter could adjust the value of n continuously to give the best
decode of WOLF's 480-bit PRN sync channel?
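
For what it's worth, the simplest version I can picture is a crude
hill climber (invented names, pure speculation on my part):

/* Nudge n by one frame each update, and reverse direction
   whenever the sync-channel correlation gets worse. */
int adapt_n(int n, double sync_now, double sync_prev,
            int n_min, int n_max)
{
    static int dir = 1;              /* current search direction   */
    if (sync_now < sync_prev)        /* did the last change hurt?  */
        dir = -dir;                  /* then back off              */
    n += dir;
    if (n < n_min) n = n_min;        /* keep n within sane bounds  */
    if (n > n_max) n = n_max;
    return n;
}
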
Bill VE2IQ