I'm puzzled by the I/Q inconsistency. Is there no convention
on the signs in the complex mixer? Perhaps it is
vlfrx-tools and ebnaut that need to change the Q sign?
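For what it's worth, the ambiguity is easy to demonstrate. A toy
sketch (not the vlfrx-tools mixer, just illustrative numbers):
flipping the sign of Q conjugates the complex baseband signal,
which negates the apparent frequency offset, so the two
conventions are mirror images of each other.

```python
import cmath, math

fs = 1000.0          # sample rate, Hz (arbitrary for the demo)
f_off = 7.0          # tone sits 7 Hz above the mixer LO

# Complex baseband samples with one sign convention: I + jQ
# rotates anticlockwise for a positive frequency offset.
z = [cmath.exp(2j * math.pi * f_off * n / fs) for n in range(1000)]

def measured_offset(sig):
    # Estimate frequency from the mean phase step between samples.
    step = sum(sig[n + 1] * sig[n].conjugate()
               for n in range(len(sig) - 1))
    return cmath.phase(step) * fs / (2 * math.pi)

print(round(measured_offset(z), 3))   # prints 7.0
# Negating Q is the same as conjugating -- the tone now appears
# on the other side of the LO:
zq = [s.conjugate() for s in z]
print(round(measured_offset(zq), 3))  # prints -7.0
```

So if the two ends of the link disagree on the convention, one
sees the signal mirrored in frequency, which is consistent with
the symptom above.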
I'd like to thank everyone for giving this coherent BPSK a
try at LF: those taking part and the rest putting up with the
extra list traffic. We've got some worthwhile results and
pushed things to the limit in a few directions, which has
revealed a few bugs and weaknesses.
Now I'm back to the drawing board to make some alterations.
One of the things being looked at is the CRC size, which
determines the strength of the outer error detection code
that selects correct solutions from the Viterbi list output.
More CRC bits mean less chance of a false detection and
therefore more coding gain from the outer layer. But more
CRC bits also mean extra overhead to be carried by the Viterbi
layer, which reduces its coding gain.
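The outer-layer selection amounts to walking the ranked Viterbi
list and taking the first candidate the CRC accepts. A sketch
(illustrative only: a stand-in CRC and a fixed 16 bit size, not
ebnaut's actual polynomials or list decoder):

```python
import zlib

CRC_BITS = 16
MASK = (1 << CRC_BITS) - 1

def crc(payload: bytes) -> int:
    # Stand-in check: low bits of CRC-32.  A real system picks
    # a polynomial matched to the chosen CRC size.
    return zlib.crc32(payload) & MASK

def encode(payload: bytes) -> bytes:
    return payload + crc(payload).to_bytes(2, "big")

def select(candidates):
    """Return the first (best-metric) candidate passing the CRC."""
    for cand in candidates:      # ordered best metric first
        body = cand[:-2]
        tag = int.from_bytes(cand[-2:], "big")
        if crc(body) == tag:
            return body
    return None                  # no acceptable codeword

msg = encode(b"73!")
# A corrupted candidate that happens to rank higher is rejected
# by the outer layer and the correct codeword is taken instead:
garbled = msg[:-1] + bytes([msg[-1] ^ 0xFF])
print(select([garbled, msg]))    # prints b'73!'
```

More CRC bits shrink the chance that a wrong candidate slips
through this check, which is where the extra outer-layer gain
comes from.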
Some trials of 4 character messages using 8K19A seem to indicate
that when CRC size is increased, the extra outer layer gain
is roughly cancelled by an equal loss of inner layer gain.
The result is that the overall coding gain doesn't change
significantly.
Some figures after 10 hours of 120 cores running trials at
Eb/N0 = -1.0 dB:
CRC_bits  best_list   trials  success_rate
       8        50      708      51.0%
      12       400      605      50.3%
      14      3000      973      51.9%
      16     11000     1653      53.8%
      18     55000     1938      52.8%
      20    200000     1131      52.3%
      22    500000     1565      54.4%
      24   2000000     1767      53.1%
For each CRC size there's an optimum list size for best success
rate and the success rate drops if you use longer or shorter
lists. That's the expected behaviour. But interestingly, there
isn't a significant change in overall success rate between 8 bits
and 24 bits. The outer code strength increases considerably
when going up to 24 bits, but the loss of Viterbi gain is
also high. The 4 char message itself is only 24 bits, so a
24 bit CRC doubles the number of bits that must be encoded in
the same duration of message (necessary to keep Eb/N0 constant).
When I repeat this test with longer messages the conclusion
may well be different, because the extra burden of the CRC
bits will be a smaller proportion of the message payload.
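To put rough numbers on that overhead (my own arithmetic, not
figures from the trials): with transmit power and message
duration fixed, carrying K payload bits plus C CRC bits leaves
each bit K/(K+C) of the energy a CRC-free message would give it.

```python
import math

def crc_overhead_db(payload_bits: int, crc_bits: int) -> float:
    # Energy-per-bit penalty from squeezing the CRC bits into
    # the same transmission, in dB.
    return 10 * math.log10((payload_bits + crc_bits)
                           / payload_bits)

# 4 char message: 24 payload bits, so a 24 bit CRC doubles
# the bit count, a 3 dB penalty on the inner layer:
print(round(crc_overhead_db(24, 24), 2))   # prints 3.01
# A longer message dilutes the same CRC considerably:
print(round(crc_overhead_db(120, 24), 2))  # prints 0.79
```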
But first I need to work out a faster way of running these
trials or AWS/EC2 will be making a profit out of me.
I should say a bit about the test protocol. The decoder is
allowed to complete its phase search and output the strongest
(lowest BER) decode that it found. If this is the correct
message it counts as a success. If it's not the correct message
it's a false hit. Or there could be no message at all if the
Viterbi list failed to turn up any codeword acceptable to the
outer layer. Here there is no operator to cast an eye over
a list of hits looking for a callsign. Each trial has to take
the solution with the lowest number of errors.
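The scoring rule can be sketched like this (a hypothetical
harness, not the actual test code; the decoder is assumed to
hand back CRC-accepted candidates with their error counts):

```python
def score_trial(decodes, sent):
    """Classify one trial.

    decodes: (error_count, message) pairs the outer layer
             accepted; sent: the true transmitted message.
    """
    if not decodes:
        return "no decode"       # nothing acceptable to the CRC
    errors, best = min(decodes)  # lowest-error solution wins,
    return ("success" if best == sent   # no operator in the loop
            else "false hit")

print(score_trial([(3, "GM 73"), (7, "XQZZY")], "GM 73"))
# prints success
print(score_trial([(2, "XQZZY")], "GM 73"))  # prints false hit
print(score_trial([], "GM 73"))              # prints no decode
```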
For this experiment I'm restricting the phase search to only
uniform phase in 30 degree steps. A full phase search would
probably have an effect on the optimum list sizes.
--
Paul Nicholson