VO1NA 2015-11-29/30 8K19A 2S 5C
20:00 6.2 dB
20:30 7.0 dB
21:00 3.2 dB
21:30 1.4 dB
22:00 5.4 dB
22:30 no decode
23:00 8.7 dB
23:30 6.8 dB
00:00 no decode
00:30 5.4 dB
01:00 4.9 dB
01:30 no decode
02:00 no decode (Andy 1.5 dB)
02:30 no decode
03:00 1.5 dB
03:30 1.1 dB
04:00 no decode
04:30 no decode
05:00 no decode
05:30 no decode
06:00 -0.3 dB
06:30 1.4 dB (Andy 1.4 dB)
07:00 3.3 dB
07:30 2.8 dB
08:00 no decode
08:30 1.2 dB
09:00 -1.6 dB
09:30 no further decodes
Andy wrote:
> ... an outer checksum for message validity. Why does
> this allow the garbage decodes to get through - surely
> that must be failing the checksum. Or are they the few
> out of millions of possible garbage decodes that pass the
> checksum criteria?
Yes, just random matches with the checksum.
With intensive list decoding combined with brute force phase
search, a large number of candidates are presented to the
outer layer for testing against the CRC.
With a 5 character message, there are about 17.5 bits available
for the outer layer to use for validation. Each Viterbi
decode therefore has a random 1/2^17.5 ~= 1/185000 chance of
accidentally passing the outer layer. If you have a list size
of 6000 and do a full phase search (108 Viterbi runs), there are
6000 * 108 = 648000 trials at the outer layer. The probability
that *none* of these will be accepted by the outer layer is
(1 - 1/185000) ^ 648000 ~= 0.03,
meaning that with this list size and message size you've got a
97% chance of getting at least one false decode.
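For anyone who wants to check the arithmetic, here is a small
Python sketch of the numbers above (the 17.5 outer bits, list size
6000 and 108 phase runs are all taken from the figures quoted in
this post):

    # Probability of at least one false decode slipping past the CRC.
    outer_bits = 17.5              # bits available on a 5 character message
    p_pass = 2 ** -outer_bits      # one random candidate passing, ~1/185000
    trials = 6000 * 108            # list size * full phase search = 648000
    p_clean = (1 - p_pass) ** trials
    print(f"P(no false decode)  = {p_clean:.3f}")      # ~0.03
    print(f"P(>=1 false decode) = {1 - p_clean:.3f}")  # ~0.97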
If we allocate more bits to the outer layer we increase its coding
gain, but the inner (Viterbi) layer then has to carry those extra
bits and so loses some of its own coding gain.
For each message size there's probably an optimum number of outer
layer bits and list size which maximises the overall coding gain,
but I haven't worked that out yet. I'd guess that it occurs when
the probability of false outer layer acceptance equals the
probability of inner layer failure to decode.
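To illustrate just the outer half of that trade-off (the matching
inner-layer loss would need a model of the Viterbi decoder, which
isn't attempted here), this sketch shows how the false-acceptance
probability falls as outer bits are added, keeping the same 648000
trials as above:

    # Outer-layer side of the trade-off only: false-acceptance
    # probability vs outer bits, at a fixed 648000 CRC trials.
    trials = 6000 * 108
    for bits in (16, 18, 20, 22, 24, 26):
        p_false = 1 - (1 - 2 ** -bits) ** trials
        print(f"{bits:2d} outer bits: P(false decode) = {p_false:.4f}")

Once the rate is well below 1, each pair of extra outer bits cuts
the false-decode probability by roughly a factor of four, at the
cost of those bits coming out of the inner layer's budget.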
It needs a rigorous study before making any code changes, but at
present the balance is probably about right at 20 or 25 chars:
the outer code is too weak on shorter messages and too strong on
longer ones.
Some notes on outer layer performance are in the appendix:
http://abelian.org/ebnaut/appendix.shtml
As I mentioned in a previous post, the operator looking at
the list of decoder outputs is adding another layer of list
decoding, and the necessary bits for this come from an implicit
restriction of the message space.
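A back-of-envelope way to put a number on that implicit restriction
(the figures below are illustrative guesses, not from any
measurement): if the character set allows M messages of a given
length but an operator would only accept K of them as plausible,
the operator contributes about log2(M/K) bits of validation.

    import math
    # Illustrative guesses only: a 38 symbol alphabet, and an
    # operator who would accept ~10000 of the possible 5 character
    # messages as plausible.
    total = 38 ** 5              # ~7.9e7 possible messages
    plausible = 10_000
    print(f"operator adds ~{math.log2(total / plausible):.1f} bits")  # ~13 bits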
--
Paul Nicholson