LF: EbNaut

To: [email protected]
Subject: LF: EbNaut
From: Paul Nicholson <[email protected]>
Date: Thu, 24 Dec 2015 22:04:32 +0000
Reply-to: [email protected]
Sender: [email protected]
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130106 Thunderbird/17.0.2

While sitting here slowly consuming my own weight in mince
pies, I've been busy with EbNaut and can report some
progress.

The repeated list decodings during the phase search often
produce several decodes which have a valid outer code CRC.
The decoder has to choose which of these to present as the
final result of the run.   In these tests, success or failure
depends only on the computer making the right decision - there
is no operator looking at the list output to spot the callsign.

The policy used up to now by EbNaut is to keep the decode
which has the lowest BER.  That would seem to be an obvious
and natural method of selecting the final decode of the run.

But there are two other possible schemes: one would be to select
the decode with lowest list ranking, and another would be to
select the decode which has the overall maximum likelihood in
the Viterbi layer.
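The three policies can be sketched as follows. This is an
illustration only, not EbNaut's actual code; the candidate
values and the sign convention of the path metric are assumed:

```python
# Sketch of the three selection policies over the CRC-valid
# candidates from one run (illustrative values, not real data).
# Each candidate: (hard-decision BER, rank in the list output,
# Viterbi path metric; larger metric = more likely, by an
# assumed sign convention).
candidates = [
    (0.18, 3, -41.2),
    (0.21, 0, -43.5),
    (0.20, 7, -39.8),
]

lowest_ber     = min(candidates, key=lambda c: c[0])  # policy used so far
lowest_rank    = min(candidates, key=lambda c: c[1])  # ~59% in the tests
max_likelihood = max(candidates, key=lambda c: c[2])  # best here: ~60%
```

Note the three policies can pick three different candidates
from the same list, which is why their success rates differ.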

It turns out that the three methods perform differently when
tested.   In fact, 50 dollars of CPU time demonstrates that
choosing the decode with the lowest BER consistently has the
worst overall success rate.

For example, 4 character messages with 8K19A at Eb/N0 = -1.0dB
produce a success rate of about 54% with an optimum list size
of around 10000, when selecting the lowest BER solution.

Switching to selecting the lowest rank produces a success rate
of nearly 59% and this method no longer has the complication
of an optimum list size: as you increase the list size, the
success rate doesn't reach a peak and then start to fall.
Instead it rises asymptotically.

Selecting the overall maximum likelihood turns out to have
the best success rate of the three methods, achieving 60%,
and it too has no optimum list size.

The reason for this is that the BER arithmetic uses the hard
decision Hamming distance but the decoder uses the soft decision
Euclidean distance to construct the path metrics.  The Euclidean
distance gives the true maximum likelihood with Gaussian noise.
Quite often a false detection with a less likely path metric
actually has a lower Hamming distance than the correct decode.
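A small numeric sketch of that effect, assuming BPSK with
bit 0 mapped to +1 and bit 1 to -1 (the received values and
codewords here are invented for illustration):

```python
# A false codeword can have the lower Hamming distance yet the
# larger (less likely) Euclidean distance, because hard slicing
# throws away how strong each received symbol was.
received = [0.1, -0.1, 1.2, 0.9]        # noisy soft symbol values

def euclidean(bits):                    # soft-decision metric
    tx = [1 - 2 * b for b in bits]      # bit 0 -> +1, bit 1 -> -1
    return sum((r - t) ** 2 for r, t in zip(received, tx))

def hamming(bits):                      # hard-decision metric
    hard = [0 if r > 0 else 1 for r in received]
    return sum(a != b for a, b in zip(hard, bits))

correct = [1, 0, 0, 0]  # differs from hard slicing in two weak symbols
false   = [0, 1, 1, 0]  # differs in one strong symbol

# hamming(false) = 1 < hamming(correct) = 2, yet
# euclidean(false) > euclidean(correct): the false decode wins
# on Hamming distance but loses on true likelihood.
```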

A 6% improvement in success rate might not sound like much -
it represents about 0.1dB of extra coding gain.  But when
experimenting this close to the limit there are no large gains
to be had, only little increments.   It's quite nice to find
such a simple improvement and with hindsight it should have been
fairly obvious.  But I was led there by trying to calculate the
optimum list size and, however I tried, the calculation said
there wasn't one.  So that led me to reconsider the criterion
which selects the final result of the phase search.

I'm still looking at CRC lengths because there's a broad and
shallow optimum emerging for each message length.

So far it looks like a 5 character message prefers a CRC of
about 22 bits, while a 12 character message does best with
16 bits.  Getting the right CRC length could add another 0.1 dB
of coding gain.
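A rough back-of-envelope model of one side of that trade-off
(my own sketch, not EbNaut's actual analysis): a random wrong
codeword passes an n-bit CRC with probability about 2^-n, so a
list of L candidates yields roughly L * 2^-n false CRC passes,
while each extra CRC bit adds overhead to the coded message.

```python
# Expected number of false CRC passes from a list of wrong
# candidates (rough model: each wrong codeword passes an n-bit
# CRC with probability 2**-n, independently).
def expected_false_passes(list_size, crc_bits):
    return list_size * 2.0 ** -crc_bits

# With a list of 10000: a 16-bit CRC gives ~0.15 false passes
# per run; 22 bits cuts that 64-fold, at the cost of 6 more
# overhead bits - hence a broad, shallow optimum.
```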

I'm working on finding smarter ways to test instead of brute
force trials, and making some progress.  So far I can evaluate
a message length over a range of CRCs in about an hour with
160 cores and 560 Gbyte of RAM, at a cost of less than 2 dollars.

--
Paul Nicholson
--
