Re: LF: Eb/N0 values for amateur modes

To: [email protected]
Subject: Re: LF: Eb/N0 values for amateur modes
From: Paul Nicholson <[email protected]>
Date: Sun, 11 Jan 2015 13:14:13 +0000
In-reply-to: <[email protected]>
References: <[email protected]> <[email protected]> <[email protected]> <FD91181D329143B38573246548F8F478@gnat> <[email protected]>
Reply-to: [email protected]
Sender: [email protected]
User-agent: Mozilla/5.0 (X11; Linux i686; rv:17.0) Gecko/20130106 Thunderbird/17.0.2

Pieter-Tjerk, PA3FWM wrote:

> one could argue those bits have also been transferred almost
> always correctly and could thus have been used for data.

I have to agree with you that it can be appropriate sometimes.

When measuring the performance of different convolutional code
polynomial sets, I counted all the bits going into the convolver
as payload, because I wanted a score for just that part of the
system.
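
To make clear which bits are being counted there: the 'payload'
in that scoring is the input stream to the convolver, before the
rate expansion.  A minimal sketch (illustrative only -- the K=7
tap pair 0o171/0o133 below is the common textbook choice, not
necessarily one of the sets that were under test):

  # Rate-1/2 convolutional encoder: every payload bit in produces
  # two coded bits out, plus a flush of k-1 tail bits.
  def conv_encode(bits, polys=(0o171, 0o133), k=7):
      state = 0
      out = []
      for b in bits + [0] * (k - 1):        # append tail bits
          state = (state >> 1) | (b << (k - 1))
          for p in polys:
              out.append(bin(state & p).count('1') & 1)
      return out

  payload = [1, 0, 1, 1, 0, 0, 1]           # 7 payload bits in...
  print(len(conv_encode(payload)))          # ...26 coded bits out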

> O.t.o.h., if the CRC is used to select among multiple likely
> outcomes of the decoding of the (outer) code, then of course
> those bits are coding overhead and not payload.

An example: when using a Viterbi 'list' decoder, the CRC is an
essential part of the protocol, because the decoder produces a
list (maybe thousands) of candidate decodes and the machine has
to choose the correct one.  The CRC then acts as an outer
error-detection layer on top of the coding and must not be
counted as payload.
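
In code, that selection step amounts to something like this (a
simplified sketch, assuming a CRC-16-CCITT check and a candidate
list already sorted most-likely first; the real decoder's CRC
polynomial and data layout may well differ):

  def crc16_ccitt(data, crc=0xFFFF):
      # Plain bitwise CRC-16-CCITT, polynomial 0x1021.
      for byte in data:
          crc ^= byte << 8
          for _ in range(8):
              if crc & 0x8000:
                  crc = ((crc << 1) ^ 0x1021) & 0xFFFF
              else:
                  crc = (crc << 1) & 0xFFFF
      return crc

  def select_decode(candidates):
      # candidates: list of (payload_bytes, received_crc) pairs
      # produced by the list decoder, best metric first.
      for payload, rx_crc in candidates:
          if crc16_ccitt(payload) == rx_crc:
              return payload        # first candidate that verifies
      return None                   # nothing verified: no decode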

If the CRC is not being used internally to choose between error
correction alternatives or to discard false decodes before the
operator sees them, then it is arguably pointless and should
be left out of the protocol completely.

Another point is source coding.  This is where the text (or
whatever else) to be sent is compacted into, ideally, the fewest
possible bits before the transmission coding.

It is the output from the source coding that should be counted
as payload bits, not the raw text input, because the
source-coded bit count more accurately represents the true
information content of the message.

A case in point: recent VLF BPSK experiments employed a 6-bit
character set, but only 52 of the 64 codes are actually used.
Really there are only log2(52) ≈ 5.7 bits of information per
character.
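
The arithmetic, as a quick sketch (the 20-character case is the
one used in the example further down):

  import math

  bits_per_char = math.log2(52)             # ~5.70 bits/character
  print(round(bits_per_char, 2))            # -> 5.7

  # Information content of an n-character message is
  # log2(52**n) = n * log2(52) bits:
  n = 20
  print(round(n * bits_per_char, 1))        # -> 114.0  (vs 20*6 = 120)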

But the encode/decode was conveying all 6 bits, and I could have
allocated characters to the 12 spare codes, so I used 6
bits/character when calculating channel efficiency.

Just recently I altered the decoder to make use of the spare
0.3 bits per character in the outer error detection stage.
It needed this because the 16-bit CRC is a bit short for the
large Viterbi list lengths being used.  So now a 20-character
message, for example, conveys 114 bits instead of 120, and the
outer code effectively has 16 + 6 redundant bits to use for
error detection.  The result is a slight increase in coding
gain because a longer Viterbi list can be used.
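
Where those extra 6 bits come from, as a sketch (my illustration
only -- it assumes the 52 valid characters occupy codes 0..51 of
the 6-bit space, which is just a guess at the allocation):

  import math

  VALID_CODES = set(range(52))              # hypothetical allocation

  def candidate_is_plausible(codes):
      # Reject any Viterbi-list candidate containing one of the 12
      # unused 6-bit values: a free consistency check on top of the
      # 16-bit CRC.
      return all(c in VALID_CODES for c in codes)

  # Redundancy contributed by the unused codes over 20 characters:
  print(round(20 * math.log2(64 / 52), 1))  # -> 6.0 bits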

So from now on when calculating performance I will have to use
5.7 bits per character instead of 6.
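
For what it's worth, the difference that makes to a quoted figure
is small but measurable (quick arithmetic only):

  import math

  # Counting 5.7 rather than 6 bits per character means the same
  # energy is divided over fewer payload bits, so the quoted Eb/N0
  # rises by 10*log10(6/5.7) dB:
  print(round(10 * math.log10(6 / 5.7), 2))  # -> 0.22 dB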

--
Paul Nicholson
http://abelian.org/
--
