Pieter-Tjerk, PA3FWM wrote:
> one could argue those bits have also been transferred almost
> always correctly and could thus have been used for data.
I have to agree with you that it can be appropriate sometimes.
When measuring the performance of different convolutional
polynomial sets, I counted all the bits going into the
convolver as payload, because I wanted a score for that part
of the system on its own.
> O.t.o.h., if the CRC is used to select among multiple likely
> outcomes of the decoding of the (outer) code, then of course
> those bits are coding overhead and not payload.
An example: when using a Viterbi 'list' decoder, the CRC is
an essential part of the protocol because you get a list
(maybe thousands) of possible decodes and the machine has to
choose the correct one. The CRC then acts as an outer error
detection layer around the convolutional coding and must not
be counted as payload.
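
To make that selection step concrete, here is a minimal Python
sketch. It is not the actual VLF decoder; the candidate format
and the CRC-16/CCITT polynomial are just assumptions for
illustration. Candidates are assumed to arrive best metric
first, each with a 2 byte CRC appended, and the first one whose
CRC checks is accepted.

  def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
      """Bitwise CRC-16/CCITT over 'data', MSB first."""
      crc = init
      for byte in data:
          crc ^= byte << 8
          for _ in range(8):
              crc = ((crc << 1) ^ poly) if (crc & 0x8000) else (crc << 1)
              crc &= 0xFFFF
      return crc

  def select_decode(candidates):
      """Return the payload of the first candidate whose trailing
      two bytes match the CRC of the rest, or None if every
      candidate fails the check."""
      for cand in candidates:
          payload, rx_crc = cand[:-2], int.from_bytes(cand[-2:], "big")
          if crc16_ccitt(payload) == rx_crc:
              return payload          # accepted decode
      return None                     # whole list rejected

  # Example: a near-miss decode ranked first, the correct one
  # second; both carry the CRC that was actually transmitted.
  msg  = b"HELLO"
  crc  = crc16_ccitt(msg).to_bytes(2, "big")
  good = msg + crc
  bad  = b"JELLO" + crc
  print(select_decode([bad, good]))   # -> b'HELLO'
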
If the CRC is not being used internally to choose between error
correction alternatives or to discard false decodes before the
operator sees them, then it is arguably pointless and should
be left out of the protocol completely.
Another point is source coding. This is where the text (or
whatever is to be sent) is compacted into, ideally, the fewest
possible bits before the transmission coding.
It is the output of the source coding which should be counted
as payload bits, not the raw text input, because the
source-coded bit count more accurately represents the true
information content of the message.
A case in point: recent VLF BPSK experiments employed a 6-bit
character set, but only 52 of the 64 codes are actually used,
so really there are only log2(52) = 5.7 bits of information
per character.
But the encoder/decoder was conveying all 6 bits, and I could
have allocated characters to the 12 spare codes, so I reckoned
on 6 bits/character when calculating channel efficiency.
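
For reference, the arithmetic behind those figures, as a small
Python sketch (the numbers are just the ones quoted above):

  import math

  bits_per_symbol = 6                        # bits sent per character
  alphabet_size   = 52                       # character codes in use
  info_per_char   = math.log2(alphabet_size) # ~5.70 information bits
  spare_per_char  = bits_per_symbol - info_per_char   # ~0.30 bits

  print(f"{info_per_char:.2f} information bits, "
        f"{spare_per_char:.2f} spare bits per character")
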
Just recently I altered the decoder to make use of the spare
0.3 bits per character in the outer error detection stage.
This was needed because the 16-bit CRC is a bit short for the
large Viterbi list lengths being used. So now a 20-character
message, for example, conveys 114 bits instead of 120, and the
outer code effectively has 16 + 6 redundant bits to use for
error detection. The result is a slight increase in coding
gain because a longer Viterbi list can be used.
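
For anyone checking the 16 + 6 figure, here is the arithmetic
as a small Python sketch, assuming the spare capacity is pooled
over the whole 20 character message (illustrative only, not the
decoder's actual packing code):

  import math

  chars      = 20
  sent_bits  = chars * 6                     # 120 bits enter the convolver
  info_bits  = chars * math.log2(52)         # ~114 bits of information
  spare_bits = sent_bits - round(info_bits)  # ~6 bits freed for checking
  crc_bits   = 16
  outer_check_bits = crc_bits + spare_bits   # 22 bits to reject false
                                             # decodes from the list
  print(sent_bits, round(info_bits), outer_check_bits)   # 120 114 22
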
So from now on when calculating performance I will have to use
5.7 bits per character instead of 6.
--
Paul Nicholson
http://abelian.org/