
Re: LF: Re: GPS-Disciplined BPSK

To: [email protected]
Subject: Re: LF: Re: GPS-Disciplined BPSK
From: "Bill de Carle" <[email protected]>
Date: Wed, 14 Feb 2001 14:58:28 -0500
Reply-to: [email protected]
Sender: <[email protected]>
At 05:28 PM 2/14/01 +0100, you wrote:
Bill de Carle wrote:


When removing jitter from a signal that's accurate over the long term,
we can do better than just averaging.  By letting a counter run (e.g.
from the oscillator we're trying to discipline) and latching it with the
1 PPS signal, we get the number of oscillations that have occurred from
one second to the next.  One approach would be to simply average up a
bunch of these and say that's the best answer.  But if we don't clear
the counter after each 1 PPS latching signal - if we just let the counter
continue running upwards, then we can use least squares to solve for the
*slope* of the line.  Over any given time period that least squares slope
solution will give a significantly better result than averaging alone.
Try it if you don't believe it.
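
The latched-counter scheme above can be sketched in a few lines (a
minimal illustration, not Bill's actual code; the frequency and jitter
values are hypothetical).  The counter is never cleared, so each 1 PPS
latch captures a cumulative count, and the least-squares slope of count
versus time recovers the frequency:

```python
import random

# Simulate a free-running counter latched once per second by 1 PPS.
# The counter is never cleared, so reading k is roughly k*f plus
# jitter on when the latch edge fires (hypothetical numbers).
f_true = 10_000_000.0   # nominal oscillator frequency, Hz
jitter = 50.0           # latch jitter expressed in counts (1-sigma)
n = 10                  # seconds of data

random.seed(1)
t = list(range(1, n + 1))
counts = [k * f_true + random.gauss(0.0, jitter) for k in t]

# Ordinary least-squares slope of counts vs. time is the frequency.
t_bar = sum(t) / n
c_bar = sum(counts) / n
slope = sum((ti - t_bar) * (ci - c_bar) for ti, ci in zip(t, counts)) \
        / sum((ti - t_bar) ** 2 for ti in t)

print(f"least-squares frequency estimate: {slope:.2f} Hz")
```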


Isn't that a variant of Kalman filtering?

Probably, but I don't know enough about Kalman filtering to be able
to state just what the relationship is.

My tendency is to try to understand complicated things by comparing
them to simpler things, not the other way around :-)

Another analogy could be a frequency counter -

Say we have an input frequency, f, being counted over a 1-second time
window that produces an error in the frequency count of e due to the
jitter in the duration of the window.

Over 1 second the measured frequency will be f +/- e.

If we average up a bunch of these readings, the errors will tend to iron
out and our answer improves according to the square root of the number
of samples averaged.

Now let's say we open the gate for 10 seconds.  Clearly there will only
be one error to deal with and its expected value will still be e.  The
resulting count will be 10*f + e.  Dividing this sum by 10 gives f + e/10.
I.e. the error diminishes in direct proportion to the length of the
count window.

The third approach is to measure the slope of the line by least squares,
using 10 equally spaced samples over that 10-second window.  We will be
processing more data; we have more information; statistically the answer
will be closer to the true value than with either of the other methods.
Method 2 essentially measures the "slope" by taking only the last
reading and dividing by the total elapsed time; any error in that
reading gets divided by 10 and appears in the answer.  The least
squares approach minimizes the sum of the squared errors over the
entire line - so the actual value of the error at the end (the last
reading) doesn't hurt us as much as it otherwise might.  The errors
are averaged out.

It's easy to check this out on a computer - just use fixed frequency and
a random number generator to add a certain amount of "noise" (jitter) to
each sample.  Make many trials using each of the 3 methods and you will
see the least squares regression line method gives the best answers.
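
A minimal version of that check (a sketch, not Bill's program; the
frequency, jitter size, and trial count are hypothetical, each sample
gets independent Gaussian jitter as suggested above, and the
least-squares fit is forced through the origin on the assumption that
the counter reads exactly zero at the 1 PPS edge that opens the
window) might look like:

```python
import random

F = 1_000_000.0   # true frequency, Hz (hypothetical)
SIGMA = 100.0     # jitter added to each sample, in counts (hypothetical)
N = 10            # seconds per trial
TRIALS = 2000

def method_average(rng):
    # Method 1: N independent 1-second counts (counter cleared each
    # second), averaged.
    samples = [F + rng.gauss(0.0, SIGMA) for _ in range(N)]
    return sum(samples) / N

def method_long_gate(rng):
    # Method 2: one N-second gate -- a single jittered count, divided
    # by the gate length.
    return (N * F + rng.gauss(0.0, SIGMA)) / N

def method_least_squares(rng):
    # Method 3: counter runs from zero without being cleared;
    # cumulative readings at t = 1..N each carry independent latch
    # jitter.  Fit the slope through the origin, since c = 0 at t = 0
    # is known exactly when the window opens.
    t = list(range(1, N + 1))
    c = [k * F + rng.gauss(0.0, SIGMA) for k in t]
    return sum(tk * ck for tk, ck in zip(t, c)) / sum(tk * tk for tk in t)

rng = random.Random(42)
rms = []
for method in (method_average, method_long_gate, method_least_squares):
    sq = sum((method(rng) - F) ** 2 for _ in range(TRIALS))
    rms.append((sq / TRIALS) ** 0.5)

print(f"RMS error -- averaging: {rms[0]:.1f}, "
      f"long gate: {rms[1]:.1f}, least squares: {rms[2]:.1f}")
```

Under these assumptions the averaging error shrinks like 1/sqrt(N),
the long-gate error like 1/N, and the least-squares slope does better
still, which is the ordering the text describes.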

Bill VE2IQ


