Ok then - so the FTDI-chip based interfaces are the ones to use.
In the meantime I got the serial port adapter working at 460800 bits
per second (where the communication with the PIC16F1783 looks good),
but not at 500000 bits/sec (even though the PIC could generate that
bitrate without any error). The rise and fall times of the RS232
level converters seem to be different, and at 500 kbit/s a bit lasts
only 2 microseconds, so when sending a loop of 0x55 bytes the
'01010101' pattern doesn't look like a square wave with 50 % duty
cycle at all.
But anyway, for narrow-band applications it's ok, and even the
dsPICs I have here at the moment (with 12 bits/sample from the ADC)
are specified for 200 kS/sec at the maximum (the PIC16F1783 was
announced for 100 ksps, but the recent datasheet says it's only 75
ksps). I will try to run it at 80 ksps and decimate down to 20
kS/sec for the output, with some gain from a CIC filter for
effectively 16 bits per sample. 20 kHz * 2 bytes * (1 + 8 + 1)
bits = 400 kbit/second (raw, including start + stop bit) - that's
close to what the 'Prolific' adapter can handle.
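Just to sketch what that decimator stage could look like (in C rather
than PIC assembler, and with the differential delay, variable names
and word sizes being my own assumptions - not the actual code running
on the '1783):

/* Minimal CIC decimator sketch: N = 4 integrator/comb stages,
   decimation R = 4, differential delay M = 1 for brevity.
   Bit growth is N*log2(R*M) = 8 bits, so 12-bit ADC samples grow to
   20 bits and fit comfortably in 32-bit registers here; the real PIC
   code would do this with multi-byte adds (add-with-carry).
   The modulo-2^32 wrap-around arithmetic is intentional - it is what
   keeps the ever-growing integrators well-behaved. */
#include <stdint.h>

#define CIC_N 4   /* number of stages  */
#define CIC_R 4   /* decimation ratio  */

static uint32_t integ[CIC_N];      /* integrators, run at the input rate     */
static uint32_t comb_prev[CIC_N];  /* comb delay line, runs at the output rate */
static int      phase;             /* counts input samples modulo CIC_R      */

/* Feed one ADC sample; returns 1 and writes *out whenever a decimated
   output sample is ready (every CIC_R input samples). */
int cic_push(int16_t adc_sample, int32_t *out)
{
    uint32_t s = (uint32_t)(int32_t)adc_sample;  /* sign-extend into 32-bit math */
    int i;

    for (i = 0; i < CIC_N; i++) {      /* cascaded integrators */
        integ[i] += s;
        s = integ[i];
    }

    if (++phase < CIC_R)
        return 0;                      /* not at an output instant yet */
    phase = 0;

    for (i = 0; i < CIC_N; i++) {      /* cascaded combs at the low rate */
        uint32_t y = s - comb_prev[i];
        comb_prev[i] = s;
        s = y;
    }
    *out = (int32_t)s;   /* DC gain (R*M)^N = 256, i.e. the 8 bits of growth */
    return 1;
}

With R = 4 the rate drops from 80 kS/sec to 20 kS/sec in one hop, and
the 8 bits of register growth on top of the 12-bit ADC word leave room
to keep 16 bits in the output, which is presumably where the
'effectively 16 bits per sample' figure comes from.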
Brings up a crazy idea: since there is not enough bandwidth for a
3rd byte per sample frame, the receiver could examine the amount
of 'noise' in the bits of each byte to tell the high byte from the
low byte, just in case an unknown number of bytes gets lost on the
way... the least significant byte will have more toggling bits than
the most significant byte.
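Just to illustrate the idea (a quick sketch of my own, nothing that
has been tried on real data): compare each received byte with the
byte two positions earlier and count the toggling bits; over a short
window the noisier of the two positions is taken to be the low byte.

/* Heuristic frame-alignment sketch, assuming each sample is sent as
   two bytes, high byte first (that ordering is an assumption).
   If bytes get lost, the receiver counts how many bits toggle between
   successive bytes at each of the two possible positions; the
   position with more toggling activity is treated as the low byte. */
#include <stdint.h>

static int popcount8(uint8_t v)
{
    int n = 0;
    while (v) { n += v & 1u; v >>= 1; }
    return n;
}

/* buf holds the last 'len' received bytes (len even, a few dozen is
   plenty).  Returns 0 if buf[0] looks like a high byte, or 1 if the
   stream appears to have slipped by one byte. */
int guess_frame_offset(const uint8_t *buf, int len)
{
    long toggles[2] = { 0, 0 };
    int i;

    for (i = 2; i < len; i++)
        toggles[i & 1] += popcount8((uint8_t)(buf[i] ^ buf[i - 2]));

    /* If the even positions are the noisier ones, they are the low
       bytes, so the frame has slipped by one byte. */
    return (toggles[0] > toggles[1]) ? 1 : 0;
}

Whether that is reliable enough in practice obviously depends on how
slowly the signal moves relative to the sample rate.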
Well, it's good fun to push the 'midrange PIC' to its limits
(haven't been doing this for a long time, being more used to ARM
controllers these days). The PIC16F1783 has 4096 program steps, 256
bytes of RAM, and (Andy will know what I'm talking about) it now has
TWO 'address pointers', FSR0 and FSR1, instead of only one, plus an
add-with-carry instruction which the older PIC16s didn't have...
that makes efficient programming in assembler a bit easier, even
though the CIC filter (with N=4, R=4, M=2) consumes almost all of
the CPU time when running at 80 kHz sampling rate. Oh well, there is
still the option to use a dsPIC instead.
All the best,
Wolf.
I've just made some tests on the FTDI232 chip at high baud
rates.
Using a custom routine written in PowerBasic (CC), I used that
language's own interpretation of, and interface to, the driver
to pass arbitrary baud rates to the chip. With the device
set for 8 data bits, 1 stop bit and no parity, I repeatedly sent
the character 0x55 in a continuous loop. This pattern of
bits, if sent contiguously (characters are sent LSB first),
should therefore result in a square wave being generated at
a frequency exactly equal to half the specified baud rate.
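To make that concrete (my own illustration, not part of the
PowerBasic test code): with 8N1 framing and LSB-first transmission a
0x55 character goes out as start(0), 1,0,1,0,1,0,1,0, stop(1), so
back-to-back characters give an unbroken alternating pattern and the
line toggles once per bit period.

/* Prints the on-the-wire bit sequence of a few back-to-back 0x55
   characters (8N1, LSB first) to show it is a pure 0101... pattern,
   i.e. a square wave at half the baud rate. */
#include <stdio.h>

int main(void)
{
    const unsigned char c = 0x55;
    int frame, bit;

    for (frame = 0; frame < 3; frame++) {
        putchar('0');                        /* start bit       */
        for (bit = 0; bit < 8; bit++)        /* data, LSB first */
            putchar(((c >> bit) & 1) ? '1' : '0');
        putchar('1');                        /* stop bit        */
    }
    putchar('\n');       /* prints 010101010101010101010101010101 */
    return 0;
}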
The FTDI232 data sheet states that baud rates are derived from
a 3 MHz reference divided by a divisor of (N + M/8), where
N = 2 to 16384, plus two special cases of N = 0 and N = 1 for
the 3 Meg and 1 Meg rates respectively. This setting is hidden
from my application and happens transparently, based on the
baud rate I specify being passed to the driver via PowerBasic.
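As a rough illustration of that divisor scheme (a sketch based only
on the description above - the driver's actual rounding rules and its
handling of the two special cases may well differ):

/* Sketch of the (N + M/8) divisor scheme: the divisor is the 3 MHz
   reference divided by the requested baud rate, rounded to the
   nearest 1/8, and the achievable baud rate is 3 MHz divided by
   that.  The real driver's rounding and the N = 0 / N = 1 special
   cases are not modelled here. */
#include <stdio.h>

#define FTDI_CLK 3000000.0

static double ftdi_actual_baud(double requested, int *n_out, int *m_out)
{
    long eighths = (long)(FTDI_CLK * 8.0 / requested + 0.5);  /* divisor in 1/8 steps */
    int  n = (int)(eighths / 8);     /* integer part N      */
    int  m = (int)(eighths % 8);     /* fractional part M/8 */

    if (n_out) *n_out = n;
    if (m_out) *m_out = m;
    return FTDI_CLK / (n + m / 8.0);
}

int main(void)
{
    double rates[] = { 460800, 500000, 921600, 1000000 };
    int i, n, m;

    for (i = 0; i < 4; i++) {
        double actual = ftdi_actual_baud(rates[i], &n, &m);
        printf("requested %8.0f -> divisor %d + %d/8 -> actual %.0f\n",
               rates[i], n, m, actual);
    }
    return 0;
}

Running that for a few rates shows why 460800 comes out slightly off
(divisor 6.5 gives 461538 baud) while 500000 and 1000000 land exactly
(divisors 6 and 3).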
The following results were obtained:

  Requested baud rate    Frequency generated
  1M                     500000
  2M