[hpsdr] Call for Comments OzyII

L. Van Warren van at wdv.com
Sun Jul 26 18:54:45 PDT 2009


One problem with decimation is that taking every nth sample and discarding the
rest throws useful information away. A better approach is sample averaging,
which can be implemented in power-of-2 stages on a pipelined DSP or FPGA.
This has dramatic effects on both signal-to-noise ratio and image rejection:
the benefits of oversampling apply.
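
Here is a rough sketch in NumPy of what I mean (software illustration only,
not FPGA code; the decimation factor and test signal are just placeholders):

    import numpy as np

    def naive_decimate(x, n):
        # keep every nth sample; everything between them is thrown away,
        # and any out-of-band energy aliases into the result
        return x[::n]

    def averaging_decimate(x, n):
        # average each block of n samples before keeping a single value;
        # for white noise each power-of-2 stage buys roughly 3 dB of SNR,
        # and the average acts as a crude low-pass ahead of the rate change
        trimmed = x[:(len(x) // n) * n]
        return trimmed.reshape(-1, n).mean(axis=1)

    # quick comparison: a 1 kHz tone in white noise, decimated by 8
    fs, n = 1_000_000, 8
    t = np.arange(100_000) / fs
    x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.random.randn(len(t))
    print(naive_decimate(x, n).shape, averaging_decimate(x, n).shape)

In hardware the same idea cascades naturally: each stage averages pairs of
samples and halves the rate, so a divide-by-2^k decimator is just k identical
pipeline stages.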

The opposite argument (interpolation) applies to sampling ADCs. A signal
sampled in the time domain can be quantized by slope as well as by
instantaneous level. Signal levels and slopes could be encoded as
floating-point numbers rather than n-bit sequences, and work is being done in
this area on the software side by the gnu-radio project.
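
As a toy illustration of level-plus-slope encoding (my own sketch, not
anything gnu-radio ships today; the helper names are made up):

    import numpy as np

    def encode_level_slope(x, fs):
        # represent each sample as a (level, slope) pair of floats;
        # the slope is a first difference scaled to units per second
        levels = x.astype(np.float32)
        slopes = np.gradient(x, 1.0 / fs).astype(np.float32)
        return np.stack([levels, slopes], axis=1)

    def reconstruct_midpoints(pairs, fs):
        # the stored slope lets us estimate the value halfway between
        # samples, which a level-only representation cannot do
        levels, slopes = pairs[:, 0], pairs[:, 1]
        return levels[:-1] + slopes[:-1] * (0.5 / fs)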

The next step after that is adaptive sampling in time, using more samples
where the signal is changing rapidly and fewer where it is not. This reduces
the bandwidth required to transmit the sampled representation of the signal
for a given quality.
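
A simple software version of that idea might look like this (the threshold
rule is just one possible criterion, chosen for illustration):

    import numpy as np

    def adaptive_sample(x, t, threshold):
        # keep a sample only when the signal has moved by more than
        # `threshold` since the last kept sample; fast-changing regions
        # keep many points, flat regions keep few
        kept = [0]
        last = x[0]
        for i in range(1, len(x)):
            if abs(x[i] - last) > threshold:
                kept.append(i)
                last = x[i]
        if kept[-1] != len(x) - 1:
            kept.append(len(x) - 1)      # always keep the endpoint
        kept = np.array(kept)
        return t[kept], x[kept]

    # a chirp keeps far more points near the end, where it changes fastest
    fs = 10_000
    t = np.arange(0, 1, 1 / fs)
    x = np.sin(2 * np.pi * (5 + 200 * t) * t)
    tk, xk = adaptive_sample(x, t, threshold=0.05)
    print(f"kept {len(xk)} of {len(x)} samples")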

One could FFT the signal BEFORE transmission, transmit the FFT coefficients
as numbers and then do the reconstruction on the receive side.
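
A minimal round-trip sketch (block size, and the choice to keep only the
largest-magnitude coefficients, are assumptions I picked for the example):

    import numpy as np

    def tx_block(x, keep):
        # transmit side: FFT one block and send only the `keep` strongest
        # coefficients as (index, complex value) pairs
        X = np.fft.rfft(x)
        idx = np.argsort(np.abs(X))[-keep:]
        return len(x), idx, X[idx]

    def rx_block(n, idx, coeffs):
        # receive side: rebuild the spectrum from the transmitted
        # coefficients and inverse-FFT back to the time domain
        X = np.zeros(n // 2 + 1, dtype=complex)
        X[idx] = coeffs
        return np.fft.irfft(X, n)

    # round trip: two tones survive, most coefficients never get sent
    fs, n = 48_000, 4096
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * 1_000 * t) + 0.3 * np.sin(2 * np.pi * 3_000 * t)
    x_hat = rx_block(*tx_block(x, keep=16))
    print("max reconstruction error:", np.max(np.abs(x - x_hat)))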

This is the sort of thing I am hoping to see. 


Van / AE5CC / wdv.com

