[hpsdr] FFT latency

Alex, VE3NEA alshovk at dxatlas.com
Tue Apr 7 07:18:50 PDT 2015


Hi Peter,

In order to minimize the latency of your DSP pipeline, you have to find out where the delays actually come from. Contrary to 
popular belief, the FFT itself does not introduce any delay. Consider this scenario. You have a buffer filled with zeros. You 
receive your first audio sample, place it in the rightmost position in the buffer, compute the FFT, and then compute the IFFT. 
You take the output from the rightmost position in the buffer, which still contains your original sample. As the next sample 
arrives, you push it into the buffer from the right side and repeat the process. This pipeline includes both an FFT and an IFFT, 
but it has zero delay.
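A minimal NumPy sketch of this zero-delay round trip (the buffer length of 64 is an arbitrary choice for illustration):

```python
import numpy as np

N = 64
buf = np.zeros(N)
samples = np.random.default_rng(0).standard_normal(100)
out = []

for x in samples:
    buf = np.roll(buf, -1)   # shift the buffer left by one slot
    buf[-1] = x              # newest sample goes in the rightmost position
    spec = np.fft.fft(buf)   # forward transform
    time = np.fft.ifft(spec).real
    out.append(time[-1])     # rightmost output: the sample we just pushed

# every output sample equals the input sample of the same index: zero delay
```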

Of course, such a pipeline is very CPU-intensive and not very useful. In most cases we simply do not have enough CPU power to 
recompute the FFT for every input sample. We process samples in blocks, and this is the first source of delay. Consider a second 
buffer, say 64 samples long, that we place in our DSP pipeline before the FFT. As we receive samples, we simply put them in that 
buffer. Only when the buffer is full do we push the data into the FFT buffer, at the right side, and do our FFT/IFFT work. Now 
the very first input sample has a chance to affect the output only 64 samples after it arrives. Note that this delay is avoidable, 
as there is a trade-off between the number of FFTs you have to compute and the amount of delay. By changing the size of your 
input buffer, you change the delay, but this does not affect the output samples in any way.
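The block-buffered version might look like the sketch below (block and FFT lengths are arbitrary assumptions; the spectrum is passed through unmodified, so the round trip is still an identity):

```python
import numpy as np

BLOCK = 64        # input block size: the first source of delay
FFT_LEN = 256
fft_buf = np.zeros(FFT_LEN)
pending = []
out = []

def process(sample):
    """Collect samples; run the FFT/IFFT pass once per full block."""
    global fft_buf
    pending.append(sample)
    if len(pending) == BLOCK:
        # shift the FFT buffer left and append the new block on the right
        fft_buf = np.roll(fft_buf, -BLOCK)
        fft_buf[-BLOCK:] = pending
        spec = np.fft.fft(fft_buf)           # ...spectrum would be modified here...
        block = np.fft.ifft(spec).real[-BLOCK:]
        out.extend(block)
        pending.clear()

for x in np.arange(1.0, 201.0):
    process(x)

# out now holds samples 1..192, unchanged in value; each one simply became
# available only after its whole 64-sample block had arrived
```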

Version 2 of our DSP pipeline has acceptable CPU requirements, at the cost of some delay, but it still does not do anything 
useful. To make it do some work for us, we have to modify the data somehow between the FFT and the IFFT, and this is where a 
long, unavoidable delay comes into play. Multiplication in the frequency domain corresponds to a convolution in the time 
domain, and convolution always shifts the samples back. As has already been mentioned in this thread, a convolution with a 
symmetric kernel (impulse response) delays the signal by (N-1)/2 samples, where N is the kernel length. Subtraction (as in the 
spectral subtraction algorithm) does not add a delay per se, but the values being subtracted, if they change over time, are more 
than likely computed by an algorithm that can be viewed as a convolution, and that is what causes the delay in your pipeline. 
This delay is unavoidable because changing the length of your equivalent impulse response changes not only the delay but also 
the shape of the output waveform. To optimize your algorithm, you have to figure out 1) where filtering (convolution) occurs in 
your algorithm, 2) what its kernel length is, and 3) how to change the algorithm to minimize the kernel length without 
sacrificing the quality of denoising.
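The (N-1)/2 group delay of a symmetric kernel is easy to demonstrate; here is a sketch using an arbitrary windowed-sinc low-pass kernel of length N = 33 applied to an impulse:

```python
import numpy as np

# symmetric kernel of length N: group delay = (N-1)/2 samples
N = 33
n = np.arange(N)
kernel = np.sinc(0.25 * (n - (N - 1) / 2))  # symmetric about the center tap
kernel /= kernel.sum()                       # unity gain at DC

impulse = np.zeros(128)
impulse[10] = 1.0                            # impulse at sample 10
y = np.convolve(impulse, kernel)

delay = int(np.argmax(y)) - 10
# the peak of the response lands (N-1)/2 = 16 samples after the input impulse
```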

The minimum useful kernel length is, of course, algorithm-dependent. I don't know how you compute the values that you subtract 
from the spectrum, but if you view those values as the frequency response of some filter, then the longer the kernel, the 
sharper the transitions that can be achieved in that frequency response. You should make the kernel only as long as is needed 
to achieve the desired filter shape.
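The kernel-length/sharpness trade-off can be made concrete; this sketch (kernel lengths, cutoff, and the 10%-90% width measure are all arbitrary assumptions) compares the transition widths of a short and a long windowed-sinc low-pass kernel:

```python
import numpy as np

def lowpass_kernel(N, fc=0.25):
    """Windowed-sinc low-pass kernel of length N (fc in cycles/sample)."""
    n = np.arange(N) - (N - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(N)
    return h / h.sum()

def transition_width(h, nfft=4096):
    """Width of the 10%-90% magnitude transition, in normalized frequency."""
    H = np.abs(np.fft.rfft(h, nfft))
    f = np.linspace(0.0, 0.5, len(H))
    lo = f[np.argmax(H < 0.9)]   # first frequency where response drops below 0.9
    hi = f[np.argmax(H < 0.1)]   # first frequency where response drops below 0.1
    return hi - lo

# the longer kernel buys a sharper transition, at the cost of more delay
w_short = transition_width(lowpass_kernel(17))
w_long = transition_width(lowpass_kernel(129))
```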

73 Alex VE3NEA



On 2015-04-07 05:02, G3XJP wrote:
> ***** High Performance Software Defined Radio Discussion List *****
>
>
>
> Simon, it is precisely LMS noise reduction and auto-notch that I am replacing.  COS - in brief - I don't like listening to
> voices coming out of a drain-pipe.  The freq domain solution is way more effective. But at a price, namely inherent delay.  So
> I'm not looking for other solutions.  I just want to be sure those inherent Fourier delays are .... ummmm .... inherent.  Peter
> G3XJP
>
> Simon Brown wrote:
>> Peter,
>>
>> One alternative to FFT-based noise reduction would be adaptive LMS noise
>> reduction with ~ 99 taps (or a variable number of taps). LMS can also be
>> used with great effect for an automatic notch filter.
>>
>> Simon Brown G4ELI
>> http://v2.sdr-radio.com
>>
>>
>> -----Original Message-----
>> From: G3XJP [mailto:G3XJP at RhodesG3XJP.plus.com]
>> Sent: 07 April 2015 09:46
>> To: Simon Brown
>> Cc:Hpsdr at lists.openhpsdr.org
>> Subject: Re: [hpsdr] FFT latency
>>
>> gm Simon,
>> 1) I don't have and don't want a waterfall/spectrum display and
>> 2) my SDR does not run on a PC let alone Windows.  It runs on a dedicated
>> radio.
>> I do have to have Fourier and his inverse in the real-time signal path -
>> because I need to get into the freq domain, do the Noise Reduction stuff and
>> then get back to the time domain.  In that sense, FFT and IFFT are pure
>> overheads, a means to an end.  And I would have to forget the whole idea if
>> they produced inherent delays that would make ANY radio on ANY platform
>> unuseable.  20ms is acceptable.  Only just.  Peter G3XJP
>>
>> Simon Brown wrote:
>>
>>> Peter,
>>>
>>> You don't have to perform FWD / INV FFT in the demodulation path at all.
>>> Essentially there are two IQ paths in a generic design:
>>>
>>> 1) For the waterfall / spectrum display,
>>> 2) For demodulation.
>>>
>>> Demodulation can get down to < 20ms on Windows if you're careful,
>>> using WASAPI audio API helps a lot :) .
>>>
>>> Simon Brown G4ELI
>>> http://v2.sdr-radio.com
>>>
>>>
>>
>>
>>
>
>
> _______________________________________________
> HPSDR Discussion List
> To post msg: hpsdr at openhpsdr.org
> Subscription help: http://lists.openhpsdr.org/listinfo.cgi/hpsdr-openhpsdr.org
> HPSDR web page: http://openhpsdr.org
> Archives: http://lists.openhpsdr.org/pipermail/hpsdr-openhpsdr.org/
>

