[hpsdr] Odyssey dsPIC33 RAM usage

Frank Brickle brickle at pobox.com
Tue Jun 3 00:06:40 PDT 2008


On Mon, Jun 2, 2008 at 11:48 PM, Murray Lang <murray.lang at bbnet.com.au>
wrote:


> The AGC problems you refer to remind me of a posting way back by Phil H.
> regarding Epimetheus. It was suggested (by myself and others) that it
> include A/D inputs, potentially for things like ALC. This seemed to stir up
> repressed anxieties in Phil resulting from bitter experience.


If Phil tells you to be concerned, then be afraid, be very afraid.

There is absolutely no reason to believe that an arbitrary AGC or ALC loop
is stable, in the absence of further information. The only reason we believe
we can get away with our dumb little scaling scheme is that it has the
daylights damped out of it and it's sharply bounded a priori.
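
Just to show the flavor, here's a made-up sketch (not the Odyssey code; the
names and constants are invented): a gain update with a tiny loop
coefficient and hard clamps has nowhere to run away to.

    /* Hypothetical illustration only, not the actual scaling scheme. */
    #define GAIN_MIN  0.05f       /* sharp a priori bounds */
    #define GAIN_MAX  4.0f
    #define DAMPING   0.01f       /* tiny step: heavily damped loop */

    static float gain = 1.0f;

    /* Called once per buffer with that buffer's measured peak level. */
    static void update_gain(float peak, float target)
    {
        float error = target - peak * gain;   /* residual after scaling */
        gain += DAMPING * error;              /* crawl, never jump */
        if (gain < GAIN_MIN) gain = GAIN_MIN; /* clamp hard */
        if (gain > GAIN_MAX) gain = GAIN_MAX;
    }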


> While I've got you, could you please give me a 1 to 3 sentence description
> of how the Odyssey SDR software hangs together. Something like "the codec
> interrupt handler simply dumps the samples into a queue, with all of the SDR
> code running in a loop in the main line" or whatever. Maybe you use a 3rd
> party multitasking kernel (which can both simplify and complicate things).


You're almost there. There is a pair of sample I/O buffers for the obvious
ping-ponging strategy. The main loop consists essentially of a pair of waits
for completion on each of the buffers in turn. As one of the pair completes,
a processing routine is invoked with the base of the most recent buffer as
an argument. The processing routine is just a loop over each sample of
that buffer. The results are stuffed back into the same buffer for output
before the next input go-around on that buffer. The processing routine
then returns to the main loop, which resumes on a wait for the other
buffer; on completion, that invokes the processing routine on the other
buffer's base, and so on.
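
In outline it's something like this (a sketch only, not the actual source;
the buffer names and the wait call are stand-ins for whatever the firmware
really uses):

    #include <stdint.h>

    #define BUFLEN 256                    /* power of 2 between 64 and 512 */

    /* Placeholder: block until the codec/DMA has finished with buf. */
    extern void wait_for_buffer_complete(int16_t *buf);

    static int16_t buf_a[2 * BUFLEN];     /* interleaved I/Q, reused for output */
    static int16_t buf_b[2 * BUFLEN];

    static void process_buffer(int16_t *base)
    {
        (void)base;                       /* processing elided in this sketch */
        for (unsigned i = 0; i < BUFLEN; i++) {
            /* ... per-sample SDR processing on base[2*i], base[2*i+1],
             * results written back into the same locations ... */
        }
    }

    int main(void)
    {
        for (;;) {
            wait_for_buffer_complete(buf_a);   /* A just completed */
            process_buffer(buf_a);             /* refill A in place */
            wait_for_buffer_complete(buf_b);   /* hardware fills B meanwhile */
            process_buffer(buf_b);
        }
    }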

There isn't enough asynchrony to justify multitasking. The lengths
of the buffers are powers of 2 between 64 and 512, determined at compile
time. Since data from the IHU is arriving every millisecond, 48 samples at
a time, it's enough to stuff it into a circular buffer whose length is the
lcm of the RF signal buffer length (64, 128, 256, or 512) and 48 (which
works out to 192, 2*192, 4*192, or 8*192). This circular buffer is
initialized to zeros and the IHU data is read off one sample at a time,
processed, and mixed into the output along with the RF sample data. It's all
pretty close to lock-step. The processing is very simple, basically just
complexifying and mixing.
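
Concretely, the bookkeeping amounts to no more than this kind of thing
(again a sketch with invented names; the length follows the lcm arithmetic
above for a 256-sample RF buffer):

    #include <stdint.h>

    #define IHU_CHUNK   48                /* samples per 1 ms IHU delivery */
    #define IHU_BUFLEN  768               /* lcm(256, 48) = 4*192 */

    static int16_t  ihu_ring[IHU_BUFLEN]; /* zero-initialized circular buffer */
    static unsigned ihu_wr, ihu_rd;       /* interrupt writes, main loop reads */

    /* Called from the IHU side once per millisecond with 48 new samples. */
    void ihu_rx(const int16_t *chunk)
    {
        for (unsigned i = 0; i < IHU_CHUNK; i++) {
            ihu_ring[ihu_wr] = chunk[i];
            ihu_wr = (ihu_wr + 1) % IHU_BUFLEN;
        }
    }

    /* Called once per RF sample from the processing loop: the sample read
     * here gets complexified, mixed, and combined with the RF sample. */
    int16_t ihu_next(void)
    {
        int16_t s = ihu_ring[ihu_rd];
        ihu_rd = (ihu_rd + 1) % IHU_BUFLEN;
        return s;
    }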

There are some curlicues involved in sucking up the CW beacon text and
telemetry, but not enough to disrupt the fundamental symmetry of the major
processing loops. All of this is feasible in a direct way because no input
data is really arriving asynchronously.

If we had time to get fancy we'd be using FreeRTOS, but that's mostly for
the insult, not because it's needed. BTW FreeRTOS can be used either
cooperatively or preemptively. If we had to use it, I'd be inclined to work
cooperatively as you describe, yielding every 16 samples (gcd of the RF data
buffers and the IHU data chunks).
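
Roughly, the cooperative version would look like this (a sketch under the
assumption that configUSE_PREEMPTION is 0; the helper names are invented):

    #include <stdint.h>
    #include "FreeRTOS.h"
    #include "task.h"

    #define RF_BUFLEN       256
    #define YIELD_INTERVAL  16     /* gcd of the RF buffer lengths and 48 */

    extern int16_t *get_next_rf_buffer(void);   /* placeholder wait-for-buffer */
    extern void     process_sample(int16_t *s); /* placeholder per-sample work */

    /* Purely cooperative: nothing else runs until this task hands the CPU
     * back, so yield once every 16 samples to keep the IHU side in step. */
    void rf_task(void *arg)
    {
        (void)arg;
        for (;;) {
            int16_t *buf = get_next_rf_buffer();
            for (unsigned i = 0; i < RF_BUFLEN; i++) {
                process_sample(&buf[i]);
                if ((i + 1) % YIELD_INTERVAL == 0)
                    taskYIELD();                /* cooperative handoff */
            }
        }
    }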


> I like the idea of a cooperative multitasking kernel for this application.
> Since the system as a whole requires each of its parts to complete, there's
> no point in preempting one task with another (other than I/O interrupts of
> course) because this just adds overhead. Obviously this requires that the
> tasks be written in a certain way, but preemptive multitasking imposes its
> own discipline as well. Since my programming experience is almost entirely
> with big fat OSs, I'd appreciate comments from anyone who's used a distinct
> cooperative multitasking kernel, home brewed or otherwise, for a DSP
> application.


It's not *exactly* on point, but there's an old article from the Computer
Music Journal,

   - Accurately Timed Generation of Discrete Musical Events
   - David P. Anderson and Ron Kuivila
   - Computer Music Journal, Vol. 10, No. 3 (Autumn, 1986), pp. 48-56

that deals (IIRC) with cooperative scheduling for RT applications, although
at a much coarser granularity (MIDI messages) than you're talking about
here.

Computer Music Journal, Vol. 5, No. 1 (Spring, 1981), an earlier issue, is
full of good stuff about logical concurrency for DSP apps like this.

73
Frank
AB2KT

-- 
The only thing we have to fear is whatever comes along next. -- Austin Cline