Just moving this discussion back onto this thread.
My feeling is that there is an elegance and simplicity to using a multi-producer single-consumer (MPSC) queue for the main run loop.
Under this model, anything can be a “producer” and post an event to the queue (trigger IRQ, timer IRQ, “consumer” code). But only “consumer” code can run an event, and consumer code cannot be preempted by other consumer code. Access to contended resources (SPI, synchronous I2C reads/writes, running a script, etc.) only happens in consumer code, and so each resource is guaranteed to be touched by only one piece of code at a time.
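To make that concrete, here’s a minimal sketch of the idea, not a drop-in for the real code: a single ring buffer that any IRQ handler (or consumer code) can post to, drained by one run loop. It assumes ASF-style cpu_irq_save()/cpu_irq_restore() for briefly masking interrupts while the indices are touched; event_type_t, handle_event(), etc. are made-up names for illustration.

#include <stdbool.h>
#include <stdint.h>
#include <asf.h>   // assumed: irqflags_t, cpu_irq_save(), cpu_irq_restore()

typedef enum { EVENT_TRIGGER, EVENT_TIMER_CV, EVENT_SCREEN_REFRESH } event_type_t;

typedef struct { event_type_t type; int32_t data; } event_t;

void handle_event(const event_t *e);  // hypothetical dispatcher, sketched further down

#define QUEUE_SIZE 32
static event_t queue[QUEUE_SIZE];
static volatile uint8_t head = 0;     // next slot to write (producers)
static volatile uint8_t tail = 0;     // next slot to read (consumer)

// Producer side: callable from any IRQ handler or from consumer code.
// Interrupts are masked for a few instructions so producers can't collide.
bool event_post(const event_t *e) {
    irqflags_t flags = cpu_irq_save();
    uint8_t next = (head + 1) % QUEUE_SIZE;
    bool ok = (next != tail);         // false if the queue is full
    if (ok) { queue[head] = *e; head = next; }
    cpu_irq_restore(flags);
    return ok;
}

// Consumer side: only ever called from the main run loop.
bool event_next(event_t *e) {
    irqflags_t flags = cpu_irq_save();
    bool ok = (tail != head);
    if (ok) { *e = queue[tail]; tail = (tail + 1) % QUEUE_SIZE; }
    cpu_irq_restore(flags);
    return ok;
}

// The single consumer: one event runs to completion before the next is
// dequeued, so consumer code never preempts consumer code.
void run_loop(void) {
    event_t e;
    for (;;) {
        while (event_next(&e)) handle_event(&e);
        // optionally sleep here until the next interrupt
    }
}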
Now sadly “perfect” models like these never survive their first encounter with the real world… still it’s something to strive towards.
Taking your example of CV updates running off two different timers: both the CV and the screen are updated over SPI. None of the SPI code I’ve seen is re-entrant; you always need to call spi_selectChip first, and no interrupt masking is done. So you end up in a situation where the screen SPI happens in the main run loop, which can be preempted by the high-priority CV timer. The locking overhead needed to make that safe might defeat any performance gain. (See below for what happens in 2.0 as it stands.)
Before going down that road it would be worth taking some measurements, so we can quantify the current performance and see what actually needs improvement.
As an alternative to going down the multiple-priority-timer route, could I instead suggest a multi-priority event queue? You still get the benefit of only one “consumer” running at a time (e.g. less locking required for resource access), but you gain the ability for, say, CV updates to run before a screen refresh.
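A rough sketch of a two-priority version, reusing the ring buffer from the sketch above but wrapping it in a struct so there can be one queue per priority (all names are still made up):

typedef struct {
    event_t buf[QUEUE_SIZE];
    volatile uint8_t head, tail;
} event_fifo_t;

static event_fifo_t high_q;   // CV updates, triggers, ...
static event_fifo_t low_q;    // screen refreshes, etc.

static bool fifo_push(event_fifo_t *q, const event_t *e) {
    irqflags_t flags = cpu_irq_save();
    uint8_t next = (q->head + 1) % QUEUE_SIZE;
    bool ok = (next != q->tail);              // false if full
    if (ok) { q->buf[q->head] = *e; q->head = next; }
    cpu_irq_restore(flags);
    return ok;
}

static bool fifo_pop(event_fifo_t *q, event_t *e) {
    irqflags_t flags = cpu_irq_save();
    bool ok = (q->tail != q->head);
    if (ok) { *e = q->buf[q->tail]; q->tail = (q->tail + 1) % QUEUE_SIZE; }
    cpu_irq_restore(flags);
    return ok;
}

typedef enum { PRIO_HIGH, PRIO_LOW } event_prio_t;

// Producers say how urgent an event is when they post it.
bool event_post_prio(const event_t *e, event_prio_t prio) {
    return fifo_push(prio == PRIO_HIGH ? &high_q : &low_q, e);
}

// The consumer always drains the high-priority queue first, so a pending
// CV update runs before a pending screen refresh.
bool event_next_prio(event_t *e) {
    if (fifo_pop(&high_q, e)) return true;
    return fifo_pop(&low_q, e);
}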
There are dangers with this; the two big ones I can think of are:
- High-priority tasks starving low-priority tasks of CPU time. You could get into the unfortunate situation where, say, the keyboard handling code never gets the time to run. I think this will be hard to get perfect, but should be easy to get good.
- Slow-running events holding up the queue, because only one event runs at a time and can’t be preempted by another event. So you could have a slow screen render in progress while several high-priority events queue up behind it, and the run loop can’t process the next event until the render has finished.
The simple solution for this is to make events short... Screen rendering (when it’s needed) does two slow things: the first is a lot of string manipulation, the second is sending the data over SPI to the OLED. These could be split into two separate events on the queue, one to prep the data, one to send it to the screen.
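As a rough sketch of that split, reusing the made-up names from the queue sketches above, and assuming the single screen-refresh event is replaced by two hypothetical types, EVENT_SCREEN_PREP and EVENT_SCREEN_SEND (screen_build_buffer() and screen_send_buffer_spi() stand in for the existing string-rendering and OLED SPI code):

void screen_build_buffer(void);       // hypothetical: existing string/render code
void screen_send_buffer_spi(void);    // hypothetical: existing OLED SPI code

void handle_event(const event_t *e) {
    switch (e->type) {
    case EVENT_SCREEN_PREP:
        screen_build_buffer();                            // slow string manipulation
        // re-queue the SPI half as its own low-priority event
        event_post_prio(&(event_t){ .type = EVENT_SCREEN_SEND }, PRIO_LOW);
        break;                                            // run loop gets back to the queue here
    case EVENT_SCREEN_SEND:
        screen_send_buffer_spi();                         // slow SPI transfer to the OLED
        break;
    // ... other event types ...
    default:
        break;
    }
}

Between the two halves the run loop gets another crack at the queue, so a waiting CV event can run before the SPI transfer starts.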
Synchronous I2C is another really, really slow thing, and that will be much trickier to fix.
Personally I quite like this idea, but then again, I’m emphatically not an embedded developer, and I hate all things IRQ. This could just be my way to not have to deal with them too much.
Just coming back to the 2.0 code and SPI: I believe what happens is that the screen-rendering SPI is done in the main loop, which can be preempted by the CV timer, but not the other way round. Thus you can (and do) end up with the following code running…
spi_selectChip(OLED, ....)
spi_write(OLED, ....)
// preempted by CV timer
spi_selectChip(DAC, ....)
spi_write(DAC, ....)
spi_unselectChip(DAC, ....)
// return to main thread
spi_write(OLED, ....) // uh oh, no chip selected!!!
spi_unselectChip(OLED, ....)
Thus the screen can occasionally get garbled, but not the CV.
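For completeness, a band-aid for the current code (rather than the queue idea above) would presumably look something like this, in the same hand-wavy style as the trace above; cpu_irq_save()/cpu_irq_restore() are the ASF interrupt helpers, and a finer-grained version would mask only the CV timer’s interrupt rather than everything:

// mask interrupts so the CV timer can't interleave its own chip select
irqflags_t flags = cpu_irq_save();
spi_selectChip(OLED, ....);
spi_write(OLED, ....);
spi_unselectChip(OLED, ....);
cpu_irq_restore(flags);

But masking for the whole screen transfer delays the CV updates, which is exactly the locking-overhead trade-off mentioned above.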
A Rigol DS1054Z, hacked to be faster, stronger, etc., etc. I was planning on doing a bit more electronics, but then I fell into the rabbit hole of 2.0…