Timing for teletype (and other monome euro mods)

Continuing the discussion from Monome Teletype and Walk:

teletype and the other monome eurorack modules inherit the soft-timer system from the aleph codebase.

the ADC soft-timer handler may be preempted by others. in fact the priority order is simply defined by the order in which the timers are added, here:
[ https://github.com/tehn/mod/blob/master/teletype/main.c#L2276 ] … so as it happens, the ADC timer is processed second-to-last, just before the screen refresh. which means it will be preempted by anything triggered from the clock input, delay, metro or whatever.

if the total execution time of all the soft-timer handlers is greater than the heartbeat period (which is still 1ms: [ https://github.com/tehn/mod/blob/master/skeleton/init.c#L86 ]) then the next heartbeat interrupt will be skipped.
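here's a rough sketch of the shape of that system (not the actual aleph/mod code - names and details are hypothetical, the real thing is in the skeleton sources linked above - but it shows why registration order is priority order):

```c
#define MAX_TIMERS 8

typedef void (*timer_cb_t)(void);

typedef struct {
    int period;    // heartbeat ticks between firings
    int ticks;     // countdown to the next firing
    timer_cb_t cb; // handler to run when the countdown hits 0
} soft_timer_t;

static soft_timer_t timers[MAX_TIMERS];
static int num_timers = 0;

// registration order == processing priority
void timer_add(int period, timer_cb_t cb) {
    if (num_timers < MAX_TIMERS) {
        timers[num_timers].period = period;
        timers[num_timers].ticks  = period;
        timers[num_timers].cb     = cb;
        num_timers++;
    }
}

// called from the 1ms heartbeat interrupt. if the callbacks fired
// here take longer than the heartbeat period, the next tick is lost.
void process_timers(void) {
    for (int i = 0; i < num_timers; i++) {
        if (--timers[i].ticks <= 0) {
            timers[i].ticks = timers[i].period;
            timers[i].cb(); // earlier-added timers always run first
        }
    }
}
```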

all this is worth keeping in mind when estimating latencies - the nominal soft-timer period (40ms for ADC, or so i hear) has a variance of at least 1ms - and also when considering order of execution, if other processes depend on ADC input values for their logic. it is possible for, e.g., the clock input to be processed and the ADC input to be skipped on a given heartbeat tick.

in short, it might make sense to move ADC timer processing closer to clock processing (maybe before it), since these are likely to be logically co-dependent.


wouldn’t you also have to set the ADC timer rate to 1ms? it’s set to 71ms on TT and 100ms on WW and MP - my very limited understanding says that’s because reading the ADC is a slow operation?

it does look to me like the ADC timer period is set to 61ms on TT:
[ https://github.com/tehn/mod/blob/master/teletype/main.c#L2279 ]

but i should add that i didn’t directly contribute to any eurorack source code and am only inferring from what i see on github. the comments haven’t changed, so i assume that 1 heartbeat tick still equals 1ms, but maybe the core clock was changed and the comments weren’t updated. (actually, looking at the board conf files, it seems like maybe the core clock was bumped down to 6MHz instead of 12MHz?? then the heartbeat timer interval is actually 2ms, because the timer reload value has not changed. but i await correction.)
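to make that arithmetic concrete (the reload value here is hypothetical - the real one is in the init code - but the relationship holds):

```c
#include <stdio.h>

int main(void) {
    // period = reload_count / timer_clock
    const double reload = 12000.0; // hypothetical compare value
    printf("@12MHz: %.1f ms\n", reload / 12e6 * 1e3); // 1.0 ms
    printf("@6MHz:  %.1f ms\n", reload / 6e6 * 1e3);  // 2.0 ms
    return 0;
}
```

same reload value, half the clock, double the heartbeat period.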

anyway my point is a little different. my point was that if there is a lot of processing in any of the soft-timer callback functions - enough to exceed 1ms (not sure how likely this is; probably not likely at all, really) - then the ADC could be pushed to the next heartbeat, after clock or delay servicing. which could do weird things if you do something in the ext clock or metro callback that depends on a value from the ADC. (in a really bad edge case, you might never see the ADC value change, because a new hard-timer interrupt would come in before you get to processing the last ADC soft-timer callback, but you’ve already processed the clock-polling soft-timer callback.)

but to answer your question: no, reading the ADC in itself shouldn’t be a slow operation. it’s reading 16 bits over SPI with a 20MHz clock [ https://github.com/tehn/mod/blob/master/skeleton/init.c#L135 ], plus toggling chip selects and placing the results in memory. so like 1µs or something.
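back-of-envelope, with made-up names for the SPI layer (the real calls live in the asf/skeleton code):

```c
#include <stdint.h>

// hypothetical HAL stand-ins
extern void    spi_select(uint8_t ch);
extern void    spi_deselect(uint8_t ch);
extern uint8_t spi_transfer(uint8_t out);

// generic blocking 16-bit ADC read.
// wire time: 16 bits / 20MHz = 0.8us, plus chip-select toggling
// and a store - call it roughly 1us total.
uint16_t adc_read(uint8_t channel) {
    uint16_t val;
    spi_select(channel);                   // pull chip select low
    val  = (uint16_t)spi_transfer(0) << 8; // clock in high byte
    val |= spi_transfer(0);                // clock in low byte
    spi_deselect(channel);                 // release chip select
    return val;
}
```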

but, i don’t know exactly what else is being done in the ADC polling callback [ https://github.com/tehn/mod/blob/master/teletype/main.c#L391 ]. maybe tele_set_val() is potentially heavy.

i don’t quite follow why you would necessarily want the ADC to be processed on every tick. i mean, sure, in theory it’s good to do everything fast… but on the aleph we allowed that interval to be arbitrarily throttled according to the user’s judgement, because an ADC change might propagate through the operator network and set off a lot of things, maybe triggering DSP updates and other blocking/peripheral stuff. i think there is similar potential in the teletype, so it maybe makes sense to limit polling to 50ms or something to accommodate the worst cases.

… ah wait… i take some of that back. i think all soft timers will be processed eventually if they are successfully added to the event queue. a hard-timer interrupt can’t directly preempt soft-timer processing, but it can max out the event queue, which looks similar.

i think all things based on this soft-timer system would be well served by having an obvious indicator when the event queue is maxed. lord knows this can happen on the aleph if you do too much stuff in a bees patch. maybe it’s not a problem in TT.
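something like this, as a sketch (queue structure and names made up - just to show how cheap a latched overflow flag is):

```c
#include <stdbool.h>
#include <stdint.h>

#define EVQ_SIZE 32

typedef struct { uint8_t type; int16_t data; } event_t;

static event_t evq[EVQ_SIZE];
static volatile uint8_t evq_head = 0, evq_tail = 0;

// latched when a post fails; the main loop could blink an LED or
// draw a glyph so a maxed-out queue is visible instead of silent.
volatile bool evq_overflow = false;

// called from timer callbacks / interrupt context
bool event_post(event_t e) {
    uint8_t next = (evq_head + 1) % EVQ_SIZE;
    if (next == evq_tail) {  // queue full: this event is dropped
        evq_overflow = true; // latch the indicator
        return false;
    }
    evq[evq_head] = e;
    evq_head = next;
    return true;
}
```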

Many thanks for taking the time to explain this. Although i just woke up and am having a hard time taking it all in (my French brain is still sleepy and this is all quite technical).
But if i understand correctly: the latency will always be somewhere between 0 and 40ms, PLUS the time of all the processes triggered before the ADC call. Right?

yeah, setting the ADC timer to 1ms would just fill up the event queue… if reading the ADC is cheap then perhaps a better solution would be updating the ADC variables right in the ADC timer callback, and using the event handler for things where timing is not critical (using the knob to select a scene, etc.).
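a minimal sketch of that split, with hypothetical names:

```c
#include <stdbool.h>
#include <stdint.h>

// latest conversions, written directly in the timer callback so
// anything that polls this sees a value at most one ADC period old.
volatile uint16_t adc_val[4];

extern uint16_t adc_read(uint8_t channel);          // blocking SPI read, ~1us
extern bool event_post_adc(uint8_t ch, uint16_t v); // non-critical path

// ADC soft-timer callback: update the cache in place, and only
// queue an event for uses where latency doesn't matter (e.g. the
// param knob selecting a scene).
void adc_timer_callback(void) {
    for (uint8_t ch = 0; ch < 4; ch++) {
        uint16_t v = adc_read(ch);
        adc_val[ch] = v;       // timing-critical readers poll this
        event_post_adc(ch, v); // UI-ish consumers get an event
    }
}
```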

i guess you could still have a condition where a trigger event gets processed before the ADC timer has had a chance to update the values, but it’d be down to 1ms, as opposed to anywhere between 0 and whatever the ADC timer interval is (you’re right, 61ms, not 71ms - i was looking at the wrong one).

assuming the CV changes at the same time as the trigger arrives - is there any additional latency due to the ADC hardware itself? does it need to “settle” when the voltage changes?

oh, and TT uses system, not skeleton - both are based on the aleph skeleton, i think? but there might be some additional differences beyond the changes that were needed for TT specifically.

ah thanks! i did not quite understand that those are two different things filling the same role. but it looks like they are identical in all the relevant aspects.

i really doubt that the ADC itself would introduce any additional artifacts that would be relevant on this time scale.

This all goes beyond my understanding. Sorry.

yes i’ve been thinking about updating the ADC on demand rather than on a timer, which would make a lot of sense.

my initial goal was to not have SPI transactions happen during TT script execution, but i didn’t benchmark anything to arrive at that conclusion.
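an on-demand version could be as simple as a lazy read with a freshness check, so a script referencing IN or PARAM twice in one tick only pays for one SPI transaction (names here are hypothetical):

```c
#include <stdint.h>

extern uint16_t adc_read(uint8_t channel); // blocking SPI read, ~1us
extern uint32_t tick_count(void);          // heartbeat tick counter

static uint16_t adc_cache[4];
static uint32_t adc_stamp[4];

// on-demand read: only touch SPI if the cached value is stale.
uint16_t adc_get(uint8_t ch) {
    uint32_t now = tick_count();
    if (adc_stamp[ch] != now) { // nothing fresh this tick yet
        adc_cache[ch] = adc_read(ch);
        adc_stamp[ch] = now;
    }
    return adc_cache[ch];
}
```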

/system and /skeleton are incredibly similar, but i made some changes that would’ve broken compatibility with the older modules. it might not be difficult to simply update the older modules to interface with the changed tt requirements.

i’ll get this on the list for optimizations to the next rev.