so, delays are handled this way: clockTimer, running at the RATE_CLOCK rate, generates kEventTimer events in its clockTimer_callback. these events are processed by the event queue, and handler_EventTimer gets executed whenever there is one (so, every 10ms). it then calls tele_tick (which is where delays are processed) with time set to RATE_CLOCK, so 10ms as well.
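roughly, that flow looks like this (a sketch only: the event struct and event_post are simplified stand-ins for the real queue, not its actual API):

```c
#include <stdint.h>

// simplified stand-ins, just to show the shape of the flow
typedef enum { kEventTimer } event_type_t;
typedef struct { event_type_t type; } event_t;
void event_post(event_t *e);   // assumed: pushes onto the event queue
void tele_tick(uint8_t time);  // where delays are actually processed

#define RATE_CLOCK 10          // clockTimer period in ms

// timer callback: fires every RATE_CLOCK ms and just queues an event
static void clockTimer_callback(void *o) {
    event_t e = { .type = kEventTimer };
    event_post(&e);            // handled later, when the main loop runs
}

// main loop handler for kEventTimer: advances all delays by one tick
static void handler_EventTimer(int32_t data) {
    tele_tick(RATE_CLOCK);     // 10ms worth of time per call
}
```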
tele_tick simply subtracts time from the remaining time for each delay, and if the result is 0 or less it will execute the delay command(s).
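in other words, something along these lines (the delay struct, DELAY_SIZE and run_delay_commands are illustrative names, not the actual implementation):

```c
#include <stdint.h>

#define DELAY_SIZE 8               // illustrative slot count

typedef struct {
    int16_t time_remaining;        // ms left; <= 0 here means "slot free"
} delay_t;

static delay_t delays[DELAY_SIZE];

void run_delay_commands(int slot); // assumed: runs the stored command(s)

void tele_tick(uint8_t time) {
    for (int i = 0; i < DELAY_SIZE; i++) {
        if (delays[i].time_remaining <= 0) continue;  // inactive slot
        delays[i].time_remaining -= time;             // subtract 10ms
        if (delays[i].time_remaining <= 0)
            run_delay_commands(i);                    // fire the delay
    }
}
```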
so we could have this scenario:
- clockTimer just executed, so the next cycle will occur in 10ms
- a delay is created right after and is set to 1ms
- 10ms later, clockTimer gets executed and runs the delay
this means the delay was executed 9ms later than scheduled. now imagine this scenario:
- a delay is set to 10ms
- clockTimer runs right after
- the delay gets executed
so the delay actually fires nearly 10ms earlier than expected.
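to make the jitter concrete, here's a tiny host-side simulation of both scenarios (pure illustration, not firmware code; "phase" is how long until the next clockTimer tick at the moment the delay is created):

```c
#include <stdio.h>

#define RATE_CLOCK 10

// returns how many ms after creation the delay actually fires,
// given the countdown-by-RATE_CLOCK scheme described above
static int fire_time(int requested_ms, int phase_ms) {
    int elapsed = phase_ms;                 // wait for the first tick
    int remaining = requested_ms - RATE_CLOCK;
    while (remaining > 0) {                 // subsequent ticks
        elapsed += RATE_CLOCK;
        remaining -= RATE_CLOCK;
    }
    return elapsed;
}

int main(void) {
    // scenario 1: 1ms delay created just after a tick (phase ~ 10ms)
    printf("1ms delay, worst phase: fires after %dms\n", fire_time(1, 10));
    // scenario 2: 10ms delay created just before a tick (phase ~ 0ms)
    printf("10ms delay, best phase: fires after %dms\n", fire_time(10, 0));
    return 0;
}
// prints 10ms (9ms late) and 0ms (10ms early), matching the scenarios
```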
rounding delay times won't help, as the jitter is due to the phase delta between a delay and clockTimer, which we can't do anything about. two possible solutions would be either running clockTimer at a 1ms rate, or giving each delay its own timer, but both would affect system performance. a proper solution imo would be implementing a more advanced event queue with support for setting event priority, or something along these lines.
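for what it's worth, the queue part of that last idea could look something like this (entirely illustrative: the names, sizes and sorted-insert approach are all made up, not a design proposal):

```c
#include <stdbool.h>
#include <stdint.h>

// minimal sketch of a priority-aware event queue
typedef struct {
    uint8_t type;
    int16_t data;
    uint8_t priority;   // higher = handled first
} event_t;

#define QUEUE_SIZE 32
static event_t queue[QUEUE_SIZE];
static int queue_len = 0;

bool event_post(const event_t *e) {
    if (queue_len >= QUEUE_SIZE) return false;
    // insert sorted by priority so urgent events can jump ahead;
    // equal priorities keep their FIFO order
    int i = queue_len++;
    while (i > 0 && queue[i - 1].priority < e->priority) {
        queue[i] = queue[i - 1];
        i--;
    }
    queue[i] = *e;
    return true;
}

bool event_next(event_t *e) {
    if (queue_len == 0) return false;
    *e = queue[0];
    // shift the rest down (fine for a queue this small)
    for (int i = 1; i < queue_len; i++) queue[i - 1] = queue[i];
    queue_len--;
    return true;
}
```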