Teletype Delay Limitations

I’ll preface this with my question: is there a reason Teletype is limited to 8 delays, and what possible pitfalls are there when working with Teletype delays? More information can be found below if you’re interested.

I recently implemented my first Teletype patch, adding a new delay command, XDEL, which creates multiple delays from a single command. The format is as follows:

XDEL x y: ...

Where x is the delay time and y is the number of delays to queue: XDEL creates y delays spaced at intervals of x milliseconds. For example, XDEL 500 3: X ADD X 1 would create 3 delayed commands: the first would fire at 500ms, the second at 1000ms, and the third at 1500ms.
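To make the spacing concrete, here is a small C sketch of how an XDEL-style expansion could compute its queue entries (the function and parameter names are my own illustration, not the actual firmware code):

```c
#include <stddef.h>

/* Hypothetical sketch of XDEL's expansion: the i-th delay fires
   i * interval_ms after the command runs. delay_times_out must hold
   at least max_delays entries. Returns the number actually queued. */
size_t xdel_expand(int interval_ms, size_t count,
                   int *delay_times_out, size_t max_delays) {
    size_t queued = 0;
    for (size_t i = 1; i <= count && queued < max_delays; i++) {
        delay_times_out[queued++] = (int)i * interval_ms;
    }
    return queued;
}
```

With interval 500 and count 3 this yields 500, 1000, and 1500 ms, matching the X ADD X 1 example; the max_delays cap models Teletype's limited delay queue.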

One interesting thing about XDEL is that it can act as a one-line clock multiplier. For example:

XDEL DIV M 4 4: TR.P 1

Trigger 1 is pulsed 4 times at evenly spaced intervals for each execution of the metronome script. However, this functionality is limited by the number of delays Teletype can queue up, which brings me to my question of whether it is safe to increase the maximum number of delays.

Puzzled, as the format says two args, but the example only has one (M/4). Am I reading this wrong or missing something?


Why not change DELAY_SIZE in state.h to 32 or something and just try it out? :slight_smile:
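Assuming DELAY_SIZE is a plain compile-time constant that sizes the delay slots (a guess from the file name; I haven't verified what state.h actually contains, and the struct fields below are illustrative), the change really is just one number, at the cost of a little static RAM per extra slot:

```c
/* Sketch of what raising the delay count in state.h could look like.
   DELAY_SIZE sizes the fixed array of delay slots; more slots means
   more static RAM, but no extra timers (see discussion below). */
#define DELAY_SIZE 32

typedef struct {
    int remaining_ms[DELAY_SIZE]; /* time left per slot; <= 0 means due */
    int count;                    /* slots currently in use */
} delay_state_t;
```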

Ah ya, that’s a mistype; thank you for pointing that out. The correction is as follows:

XDEL DIV M 4 4: TR.P 1

I strongly considered doing this, as from my observations it seems fine. However, I wanted to confirm there wasn’t some technical reason I was missing; mostly I want to confirm this won’t cause any sort of command-processing delay that might not be immediately noticeable.

Looks like you’re in luck. Crank 'em up!


Thank you! I swear I did 3 searches before making this thread; not sure how I missed that.

No problem. I’m following your development with interest… it’s a really useful-sounding operator!

Speculating on a possible sister op: REPEAT or REP (or RATCHET / RAT), where the only difference from XDEL is that it executes the command once immediately, then basically does XDEL with y-1 repetitions. This would work the same as running the command on its own line followed by an XDEL, but it would save a line of code :slight_smile:

Ooh ya, that’s interesting. I also really like the sound of a RAT command. Looks like some form of this was explored before in that thread you linked; I’m going to have a look through it, but there are certainly a lot of variations that could be made on this command.

Also, if you wanted to give input, there’s a certain behaviour I’m unsure how to handle. Given that the max delay time is ~32 seconds, when XDEL increments past this mark the delay-time value becomes negative and the command is triggered immediately. I’m not sure whether I should throw away commands over 32 seconds, cap them at a delay time of 32 seconds, or let them trigger at a 1ms delay. Thoughts?
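The ~32-second ceiling suggests delay times are stored as signed 16-bit milliseconds (an assumption on my part; I haven't checked the storage type), in which case the increment wraps negative past 32767. A saturating helper for the "cap at the max" option could look like this sketch:

```c
#include <stdint.h>

/* Assumes delay times are int16_t milliseconds, so ~32.7 s max. */
#define DELAY_MAX_MS INT16_MAX

/* Saturate instead of wrapping: the "cap at 32 seconds" option. */
int16_t delay_add_clamped(int16_t base_ms, int32_t increment_ms) {
    int32_t sum = (int32_t)base_ms + increment_ms; /* widen: no overflow */
    if (sum > DELAY_MAX_MS) sum = DELAY_MAX_MS;
    if (sum < 1) sum = 1; /* never schedule in the past */
    return (int16_t)sum;
}
```

The other two options (discarding over-long delays, or letting them fire at 1ms) would just replace the clamping branches with a skip or a fixed value.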

(disclaimer: i’m pretty sure this is correct but i haven’t worked with the delay code very closely)

i don’t think there is really a good reason to keep the number of delays low - they don’t actually spawn new timers; it’s the same method that checks which delays are due for execution. so the only performance impact really depends on what those delays do.

but due to this implementation, delay accuracy is 10ms (it’s dictated by the RATE_CLOCK constant, which is the rate of the timer responsible for running delays, among other things). so while you can’t make the resolution finer than that, it does mean we could increase the max time to ~320 seconds by storing the number of milliseconds divided by 10 (and then converting back).
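Since the resolution is already 10ms, storing delays in ticks of RATE_CLOCK instead of milliseconds stretches the same 16-bit range about tenfold. A sketch of that conversion (function names are my own; the firmware does not currently do this, as noted below):

```c
#include <stdint.h>

#define RATE_CLOCK 10 /* ms per delay-timer tick, per the post above */

/* Convert user-entered milliseconds to internal 10 ms ticks,
   rounding to the nearest tick. INT16_MAX ticks = 327670 ms. */
int16_t ms_to_ticks(int32_t ms) {
    return (int16_t)((ms + RATE_CLOCK / 2) / RATE_CLOCK);
}

/* Convert back for display or arithmetic in milliseconds. */
int32_t ticks_to_ms(int16_t ticks) {
    return (int32_t)ticks * RATE_CLOCK;
}
```

Nothing is lost by the division because sub-10ms precision never existed in the first place; only the entered value's last digit gets rounded.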

That’s certainly interesting. If I understand correctly, the user would still enter time in milliseconds (max still 32 seconds), but internally a conversion is done to handle delay times longer than 32 seconds resulting from the increment process.

right. to clarify, that’s not how it’s done right now; i’m just saying it could be changed to allow longer times, since it’s not 1ms precise anyway.

From the peanut gallery: I’d rather see actual 1ms resolution than a longer timer. What amounts to +/-5ms of jitter (if I understand the timing relationships correctly) is not really usable (by me, anyway) for some things.

(Edit: assuming a 1ms timer was possible, of course–and I’m aware of the metro changes that may be part of this same conversation …)

yeah, unfortunately getting 1ms accuracy for delays would require quite a significant change, if it were to be done with the least impact on overall performance. the jitter is actually up to +9/-10ms, depending on when a delay is set.


Sorry, could you explain how 10ms of jitter would occur? Also, while it wouldn’t be the best solution, couldn’t the jitter be reduced by rounding delay times when they are queued? I’m pretty sure I’m missing something obvious here.

Want to reiterate how appreciative I am of your help/patience with my questions so far by the way.

so, delays are handled this way: clockTimer, running at the RATE_CLOCK rate, generates kEventTimer events in its clockTimer_callback. these events are processed by the event queue, and handler_EventTimer gets executed whenever there is one (so, every 10ms). it then calls tele_tick (which is where delays are processed) with time set to RATE_CLOCK, so 10ms as well.

tele_tick simply subtracts time from the remaining time of each delay, and if the result is 0 or less it executes the delay’s command(s).
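In code, that subtract-and-fire step looks roughly like this (a sketch of the logic just described, not the actual tele_tick source; the struct and names are illustrative):

```c
#define DELAY_SIZE 8
#define RATE_CLOCK 10 /* ms between kEventTimer events */

typedef struct {
    int remaining_ms[DELAY_SIZE]; /* time left per slot */
    int active[DELAY_SIZE];       /* nonzero if the slot holds a delay */
} delays_t;

/* Called once per kEventTimer, i.e. every RATE_CLOCK ms. Subtracts the
   elapsed time from every active delay and runs any that are due.
   Returns the number of delays that fired this tick. */
int delays_tick(delays_t *d, void (*run_command)(int slot)) {
    int fired = 0;
    for (int i = 0; i < DELAY_SIZE; i++) {
        if (!d->active[i]) continue;
        d->remaining_ms[i] -= RATE_CLOCK;
        if (d->remaining_ms[i] <= 0) { /* due: free the slot and execute */
            d->active[i] = 0;
            if (run_command) run_command(i);
            fired++;
        }
    }
    return fired;
}
```

Note how a 25ms delay fires on the third tick, i.e. up to 30ms after it was set: that coarseness is exactly the jitter discussed next.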

so we could have this scenario:

  • clockTimer just executed, so the next cycle will occur in 10ms
  • a delay is created right after and is set to 1ms
  • 10ms later clockTimer gets executed and runs the delay.

this means the delay was executed 9ms later than scheduled. now imagine this scenario:

  • a delay is set to 10ms
  • clockTimer runs right after
  • the delay gets executed

so the delay is actually 10ms earlier than expected.

rounding delay times won’t help, as the jitter is due to the phase delta between a delay and clockTimer, which we can’t do anything about. two solutions would be either running clockTimer at a 1ms rate or giving each delay its own timer, but both would affect system performance. a proper solution imo would be implementing a more advanced event queue with support for setting event priority, or something along those lines.
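The two scenarios above fold into one small model: if a delay of d ms is created p ms after the last clockTimer tick (0 ≤ p < 10), it executes on tick number ceil(d/10), i.e. at (10 − p) + 10·(ceil(d/10) − 1) ms. Here it is as a sketch (my own model of the behaviour described, not firmware code):

```c
/* Actual execution time of a d-ms delay created p ms after the last
   10 ms clockTimer tick (0 <= p < 10). The jitter is the return
   value minus d. */
int delay_exec_time_ms(int d, int p) {
    int ticks_needed = (d + 9) / 10;           /* ceil(d / 10) */
    return (10 - p) + 10 * (ticks_needed - 1); /* first tick + the rest */
}
```

Scenario one is delay_exec_time_ms(1, 0) = 10, i.e. 9ms late; scenario two is p just under 10 with d = 10, giving an error approaching −10ms. When the phase happens to line up (e.g. d = 500, p = 0) the error is zero, which is why the jitter depends on the phase delta rather than on the delay value.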


Thank you this helps a lot with understanding how events are processed.

no problem! it’s a bit confusing right now, as we have a mix of different ways events are handled in teletype:

  • there is the main event queue which is the main loop in main.c, it checks the event queue and calls up the appropriate event handler
  • hardware interrupts have their own functions, some of which just do what they need to do, and some will add events to the queue to be processed by the main loop
  • the above applies to timers too, some of the timer callbacks will just do whatever they need to do, while others will add events to the queue.

ideally, we should have everything using the same event queue but with support for different priorities. this would make things cleaner/more predictable, and would allow fine-tuning responsiveness vs performance.

Ya, I’ve looked into events, timers, interrupts, and callbacks; there’s certainly a lot going on there. Do you know if there is a limit on the number of timers and callbacks?

Also, on the topic of hardware, I’m under the impression the TR outputs are digital, correct?

there are no limits specific to callbacks/timers. one thing to keep in mind is that every timer adds overhead, even if it’s not doing anything. and since timer processing happens in an interrupt routine, even small things can have an impact on overall system performance.

TR outputs are indeed digital; they’re either on or off.