Teletype CV calibration?

Thanks @catkins for this writeup, it was a big head start!

Some possible improvements:

  1. Doing the calculation purely with 14-bit Teletype integers is a bit risky. The last two bits don’t affect the output, but depending on your search technique you could land anywhere in the 0–3 range of those last two bits, and that does affect the calculations. It’s probably safer to keep the observations in volts (it’s also less work). I made a variant of your spreadsheet with that math here: Teletype Calibration 2021/12/24 (voltage based) - Google Sheets – using it I was able to get my outputs within ±0.001V, or about ±3 cents. (I previously had two outputs off nominal by 15–20 cents in opposite directions, so a third of a semitone of offset between those two outputs.)
  2. The diff above applies the scaling and then applies the CV.OFF offset. That is sensible if you plan to use CV.OFF as an additional post-calibration tweak, but I think more people use CV.OFF as a musical shift, which would imply applying the CV.OFF value before the calibration scaling.
  3. DEVICE.FLIP renumbers the ports at the very last step before the DAC, so if you calibrate your TT this way, do it in the orientation you plan to use – you’ll need to reorder the scale/offset indices and recompile if you change orientation!
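To make points 1 and 2 concrete, here’s a rough sketch of the voltage-based math and the two possible CV.OFF orderings. This is Python pseudomath, not firmware code – the function names are mine, the spreadsheet and the real ops work differently under the hood, and I’m assuming the output error is linear in the nominal voltage:

```python
def calibrate(observed_1v, observed_3v):
    """From voltages measured at nominal 1 V and 3 V, derive the
    (scale, offset) to pre-apply, assuming a linear error model:
    observed = a * nominal + b."""
    a = (observed_3v - observed_1v) / 2.0  # actual gain
    b = observed_1v - a                    # actual zero error, in volts
    return 1.0 / a, -b / a                 # pre-correction scale/offset

# My CV 1 example: 0.996 V at nominal 1 V, 2.984 V at nominal 3 V
scale, offset = calibrate(0.996, 2.984)

# Point 2: where CV.OFF enters relative to the calibration
def off_as_post_cal_tweak(v, cv_off):
    # what the diff does: calibrate first, then add CV.OFF raw
    return (v * scale + offset) + cv_off

def off_as_musical_shift(v, cv_off):
    # shift first, then calibrate, so the CV.OFF amount is corrected too
    return (v + cv_off) * scale + offset
```

The two orderings differ by cv_off × (scale − 1), which is exactly the error you’d hear on a transposed note if the shift isn’t run through the calibration.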

I think this procedure could be automated with a hypothetical op CV.CAL input mv1v mv3v, where the last two arguments are the observed errors in millivolts. For example, I initially measured CV 1’s output as 0.996V at nominal 1V and 2.984V at nominal 3V, so I could enter CV.CAL 1 -4 -16 to generate the scale/offset internally. One tricky implementation headache: how do you prevent people from getting confused and measuring again after they’ve already calibrated? I think this is fixable by having the calibration ops stack. Say we init scale and offset to 1 and 0; then CV.CAL takes the previous scale/offset into account when calculating the new values, rather than setting them purely from the op arguments. That way you could repeat the measure-calibrate cycle a few times if you made a mistake the first time, or weren’t happy with the results, and you’d eventually converge.
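Here’s one way the stacking could work – a sketch only, again assuming the linear error model, with my own function name and volt-based units rather than whatever the firmware would actually use internally:

```python
def cv_cal(scale, offset, mv_1v, mv_3v):
    """Fold a new measurement into an existing (scale, offset) pair.
    mv_1v / mv_3v are the observed errors in millivolts at nominal
    1 V and 3 V, measured WITH the current calibration applied."""
    o1 = 1.0 + mv_1v / 1000.0   # observed volts at nominal 1 V
    o3 = 3.0 + mv_3v / 1000.0
    a = (o3 - o1) / 2.0         # residual gain error
    b = o1 - a                  # residual zero error
    # The new correction runs before the existing one, so compose them:
    # new(x) = old((x - b) / a)
    return scale / a, offset - scale * b / a

# Starting from the init values 1/0, my example CV.CAL 1 -4 -16:
scale, offset = cv_cal(1.0, 0.0, -4, -16)
# A later measurement of 0/0 (already perfect) leaves the values alone,
# so re-measuring after calibration can't wreck a good state.
```

A nice property of composing this way is that each round corrects whatever residual error the previous round left, so repeated measure-calibrate passes converge instead of compounding.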


Good feedback! All 3 of your points are totally valid.

I’d not thought about this in a while, and am halfway through building a Deckard’s Dream which is chewing up most of my hacker time for the moment. I’d love to see someone take the baton and implement those CV.CAL ops and the improvements above!