(Teletype) IN / PARAM Calibration (Done)

To permit the calibration of digital values produced by ADC operations, Teletype will be getting Calibration Operators.

Tentative Operator Set

  • IN.CAL.MIN - reads the input CV and assigns the voltage to the zero point
  • IN.CAL.MAX - reads the input CV and assigns the voltage to the max point
  • PARAM.CAL.MIN / PARAM.CAL.MAX - same, but for the knob position

Tentative Calibration Procedure


  1. Connect a patch cable from a calibrated voltage source to the IN jack
  2. Set the voltage source to 0 volts
  3. Execute IN.CAL.MIN from the live terminal
  4. Call IN and confirm the 0 result
  5. Set the voltage source to target maximum voltage (10V)
  6. Execute IN.CAL.MAX from the live terminal
  7. Call IN and confirm that the result is 16383


  1. Turn the PARAM knob all the way to the left
  2. Execute PARAM.CAL.MIN
  3. Call PARAM and confirm the 0 result
  4. Turn the knob all the way to the right
  5. Execute PARAM.CAL.MAX
  6. Call PARAM and verify that the result is 16383

I’m hesitant to see this implemented as a set of operators. It seems much more useful as a kind of setup mode. I don’t see a creative use case for these commands that isn’t already covered elsewhere, and I don’t think it makes sense to mix up ‘settings’ parameters with operators (it muddies the language). This feels more like a ‘factory calibration’ step which should only ever be done once.

That said, I do appreciate the idea that the PARAM knob would give exactly 0 at full CCW and 16383 at full CW. I’ve never found mid-range accuracy to be an issue, but the edge cases are important in the context of fixed-point math, where they can cause weird rounding errors.

Nevertheless, if you continue working on it, here are a few points to consider:

  • I’d recommend calibrating at 5V (or 2V?) instead of 10V. When you get near the limit of the ADC input there are a number of ways the voltage can be clipped before or during the analog-to-digital conversion. The result would be that a 10V input reads as exactly 10V, but the linearity of the rest of the range (read: the most useful part) could be compromised.

  • Very few people have a calibrated voltage source that is accurate to 14 bits, so they could end up with an accurate 0V and 2/5/10V, but worse volt-per-octave tracking after the calibration. This was even a problem with the Just Friends calibration, and that only has 12-bit ADCs! I’d suggest gathering info about what people would actually use as the calibrated voltage source, and designing a procedure that maximizes accuracy for that context.

Will try to have a proper think about this later if I get time.

But before I forget, how about a method to erase the existing calibration data (PARAM.CAL.RESET)? I know it can be done by typing in the correct values for min and max, but I suspect that will be too much cognitive overhead for most people (myself included). Having a reset op also signals that this is a safe thing to experiment with.
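A reset op would be trivial if the calibration is stored as a min/max pair: it just restores the identity mapping. A sketch under that assumption (the struct and names are hypothetical, matching nothing in the actual firmware):

```c
#include <stdint.h>

// Hypothetical calibration state for one input.
typedef struct {
    uint16_t raw_min;
    uint16_t raw_max;
} cal_t;

// PARAM.CAL.RESET sketch: an identity calibration is equivalent
// to "no calibration".
void cal_reset(cal_t *c) {
    c->raw_min = 0;      // default bottom bound
    c->raw_max = 16383;  // full 14-bit range
}
```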


Mordax Data can be a voltage source. It also has a calibration procedure. Who wins?

As a TT outsider: risky to assume that the max of PARAM will always incontrovertibly be 16383. There’s wiggle, mechanical issues, and all sorts with pots.

Also: it feels like operators are for musical/logical functions, and this feels like Something Else; whilst I’m sure there are interesting creative misuses of this, I can also see lots of false positives - accidentally running this in a loop and getting in a knot. I think @Galapagoose is about right.

I also know that whilst most TT users are fairly advanced Euro users, the number of times I’ve read about people calibrating modules by assuming the output of their MIDI-CV converter is accurate and that C2 is a neat 2V is alarmingly high. Making this an op feels like a bit of a red rag.

Are you imagining this replacing factory calibration data (if there is any), or acting as an offset on whatever the ‘hardwired’ calibration is?


There is none currently, and yes.

That’s how we got here! From the config menu thread:

Points about accuracy and 5V vs 10V taken.

@sliderule - I have a similar function implemented for the IN and PARAM on the TXi expander. (I do not calibrate the units before shipping, btw.) You can set 2 points for the PARAM knob and 3 for the IN jack as, on the TXi, it is bipolar.

This is not to be confused with the MAP functions which allow you to remap the range of the inputs (you can use the INIT operator to reset the mappings without wiping any calibration settings).

You can find them in the documentation here (with examples):


  • TI.PARAM.CALIB x y (alias TI.PRM.CALIB) - calibrates the scaling for PARAM knob x; y of 0 sets the bottom bound; y of 1 sets the top bound
  • TI.IN.CALIB x y - calibrates the scaling for IN jack x; y of -1 sets the -10V point; y of 0 sets the 0V point; y of 1 sets the +10V point
  • TI.STORE d - stores the calibration data for TXi number d (1-8) to its internal flash memory
  • TI.RESET d - resets the calibration data for TXi number d (1-8) to its factory defaults (no calibration)

I guess we need @tehn’s weigh-in to determine what to do here:

  • Dedicated .MIN and .MAX operators, or
  • A parameterized min/max argument on CAL

And:

  • Automatic flash storage, or
  • Manual flash storage

I’m for dedicated / auto for simplicity.


Please don’t feel constrained by what I’ve done in the past for the expanders - just wanted to share my experimentation/implementation. The functionality clearly was developed in a more agile manner.


Where appropriate, I do want to make sure that we bring the command structure for the TELEX along for the ride. I’d hate to have similar features operating one way on the TELEX and in another way completely on the Teletype. That would be madness!! :wink:


This feature has been implemented in 2.2.0-alpha.6

Implementation notes:

  • Separate .MIN and .MAX for both IN.CAL and PARAM.CAL
  • Values are both set and returned to the console
  • Auto-saved to flash


I do think being able to easily reset the calibration back to default will be a useful safety net.


Done, tested, and in alpha 7 build.

I’m actually finding CAL very useful for IN. Hook up an offset source, calibrate, and BOOM, the knob works perfectly.

This plus IN/PARAM .SCALE makes scripting way way better. Less code dedicated to parsing knobs. Knob setup is now a patch setup task instead of a scripted event.


Testing IN.CAL.MIN and IN.CAL.MAX I get zero on MIN with a 0V signal.

On MAX I send a 10.0V signal and get 16382, so I nudged it up to 10.01V and then 10.10V, and I still max out at 16382. It’s not until I get to 10.20V that IN delivers 16383.

In between I also tried PARAM.SCALE.RESET and IN.SCALE.RESET

I recalibrated my MORDAX Data just to be sure and tried again only to get the exact same results.

Did you follow the calibration procedure above?

The goal of this feature is to accurately map known ranges to 0-16383. After calibrating, sending 10.00V from your Data should produce an IN value of 16383. Is this not happening?

Correct: after running the calibration procedure listed above multiple times with 10.00V I get 16382, but at 10.20V, after running IN.CAL.MAX, I see 16383 when I type IN.

I rebooted my rack and same results.

I tried a second voltage source, same results.

Then this got strange: I turned the voltage source up to 10.20V and got a correct reading. Tried 10.15V and still correct, so I jumped to 10.00V and again got 16382, but now when I turn the voltage to 10.05V I get a reading of 16383!

As I do not have a calibrated source available, I was not able to truly test this feature. I just threw a 10V offset from my rack and it worked.

Does the calibration procedure work properly for PARAM for you?

Yes, it works for PARAM. And I too have to use a 10V offset. The first was from Maths, and the second was Maths and O/A/x2 used across different inputs/outputs of the MORDAX in case there were variations in accuracy across jack ports.

I have seen that when the input goes over the ADC limit there is a little reflection. Maybe your hardware can’t actually take 10V? I wouldn’t know.

There is a conversation over at Orthogonal Devices dealing with differences in input voltages; Brian Clarkson had to update the firmware on the ER-301 to bring it in line with his ER-101. The ER-301 was treating +1 as 10.24V while +1 on the ER-101 was 10.00V. I’m sure there was more to it, but I thought the thread might be helpful:

Did anyone ever ask for an implementation of floating-point numbers in Teletype?
It could solve many CV-related issues; we could write our desired calibration routines ourselves…