Auto-Calibrating a VCO with 1 volt/octave tracking

Hey all :wave:,

I was hoping to get some feedback on this code I have been working on. It is my first attempt at any “DSP” type programming and although it works, I am kind of curious if there are any other methods/approaches for solving this same problem. I couldn’t find anything online about digitally calibrating analog oscillators - at least not for synthesizer/music applications.

You can take a look at the code here, but essentially it goes like this (there’s a rough sketch of the loop below the list):

1. Use a DAC to send a 1 volt per octave signal to a VCO
2. Using an ADC, sample the VCO’s output and measure its frequency
3. Compare the sampled frequency of the VCO to a predetermined target frequency
4. Adjust the output voltage either up or down depending on whether the sampled frequency is higher or lower than the target frequency
5. Repeat steps 1 to 4 until the sampled frequency equals the target frequency
6. Once the sampled frequency equals the target frequency, store that DAC value and move on to the next pitch to be calibrated
7. Repeat
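
In rough C, the loop looks something like this (not my exact implementation; `dac_write` and `measure_freq_hz` are just placeholder names for the hardware I/O):

```c
#include <math.h>
#include <stdint.h>

/* Placeholder hardware I/O -- substitute whatever your DAC/ADC code provides. */
extern void   dac_write(uint16_t code);      /* send a 16-bit code to the DAC      */
extern double measure_freq_hz(void);         /* sample the VCO and return its freq */

/* Nudge the DAC code up or down until the measured frequency is within
   `tolerance_hz` of `target_hz`, then return the code that got us there. */
uint16_t calibrate_note(double target_hz, double tolerance_hz, uint16_t start_code)
{
    uint16_t code = start_code;

    for (;;) {
        dac_write(code);
        double f = measure_freq_hz();

        if (fabs(f - target_hz) <= tolerance_hz)
            return code;                     /* close enough: store this code for the note */

        if (f < target_hz && code < 0xFFFF)
            code++;                          /* too flat: raise the control voltage  */
        else if (f > target_hz && code > 0)
            code--;                          /* too sharp: lower the control voltage */
        else
            return code;                     /* ran out of DAC range, give up        */
    }
}
```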

Here’s a little demo video of me going through the calibration routine.

I think I can get the routine to execute a little faster. It currently takes about a minute and a half to finish calibrating one oscillator, and since my design controls 4 oscillators the routine starts to drag on a bit. It would be cool if I could get the total time per oscillator down to like… 1 second or something.

One solution to speed things up - which I have yet to try - is to somehow determine the exponential slope of the VCO’s 1 V/oct tracking; then I could just run the calibration routine for one octave (or even one semitone? :man_shrugging:) and apply that slope to the remaining 16-bit values. It should work the same way, right? I need to study the 1 V/oct tracking circuits a bit more.

Any ideas? Optimizations? Alternative approaches worth researching?

I know nothing about DSP coding, but I know this is a feature of the Ornament & Crime module, and that’s open source. Maybe some interesting thoughts in this file?

ohhh very nice - I didn’t know this was a feature of the OC! Going to have a look now.

just a note I vaguely remember reading about it…I’ve never used it, so I may be totally wrong and made it up :sweat_smile:

So to clarify, you have a DAC capable of sending a certain voltage range to an oscillator, and an ADC capable of reading the waveform returned by that oscillator; you can accurately determine the frequency of the waveform with the ADC?

The typical process here, I believe, is to assume a functional form that represents the voltage vs frequency response curve of the oscillator. Usually you assume it offers V/Oct but deviates in two ways: first, 0V may not represent your reference pitch (maybe C0, maybe something else: this isn’t standardised); this is an offset parameter. Second, it may not scale at precisely 1 Oct/V; this is a scaling parameter. Typically these two parameters are dependent in some way (when calibrating in hardware), so you calibrate the offset, then check the voltage at the edge of the tracking range and adjust the scaling, then repeat (iterating toward a state where 0V and your other scaling extreme give the expected pitches).

In your case I think the situation is simpler. If you know the overall tracking range of the oscillator (in V and Hz), I’d take 3 measurements: frequency at the bottom end of the range, frequency at the 0V reference, and frequency at the top end of the range. You have a chosen functional relation between voltage and frequency, and 0V = some pitch. My approach would be to match a quadratic (3 free parameters) to voltage vs pitch (in octaves). This quadratic will predict the pitch that any given voltage will return. Solving that quadratic (closed form) gives you a pitch-to-voltage mapping: this takes 3 readings per oscillator. I’d recommend taking an additional two in between to sanity-check your model, but this should work well enough, and be a little more accurate than the linear approach I mentioned you’d do in hardware (by hand).

You may choose the functional relationship as you wish, with more or fewer free parameters, and hence take more or fewer readings over the range then match errors. You might also want to tweak to ensure 0V exactly matches your reference pitch, but at a high level, this should take on the order of a few seconds per oscillator, I’d think.

EDIT: all of this calibration assumes sufficient DAC and ADC resolution. For the DAC: say you need a 10V range and want 1-cent accuracy; you need to discern 10 × 12 × 100 = 12,000 possible values, and log2(12,000) ≈ 13.6, so you’d need at least a 14-bit DAC to set pitch. Secondly, measuring pitch needs to be done accurately enough, so ensure you have the temporal resolution to do this.

Reading the code, it seems like a similar approach for sure. I just wish there were some comments here and there! Going to have to dig to find the frequency sampling block.

Do you think you could expand on this a bit? My math skills are quite poor, but I believe this is the approach I would ideally use since it only samples the signal 3 times. In my current implementation I sample the signal up to 20 times for every note across 6 octaves. So that’s like, thousands!

What are the “3 free parameters”?

Depending on whether your pitch reference is your lowest note or in the middle of the expected range, you would take either of two approaches, but in either case you’d take 3 measurements initially. You assume there is a function of some form that represents the relationship between the voltage you send and the pitch the oscillator plays. Assuming you are working with a V/Oct oscillator, I’d think of this function as f(voltage) -> note in fractional octaves with respect to a reference. Oscillators often have a reference point of 0V = C0 but sometimes 0V = C3 (the former expects positive pitch voltages, the latter positive and negative). Let’s pick A4 as our reference; the note has a frequency of 440 Hz, so if we measure a pitch P then ln(P/440)/ln(2) = pitch in octaves relative to that reference.

So if we know the oscillator tracks over some voltage range (or is expected to), we take N measurements, uniformly distributed in voltage across that range (let’s go with 3: min, max, (min+max)/2). We then assume a function exists mapping voltage to pitch, like av^2 + bv + c = pitch (with pitch as I defined earlier). You now have a function with 3 unknowns (a, b & c) and 3 known values that lie on the curve (your 3 readings). You can therefore solve the equations for your unknowns, giving you a function that models the oscillator’s frequency response. Note that you’ll want to be able to make an inverse function (solve the equation), so a quadratic seemed obvious as there exists a closed-form general solution.
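
If a concrete example helps, here’s a rough C sketch of exactly that: convert the three frequency readings to octaves with ln(P/440)/ln(2), then solve for a, b and c. The three voltages and frequencies below are made up for illustration:

```c
#include <math.h>
#include <stdio.h>

/* Measured frequency -> fractional octaves relative to A4 = 440 Hz. */
static double hz_to_octaves(double hz) { return log(hz / 440.0) / log(2.0); }

/* Fit pitch = a*v^2 + b*v + c exactly through three (voltage, pitch) readings. */
static void fit_quadratic(const double v[3], const double p[3],
                          double *a, double *b, double *c)
{
    double d0 = (v[0] - v[1]) * (v[0] - v[2]);
    double d1 = (v[1] - v[0]) * (v[1] - v[2]);
    double d2 = (v[2] - v[0]) * (v[2] - v[1]);

    *a =  p[0] / d0 + p[1] / d1 + p[2] / d2;
    *b = -(p[0] * (v[1] + v[2]) / d0 +
           p[1] * (v[0] + v[2]) / d1 +
           p[2] * (v[0] + v[1]) / d2);
    *c =  p[0] * v[1] * v[2] / d0 +
          p[1] * v[0] * v[2] / d1 +
          p[2] * v[0] * v[1] / d2;
}

int main(void)
{
    /* Min, mid and max of the tracking range, and invented readings at those points. */
    double v[3] = { 0.0, 3.0, 6.0 };
    double f[3] = { 262.1, 2088.0, 16650.0 };
    double p[3], a, b, c;

    for (int i = 0; i < 3; i++)
        p[i] = hz_to_octaves(f[i]);

    fit_quadratic(v, p, &a, &b, &c);
    printf("pitch(v) = %.6f*v^2 + %.6f*v + %.6f\n", a, b, c);
    return 0;
}
```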

You might want to pick a different function, to work directly in Hz, to take extra measurements in between to compare to your function, or to take extra measurements and use all of them to make a line of best fit. In any case, in the end you’ll have an equation which represents the response of the oscillator. You then make the inverse function, which maps from a musical pitch referenced to A4 (or however you chose to represent pitch in the function) to the voltage needed by the oscillator to play that pitch.
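
The inverse then drops out of the closed-form quadratic solution; a sketch (keeping whichever root lands inside the usable voltage range):

```c
#include <math.h>

/* Given the fitted model pitch = a*v^2 + b*v + c, return the voltage expected to
   produce `pitch` (in octaves relative to A4). Falls back to the linear solution
   if `a` is effectively zero. */
double pitch_to_voltage(double a, double b, double c, double pitch,
                        double v_min, double v_max)
{
    if (fabs(a) < 1e-12)
        return (pitch - c) / b;                  /* model is essentially linear   */

    double disc = b * b - 4.0 * a * (c - pitch);
    if (disc < 0.0)
        disc = 0.0;                              /* pitch beyond the model; clamp */

    double r1 = (-b + sqrt(disc)) / (2.0 * a);
    double r2 = (-b - sqrt(disc)) / (2.0 * a);

    /* Keep whichever root falls inside the usable voltage range. */
    return (r1 >= v_min && r1 <= v_max) ? r1 : r2;
}
```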

Your function could be exponential using Hz, quartic, or whatever functional form you choose. You may also try to find the inverse function directly (it just seemed more intuitive to me the way I discussed). In any case it’s just a matter of matching a curve to a set of sample points: a well-studied problem.

EDIT: note, the quadratic with 3 samples will precisely hit those 3 values. If there is a “must match” pitch reference (such as A4), you may choose to carry out this process, use it to predict the voltage at A4, then replace the nearest of the 3 reference samples with this 4th sample and redo the calculation. This will allow for an exact match at that point. Also, if you use error minimisation methods, you may want to weight points in the “most important” pitch range more highly to avoid sacrificing that range for notes at the extremes. Similarly, testing at the maximum extents may not prove as accurate across the range as going to the 15th and 85th percentiles or similar. This leaves you extrapolating at the far ends, but that’s okay.

I would like to try this approach. It will take me some time to wrap my head around the math but it will be worth it.

From what I gather, I need to predict the VCO’s 1 V/oct tracking function based on a few samples, and then apply that same function to my DAC values.
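
If I’ve got that right, the last step is just turning the predicted voltage back into a DAC code, something like this (assuming a 16-bit DAC spanning 0-10V, which may not match my real range):

```c
#include <math.h>
#include <stdint.h>

/* Assumed DAC span -- adjust for the real hardware. */
#define DAC_V_MIN 0.0
#define DAC_V_MAX 10.0

/* Map a voltage from the fitted model onto a 16-bit DAC code, clamped to range. */
uint16_t voltage_to_dac_code(double volts)
{
    double norm = (volts - DAC_V_MIN) / (DAC_V_MAX - DAC_V_MIN);
    if (norm < 0.0) norm = 0.0;
    if (norm > 1.0) norm = 1.0;
    return (uint16_t)lround(norm * 65535.0);
}
```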

You mention there will be some error - are you saying the formula won’t be accurate for all notes across all octaves? If so, how?

Error meaning: there exists an actual function that maps the voltage at the oscillator’s pitch input to its output waveform frequency (and likely this changes with environmental factors). The function is broadly linear in V/Octave space, so in theory a linear function (gradient and offset) is enough to predict what frequency a voltage would give (and hence which voltage is required to produce a given frequency). In reality devices deviate from this theoretical V/Octave tracking, meaning the “real” function representing the device’s response isn’t linear in that space but takes some other (more complex) shape. This means that, except at the places you sampled, which exactly match the function, the predicted frequency at a given voltage and the true frequency at that voltage will deviate. This deviation is error: a difference between the pitch predicted by your linear (or quadratic) model and the pitch the oscillator actually plays.

You have several ways of dealing with error. First of all, I picked a function with as many free parameters as I took samples (quadratics have 3 free parameters to set, so I took 3 samples); I could pick a higher-order polynomial, or a non-polynomial function (or a function mapping to Hz rather than octaves); whichever function has a shape best matching the true response would give less error, meaning the true response would deviate less from the predicted response.

Another option is to keep the same functional form (a quadratic) but take more samples. In this case, instead of having 3 values and 3 free parameters, we have 3 free parameters and more than 3 values. We therefore go through a process of error minimisation: finding the set of parameters that minimises the error the function exhibits across the sample set. This is called making a line of best fit and is a very common statistical method. As with the previous method, it assumes a functional form, but it uses more information (more data points) to figure out the appropriate values of your free parameters.
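
A sketch of that, fitting the same quadratic to any number of samples via the normal equations (solved here with Cramer’s rule; just to show the shape of it):

```c
#include <math.h>

/* Least-squares fit of pitch = a*v^2 + b*v + c to n >= 3 (voltage, pitch) samples.
   Returns 0 on success, -1 if the system is singular. */
int fit_quadratic_lsq(const double *v, const double *p, int n,
                      double *a, double *b, double *c)
{
    double s0 = n, s1 = 0, s2 = 0, s3 = 0, s4 = 0;   /* sums of powers of v     */
    double t0 = 0, t1 = 0, t2 = 0;                   /* sums of p, v*p, v^2*p   */

    for (int i = 0; i < n; i++) {
        double vi = v[i], vi2 = vi * vi;
        s1 += vi;  s2 += vi2;  s3 += vi2 * vi;  s4 += vi2 * vi2;
        t0 += p[i];  t1 += vi * p[i];  t2 += vi2 * p[i];
    }

    /* Normal equations:  | s4 s3 s2 | |a|   | t2 |
                          | s3 s2 s1 | |b| = | t1 |
                          | s2 s1 s0 | |c|   | t0 |  */
    double det =  s4 * (s2 * s0 - s1 * s1)
                - s3 * (s3 * s0 - s1 * s2)
                + s2 * (s3 * s1 - s2 * s2);
    if (fabs(det) < 1e-12)
        return -1;

    *a = ( t2 * (s2 * s0 - s1 * s1)
         - s3 * (t1 * s0 - t0 * s1)
         + s2 * (t1 * s1 - t0 * s2)) / det;
    *b = ( s4 * (t1 * s0 - t0 * s1)
         - t2 * (s3 * s0 - s1 * s2)
         + s2 * (s3 * t0 - s2 * t1)) / det;
    *c = ( s4 * (s2 * t0 - s1 * t1)
         - s3 * (s3 * t0 - s2 * t1)
         + t2 * (s3 * s1 - s2 * s2)) / det;

    return 0;
}
```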

Another method (closer to what you were doing, I presume) is to define the function “piecewise”. Rather than assuming a quadratic relationship across the range, you break the range into subranges. This could be done as a set of linear estimates. This has an interesting characteristic: assuming the function’s gradient is monotonically increasing or decreasing, you can sample both ends then the midpoint, and check the difference between your model and reality. If they are “close enough”, your linear approximation was good enough; if not, you split the range in two. This is an interval bisection approach and will allow you to put a bound on the error and keep dividing until your piecewise linear function matches the true function well enough.
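
And a sketch of the bisection idea, with an invented `measure_pitch_at` helper that sets the DAC and reads back the pitch in octaves:

```c
#include <math.h>

/* Placeholder: set the DAC to `volts`, measure the VCO, return pitch in octaves re A4. */
extern double measure_pitch_at(double volts);

/* Up to MAX_POINTS (voltage, pitch) breakpoints of a piecewise-linear model. */
#define MAX_POINTS 64
static double bp_v[MAX_POINTS];
static double bp_p[MAX_POINTS];
static int    bp_n = 0;

static void add_breakpoint(double v, double p)
{
    if (bp_n < MAX_POINTS) { bp_v[bp_n] = v; bp_p[bp_n] = p; bp_n++; }
}

/* Recursively bisect [v_lo, v_hi]: if the midpoint deviates from the straight line
   between the endpoints by more than `tol` octaves, split the range and recurse. */
static void bisect(double v_lo, double p_lo, double v_hi, double p_hi, double tol)
{
    double v_mid = 0.5 * (v_lo + v_hi);
    double p_mid = measure_pitch_at(v_mid);
    double p_lin = 0.5 * (p_lo + p_hi);           /* linear prediction at the midpoint */

    if (fabs(p_mid - p_lin) <= tol || (v_hi - v_lo) < 1e-3) {
        add_breakpoint(v_lo, p_lo);               /* segment is straight enough        */
    } else {
        bisect(v_lo, p_lo, v_mid, p_mid, tol);    /* otherwise split it in two         */
        bisect(v_mid, p_mid, v_hi, p_hi, tol);
    }
}

/* Build the table over the whole tracking range; tol of ~0.01 octaves is ~12 cents. */
void build_piecewise_model(double v_min, double v_max, double tol)
{
    bp_n = 0;
    double p_min = measure_pitch_at(v_min);
    double p_max = measure_pitch_at(v_max);
    bisect(v_min, p_min, v_max, p_max, tol);
    add_breakpoint(v_max, p_max);                 /* close the table with the top end  */
}
```

At play time you’d find the two breakpoints that bracket the pitch you want and linearly interpolate between them (in either direction, voltage to pitch or pitch to voltage).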

I like the quadratic approach (you could try cubic or quartic too, but if you need a quartic, a non-polynomial might be better) as it allows you to take a small set of samples and define a function. It would be easy enough to compare reality to the model, either at calibration time or as the oscillators play, to check how well your model is working. My suggestion would be to go the quadratic route and see how it works: it’s not too complicated, it should be quick, and it may suffice for your purposes. If it proves insufficient then iterate.

going to have to do some research on this!

There’s an app named VCO Tuner (cleverly enough) that does something similar to this. It uses MIDI, with the assumption that you have a decently accurate MIDI-to-CV converter of some sort, and listens to the output of the VCO via your audio input. It basically sends octaves or scales over a selected range (this is user-customizable), repeats them over and over, and displays the deviation from the desired pitch on the screen, so you can adjust the tracking on the fly. It has a few other features also. Very handy app. There are no details about who wrote it included; it was kind of a one-shot deal from a guy who posted on Muffwiggler a few years ago.

The MW thread - https://www.muffwiggler.com/forum/viewtopic.php?f=17&t=164286
Get it at github - https://github.com/TheSlowGrowth/VCOTuner/releases/tag/v0.2.4

You can also check out the code for the Mutable Instruments Anushri monosynth, which has this feature: https://github.com/pichenettes/anushri/tree/master/anu
