Accurately Multiplying an External Clock with an MCU

I am having a real hard time understanding how to do external clock multiplication on an MCU :man_facepalming:

For my project I have a 32 step sequencer type thing going on. Now, it would be fairly easy for me to write some interrupt code to advance the sequence every-time a trigger/pulse is detected on a digital input pin - but what I would actually like to do is generate a higher resolution internal clock from the external clock. A simple use case would be to convert an analog clock signal to a MIDI clock signal which operates at 24 pulses per quarter note.

The pseudocode for my current approach looks something like this:

#define PPQN        24
#define NUM_STEPS   32                  // 32 step sequencer
#define totalPPQN   (PPQN * NUM_STEPS)  // total ticks in one full sequence

uint32_t lastClockTimeStamp;
uint32_t newClockTimeStamp;
uint32_t clockPeriod;                   // period of the external clock, in us

void tickClock();                       // forward declaration

void extClock() {
  lastClockTimeStamp = newClockTimeStamp;
  newClockTimeStamp = timer.read_us();
  clockPeriod = newClockTimeStamp - lastClockTimeStamp;
  // fire tickClock PPQN times per external clock period;
  // note: pass the function pointer, don't call the function
  timer.attachInterrupt(clockPeriod / PPQN, tickClock);
}

int currTick = 0;
int currPosition = 0;
int currStep = 0;

void tickClock() {
  currTick += 1;
  currPosition += 1;

  // when currTick reaches PPQN, reset it and advance one step
  if (currTick >= PPQN) {
    currTick = 0;
    currStep += 1;
  }
  // when we reach the end of the sequence, wrap back to the start
  if (currPosition >= totalPPQN) {
    currPosition = 0;
    currStep = 0;
  }
}

I use an Interrupt Input pin to detect the external clock signal. For each rising edge on this pin, the extClock() function executes and calculates the “period” of the external clock (from last recorded clock to the current time of execution).

After the period is determined, I divide the period by my desired PPQN value and then set an internal interrupt routine to execute the tickClock() function.

For example, if the external clock has a period of, say, 0.96 s, then my internal clock would be triggered every 0.96 s / 24 = 0.04 s.

Now, this all makes sense to me however after hooking things up to a scope it turns out I have no idea what I am doing!

External clock signal = yellow
Internal clock output (post clock multiplication) = blue (I am colour blind :upside_down_face:)

You can see in this picture that my clock multiplication is too fast :man_shrugging:.

What am I doing wrong? Is there a different approach to clock multiplication? After looking at some other open source projects like OC, MI, the RCD, etc. I noticed there is a lot of bit masking going on, but it's a bit hard to decipher how those clocks work.

I’m not familiar with the specifics of what you are doing, but I can comment in general. Firstly, can you confirm that it’s your intention to do clock multiplication and not clock division? In particular, clock division is a much simpler process where you just count the rising edges and send a tick when counter mod N = 0 (where N is the division factor).
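For contrast, that counting scheme fits in a few lines of C. This is just a sketch of the idea; the names (`divideCount`, `onInputEdge`) are mine, not from any particular firmware:

```c
#define DIV_N 4  /* division factor: one output tick per DIV_N input ticks */

static unsigned divideCount = 0;

/* Called on every rising edge of the input clock.
   Returns 1 when a divided tick should be emitted, 0 otherwise. */
int onInputEdge(void) {
    int fire = (divideCount == 0);
    divideCount = (divideCount + 1) % DIV_N;
    return fire;
}
```

Edges 0, 4, 8, … produce an output tick; every other edge is swallowed.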

As for clock multiplication, using one clock to generate clock events for another clock on a per-tick basis is unreliable for several reasons. First of all, the timing of external clock ticks, interrupt handling and the arrival of ticks will have some jitter, meaning the time between any two ticks will vary slightly. Similarly, your use of a timer to trigger your tick generation will also be subject to interrupt and timer jitter. This results in an output clock that’s even less stable than the input clock. You could improve this a little by taking a running average of the input clock period but the interrupt timing will still cause jitter, and phase drift.

The more conventional approach is to create a mechanism to send a clock at a certain rate (using a mechanism that minimises jitter). What this really is depends on the platform, so I can’t comment specifically, but it may just be checking the time in some sort of main program loop. You keep track of when the next tick should happen based on when the current one happened, and send the tick when the timer is >= that time. This is driven by either a user-set clock rate, or an inferred clock rate from an external source. The clock rate from the external source can use the interrupt mechanism, but instead of using one interval, it uses a running average over some period of time or number of ticks. The windowing function trades off clock stability against responsiveness to changes in clock rate.
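One way to sketch that scheme: the edge interrupt only records measured periods into a small window, and the main loop polls a scheduler that derives the tick interval from the windowed average. The window size of 4 and all names here are arbitrary choices for illustration, and `nowUs` stands in for whatever microsecond timer your platform provides:

```c
#include <stdint.h>

#define PPQN   24
#define AVG_N  4                 /* number of input periods to average */

static uint32_t periods[AVG_N];  /* ring buffer of measured input periods */
static int      periodIdx  = 0;
static uint32_t nextTickUs = 0;  /* absolute time of the next output tick */

/* Called from the external-clock edge interrupt with the measured period. */
void recordPeriod(uint32_t periodUs) {
    periods[periodIdx] = periodUs;
    periodIdx = (periodIdx + 1) % AVG_N;
}

/* Averaged input period divided by PPQN gives the output tick interval. */
uint32_t tickIntervalUs(void) {
    uint64_t sum = 0;
    for (int i = 0; i < AVG_N; i++) sum += periods[i];
    return (uint32_t)(sum / AVG_N / PPQN);
}

/* Polled from the main loop; returns 1 when a tick should be sent now. */
int pollTick(uint32_t nowUs) {
    if (nowUs >= nextTickUs) {
        nextTickUs = nowUs + tickIntervalUs();
        return 1;
    }
    return 0;
}
```

Scheduling the next tick from `nowUs` (rather than from the previous `nextTickUs`) keeps the sketch simple but lets small polling delays accumulate; a real implementation would advance `nextTickUs` by the interval instead and add the phase correction described below.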

Such a scheme will allow any multiple or divider (integer or not), meaning the clocks will not necessarily have the same phase. You will likely want a way to keep the phase locked in the integer case, such as applying a correction factor to the predicted time of the next tick. That is, adjust the timing of the clock tick that should fall on the external clock tick by some percentage. This should be something of a no-op when the clocks are locked, but will matter when the external clock rate changes.
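That correction factor could be as small as a helper like this (names and the 25% gain are mine, purely illustrative): on each external edge, compare when you predicted the coinciding internal tick against when the edge actually arrived, and shift the next scheduled tick by a fraction of the error.

```c
#include <stdint.h>

/* Fraction of the measured phase error to remove per external edge.
   1/4 (25%) is an arbitrary choice; smaller fractions converge more
   gently, larger ones track faster but amplify edge jitter. */
#define CORR_NUM 1
#define CORR_DEN 4

/* predictedUs: absolute time we scheduled for the internal tick that
   should land on the external edge; actualUs: when the edge arrived.
   Returns the signed adjustment (us) to add to the next scheduled tick. */
int32_t phaseCorrectionUs(uint32_t predictedUs, uint32_t actualUs) {
    int32_t errorUs = (int32_t)(actualUs - predictedUs); /* + means we ran early */
    return errorUs * CORR_NUM / CORR_DEN;
}
```

When the clocks are locked the error is ~0 and this is the no-op described above; when the external rate changes, it pulls the internal phase back toward the edges over a few periods.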

Thanks for your reply :+1:.

So I do want to do clock multiplication. If you check out this video of the Temps Utile they seem to be doing some high multiplications on what appears to be a quarter note. I looked at the code but I couldn’t quite figure out what was going on.

You could improve this a little by taking a running average of the input clock period but the interrupt timing will still cause jitter, and phase drift.

So I think this is the best option thus far, but I would have to figure out a way to reset the clock to correct the inevitable “phase drift”. Maybe I could use some kind of threshold value to compare the input clock and the multiplied clock, to make the multiplied clock either “catch up” or “slow down” to the input clock :thinking:.

Taking an average is a good place to start for now though. I will come back later if I come up with anything.

You might try implementing a Phase Locked Loop and attempting to keep your internal software clock at exactly 1:1 for both frequency and phase. Then it’s relatively simple to multiply your internal clock while not drifting in phase. The potential for jitter still exists but drift should be eliminated.


Love threads like this! I haven’t quite wrapped my own head around it but I wonder if looking at the 4MS Shuffling Clock Multiplier source might help?

For integer multiplication my thought was to maintain a true phase offset between the next predicted external clock and what we predict will be the equivalent internally generated tick. In addition to the true phase offset we’d maintain a current phase offset (initially 0) which we apply to the generated ticks. We then gradually move the current offset toward the true offset (maybe a weight average, maybe at a fixed rate, maybe at a rate proportional to clock, or other). This will introduce a little jitter, but should make them converge to being in phase and return to phase after the incoming clock changes rate.
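As a sketch of that convergence step (my names, and the 1/8 weight is one arbitrary choice among the options listed above), the current offset could be nudged toward the true offset once per tick like this:

```c
#include <stdint.h>

/* Move currentOffset one step toward trueOffset with an exponential
   weighted average: current += (true - current) / 8. A smaller weight
   converges more slowly but introduces less per-tick jitter. */
int32_t stepOffset(int32_t currentOffsetUs, int32_t trueOffsetUs) {
    return currentOffsetUs + (trueOffsetUs - currentOffsetUs) / 8;
}
```

Called once per generated tick, the applied offset decays toward the true offset, so the two clocks drift back into phase after a rate change instead of jumping.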

You might try implementing a Phase Locked Loop

So I have seen this phrase come up a lot in my research. Just reading that article makes me think it’s what @chalkwalk is trying to suggest?

I don’t understand it enough yet.

That’s actually the first place I looked! But I am having a hard time understanding it without explanation.

It’s pretty simple if you think modularly. Imagine taking the difference between your signal and your clock. If your VCO signal is your clock there is no difference, perfect! But as your clock and your signal get more out of phase, the difference between them gets bigger. Take that difference as a CV and feed it to your VCO’s pitch input. Now your VCO will speed up or slow down until it locks into phase with your clock.

There’s some extra steps like filtering the CV to make sure your VCO doesn’t over- or undershoot, but that’s basically the concept!
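The "difference as CV" analogy can be put into code as a toy sketch (my names; the 1/4 gain stands in for the filtering step). This version only locks frequency by nudging our oscillator's period toward the reference's measured period; a real PLL also compares phase, as discussed below:

```c
#include <stdint.h>

static int32_t oscPeriodUs = 50000;  /* our "VCO": current output period */

/* Called once per reference clock tick with its measured period.
   error is the "difference CV"; dividing by 4 is the crude "filter"
   that stops the oscillator from overshooting the target. */
void pllUpdate(int32_t refPeriodUs) {
    int32_t error = refPeriodUs - oscPeriodUs;
    oscPeriodUs += error / 4;        /* speed up or slow down */
}

int32_t pllPeriodUs(void) { return oscPeriodUs; }
```

Each update removes a quarter of the remaining error, so the oscillator glides onto the reference rate over a handful of ticks rather than jumping (or oscillating around it).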


My concept of incremental phase offset is exactly analogous to speeding up or slowing down the current clock to match phase, just viewed from a different frame of reference. Thinking in terms of increasing or decreasing the tempo to get there may be an easier way to look at it. Extending the concept a little, it should also work for any rational multiple (not just integers > 0).

That particular PLL implementation is tricky in that it relies on a complex sinusoidal input in order to accurately detect phase. This is not available in most situations (e.g. non-sinusoidal/pulsed inputs, or audio, which has only the real component of the sinusoid). It turns out it is not trivial to detect phase from the real component of a signal alone, even if that signal is perfectly sinusoidal, and the problem gets worse the less sinusoidal the input is.

The PLL is indeed the technically correct answer to this problem, but the nuance lies in the detection algorithm and the correct discretization of the PI controller in the feedback loop. A proper treatment of the subject requires a decent background in DSP, and something like an orthogonal signal generator or other phase detection component in front of the VCO and feedback loop.

The main tricks have to do with

  1. how precisely you need to detect and track the input clock phase (latency, phase/rising edge sync, etc),
  2. how fast you want to adjust to changes in the clock (or how well you want to track a varying or random clock, and what you mean by “responsively”: by definition this is a predictive follower, which means that until two pulses have arrived it cannot know the clock rate of the last period; it simply predicts the next period from past data, so if the next period is significantly different there will be error until it resynchronizes somehow), and
  3. how much staying “in phase” matters to you during these periods of change.

Immunity to noise is generally inversely proportional to responsiveness and precision, though this largely depends on how you define (and exclude or detect) “noise”. Excluding signals well outside your range of interest is easy, but if you need to exactly track the precise rising edge of a signal, let’s say, you’ll fire on even extremely short bursts of (potentially erroneous) high level pulses. Lowpass filters to exclude HF introduce either latency (if linear phase) or frequency-dependent phase offsets, etc. etc.
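As one concrete example of trading responsiveness for noise immunity, a minimal digital glitch filter (my construction, with an arbitrary run length of 3 samples) only reports a rising edge once the input has held high for several consecutive samples:

```c
/* Report a rising edge only after the input has sampled high GLITCH_N
   times in a row. Costs GLITCH_N sample periods of latency, but short
   spurious bursts never fire the edge. */
#define GLITCH_N 3

static int highRun = 0;
static int armed   = 1;  /* ready to report the next rising edge */

/* Call at a fixed sample rate with the raw input level (0 or 1). */
int sampleInput(int level) {
    if (level) {
        if (++highRun >= GLITCH_N && armed) {
            armed = 0;
            return 1;    /* debounced rising edge */
        }
    } else {
        highRun = 0;
        armed   = 1;     /* re-arm once the input drops low again */
    }
    return 0;
}
```

This is exactly the latency-for-immunity trade mentioned above: raise `GLITCH_N` and noise bursts are rejected more aggressively, but every legitimate edge is reported correspondingly later.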

Long story short, the more these things matter, the more advanced and careful your implementation needs to be. The less they matter, the simpler (and in some cases more robust) it can be.
