a thread for certain people. anything to do with math/numbers/etc.

i just found out that this series makes for great-feeling acceleration for arc encoders:

and then here’s an introduction to the mystical component:


So is that what the arc is doing ‘internally’ to manage the delta step size or is this something ‘app side’?

on the app side. acceleration is not always desired, so it’d be bad design for it to be forced.

I think the triangular numbers are stalking me. They keep showing up in strange places.

at first glance looks like a numerical representation of a Koch curve…

I have been fascinated for years by the music of Change Ringing, which is built on the mathematics of permutations. A complete extent consists of playing all possible permutations of *N* bells before repeating the initial permutation (usually a descending scale). There are many possible orderings of such sequences of permutations, and different Change Ringing methods follow different algorithms for ordering them.

As typically played, a complete extent of *N* bells has:

- N! permutations, each N notes long
- N·N! notes, but as there is usually a rest every 2 permutations, it takes (N + 1/2)·N! beats to play
- (N!)! possible extents (permutations of the permutations), but given the mechanics of tower bell ringing (not carillon; think giant bells on wheels, one person per rope to swing them), only several dozen methods of playing through are popular.
- For longer sequences, the ringers each perform the algorithm in their heads in parallel.
- For modest *N*, things get big and long quickly: just 6 bells, played at 120 bells per minute (bpm!), will take ~40 min. to play the full extent.

I have used Change Ringing as the melodic material in a number of works. Given electronic means, one can play them very fast, in different registers, and (most importantly) overlapped. Compositionally, I’ve found it to be a rather rich formalism in which to constrain myself.

Here’s a performance of my *PlainChanges 2*, on robot bass and Tesla coil:

(see PlainChanges 2 on github for the code and other links)


Here’s a quick 'n dirty way to make sine oscillators (also equal power fades) on blackfin without using a lookup table. For positive half of the cycle:

cos(pi * x / 2) ~= 1 - x^2 (for x in [-1, 1])

The blackfin has this neato 32 bit fractional multiply instruction which makes this hack somewhat appealing on that platform - on other fixed point processors without floating-point/fractional support maybe it’s not quite so tasty. In fact I'm not entirely sure whether it *really* works out any more efficient on aleph than doing the whole lookup table song & dance (especially once fleshed out to include the negative half of the cycle), but it’s used lots in the code for grains anyway…


Was thinking hard about fract math on blackfin yesterday afternoon/evening & this morning, and kind of had an epiphany about how to make sense of the different fractional multiplication primitives with different fract bases on input & output. Promise I will condense these aleph-centric ‘how-to’ posts into a series of blog entries or a pdf at some point. Bit heavy for a forum & not very ‘discussional’, but some aleph users at least professed an interest in aleph DSP. Apologies, re-reading this post it’s not very ‘playful’. In summary, I condense a general method for multiplication in any fractional base on the bfin.

I banged my head against this *a lot*! After muchos random hacking, it still wouldn’t stick to my brain, so I would like to describe the simple(st) way of thinking about it! IMO pretty much essential to get one’s head round this in order to author new DSP blocks. @test2, @zebra I wonder if you guys agree with this analysis!

Anyway to explain the problem & define terminology:

when you call mult_fr1x32x32 in the bfin codebase on 2 signed 32 bit numbers (aka fract32), the fixed-point integer FR32_MIN (aka 0x80000000) represents -1.00000000 and FR32_MAX (aka 0x7FFFFFFF) represents 0.9999999995343387. The C type is called fract32, but let’s pick a more instructive notation. Since it’s a signed type, the sign already gobbles up one bit of accuracy. Therefore the ‘raw’ fract32 type (as interpreted by the mult primitives) has 0 bits before the decimal place & 31 bits after the decimal place. So we’ll call it sFract0.31, or simply fract0.31, because there are no unsigned fract math instructions on bfin, so we’d never have much use for uFract0.32.

All well and good to use fract0.31 for applying a fader/volume coefficient from 0 -> 0.9999999995343387. But what if you want the range of fract32 to represent, for example, -2.0 -> 1.9999999990686774, enabling a 6dB volume boost (this should be called fract1.30, because 1 bit before the decimal place, 30 bits after). Well the easy way to do this is:

fract32 a; // an audio sample from -1.0 -> 1.0 in fract0.31
fract32 b; // a fader value from 0 -> 2.0 in fract1.30
fract32 c; // the result: an audio sample from -1.0 -> 1.0 in fract0.31

c = shl_fr1x32(mult_fr1x32x32(a, b), 1);

and, if we wanted a maximum of 24dB gain (which means multiply by 16, or shift-left 4), then

fract32 a; // an audio sample from -1.0 -> 1.0 in fract0.31
fract32 b; // a fader value from 0 -> 16.0 in fract4.27
fract32 c; // the result: an audio sample from -1.0 -> 1.0 in fract0.31

c = shl_fr1x32(mult_fr1x32x32(a, b), 4);

So let’s introduce the notion of a ‘fractional type’, taking inspiration from the fract math language feature in the Haskell-ish HDL ClaSH. The three cases discussed so far have type signatures:

fract0.31 * fract0.31 -> fract0.31 (requires no shifting because the fract primitives are in the same base)

fract1.30 * fract0.31 -> fract0.31 (requires shift-left 1 because there’s 1 bit left of the decimal point on the inputs)

fract4.27 * fract0.31 -> fract0.31 (requires shift-left 4 because there are 4 bits left of the decimal point on the inputs)

There’s a pattern here: the shift-left equals the total number of bits left of the decimal point on the inputs, minus the bits left of the decimal point on the output. Convince yourself the following is correct:

fract4.27 * fract6.25 -> fract2.29 (requires shift-left 8)

fract4.27 * fract6.25 -> fract12.19 (requires shift-left -2, i.e. shift-right 2)

The same rule applies for the multiplication primitive mult_fr1x32, which multiplies two fract16 values to give a fract32. It has the fundamental type-signature:

fract0.15 * fract0.15 -> fract0.31 (without shift)

so, for example:

fract0.15 * fract5.10 -> fract0.31 (requires shift-left 5)

So for all these cases (apart from the one example with a *negative* shift-left) there is a quantisation error of as many bits as the post-multiply shift. In order to do mult_fr1x32x32 with the maximum possible precision:

fract32 mult_fr1x32x32_autoscaling (fract32 a, fract32 b, short radix_imbalance) {

  short a_radix = norm_fr1x32(a); // number of left-shifts to ‘normalise’ a to fill all available bits

  short b_radix = norm_fr1x32(b); // ditto for b

  return shl_fr1x32(mult_fr1x32x32(shl_fr1x32(a, a_radix), shl_fr1x32(b, b_radix)),
                    radix_imbalance - a_radix - b_radix);

}

fract32 mult_fr1x32_autoscaling (fract16 a, fract16 b, short radix_imbalance) {

  short a_radix = norm_fr1x16(a); // number of left-shifts to ‘normalise’ a to fill all available bits

  short b_radix = norm_fr1x16(b); // ditto for b

  return shl_fr1x32(mult_fr1x32(shl_fr1x16(a, a_radix), shl_fr1x16(b, b_radix)),
                    radix_imbalance - a_radix - b_radix);

}


it’s `norm_fr1x32` and `norm_fr1x16` that really bring the magic on blackfin (each is a thin wrapper for the `SIGNBITS` instruction.) thanks for reminding me to use them (here as well as on github, to which i should respond shortly)

parabolic approximation of sine is handy for sure!

this old comp.dsp post has some good “extended techniques” - like simultaneously producing a quadrature pair, minimizing wobble, and optimizing for a continuous derivative.


Awesome will try and get those tricks in our fract toolbox when I get a minute…

Another unsolved problem (at least to me) is what system/rule you use for handling fract division with the correct radix. Yea I know runtime division is highly inefficient on DSP, but I'm using it quite a bit. See here:

https://github.com/rick-monster/aleph/blob/dev/dsp/osc_polyblep.c#L8-L12

That’s obviously not very optimal! Especially as I was still using the ‘16.16’ (aka fract15.16 in my above notation). Should it just use lookup tables somehow? Much like OO & the black heart of bees net, lookup tables irrationally freak me out a bit…

What is the fool-proof recipe for handling integer divides of fract quantities in different radices?

Umm yea let me try and google it now that these questions are reasonably well-formed in my brain!

basically every time I try and port float code with a divide to the bfin I end up getting hopelessly mangled on radix/quantisation errors for an hour while I remember how everything goes…


https://youtu.be/ShmAg3B7-C4

This aleph patch is based on the 11-limit tonality diamond. It’s a harmonic concept based on ‘reducing’ odd harmonics into the octave & ‘increasing’ odd subharmonics into the octave, plus every ratio of odd harmonic to odd subharmonic.

Plenty of neat scales & runs can be discovered nestling among these ancient ratios on patient exploration of the tonality diamond…
