Developing Supercollider Plugins For Norns- Where to Start?

i think there are maybe too many open questions at once to debug this port in this "getting started" thread.

the braids code is designed to run in a 16-bit fixed-point numerical system.

midi_pitch is not a red herring: the braids oscillator takes linear pitch in the midi range, but it also expects this value in a fixed-point fractional format. it appears to be signed 8.7 (highest bit is the sign, next 8 bits are the integer note, and bottom 7 bits are a fraction), which makes sense because it wants to represent the MIDI note range with as much fractional resolution as possible without doing sign casts.

so after converting frequency->midi, you want to convert floating point in [0.0, 127.0] to fractional [0x0000, 0x7fff] format. 60.0 should map to 0x1e00, 60.5 to 0x1e40. remainder of the exercise is left to the reader. i meant this kind of conversion to be implicit, starting from a floating point number and producing a fractional int16. (the ellipses in my sample fragment.)
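to make the target format concrete, here is a sketch of that conversion in python (the helper name is mine; in a UGen it would be a couple of lines of C):

```python
# sketch of the float -> signed 8.7 fixed-point conversion described above
# (hypothetical helper name; clamped to the [0x0000, 0x7fff] range noted)
def midi_to_8_7(pitch):
    """Convert a float MIDI pitch to signed 8.7 fixed point (as an int)."""
    q = int(round(pitch * (1 << 7)))   # 7 fractional bits
    return max(0, min(q, 0x7fff))      # stay within the positive range

# spot checks from the text: 60.0 -> 0x1e00, 60.5 -> 0x1e40
```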

like i said, i think a “proper” port of braids to a floating-point platform would involve replacing as much as possible of the fixed-point math. this is more effortful but the end result will be cleaner, more performant, and less error-prone. this is a lot to start with for your very first UGen.

sorry for the salt. (and you're right, i posted the conversion in the wrong direction - i should not be in such a hurry - and i see that you did try a shift that should give you integer pitch but discards the fractional part.) so i'm not exactly sure what's going wrong.

again, i’d first just try a ramp or sine to make sure you are getting the freq input correctly. then isolate the problem of converting to braids format.

if it was me, i would probably start a braids port by doing it in a non-realtime environment. as @junklight says, it’s good to be able to visualize the DSP state. (e.g. wrap braids in a command line tool, use something like cnpy or just stdout to get it into numpy or something, etc.)

No worries! I’m still super new to non-pd related DSP and none of that was in the context of writing any signal generator code.

Sorry for flooding the thread a bit – I realize trying to port something that generates signals in a fixed integer format is obviously going to lead to problems, and there's a separate issue of the lookup tables having been generated at 96kHz while norns SC runs at 48kHz (which realistically is probably causing other problems…). Going to shelve the macro part of braids for now.

So my current non-braids specific question:

What is the input buffer? I can glean from the docs that calling IN(0) gets a single block of audio-rate input, but if I'm giving my simple UGen "440" as its frequency, what's the difference in the values of the input buffer vs. passing in an actual signal? Part of the issue I'm having is conceptualizing this as a static "i pass in this frequency and it is unchanged", and I'm having a hard time understanding how you get the frequency from the input buffer.


@zzsnzmn I don’t want to sound defeatist, but have you considered having a go at porting the Plaits code, rather than Braids?

AFAIK, Plaits is written for floating-point (as the microcontroller used in the Plaits hardware has an FPU, unlike the Braids MCU).

While it doesn’t have all the Braids algorithms, it’s still got a pretty good selection, and I think it would be easier to deal with, in terms of porting.

TBH I had only started with braids because the Mi4PD code seemed accessible and I’m passingly familiar with PD. I’ve gone through some of the Puckette book, I’m still suuuuper green at all this. Starting from square one of “this is how buffering works” for ins and outs is probably advanced enough of an exercise. If any of you have good required reading for this I’d be happy to check it out.

I’ll see if I can put something together like that in the next couple days

(I should take my time and check my recollections)

agree it is confusing… And now I’m looking at recent additions to the example-plugins repo, which honestly don’t help to clarify much


I’m sort of answering some of the questions I had above.

What I was trying to figure out is what’s the difference between a call like {}.play and {}.play.

I ended up making a simple plugin whose next call is implemented like so:

void MyUGen_next_a(MyUGen *unit, int inNumSamples)
{
    // get the pointer to the output buffer
    float *out = OUT(0);

    // get the pointer to the input buffer
    float *freq = IN(0);
    for (int i = 0; i < inNumSamples; i++) {
        out[i] = freq[i];
    }
}


Then in an sclang shell I called {}.play, which ends up printing (unsurprisingly):

UGen(MyUGen): 440
UGen(MyUGen): 440
UGen(MyUGen): 440

Passing in an audio-rate argument, the in buffer does end up actually being the input signal’s buffer, meaning that the following two end up filling the out buffer with the same values:

{, 1)).poll}.play
UGen(MyUGen): 0.871178
UGen(MyUGen): 0.871158
UGen(MyUGen): 0.871139
UGen(MyUGen): 0.87112
{, 1).poll}.play
UGen(SinOsc): 0.871178
UGen(SinOsc): 0.871158
UGen(SinOsc): 0.871139
UGen(SinOsc): 0.87112

So! I guess if you set the frequency to a float value instead of a signal it will remain static for all values in the ‘in’ buffer. I’m sure there are exceptions to this rule, but hey it feels a little less magic right now.

Yes, the immediate answer is that you can always use the input buffers. I forget exactly what mechanism does this for literals; for variables and arguments, the value is wrapped in a Control ugen under the hood.

You can also observe / discover some things by calling .dumpUGens on a SynthDef.


I ported some mutable code over to supercollider in the past, but haven’t made them public in the end, since there are some missing pieces. porting plaits was actually quite straightforward, see attached ugen class. hope this is a helpful starting point.


this is specifically for PD… does PD change the block size per render call on ‘mobile platforms’?

anyway, what I do is start with a buf size that would normally be the largest I think is likely to be the case.
if I find I need to extend it, then I extend it… but I don’t reduce it. (actually IIRC, in this specific case, brds is chunking it into blocks of max 24, so it’s a bit of a moot point.)
my aim here is to not start creating very large buffers; on a slow pc/rPi perhaps they set the buffer to 1024… and I really don’t wanna allocate that ‘just in case’.

for sure this is not ideal (nor ‘best practice’), but I seem to remember when I looked at this, the issue was PD only tells you the block size during the render call, not during setup - I guess because setup is only called when the object is created, so if the audio buffer is changed on the fly, setup doesn’t get called again.

so if you only find out on the render call, you really don’t have a lot of choice other than to reallocate when you find out - in the unlikely event that it will change.

but for sure, no arguments that it’s important not to do time-unbounded ops on the audio thread.

in the specific case of puredata, my understanding is that it does always double-buffer to DEFDACBLKSIZE. (also true for the libpd objc wrapper, but upstream puredata AudioUnit wrappers are still stubs.)

yes, but given a patch can use block~ to change the blocksize, I don’t think that’s something you can use in an external… it needs to use the size of the buffer supplied in the render call.
anyway… this is all a bit off-topic… (the brds~ code is now ‘fixed’)

back on topic, porting to SC…
as @toneburst points out, it’s probably better to port Plaits rather than Braids now.

also you might want to consider a different direction.

on axoloti, rather than porting ‘Braids’ as a single module, Johannes created a number of modules which represented the individual oscillators, so that they could be re-combined in different ways. ( * )
I didn’t do this, for the simple reason that for Orac I wanted a single module for the user to include.
but in a programming environment, I think the break-down approach is a little more interesting/flexible.

( * ) Emile writes such clean / great code this was actually very simple

Yeah, again sorry for the derail. I picked braids for a few reasons, but didn’t realize how much of a pain the int -> float conversion would be. I figure a few exercises would be helpful/illustrative of how to work with real-time audio buffers:

  1. risset tone generator
  2. clipping distortion (basically just upscale and then clamp)
  3. a simple delay

at this point i’m less interested in having a recreation of plaits/braids and more interested in getting a better handle on real-time DSP.

yes, well, i certainly did not ask for an interrogation of pure data’s block scheduler internals with my pretty basic and (i thought) non-controversial observation. and i’m no expert in PD anyway :slight_smile: thank you for informing me about block~. fwiw, i agree that always re-allocating upwards seems safest if you absolutely do need to allocate.

strategies for managing buffering layers for algos definitely seem on-topic for any plugin format. it’s a common requirement (any time you want to resample / interpolate / delay / &c) and the general solution is some kind of ring buffer. (i don’t see why the MI oscillators actually need their own output buffer, except that it’s less work than updating each FooRender() function to convert to float… ok right, moving on.)
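as a minimal sketch of that general solution (class and method names here are mine, not from any of the code discussed):

```python
# minimal ring buffer: fixed storage, a write head that wraps, and a
# "peek n samples back" read, which is the core of delays/resamplers
import numpy as np

class RingBuffer:
    def __init__(self, size):
        self.buf = np.zeros(size, dtype=np.float32)
        self.size = size
        self.pos = 0          # index of the next write

    def write(self, x):
        self.buf[self.pos] = x
        self.pos = (self.pos + 1) % self.size

    def peek(self, n):
        # read the sample written n steps ago (n >= 1)
        return self.buf[(self.pos - n) % self.size]

rb = RingBuffer(8)
for v in [1.0, 2.0, 3.0]:
    rb.write(v)
print(rb.peek(1))   # 3.0 (most recent)
print(rb.peek(3))   # 1.0
```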

like, here is an external i just made last week for fun. it is a chaotic oscillator that requires 2 layers of internal buffering: one for signal analysis (it works by weighted-averaging of a chaotic map history), and one for interpolation.


implementation-wise, i’ve found the above structure to be widely useful for a broad class of oscillators and colored-noise generators: specify an update rate (the frequency of the “highest harmonic”), a generator function that is called at the update rate, and a signal interpolation method.
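a bare sketch of that pattern (all names are illustrative, not taken from the linked external): a generator called at the update rate, with linear interpolation between successive generator outputs up to the sample rate.

```python
# update-rate generator + interpolation pattern, in miniature
import numpy as np

def render(gen, update_freq, sr, nframes):
    out = np.zeros(nframes, dtype=np.float32)
    inc = update_freq / sr        # generator steps per audio sample
    phase = 1.0                   # force a generator call on frame 0
    y0, y1 = 0.0, 0.0             # last two generator outputs
    for i in range(nframes):
        phase += inc
        while phase >= 1.0:       # time for a new generator value
            phase -= 1.0
            y0, y1 = y1, gen()
        out[i] = y0 + (y1 - y0) * phase   # linear interpolation
    return out

# usage: white noise updated at 1 kHz, interpolated at 48 kHz
rng = np.random.default_rng(0)
sig = render(lambda: rng.uniform(-1, 1), 1000.0, 48000.0, 256)
```

swapping the interpolation (hermite, etc.) or the generator (a chaotic map, a colored-noise step) changes the voice without touching the outer loop.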

i have a bunch more similar weird oscillators using the same pattern; when i get a minute, i will wrap them in SuperCollider UGens as well. that would be a good opportunity to work on a good UGen template.

this discussion has made me realize that i was being a bit too glib in simply recommending a look at example-plugins and the official guides. there seems to be room for a clean UGen template that uses only the stuff relevant for a direct implementation of an audio algorithm.

given infinite time it sure would be nice to have a generic C++ wrapper system that targeted the plugin formats of pd/max/sc. (Faust is nice, but it’s not always the most expedient way to express something.)

ok, these are fun questions:

going backwards,

  1. clipping distortion is the simplest here, at base it is as you say:
y = min(1, max(-1, x * gain)) 

but of course hard clipping introduces infinite bandwidth expansion, and in general one uses some “soft clipping” shape to constrain the generation of harmonics. if the clipping function is a polynomial then the poly order constrains the order of harmonics, which is nice.
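one standard polynomial soft clip (a sketch of the general idea, not from any code discussed here) is the cubic y = 1.5x - 0.5x^3, with the input clamped so the polynomial is only evaluated on [-1, 1]; being 3rd order, it generates harmonics only up to the 3rd:

```python
# cubic soft clip: smooth at the clamp points (slope is 0 at |x| = 1)
import numpy as np

def softclip_cubic(x, gain=1.0):
    x = np.clip(x * gain, -1.0, 1.0)   # keep inside the polynomial's range
    return 1.5 * x - 0.5 * x ** 3
```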

whatever function you use, static waveshaping is nice and simple because it requires no signal history at all.

here’s a kind of “bestiary” of waveshaping functions i’ve (mostly) collected or (occasionally) made up over the years, which includes old standbys like tanh. they are expressed as python code, sometimes with multiple parameters and sometimes just gain. (in a plugin, some would be much too expensive to compute directly and i would use a LUT.)

some of these produce folding at higher parameter values.

import numpy as np

def shaper_tsq(x, t):
    # two-stage quadratic
    # t is softclip threshold in (0, 1) exclusive
    g = 2.0
    ax = abs(x)
    sx = np.sign(x)
    t_1 = t - 1.
    a = g / (2. * t_1)
    b = g * t - (a * t_1 * t_1)
    if ax > t:
        q = ax - 1.
        y = a * q * q + b
        return y * sx / 2
    return x * g / 2

def shaper_bram(x, a):
    ax = np.abs(x)
    return x * (ax + a) / (x * x + (a - 1) * ax + 1)

def shaper_bram2(x, a):
    ax = np.abs(x)
    sx = np.sign(x)

    if ax < a:
        return x
    if ax > 1:
        return sx * (a + 1) / 2

    return sx * (a + (ax - a) / (1 + ((ax - a) / (1 - a)) ** 2))

def shaper_cubic(x, a):
    g = 2 ** a
    x = x * g
    y = ((3 * x / 2) - ((x * x * x) / 2))
    return y / g

def shaper_expo(x, a):
    sx = np.sign(x)
    return sx * (1 - (np.abs(x - sx) ** a))

def shaper_sine(x, a):
    # param = logarithmic pre-gain
    x = x * (2 ** a)
    return np.sin(np.pi * x / 2)

def shaper_reciprocal(x, a):
    # param = pregain
    x = x * (2 ** a)
    return np.sign(x) * (1 - (1 / (np.abs(x) + 1)))

def shaper_tanh(x, a):
    # param = pregain
    x = x * (2 ** a)
    return np.tanh(x)

def shaper_ulaw(x, a):
    ax = abs(x)
    sx = np.sign(x)
    return sx * np.log(1 + a * ax) / np.log(1 + a)

def shaper_alaw(x, a):
    ax = abs(x)
    sx = np.sign(x)
    denom = 1 + np.log(a)
    if ax < (1 / a):
        return sx * a * ax / denom
    return sx * (1 + np.log(a * ax)) / denom

# including to show useful ranges
test_shaper(shaper_bram, [1, 2, 3, 5, 7, 8, 9, 10])
test_shaper(shaper_bram2, [0.999, 0.8, 0.7, 0.5, 0.3, 0.15, 0.05, 0.001])
test_shaper(shaper_tsq, [0.9, 0.8, 0.7, 0.5, 0.3, 0.2, 0.1, 0.001], 1)
test_shaper(shaper_cubic, [-1, -0.5, 0, 0.25, 0.5], 1)
test_shaper(shaper_expo, [1, 2, 3, 4, 5, 6], 1)
test_shaper(shaper_sine, [-1, -0.5, -0.25, 0, 0.25, 0.5], 1)
test_shaper(shaper_reciprocal, [1, 2, 3, 4, 6, 8, 9, 10, 11, 12], 1)
test_shaper(shaper_tanh, [0, 0.5, 1, 2, 3, 4], 1)
test_shaper(shaper_alaw, np.exp(np.linspace(0, np.log(100), 10)))
test_shaper(shaper_ulaw, np.exp(np.linspace(0, np.log(300), 10)))

and here are their transfer functions and spectra. (spectra display is a little wonky, oh well.)

(plots: transfer functions and spectra for shaper_alaw, shaper_bram, shaper_bram2, shaper_cubic, shaper_expo, shaper_reciprocal, shaper_sine, shaper_tanh, shaper_tsq, shaper_ulaw)

ooh, and here’s a wonderful trick from robert bristow-johnson: a family of polynomials approximating the integral of (1 - x^2)^N by binomial expansion… this gives you a really nice progression of odd-order distortion.
(matlab code for the moment i’m afraid)
rbjpoly.m (446 Bytes)
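the same trick restated as a python sketch (the attached rbjpoly.m is the original; this just expands (1 - x^2)^N by the binomial theorem, integrates term by term, and normalizes so f(1) = 1):

```python
# integral of (1 - x^2)^N = sum_k C(N,k) (-1)^k x^(2k+1) / (2k+1)
from math import comb

def rbj_shaper(x, N):
    def F(x):
        return sum(comb(N, k) * (-1) ** k * x ** (2 * k + 1) / (2 * k + 1)
                   for k in range(N + 1))
    return F(x) / F(1.0)

# N = 1 recovers the familiar cubic: f(x) = (3x - x^3) / 2;
# larger N gives flatter tops and higher (still odd-only) harmonics
```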

  2. a simple delay is also, well, simple. if the delay time is a multiple of 1 sample, it is just a peek backwards into a ringbuffer (see the peek method in worb.h linked above for an example). for fractional delays, interpolate between neighboring samples with whatever interpolation is appropriate (usually linear or hermite spline, occasionally allpass if you want specific phase-distortion effects, like in reverbs and phasers).
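a quick python sketch of the fractional case with linear interpolation (all names here are illustrative; a real plugin would keep the buffer and write head as persistent state):

```python
# fractional delay: peek i_part and i_part+1 samples back, crossfade by frac
import numpy as np

def delay(sig, delay_samples, size=1 << 16):
    buf = np.zeros(size, dtype=np.float64)
    out = np.zeros_like(sig)
    w = 0
    i_part = int(delay_samples)          # integer part of the delay
    frac = delay_samples - i_part        # fractional part
    for n, x in enumerate(sig):
        buf[w] = x
        a = buf[(w - i_part) % size]         # newer neighbor
        b = buf[(w - i_part - 1) % size]     # older neighbor
        out[n] = a + (b - a) * frac          # linear interpolation
        w = (w + 1) % size
    return out

# usage: an impulse delayed 2.5 samples lands half on index 2, half on 3
y = delay(np.array([1.0, 0, 0, 0, 0, 0]), 2.5)
```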

  3. the risset glissando is the most complex here. it is basically simple (some sine waves and some rate / amplitude functions) but tuning them can take some time. (i’ll see if i can dig up some old notes on those…)
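in the meantime, a bare-bones sketch of the idea, with made-up parameter choices: octave-spaced partials whose log-frequency rises and wraps, with a raised-cosine amplitude window over log-frequency so each partial fades in at the bottom and out at the top (the window being near zero at the wrap point masks the frequency jump).

```python
# shepard/risset glissando sketch (my own formulation, not from the
# code discussed in this thread; all parameters are illustrative)
import numpy as np

def risset(sr=48000, dur=2.0, n_partials=6, f_lo=40.0, cycle_s=4.0):
    n = int(sr * dur)
    t = np.arange(n) / sr
    octaves = float(n_partials)          # span of the frequency window
    out = np.zeros(n)
    for p in range(n_partials):
        # this partial's position in the window, in octaves, wrapping
        pos = (p + t / cycle_s * octaves) % octaves
        freq = f_lo * 2.0 ** pos
        amp = 0.5 * (1 - np.cos(2 * np.pi * pos / octaves))  # raised cosine
        phase = 2 * np.pi * np.cumsum(freq) / sr             # integrate freq
        out += amp * np.sin(phase)
    return out / n_partials

sig = risset()
```

integrating frequency to phase (the cumsum) keeps each partial phase-continuous even as its frequency sweeps; the only discontinuity is at the wrap, where the amplitude window is already near zero.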


FWIW, I’ve ported a few of the mutable modules to SuperCollider (among those is Plaits).
Sources are here:

UGens compiled for macOS (scroll down all the way):

Sound demos:


That sounds good!

I’ve not really attempted making a plugin myself, having been sucked in to other stuff, since starting this discussion, but when I do get around to it, some kind of basic “insert your audio-rate algorithm here” template for custom SuperCollider UGens would be super useful, I think.


Wow, cool!!

How feasible would it be to build these for Norns’ Arm processor?

hm, I guess it shouldn’t be too hard, but I haven’t tried it - so I have no idea, really.

If I have some free time later I will share the steps needed to get stuff building on Norns; major life event stuff happening today, so more than likely I will get around to it tomorrow. It’s pretty straightforward if you’ve used make before.


I compiled some other ugens a long while back. I might give these a go later today if I have time.