Developing SuperCollider Plugins for Norns - Where to Start?

Can’t help you with this, sadly, but interested to see how you get on!

And also to see a Braids UGen!

    uint8_t* sync = new uint8_t[inNumSamples];
    int16_t* outint = new int16_t[inNumSamples];

first things first: never allocate in the audio callback

(wow, i see Mi4PD is doing this too… :grimacing:)

i guess this is superficially similar to MI code, but there the render blocks are actually static, declared up here:

(… so these lines, which look like they could be stack allocations of arrays, are actually simple pointer assignments.)
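to make the pattern concrete, here's a minimal sketch (the names and structure are mine, not the actual port): buffers are sized once, outside the render path, so the per-block call never allocates.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// hypothetical sketch: allocate once in the constructor, never in render().
struct OscState {
    std::vector<uint8_t> sync;
    std::vector<int16_t> outint;

    // size buffers up front for the largest block we expect
    explicit OscState(int maxBlockSize)
        : sync(maxBlockSize, 0), outint(maxBlockSize, 0) {}

    // the audio callback only touches preallocated memory
    void render(float* out, int n) {
        for (int i = 0; i < n; ++i) {
            outint[i] = static_cast<int16_t>(i);  // stand-in for the real DSP
            out[i] = outint[i] / 32768.f;         // fixed -> float at the end
        }
    }
};
```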

second thing, i don’t see where you’re actually using the frequency input in the audio-rate update function.

i only see it being used as a phase increment in the control-rate update function. it looks like you’re assuming both the update_a and update_k fn’s will be called on each block, but that’s not how it works… only one is called, switched on the rate of the ugen instance (here.)

(i don’t think i would bother with the control-rate option at all, for a braids clone… but at least i would focus on one or the other to start with.)
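roughly, the analogue of what SETCALC does, sketched in plain C++ (FakeUnit etc. are made-up names, not the SC plugin API): the constructor picks one calc function based on the instance's rate, and the server only ever calls that one.

```cpp
#include <cassert>

// plain-C++ analogue of SETCALC: pick ONE calc function at construction time.
struct FakeUnit;
using CalcFn = void (*)(FakeUnit*, int);

void next_a(FakeUnit* u, int n);  // audio-rate version
void next_k(FakeUnit* u, int n);  // control-rate version

struct FakeUnit {
    bool audioRate;
    CalcFn calc = nullptr;
    int lastRun = 0;  // 1 = audio, 2 = control (for demonstration)

    explicit FakeUnit(bool ar) : audioRate(ar) {
        // equivalent in spirit to:
        //   if (INRATE(0) == calc_FullRate) SETCALC(next_a); else SETCALC(next_k);
        calc = audioRate ? next_a : next_k;
    }
};

void next_a(FakeUnit* u, int) { u->lastRun = 1; }
void next_k(FakeUnit* u, int) { u->lastRun = 2; }
```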

but yeah, i agree that it would probably be a good idea to start with something simple like a sine or ramp, then drop in the more complex oscillator.

the TrigPhasor code posted near top of this thread might be helpful there.


Yeah that’s odd… usually what I do is allocate upfront and only reallocate if the block size changes.
(changing sr/bufsize is infrequent enough to allow for an audio glitch - and one would likely happen even if you didn’t reallocate mem ;))

I’ll fix it - and will check to see if i was on drugs when I wrote it :wink:


One thing I found incredibly useful when learning all this stuff was to visualise phase and oscillators. I made stuff like this in jupyter-lab

(this is a really simple example - I was actually running polybleps like this to see what they did and make sure they were at the points they needed to be at)


I appreciate the tips with all this. From what I can tell, the braids code keeps track of the phase internally:

I updated the code so that the buffer allocation doesn’t happen each time the next ar function is called, and right now, by calling set_pitch with that static c2 variable, I’m able to generate sound. I’m just not sure how to determine the frequency of the signal. I figure I need to do something with the IN(0) buffer, but I’m not sure if there’s something else I should be looking at.


So I’m looking at some of the included UGens, and noticed that this line here is calling ZIN0(0) to get a frequency (ZIN0 is for control rate), so I’m not really sure how to go from one frequency to the buffer of frequencies that you get by calling the audio rate IN(0). Can I naively pop the first value and assume that the frequency for the given block doesn’t change over time?

ZIN(), ZOUT() are not for kr; they are used to set up the LOOP macros. this used to be required but is now not particularly recommended; you see it mostly in older UGen code.

the frequency input can be different per sample. but yes, you can ignore this and just use the first in a block if you want. use IN(i) to get a pointer to the input block, or IN0(i) as a (sort of pointless) shortcut to get the first value. (equivalent to IN(i)[0].)
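to make that concrete, a minimal sketch of consuming an audio-rate frequency input per sample (a naive ramp oscillator, nothing to do with the braids internals):

```cpp
#include <cassert>
#include <cmath>

// illustrative sketch: a naive ramp that reads its frequency input per sample,
// the way an audio-rate IN(0) buffer is consumed. IN0(0) would just be freq[0].
struct Ramp {
    double phase = 0.0;
    double sampleRate;

    explicit Ramp(double sr) : sampleRate(sr) {}

    void next(const float* freq, float* out, int n) {
        for (int i = 0; i < n; ++i) {
            out[i] = static_cast<float>(phase);
            phase += freq[i] / sampleRate;   // per-sample phase increment
            if (phase >= 1.0) phase -= 1.0;  // wrap to [0, 1)
        }
    }
};
```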

oh, i do see you are calling set_pitch. sorry i missed that. it looks like this expects a MIDI note number in braids:

that’s a bit unfortunate. on braids hardware a LUT is used for efficiency. i would probably rewrite the braids osc internals to take frequency directly, and also to perform fixed->float conversion right at the point where the output buffer is filled.

but if you don’t want to do that, and you just want one frequency per block:

float hz = IN0(0);
float pitch = 440.f * pow(2.0, (hz - 69) / 12);

ach, woops sorry, i meant the opposite of course

midi = 69 + log2(hz/440) * 12;
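for reference, both directions in one place so they're harder to mix up (equal-tempered, A4 = 440 Hz):

```cpp
#include <cassert>
#include <cmath>

// midi note number -> frequency in Hz
float midiToHz(float midi) {
    return 440.f * std::pow(2.f, (midi - 69.f) / 12.f);
}

// frequency in Hz -> (fractional) midi note number
float hzToMidi(float hz) {
    return 69.f + 12.f * std::log2(hz / 440.f);
}
```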

not trying to drag out this detour, but this seems worth saying in a “beginner” thread: in my experience that’s not a safe assumption. in particular, on mobile platforms, there is aggressive load balancing of audio and other processes, and high variance in frame count between audio callbacks. your DSP algo can’t be allowed to glitch when the frame count changes; such a glitch could occur millions of times in a long audio session, especially on lower-end hardware.

iOS is especially notorious for this. so when making Audio Units, if your algo requires an internal buffer, you can specify a maximum frame count and the OS will chunk larger buffers into multiple consecutive calls. this allows you to use singly- or statically-allocated internal buffer with the max size. JUCE and other frameworks work in a similar way. (e.g. JUCE’s prepareToPlay takes expected frame count but warns you that it will vary by “small amounts.”)
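the chunking idea sketched out (kMaxFrames and the function names are illustrative, not any particular framework's API): the outer callback may hand you any frame count, but the inner DSP only ever sees at most kMaxFrames, so its internal buffers can be statically sized.

```cpp
#include <algorithm>
#include <cassert>

// the inner DSP routine is guaranteed n <= kMaxFrames
constexpr int kMaxFrames = 64;

void processChunk(float* out, int n, float value) {
    for (int i = 0; i < n; ++i) out[i] = value;
}

// outer entry point: split an arbitrary frame count into bounded chunks
void process(float* out, int totalFrames, float value) {
    int done = 0;
    while (done < totalFrames) {
        int n = std::min(kMaxFrames, totalFrames - done);
        processChunk(out + done, n, value);
        done += n;
    }
}
```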

in more managed environments (plugins for high-level languages) i guess you can often get away with assuming a fixed buffer size because SC/puredata/max will double-buffer. (but even in puredata i’m not so sure; it’s a portaudio app and portaudio allows paFramesPerBufferUnspecified flag. so you might find some maniac with a custom build running on android and obsessed with raw throughput.)

anyways, for anyone starting out with this stuff, it seems important to internalize the rule of “don’t do time-unbounded operations on the audio thread.” there are few upsides to ignoring this rule and plenty of downsides.


Unless I’m misunderstanding, it seems that the midi_pitch name might be a bit of a red herring, as the pitch is initialized as 60 << 7.

I tried the inverse of what you suggested to get the midi note from IN0(0) but I believe the frequency is actually not the value I think it is :\

float hz = IN0(0) * unit->mFreqMul;
int note = (int) log(hz/440.0)/log(2) * 12 + 69;
unit->osc.set_pitch(note << 7);

i think there are maybe too many open questions at once to debug this port in this “getting started” thread.

the braids code is designed to run in a 16-bit fixed-point numerical system.

midi_pitch is not a red herring; the braids oscillator takes linear pitch in the midi range. but it also expects this value in a fixed-point fractional format. it appears to be signed 8.7 (highest bit is sign, next 8 bits are the integer note, and bottom 7 bits are a fraction), which makes sense because it wants to represent the MIDI note range with as much fractional resolution as possible without doing sign casts.

so after converting frequency->midi, you want to convert a floating point value in [0.0, 127.0] into that fractional 8.7 int16 format: 60.0 should map to 0x1e00, 60.5 to 0x1e40. remainder of exercise left to reader. i meant this kind of conversion to be implicit, starting from a floating point number and producing a fractional int16. (the ellipses in my sample fragment.)
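one possible answer to the exercise (a sketch; the rounding choice is mine, and rounding is what keeps the fractional part that a plain integer `note << 7` would discard):

```cpp
#include <cassert>
#include <cstdint>

// floating-point MIDI pitch in [0.0, 127.0] -> signed 8.7 fixed point
// (7 fractional bits, so multiply by 2^7 = 128 and round)
int16_t midiToBraidsPitch(float midi) {
    return static_cast<int16_t>(midi * 128.f + 0.5f);
}
```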

like i said, i think a “proper” port of braids to a floating-point platform would involve replacing as much as possible of the fixed-point math. this is more effortful but the end result will be cleaner, more performant, and less error-prone. this is a lot to start with for your very first UGen.

sorry for the salt. (and you’re right, i posted the conversion in the wrong direction - i should not be in such a hurry - and i see that you did try a shift that should give you integer pitch but discards the fractional part.) so i’m not exactly sure what’s going wrong.

again, i’d first just try a ramp or sine to make sure you are getting the freq input correctly. then isolate the problem of converting to braids format.

if it was me, i would probably start a braids port by doing it in a non-realtime environment. as @junklight says, it’s good to be able to visualize the DSP state. (e.g. wrap braids in a command line tool, use something like cnpy or just stdout to get it into numpy or something, etc.)

No worries! I’m still super new to non-pd related DSP and none of that was in the context of writing any signal generator code.

Sorry for flooding the thread a bit. I realize trying to port something that generates signals in a fixed integer format is obviously going to lead to problems, and there’s a separate issue of the lookup tables having been generated at 96kHz while norns SC runs at 48kHz (which realistically is probably causing other problems…). Going to shelve the macro part of braids for now.

So my current non-braids specific question:

What is the input buffer? I can glean from the docs that calling IN(0) gets a single block of audio rate input, but if I’m making a simple UGen and pass “440” as its frequency, what’s the difference in the values of the input buffer vs passing in an actual signal? Part of the issue I’m having is conceptualizing this as a static “I pass in this frequency and it is unchanged”, and I’m having a hard time understanding how you get the frequency from the input buffer.


@zzsnzmn I don’t want to sound defeatist, but have you considered having a go at porting the Plaits code rather than Braids?

AFAIK, Plaits is written for floating-point (as the microcontroller used in the Plaits hardware has an FPU, unlike the Braids MCU).

While it doesn’t have all the Braids algorithms, it’s still got a pretty good selection, and I think it would be easier to deal with, in terms of porting.

TBH I had only started with braids because the Mi4PD code seemed accessible and I’m passingly familiar with PD. I’ve gone through some of the Puckette book, I’m still suuuuper green at all this. Starting from square one of “this is how buffering works” for ins and outs is probably advanced enough of an exercise. If any of you have good required reading for this I’d be happy to check it out.

I’ll see if I can put something together like that in the next couple days

(I should take my time and check my recollections)

agree it is confusing… And now I’m looking at recent additions to the example-plugins repo, which honestly don’t help to clarify much


I’m sort of answering some of the questions I had above.

What I was trying to figure out is the difference between a call like {}.play and {}.play.

I ended up making a simple plugin whose next function is implemented like so:

void MyUGen_next_a(MyUGen *unit, int inNumSamples)
{
    // get the pointer to the output buffer
    float *out = OUT(0);

    // get the pointer to the input buffer
    float *freq = IN(0);
    for (int i = 0; i < inNumSamples; i++) {
        out[i] = freq[i];
    }
}


Then in an sclang shell I called {}.play, which ends up printing (unsurprisingly):

UGen(MyUGen): 440
UGen(MyUGen): 440
UGen(MyUGen): 440

Passing in an audio-rate argument, the in buffer does end up being the actual input signal, meaning that the following two end up filling the out buffer with the same values:

{, 1)).poll}.play
UGen(MyUGen): 0.871178
UGen(MyUGen): 0.871158
UGen(MyUGen): 0.871139
UGen(MyUGen): 0.87112
{, 1).poll}.play
UGen(SinOsc): 0.871178
UGen(SinOsc): 0.871158
UGen(SinOsc): 0.871139
UGen(SinOsc): 0.87112

So! I guess if you set the frequency to a float value instead of a signal, it will remain static for all values in the ‘in’ buffer. I’m sure there are exceptions to this rule, but hey, it feels a little less magic right now.

Yes, the immediate answer is that you can always use the input buffers. I forget exactly what mechanism does this for literals; for variables and arguments, the value is wrapped in a Control ugen under the hood.

You can also observe / discover some things by calling .dumpUGens on a SynthDef.


I ported some Mutable code over to SuperCollider in the past, but haven’t made it public in the end, since there are some missing pieces. Porting Plaits was actually quite straightforward; see the attached UGen class. Hope this is a helpful starting point.


this is specifically for PD… does PD change the block size per render call on ‘mobile platforms’?

anyway, what I do is start with a buf size that would normally be the largest I think is likely to be the case.
If I find I need to extend it, then I extend it… but I don’t reduce it. (actually iirc, in this specific case, brds is chunking into blocks of max 24, so it’s a bit of a moot point)
my aim here is to not start creating very large buffers; on a slow pc/rPi they might set the buffer to 1024… and I really don’t wanna allocate that ‘just in case’

for sure this is not ideal (nor ‘best practice’), but I seem to remember when I looked at this, the issue was that PD only tells you the block size during the render call, not during setup. I guess because setup is only called when the object is created, so if the audio buffer is changed on the fly, it doesn’t get called again.

so if you only find out in the render call, you really don’t have a lot of choice other than to reallocate when you find out, in the (unlikely) event that it changes.
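something like this grow-only pattern (a sketch, not the actual brds~ code): the buffer is only ever extended at the top of the render call, so after the first block (or first size change) the steady state does no allocation at all.

```cpp
#include <cassert>
#include <vector>

// hedged sketch of a grow-only buffer: reallocation happens at most a
// handful of times over a whole session, never on the steady-state path.
struct GrowOnlyBuffer {
    std::vector<float> data;

    // called at the top of the render callback with the reported block size;
    // only reallocates when the request exceeds what we already have
    float* ensure(int n) {
        if (n > static_cast<int>(data.size()))
            data.resize(n, 0.f);  // rare: accept a possible glitch here
        return data.data();
    }
};
```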

but for sure, no arguments that it’s important not to do time-unbounded ops on the audio thread.

in the specific case of puredata, my understanding is that it does always double buffer to DEFDACBLKSIZE. (also true for the libpd objc wrapper, but upstream puredata AudioUnit wrappers are still stubs.)

yes, but given a patch can use block~ to change the blocksize, I don’t think that’s something you can rely on in an external… it needs to use the size of the buffer supplied in the render call.
anyway… this is all a bit off-topic… (the brds~ code is now ‘fixed’)

back on topic, porting to SC…
as @toneburst points out, it’s probably better to port Plaits rather than Braids now.

also you might want to consider a different direction,

on axoloti, rather than port Braids as a single module, Johannes created a number of modules representing the individual oscillators, so that they could be re-combined in different ways. ( * )
I didn’t do this, for the simple reason that for Orac I wanted a single module for the user to include.
but in a programming environment, I think the break-down approach is a little more interesting/flexible.

( * ) Emile writes such clean / great code that this was actually very simple

Yeah, again sorry for the derail. I picked braids for a few reasons, but didn’t realize how much of a pain the int -> float conversion would be. A few exercises that I figure would be helpful/illustrative of how to work with real-time audio buffers:

  1. risset tone generator
  2. clipping distortion (basically just upscale and then clamp)
  3. a simple delay
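for what it’s worth, exercise 2 is only a few lines (a sketch using std::clamp from C++17; the function name is mine):

```cpp
#include <algorithm>
#include <cassert>

// clipping distortion: apply gain, then clamp to [-1, 1].
// harder clipping falls out of a larger gain value.
void clipDistort(const float* in, float* out, int n, float gain) {
    for (int i = 0; i < n; ++i)
        out[i] = std::clamp(in[i] * gain, -1.f, 1.f);  // upscale, then clamp
}
```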

at this point i’m less interested in having a recreation of plaits/braids and more interested in getting a better handle on real-time DSP.