Developing SuperCollider Plugins for Norns: Where to Start?

Hi All,

I’ve just started playing with SuperCollider, and very quickly got to the point where what I want to do requires a custom plugin.

I’m wondering if anyone has attempted writing an SC plugin for use in Norns, and has any tips on places to start looking for documentation and basics like how to compile for ARM Linux on RPi.

I’m planning to develop primarily on macOS.

1 Like

here’s an example from the norns repo

which is also built by the norns waf script

the general guide is pretty good
https://doc.sccode.org/Guides/WritingUGens.html

and there are heavily commented official examples here:

1 Like

How do you compile it? Do you do it on the norns, or use a cross-compiler?

on the norns is easiest for me.

we had more custom SC stuff before norns 2.0; left that one in there exactly for this reason (as a template for custom UGens, if people are interested)

Is there a recommended cross-compilation solution for norns development? I’ve been using the one generated by buildroot, but it has the disadvantage of only working properly with the generated buildroot image.

1 Like

I meant to ask, actually: I was wondering if the code for your old softcut UGen is still around? For Islands I need to have the looper inside of SuperCollider so I can have the sequencer playing independently of the looper. I was just going to make a looper in straight SuperCollider, but if the old softcut UGen is still kicking around, that might be a better solution?

eh… i don’t think it’s worth resurrecting. it would be better to make a new wrapper of the current softcut library. { https://github.com/monome/softcut-lib }

i’d say just go for it in supercollider. if you don’t need the varispeed write+preserve, it can be simple and efficient.

that said… this issue has been hanging around for 3 years so… maybe it would be nice to get a little proactive with just making an xfaded playback unit:
{ https://github.com/supercollider/supercollider/issues/2600 }

fair enough - for the current version I am only using the same speed, so I’ll just go with the SuperCollider one I’ve got working on the desktop…

Thanks guys, that’s helpful. Are the third-party sc3-plugins from the official SuperCollider repo available in Norns?

Yeah. All the stuff in the SuperCollider documentation that says “+classes” is installed

1 Like

Great information in this thread, ty~ I found it’s pretty straightforward to get it set up.

The main snag I ran into: once you’ve built an .so file from the example repo, you need to place it and the SC class file at ~/.local/share/SuperCollider/Extensions/MyThing/classes/MyThing.sc and ~/.local/share/SuperCollider/Extensions/MyThing/plugins/MyThing.so.

1 Like

I would suggest using Faust (except for granular, FFT, or demand-rate stuff). Faust will compile to an SC UGen as well as a Pd object, VST/LV2/AU plugin, etc.

1 Like

I’m trying to port the braids code to a UGen and I’m having trouble figuring out if I’m filling the ‘out’ buffer correctly. I’m using the Mi4Pd repo as my starting point (ty TechnoBear!).

I figured I’d try and start by passing in a static frequency, but I’m getting weird audio glitches when I run it now. Here’s my “get next blocks” function: https://github.com/zzsnzmn/zzgen/blob/master/ZZGen/ZZGen.cpp#L70-L109

I’m thinking I might start with re-creating a basic sine wave, because I’m clearly not handling phase/frequency right.

1 Like

Can’t help you with this, sadly, but interested to see how you get on!

And also to see a Braids UGen!

    uint8_t* sync = new uint8_t[inNumSamples];
    int16_t* outint = new int16_t[inNumSamples];

first things first: never allocate in the audio callback

(wow, i see Mi4PD is doing this too… :grimacing:)

i guess this is superficially similar to MI code, but there the render blocks are actually static, declared up here:

(… so these lines, which look like they could be stack allocations of arrays, are actually simple pointer assignments.)


second thing, i don’t see where you’re actually using the frequency input in the audio-rate update function.

i only see it being used as a phase increment in the control-rate update function. it looks like you’re assuming both the update_a and update_k fn’s will be called on each block, but that’s not how it works… only one is called, switched on the rate of the ugen instance (here.)

(i don’t think i would bother with the control-rate option at all, for a braids clone… but at least i would focus on one or the other to start with.)


but yeah, i agree that it would probably be a good idea to start with something simple like a sine or ramp, then drop in the more complex oscillator.
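to make the “start with a ramp” suggestion concrete, here’s a minimal sketch of a naive phasor in plain C++, deliberately outside the SC plugin API so it can be tested on its own. `RampOsc` and its members are hypothetical names, not anything from the norns or SC codebases:

```cpp
#include <cstddef>

// Hypothetical minimal ramp (phasor) oscillator. Phase lives in [0, 1)
// and advances by freq / sampleRate each sample; the output is the raw
// phase, so you can see wrapping behavior directly.
struct RampOsc {
    double phase = 0.0;
    double sampleRate = 48000.0;

    void process(float freq, float* out, std::size_t n) {
        const double inc = freq / sampleRate;
        for (std::size_t i = 0; i < n; ++i) {
            out[i] = static_cast<float>(phase);  // ramp in [0, 1)
            phase += inc;
            if (phase >= 1.0) phase -= 1.0;      // wrap, keep fractional part
        }
    }
};
```

once this produces a clean ramp at the expected frequency, swapping the body of the loop for the Braids render call is a much smaller step.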

the TrigPhasor code posted near top of this thread might be helpful there.


1 Like

Yeah, that’s odd… usually what I do is allocate up front and only reallocate if the block size changes.
(Changing sr/bufsize is infrequent enough to allow for an audio glitch, and one would likely happen even if you didn’t reallocate memory ;))
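a minimal sketch of that allocate-up-front pattern, using the two buffers from the snippet quoted earlier in the thread (the `ScratchBuffers` wrapper and its method names are hypothetical):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical scratch-buffer holder: allocate once for an expected
// maximum block size, and only reallocate if the host ever hands us a
// bigger block (accepting the one-time glitch the post above mentions).
struct ScratchBuffers {
    std::vector<std::uint8_t> sync;
    std::vector<std::int16_t> outint;

    // Call from the constructor / prepare stage, not per block.
    void prepare(std::size_t maxBlock) {
        sync.resize(maxBlock);
        outint.resize(maxBlock);
    }

    // Per-block: grows only if the incoming block is larger than expected.
    void ensure(std::size_t n) {
        if (n > sync.size()) prepare(n);
    }
};
```

the point is that in the steady state `ensure()` does no allocation at all, so the audio callback stays allocation-free.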

I’ll fix it - and will check to see if i was on drugs when I wrote it :wink:

1 Like

One thing I found incredibly useful when learning all this stuff was to visualise phase and oscillators. I made stuff like this in jupyter-lab

(this is a really simple example - I was actually running polybleps like this to see what they did and make sure they were at the points they needed to be at)

1 Like

I appreciate the tips with all this. From what I can tell, the braids code keeps track of the phase internally: https://github.com/zzsnzmn/zzgen/blob/master/mi/braids/analog_oscillator.cc#L206-L270

I updated the code so that the buffer allocation doesn’t happen each time the audio-rate next function is called, and right now, by calling set_pitch with that static c2 variable, I’m able to generate sound. I’m just not sure how to determine the frequency of the signal; I figure I need to do something with the IN(0) buffer, but I’m not sure if there’s something else I should be looking at.


EDIT:

So I’m looking at some of the included UGens, and noticed that this line here is calling ZIN0(0) to get a frequency (https://github.com/supercollider/supercollider/blob/f806ace7bd8565dd174e7d47a1b32aaa4175a46e/server/plugins/OscUGens.cpp#L1083-L1105). ZIN0 is for control rate, so I’m not really sure how to go from one frequency to the buffer of frequencies that you get by calling the audio rate IN(0) – can I naively pop the first value and assume that the frequency for the given block doesn’t change over time?

ZIN(), ZOUT() are not for kr, they are used to set up the LOOP macros. this used to be required but now is not particularly recommended; you see it mostly in older UGen code.

the frequency input can be different per sample. but yes, you can ignore this and just use the first in a block if you want. use IN(i) to get a pointer to the input block, or IN0(i) as a (sort of pointless) shortcut to get the first value (equivalent to IN(i)[0]).
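here’s a sketch of the two options side by side, with the SC macros modeled as a plain `float*` (IN(0) would give you `freqIn`; IN0(0) is `freqIn[0]`); function names are hypothetical:

```cpp
#include <cstddef>

// Option 1: treat frequency as constant over the block (the IN0 approach).
void fill_block_constant(const float* freqIn, float* out, std::size_t n,
                         double& phase, double sr) {
    const double inc = freqIn[0] / sr;       // one freq for the whole block
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = static_cast<float>(phase);
        phase += inc;
        if (phase >= 1.0) phase -= 1.0;
    }
}

// Option 2: read a fresh frequency every sample (true audio-rate FM).
void fill_per_sample(const float* freqIn, float* out, std::size_t n,
                     double& phase, double sr) {
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = static_cast<float>(phase);
        phase += freqIn[i] / sr;
        if (phase >= 1.0) phase -= 1.0;
    }
}
```

with an unmodulated frequency input the two are identical; they only diverge when something modulates the input within a block.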

oh, i do see you are calling set_pitch. sorry i missed that. it looks like this expects a MIDI note number in braids:

that’s a bit unfortunate. on braids hardware a LUT is used for efficiency. i would probably rewrite the braids osc internals to take frequency directly, and also to perform fixed->float conversion right at the point where the output buffer is filled.
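the fixed-to-float conversion part could look something like this: Braids renders into an int16_t buffer, and an SC output buffer wants floats in [-1, 1], so the scaling happens right where the output is filled (the function name is hypothetical):

```cpp
#include <cstddef>
#include <cstdint>

// Convert a block of int16 samples (e.g. a Braids render buffer) to
// floats in [-1, 1] at the point where the UGen output is filled.
inline void int16_to_float(const std::int16_t* in, float* out, std::size_t n) {
    constexpr float kScale = 1.0f / 32768.0f;
    for (std::size_t i = 0; i < n; ++i)
        out[i] = static_cast<float>(in[i]) * kScale;
}
```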

but if you don’t want to do that, and you just want one frequency per block:

```
float hz = IN0(0);
float pitch = 440.f * pow(2.0, (hz - 69) / 12);
...set_pitch(pitch);
```

ach, woops sorry, i meant the opposite of course

```
midi = 69 + log2(hz / 440) * 12;
```
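putting both directions together as plain helpers (hypothetical names, standard equal-temperament formulas with A4 = MIDI 69 = 440 Hz):

```cpp
#include <cmath>

// MIDI note number -> frequency in Hz.
inline float midi_to_hz(float midi) {
    return 440.0f * std::pow(2.0f, (midi - 69.0f) / 12.0f);
}

// Frequency in Hz -> (fractional) MIDI note number.
inline float hz_to_midi(float hz) {
    return 69.0f + 12.0f * std::log2(hz / 440.0f);
}
```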

not trying to drag out this detour, but this seems worth saying in a “beginner” thread: in my experience that’s not a safe assumption. in particular, on mobile platforms, there is aggressive load balancing of audio and other processes, and high variance between frame counts in audio callbacks. your DSP algo can’t be allowed to glitch when the frame count changes; such changes can occur millions of times in a long audio session, especially on lower-end hardware.

iOS is especially notorious for this. so when making Audio Units, if your algo requires an internal buffer, you can specify a maximum frame count and the OS will chunk larger buffers into multiple consecutive calls. this allows you to use singly- or statically-allocated internal buffer with the max size. JUCE and other frameworks work in a similar way. (e.g. JUCE’s prepareToPlay takes expected frame count but warns you that it will vary by “small amounts.”)
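the chunking idea can be sketched like this: wrap a processor that can only handle up to some maximum frame count per call, and split arbitrarily large host buffers into consecutive sub-blocks. `Processor`, `kMaxFrames`, and `process_chunked` are hypothetical stand-ins, not an actual AU/JUCE API:

```cpp
#include <algorithm>
#include <cstddef>

// Hypothetical maximum the inner processor was prepared for.
constexpr std::size_t kMaxFrames = 64;

// Split a large host buffer into consecutive sub-blocks so the inner
// processor never sees more than kMaxFrames samples per call. `Processor`
// just needs a process(in, out, n) method.
template <typename Processor>
void process_chunked(Processor& p, const float* in, float* out, std::size_t n) {
    std::size_t done = 0;
    while (done < n) {
        const std::size_t chunk = std::min(kMaxFrames, n - done);
        p.process(in + done, out + done, chunk);  // never more than kMaxFrames
        done += chunk;
    }
}
```

this is roughly what the OS/framework does for you when you declare a maximum frame count; doing it yourself lets internal buffers be statically sized at kMaxFrames.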

in more managed environments (plugins for high-level languages) i guess you can often get away with assuming a fixed buffer size, because SC/puredata/max will double-buffer. (but even in puredata i’m not so sure; it’s a portaudio app, and portaudio allows the paFramesPerBufferUnspecified flag. so you might find some maniac with a custom build running on android, obsessed with raw throughput.)

anyways, for anyone starting out with this stuff, it seems important to internalize the rule of “don’t do time-unbounded operations on the audio thread.” there are few upsides to ignoring this rule and plenty of downsides.

1 Like

Unless I’m misunderstanding it seems that the midi_pitch name might be a bit of a red herring, as the pitch is initialized as 60 << 7 (https://github.com/zzsnzmn/zzgen/blob/be7b73c0d3551e376e5b9198fe43f2bd3aadfbfe/mi/braids/analog_oscillator.h#L77).

I tried the inverse of what you suggested to get the midi note from IN0(0) but I believe the frequency is actually not the value I think it is :\

```
float hz = IN0(0) * unit->mFreqMul;
int note = (int)(log2(hz / 440.0) * 12 + 69);
unit->osc.set_pitch(note << 7);
```