well, you just gotta be able to run at arbitrary sample rates.
the current SR is available to all subclasses of Unit (like your new ugen).
the interface is C-like in that a pointer to your subclass is passed to the constructor and process functions, so you can use e.g. unit->mRate->mSampleRate (mSampleDur and mRadiansPerSample are also available.)
technically i guess one should allow for changing samplerates between blocks, but practically i think it’s OK to just handle it in the constructor.
here’s an example of a ugen checking the SR at construction:
and in the update func:
(oh, sometimes you’ll see the SAMPLERATE macro, which is just shorthand for unit->mRate->mSampleRate. i think current best practice is to spell it out instead - it certainly makes things less confusing, IMO.)
hope that helps!
[ed] guess i didn’t address what is possibly the “meat of the inquiry” as they say: “how do i change my code to deal with different SRs?” but there is no single answer to this - it depends entirely on what DSP you are implementing. (some things can/should be formulated independently of the SR - for example, some SC vanilla ugens like Phasor take a normalized frequency and require you to scale it by the SR in the synthdef if you need to.)
I’ve done some DSP-related work in the past (largely tinkering with others code, I have to say), but always on a platform where the sample rate is fixed, so never had to deal with this particular issue before.
I guess for something like a filter, with a single cutoff parameter, you’d simply scale the cutoff value according to samplerate?
yeah, more or less - digital filters and oscillators both (often) ultimately operate in terms of normalized frequency, as in the 2 linked examples…
(the first is an oscillator, so it converts hz to radians/sample. the second is a one-pole lowpass filter, with the extra wrinkle that its coefficient is calculated from a convergence time rather than a cutoff frequency. in any case, yes: the SR is often a reciprocal scaling factor within the coefficient calculation. (emphasis on often, because you still need to recompute the coefficient for e.g. BLT-derived filters, as pointed out below.))
Generally, no. It depends on the discretization method, but at least for the bilinear transform and its warped variants, the results won’t be the same as re-deriving the coefficients for a different sample rate.
i’m glad you pointed out the BLT which is a main reason i said “more or less.” and there is no substitute for understanding the particular algorithm you’re implementing in sufficient detail to alter it.
but i gotta add… when it comes to implementations - for example supercollider’s RLPF - the SR and cutoff can be baked together into a normalized frequency before pre-warping. this is pretty normal; it’s only if you try to do stuff like memoize the tan computation that you get into trouble. and if OP is adapting an algorithm that accepts a normalized freq, they don’t really need to understand every detail that follows.
in any case, certainly agree that it’s not a good general assumption. (and in fact there is no good general answer to the question - it totally depends on what you are implementing.)
i should add that you may indeed encounter implementations (particularly in firmware) where the SR is assumed or specified at compile time only. (for example, this can be used to pre-calculate coefficient tables for discretized filters so they can be modulated more efficiently.) we even do this a little in norns/crone, since we control the whole HW/OS stack. again, understanding is needed.
can mean a lot of things, none of them obvious (to me.)
Yeah, total agreement from me that sample-rate invariance is a thorny subject, which is partially why we have all these different discretization methods (for example the impulse invariance transform is more suitable than BLT for modal synthesis-type resonators).
This is especially true beyond the confines of classical linear time-invariant filter theory, e.g. nonlinear and adaptive filters, including dynamic range compressors. In arbitrarily complex cases, I believe the standard approach is breaking the DSP graph into small units where sample-rate invariance is defined, discretizing individually, and hoping for the best. And, of course, doing lots of tests, automated and manual, on different SRs.
The UGen I want to make is quite niche. In its basic form, the filter and oscillator functions have to be calculated at a fixed 8kHz (or at least behave, in terms of their output, as if they were).
I’m not especially bothered about interpolating the output up to the actual sample-rate of the system in the most “correct” way, initially at least (it’s all very low-res anyway). I am intrigued to know how I might go about that, in the longer term, if that would smooth out some of the rough edges. That’s a query for another time though.
Following advice, I managed to get the filter and oscillator code functioning in a custom object on the Axoloti platform a while back, but based on that platform’s fixed SR and buffer-size.
Am I right in thinking that the Norns/Shield/Fates hardware (and presumably SuperCollider uGens running on it) do in fact also run at a fixed samplerate and buffer size?
resampling on the fly is not actually very hard and might be a good option for this odd situation.
i would look at SmoothDecimator from sc3-plugins for an example.
you would set the increment variable inc to the ratio of your target samplerate to the server samplerate (the target SR must be lower), and perform whatever processing you want on the input only when it’s written to the buffer:
(L244 in link) buffer[buffer_pos] = DoMyCoolProcessing(in[pos]);
yes all the audio on norns runs at 48k with a 128-frame buffer.