ZIN(), ZOUT() are not for kr; they are used to set up the LOOP macros. this used to be required, but it isn't particularly recommended now; you see it mostly in older UGen code.
the frequency input can be different per sample. but yes, you can ignore this and just use the first value in a block if you want. use IN(i) to get a pointer to the input block, or IN0(i) as a (sort of pointless) shortcut to get the first value. (equivalent to IN(i)[0].)
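to make the per-sample vs. per-block distinction concrete, here's a minimal sketch outside the real SC_PlugIn API — `in` stands in for `IN(0)`, `n` for `inNumSamples`, and the function names are made up for illustration:

```cpp
// per-sample: read the frequency input on every frame
void next_persample(const float* in, float* out, int n) {
    for (int i = 0; i < n; ++i) {
        float freq = in[i];   // frequency can differ on every sample
        out[i] = freq;        // placeholder: feed freq to your oscillator here
    }
}

// per-block: read once at the top, like IN0(0) / IN(0)[0]
void next_perblock(const float* in, float* out, int n) {
    float freq = in[0];       // first value of the block
    for (int i = 0; i < n; ++i) out[i] = freq;
}
```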
oh, i do see you are calling set_pitch. sorry i missed that. it looks like this expects a MIDI note number in braids:
that’s a bit unfortunate. on braids hardware a LUT is used for efficiency. i would probably rewrite the braids osc internals to take frequency directly, and also to perform fixed->float conversion right at the point where the output buffer is filled.
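for the fixed->float part, the conversion is just a scale as you copy into the output buffer. a minimal sketch, assuming braids renders int16_t samples (the function name here is illustrative, not from the braids source):

```cpp
#include <cstdint>

// convert fixed-point int16_t samples to [-1.0, 1.0) floats at the point
// where the UGen output buffer is filled
void fixedToFloat(const int16_t* src, float* dst, int n) {
    constexpr float kScale = 1.f / 32768.f;
    for (int i = 0; i < n; ++i) dst[i] = src[i] * kScale;
}
```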
but if you don’t want to do that, and you just want one frequency per block, convert the Hz input to a MIDI note number before handing it to set_pitch:

```
float hz = IN0(0);
float midi = 69.f + 12.f * log2f(hz / 440.f);
...set_pitch(midi);
```
not trying to drag out this detour, but this seems worth saying in a “beginner” thread: in my experience that’s not a safe assumption. in particular, on mobile platforms there is aggressive load balancing of audio and other processes, and high variance in the frame count between audio callbacks. your DSP algorithm must not glitch when the frame count changes; such a glitch could occur millions of times in a long audio session, especially on lower-end hardware.
iOS is especially notorious for this. so when making Audio Units, if your algo requires an internal buffer, you can specify a maximum frame count, and the OS will chunk larger buffers into multiple consecutive calls. this lets you use a single, statically allocated internal buffer of the max size. JUCE and other frameworks work in a similar way. (e.g. JUCE’s prepareToPlay takes the expected frame count but warns you that it will vary by “small amounts.”)
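the chunking idea looks roughly like this — a sketch of what the framework does for you, with illustrative names (`kMaxFrames`, `processChunk`) and a trivial gain standing in for real DSP:

```cpp
#include <algorithm>

constexpr int kMaxFrames = 64;

// hypothetical DSP routine that needs scratch space; because the caller
// guarantees n <= kMaxFrames, the scratch buffer can be fixed-size
void processChunk(const float* in, float* out, int n, float scratch[kMaxFrames]) {
    for (int i = 0; i < n; ++i) {
        scratch[i] = in[i] * 0.5f;  // stand-in for real processing
        out[i] = scratch[i];
    }
}

// what the host/framework does when a larger buffer arrives:
// split it into consecutive calls of at most kMaxFrames each
void processAny(const float* in, float* out, int total) {
    static float scratch[kMaxFrames];  // allocated once, never on the fly
    for (int done = 0; done < total; ) {
        int n = std::min(kMaxFrames, total - done);
        processChunk(in + done, out + done, n, scratch);
        done += n;
    }
}
```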
in more managed environments (plugins for high-level languages) i guess you can often get away with assuming a fixed buffer size, because SC/puredata/max will double-buffer. (but even in puredata i’m not so sure; it’s a portaudio app, and portaudio allows the paFramesPerBufferUnspecified flag. so you might find some maniac with a custom build running on android, obsessed with raw throughput.)
anyways, for anyone starting out with this stuff, it seems important to internalize the rule of “don’t do time-unbounded operations on the audio thread.” there are few upsides to ignoring this rule and plenty of downsides.
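in practice the rule mostly means: do your allocation (and locking, file I/O, etc.) up front, and only touch preallocated memory in the callback. a tiny sketch with a hypothetical `Delay` type:

```cpp
#include <vector>

struct Delay {
    std::vector<float> buf;

    // allocation happens here, off the audio thread, sized for the worst case
    explicit Delay(int maxFrames) : buf(maxFrames, 0.f) {}

    // audio thread: no new/malloc, no locks, no unbounded work —
    // only reads and writes into the preallocated buffer
    void process(float* io, int n) {
        for (int i = 0; i < n; ++i)
            io[i] += buf[i % buf.size()];
    }
};
```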