scripts are basically just chunks of lua code. there is no “at once.” the norns menu system makes some assumptions about scripts being self-contained in specific ways (for example: a simple callback model for handling UI events, rather than some kind of hierarchical responder model.) this is basically for convenience, given that norns hardware has a very constrained UI and that we want the programming environment to be simple and predictable. (IOW, program state can be entirely determined by looking at one script and its dependencies.) but you can have reusable / layered / modular software design within the lua component of the system. for example, brian has posted above about how you can include a lua module that enables UI parameters for softcut manipulation, within a script that uses an Engine.
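a minimal sketch of that kind of layering, for illustration (the module path `lib/sc_params` and its function names are hypothetical, not an actual norns library):

```lua
-- lib/sc_params.lua (hypothetical module): registers softcut-related params
local SCParams = {}

function SCParams.add_params()
  params:add_control("sc_rate", "softcut rate",
    controlspec.new(-2, 2, "lin", 0, 1))
  params:set_action("sc_rate", function(x) softcut.rate(1, x) end)
end

return SCParams

-- in the main script, which also uses an engine:
-- local sc_params = include('lib/sc_params')
-- engine.name = 'PolyPerc'
-- function init() sc_params.add_params() end
```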
the “Engine” structure is monolithic by definition - an Engine defines the current OSC protocol that can be used by the scripting layer to control user-created audio processes. but an engine can also be modular / dynamic / &c (see R.) the new norns architecture doesn’t assume that engines are implemented in supercollider.
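for illustration, the lua side only ever sees engine commands - each call is an OSC message whose meaning is defined by whatever engine is loaded (here using PolyPerc, a stock engine, as the example):

```lua
engine.name = 'PolyPerc' -- declared at the top of a script; norns loads the engine

function init()
  engine.release(0.5) -- one engine command = one OSC message under the hood
  engine.hz(440)      -- the protocol is whatever the engine defines
end
```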
there is quite a bit of information about how to write SC engines if you look around on the forum and not just in the 2.0 beta thread.
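as a rough sketch of the format those threads describe: a norns 2.0 engine is a subclass of `CroneEngine` in a file whose name matches the class (`Engine_MySine` here is a made-up name, and the synth is deliberately trivial):

```supercollider
// Engine_MySine.sc
Engine_MySine : CroneEngine {
  var synth;

  *new { arg context, doneCallback;
    ^super.new(context, doneCallback);
  }

  alloc {
    // context supplies the crone output bus and group
    synth = { arg hz = 220, amp = 0.2;
      (SinOsc.ar(hz) * amp).dup
    }.play(target: context.xg, outbus: context.out_b);

    // commands become callable from lua as engine.hz(...)
    this.addCommand("hz", "f", { arg msg;
      synth.set(\hz, msg[1]);
    });
  }

  free {
    synth.free;
  }
}
```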
my feeling is that there is a lot to learn in vanilla supercollider (which is still the main way for users to extend the audio capabilities of norns) without dreaming about exotic extensions that seek to emulate other hardware.
that said: in general, stuff that works on x86 supercollider “should” also work on norns. you may have to fiddle with it in the same way as any other linux system.
that is sort of a traditional sampler construct. (IOW: audio is loaded into RAM; then incoming MIDI notes trigger playback at various rates.)
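with softcut, that construct might look roughly like this (a sketch, assuming a mono file; `sample.wav` is a placeholder path):

```lua
-- load a sample into softcut buffer 1, then pitch it from MIDI
softcut.buffer_read_mono(_path.audio.."sample.wav", 0, 0, -1, 1, 1)
softcut.enable(1, 1)
softcut.buffer(1, 1)
softcut.level(1, 1.0)
softcut.loop(1, 0) -- one-shot playback, no looping

local function note_on(note)
  -- classic sampler mapping: one semitone = one factor of 2^(1/12), note 60 = unity
  softcut.rate(1, 2 ^ ((note - 60) / 12))
  softcut.position(1, 0)
  softcut.play(1, 1)
end
```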
i just mean that you can give softcut very short loop timing and it will a) respect it [*], b) crossfade in a way that doesn’t produce noticeable harmonic distortion, c) apply rate modulation with sufficient accuracy to be used for pitched material. if you’ve ever tried to implement a (good) wavetable oscillator with peek~ and buffer~ then this will be a breath of fresh air. (e.g. you don’t have to do anything special to prevent bandwidth expansion from the loop endpoints.)
but “live wavetable oscillator” doesn’t have a traditional meaning - i guess what i have in mind is modulating pitch by loop length rather than by phasor rate. maybe i should say “tuned delay line as resonator,” except you can arbitrarily scrub around in the buffer, freeze buffer contents, modulate the phasor, &c &c.
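a sketch of that “pitch from loop length” idea: hold the phasor rate at unity and set `loop_end = loop_start + 1/hz`, so the loop period is the waveform period (assumes audio is already in the buffer; the start point of 1.0s is arbitrary):

```lua
softcut.enable(1, 1)
softcut.buffer(1, 1)
softcut.loop(1, 1)
softcut.loop_start(1, 1.0)
softcut.rate(1, 1.0)        -- phasor at unity; pitch comes from loop length
softcut.fade_time(1, 0.002) -- short crossfade at the loop point
softcut.play(1, 1)

local function set_pitch(hz)
  -- e.g. 220 hz -> a loop of ~4.5 ms
  softcut.loop_end(1, 1.0 + 1 / hz)
end
```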
i don’t mean that softcut itself has fancy features for, say, multisampling a given slice of audio at runtime. (instead it has a sort of quick+dirty anti-aliasing function in the form of an SVF that can be modulated by the phasor rate, per-sample.)
([*] there are some subtleties here. by default, fractional sample phase is not carried over between loops. you don’t want to do that when trying to “launch clips” with maximal precision, but you might want to if the loop times are short enough to act as resonator/oscillator. could add “phase carry” as a runtime option. without it, pitch defined by loop length will effectively be quantized to 1/samplerate.)
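to make that quantization concrete (plain arithmetic, assuming a 48kHz sample rate): without phase carry, the effective loop length snaps to a whole number of samples, so nearby pitches collapse together:

```lua
local sr = 48000
local function quantized_hz(hz)
  -- loop length rounds to an integer number of samples
  local n = math.floor(sr / hz + 0.5)
  return sr / n
end
-- at 440 hz: n = 109 samples, so the actual pitch is 48000/109 ~ 440.4 hz
-- the error grows as loops get shorter (fewer samples per period)
```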
but look, there are a lot of features in softcut. i spent a lot of time on this. it has more functions and applications than we can easily demonstrate. (i would love to make a thorough demonstration suite myself, but realistically will never have time.) i encourage people to investigate present capabilities before we talk about what things to add. (and @speakerdamage I recognize that you’re doing exactly that; this is a general request.)