do a git pull from ~/norns, reported above and fixed
scripts are basically just chunks of lua code; there is no “at once.” the norns menu system makes some assumptions about scripts being self-contained in specific ways (for example: a simple callback model for handling UI events, rather than some kind of hierarchical responder model.) this is basically for convenience, given that norns hardware has a very constrained UI and that we want the programming environment to be simple and predictable. (IOW, program state can be entirely determined by looking at one script and its dependencies.) but you can have reusable / layered / modular software design within the lua component of the system. for example, brian has posted above about how you can include a lua module that enables UI parameters for softcut manipulation within a script that uses an Engine.
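a minimal sketch of that layered pattern, under assumptions: the file names and module name (`mylib`) here are illustrative, not real files in the norns distribution, and `PolyPerc` stands in for any engine.

```lua
-- dust/lib/mylib.lua  (hypothetical reusable module)
local mylib = {}

function mylib.init()
  -- set up softcut voices, add UI params, etc.
  softcut.enable(1, 1)
  softcut.level(1, 1.0)
end

return mylib

-- dust/scripts/myscript.lua  (hypothetical script using it)
engine.name = 'PolyPerc'       -- the script still declares one Engine
local mylib = require 'mylib'  -- but layers reusable lua on top of it

function init()
  mylib.init()
end
```

the point is that the "one script" the menu system sees can itself be assembled from shared modules.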
the “Engine” structure is monolithic by definition: an Engine defines the current OSC protocol that the scripting layer can use to control user-created audio processes. but an engine can also be modular / dynamic / &c (see R.) the new norns architecture doesn’t assume that engines are implemented in supercollider.
there is quite a bit of information about how to write SC engines if you look around on the forum and not just in the 2.0 beta thread.
my feeling is that there is a lot to learn in vanilla supercollider (which is still the main way for users to extend the audio capabilities of norns) without dreaming about exotic extensions that seek to emulate other hardware.
that said: in general, stuff that works on x86 supercollider “should” also work on norns. you may have to fiddle with it in the same way as any other linux system.
that is sort of a traditional sampler kind of construct. (IOW: audio is loaded into RAM; then incoming MIDI notes trigger playback at various rates.)
i just mean that you can give softcut very short loop timing and it will a) respect it [*], b) crossfade in a way that doesn’t produce noticeable harmonic distortion, c) apply rate modulation with sufficient accuracy to be used for pitched material. if you’ve ever tried to implement a (good) wavetable oscillator with buffer~ then this will be a breath of fresh air. (e.g. you don’t have to do anything special to prevent bandwidth expansion from the loop endpoints.)
but “live wavetable oscillator” doesn’t have a traditional meaning - i guess what i have in mind is modulating pitch by loop length rather than by phasor rate. maybe i should say “tuned delay line as resonator,” except you can arbitrarily scrub around in the buffer, freeze buffer contents, modulate the phasor, &c &c.
i don’t mean that softcut itself has fancy features for, say, multisampling a given slice of audio at runtime. (instead it has a sort of quick+dirty anti-aliasing function in the form of an SVF that can be modulated by the phasor rate, per-sample.)
([*] there are some subtleties here. by default, fractional sample phase is not carried over between loops. you don’t want to do that when trying to “launch clips” with maximal precision, but you might want to if the loop times are short enough to act as resonator/oscillator. could add “phase carry” as a runtime option. without it, pitch defined by loop length will effectively be quantized to 1/samplerate.)
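a minimal sketch of the “tuned loop” idea from the softcut lua API, under assumptions: buffer 1 already holds audio at the loop region starting at second 1.0, and pitch comes from loop *length* rather than phasor rate (freq = 1 / (loop_end - loop_start)).

```lua
-- sketch: very short softcut loop acting as a crude oscillator/resonator.
-- per the note above, without "phase carry" the effective pitch is
-- quantized to 1/samplerate.
local freq = 220                     -- target pitch in Hz

softcut.enable(1, 1)                 -- activate voice 1
softcut.buffer(1, 1)                 -- read from buffer 1
softcut.level(1, 1.0)
softcut.loop(1, 1)                   -- loop on
softcut.loop_start(1, 1.0)
softcut.loop_end(1, 1.0 + 1/freq)    -- ~4.5 ms loop length -> ~220 Hz
softcut.position(1, 1.0)
softcut.rate(1, 1.0)                 -- pitch by loop length, not rate
softcut.fade_time(1, 0.001)          -- short crossfade at the loop point
softcut.play(1, 1)
```

changing `loop_end` at runtime then modulates pitch the way a tuned delay line would, while `rate` and `position` remain free for scrubbing and freezing.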
but look, there are a lot of features in softcut. i spent a lot of time on this. it has more functions and applications than we can easily demonstrate. (i would love to make a thorough demonstration suite myself, but realistically will never have time.) i encourage people to investigate present capabilities before we talk about what things to add. (and @speakerdamage I recognize that you’re doing exactly that; this is a general request.)
Ported FM7 to my device. I’ll make a PR tomorrow as my ssh keys are not with me right now.
cool, thanks for the explanation.
speaking for myself, I feel like maybe I’ll be in a position to ask for new softcut features in 2020 at the earliest. thanks so much for all of the time you’ve put into this!
I vaguely remember someone mentioning that the new DSP system would allow TAPE mode to record a signal coming into the inputs, rather than only the outputs. I tried to do this and the recording was silent.
Only if it is already routed through the monitor path. There was a desire expressed for a quad or dual stereo setup recording all the things. Haven’t done this. Stereo output capture from scratch is tricky enough and already needs fixing. Working on it
How do I route it to the monitor path?
these functions in the
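from memory, the relevant calls live in the lua `audio` module; the exact names here are an assumption and should be checked against norns/lua/core/audio.lua:

```lua
-- assumed 2.0 monitor-routing calls (verify names in core/audio.lua):
audio.level_monitor(1.0)   -- bring the inputs into the monitor/output mix
audio.monitor_stereo()     -- or audio.monitor_mono() to sum the inputs
```

with the monitor level up, input audio should then be present in the output path that TAPE captures.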
can’t recall for certain but don’t think this has changed in 2.0
Whoa! Assign a midi cc to it and you’re in dub heaven. It’s awesome.
Works a treat, thanks!
Can softcut scripts be edited? In maiden? Looking forward to the chance to play around with this.
This needs to be entered into the bottom section of maiden. The section with the “>>” in it, not in the normal area where you would enter Lua code for a script.
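for example (the specific commands here are just illustrative), lines typed at the “>>” prompt execute immediately against the running system, no script file needed:

```lua
-- typed one line at a time into the maiden ">>" REPL prompt:
softcut.level(1, 0.5)   -- halve softcut voice 1's output level
softcut.rate(1, 2.0)    -- double its playback rate, up an octave
```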
BTW: This sounds particularly lovely with the KarplusRings engine. Ex:
cut1 rate = 0.48, damping = 0.32, brightness = 0.24, lpf freq = 3708, bps freq = 139, bps res = 0.59
I thought it could be entered in the REPL but I thought about integrating it in scripts too. Do you think it’s not possible? (I need to ruin Kayan by trying)
And since softcut gets mixed in the main norns mixer, does this mean it is not cc-controllable (I was about to say cv-controllable, damn habits)?
correct. i meant in the REPL.
the REPL is an incredibly powerful facility of norns: lua runs as an environment you can interact with dynamically.
you can add halfsecond to any script by simply doing hs = require 'halfsecond' and then calling hs.init() at the end of your script’s init().
you can make any partial script (softcut/etc) and just include it in the dust folder, where it can then be called by other scripts.
ie make a file other.lua, and you should (i’d better double-check this!) be able to include it: if other.lua returns a value, you can do x = require 'other', which is how halfsecond works.
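a minimal sketch of that pattern; the file name comes from the post, the contents are illustrative:

```lua
-- dust/lib/other.lua: a module that returns a value
local other = {}

function other.hello()
  print('hello from other')
end

return other

-- then, in any script (or the REPL):
-- local x = require 'other'
-- x.hello()
```

requiring the module hands you back whatever the file returned, so the caller gets a table of functions to use, exactly as with halfsecond.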
halfsecond is included in /norns/lua/softcut/ as a sort of library function.
I added ‘halfsecond’ and the settings above into the strum script. I really like how the delay hangs around… Ex:
When switching to MIDI, AAS’s Chromaphone is playing and the trailing delay from halfsecond shines on.
Not the most expensive clock in the world, but…
Looking forward to trying it.
Pinged filters Euclidean drums (as suggested by @tehn yesterday) with halfsecond.
When I made the video in the previous post, I used the encoder to manipulate the delay, because when I turned midi-cc mapping on, nothing happened.
Now I’ve just started my Norns and connected to wifi. playfair was still playing, as it was the last-opened script. I pasted the require-halfsecond snippet into maiden, went to the midi-cc mapping page, and these errors appear in maiden (they repeat over and over):
/home/we/norns/lua/core/menu.lua:561: attempt to perform arithmetic on a nil value (field '?')
/home/we/norns/lua/core/menu.lua:561: in field 'penc'
/home/we/norns/lua/core/menu.lua:142: in function 'core/encoders.callback'
/home/we/norns/lua/core/encoders.lua:57: in function 'core/encoders.process'
/home/we/norns/lua/core/menu.lua:589: attempt to compare number with nil
/home/we/norns/lua/core/menu.lua:589: in field 'redraw'
/home/we/norns/lua/core/menu.lua:671: in field 'event'
/home/we/norns/lua/core/metro.lua:165: in function </home/we/norns/lua/core/metro.lua:162>
FM7 with 2.0 syntax PR is up. I’m suspicious I didn’t do this in the correct repo, but I’m a little unclear on how the transition from the “stock scripts” to the “community” scripts in the we repo will happen.
So this may depend on how you want to handle it. You can submit to the new community repo “we”, or you can host it on your own github and not have to do the PR process. Same goes for the FM7 engine (since the dust folders get scanned for engines as well) - but if you did that, you’d want to get the gang to remove the engine from the main norns repo.