I guess this fits best in ideas? (mods, feel free to move this if there is a better thread)
How does everyone approach norns in a live situation/setup? Recently (pre-crow), I’ve been using norns as a sampler for my modular and poly synth, creating textures with Cranes or MLR and doing live looping with Otis. Now that crow exists in my rig, I’m having a hard time choosing what norns should be doing. I’m sure this has been asked before, but will there ever be a way to use something like Boingg or Awake to sequence my modular through crow, while also switching to Cranes or MLR to process the audio coming into norns? Is that too processor-hungry for the device, or might there be a way to combine scripts together in the future?
it definitely shouldn’t be too processor-intensive. less concepts, for example, runs a sequencer and an engine while manipulating two softcut buffers.
as far as running two scripts together, i think this is a great intermediate challenge for someone learning to script on norns. it’s definitely approachable to take two scripts and generalize one into an abstraction (check out the hnds and tlps bits of @Justmat’s Otis, for example). then you just need to choose a way to switch what draws to the screen / grid (should it be a button combo or an encoder turn or whatever works for you).
i will be cleaning up the code of cranes over the next month to make it wayyy easier to decipher wtf is going on inside of it, but it should be an achievable goal for you to make a boingg abstraction — i do think that it’d be DOPE for folks to wrap scripts into abstractions as part of their projects tho, so we could see more hybrid combos
i’ve been meaning to do something like this, just never got around to it. so, i just whipped up an otis + boingg thing that i’ll drop in a gist tomorrow. grid controls boingg, sending either cv via crow outs 1 and 2 or i2c to JF; norns runs otis as usual.
yeah, for sure! the way i’ve been doing this in experiments is to just use a variable to track which “app” is shown on the screen or grid. so, inside the single grid redraw function, you frame things with an ‘if’ statement. so if variable app_focus (for example) == 1 then draw this on the grid, elseif app_focus == 2 then draw this other thing on the grid.
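that focus-variable idea might look something like this minimal sketch (assumptions: `app_focus`, `draw_boingg_grid`, and `draw_otis_grid` are invented names for illustration, not functions from any actual script; `g` is a connected grid):

```lua
-- sketch of the focus-variable approach: one variable tracks
-- which "app" currently owns the grid, and a single grid_redraw
-- branches on it.
local app_focus = 1

function grid_redraw()
  g:all(0) -- clear the grid before the focused app draws
  if app_focus == 1 then
    draw_boingg_grid() -- hypothetical: boingg's grid drawing, refactored out
  elseif app_focus == 2 then
    draw_otis_grid()   -- hypothetical: otis' grid drawing, likewise
  end
  g:refresh()
end
```

the same pattern applies to `redraw`, `enc`, `key`, and the grid key handler: one top-level function per event, branching on `app_focus`.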
hardcore limitation to keep in mind is that only a single engine can be loaded at a time. but you can run multiple sequencers and softcut fun at the same time across many screens and grid scenes.
I’m at work right now, so I can’t reply to all of this in full, but I just wanted to say thank you to both of you (@dan_derks/@Justmat). Kinda glad I brought this up. I do feel a bit guilty that I can’t write scripts or even add basic functionality myself, yet I get to use them without doing any of the work. What if we had a way to donate to our generous script contributors?
i’ve read elsewhere on here that the issue with donations is that there are some silent heroes who have done an enormous amount of the unglamorous backend work for any of the scripts to be possible… so i’m guessing the heartfelt thanks approach is probably still the most fair approach for now
(mod note: started a new thread for this topic as it could become a longer topic)
combining scripts is not necessarily hard, but it does require quite a bit of attention to the inner workings of both scripts you want to combine.
what @dan_derks suggested is correct in terms of tracking some sort of focus variable and switching your enc, key, redraw, and grid(…) functions.
but there are quite a few other things that will possibly get messy:
if both scripts have same-named variables or tables
if both scripts are trying to control or interpret the same MIDI ports or softcut or (etc)
DEFINITELY if both scripts are using an ENGINE this will not work, unless it’s the same engine, but that still may be difficult to navigate depending on the context.
but others would be very simple to merge. i could see a simple tutorial where:
script 1 takes input into softcut and makes a parameterized echo effect with custom screen/enc controls
script 2 is a sequencer that loads an engine and has a full interface
combining these two and then assigning a “switch view” control (such as ENC 1, assuming neither script uses it) would be trivial.
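the glue for that merge could be as small as this sketch (assuming neither original script touches ENC 1; `view`, `echo_enc`, and `seq_enc` are made-up names standing in for each script’s renamed handlers):

```lua
-- hypothetical glue for the echo + sequencer merge described above.
local view = 1       -- 1 = echo controls, 2 = sequencer controls
local NUM_VIEWS = 2

function enc(n, d)
  if n == 1 then
    -- ENC 1 switches views, since neither original script used it
    view = util.clamp(view + d, 1, NUM_VIEWS)
  elseif view == 1 then
    echo_enc(n, d) -- script 1's original enc(), renamed
  else
    seq_enc(n, d)  -- script 2's original enc(), renamed
  end
  redraw()
end
```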
a more in-depth option for “modular” recombinatory (that’s not a word i guess) would require following some very specific conventions to ensure that things worked together consistently— a task which might be hard to achieve. essentially scripts would need to be written as libraries with very thin “launcher” wrappers (and there could be wrappers to combine a stack of scripts, if they are compatible)
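the “library plus thin launcher” shape could look like this (purely illustrative; `MySeq` and its fields are invented, not a convention any existing script follows):

```lua
-- lib/myseq.lua : all the real logic lives in a library module
-- (everything here is an invented illustration)
local MySeq = {}
MySeq.__index = MySeq

function MySeq.new()
  return setmetatable({ step = 1, notes = {60, 62, 64, 67} }, MySeq)
end

function MySeq:advance()
  -- step through the note table, wrapping at the end
  self.step = self.step % #self.notes + 1
  return self.notes[self.step]
end

return MySeq

-- the launcher script is then just a thin wrapper around the library:
--   local MySeq = include("lib/myseq")
--   local seq
--   function init() seq = MySeq.new() end
```

because the launcher owns nothing but setup, a “combiner” script could include several such libraries and multiplex their UIs, provided they don’t collide over an engine or the same MIDI/softcut resources.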
but again, the current approach to DSP via supercollider is limited.
and some of these ideas end up being difficult to reconcile with the design goal of easy accessibility for norns scripting. it’s important to keep that goal in mind, but there may be an elegant solution for combining/layering scripts.
since otis already uses an engine, i had to remove boingg’s audio output engine. the choices are now either cv out via crow or i2c to JF. there’s no documentation yet, but it functions just like otis with the addition of a 4th page. on the 4th page the controls are…
key1 = alt
enc1 = nav
enc2 = select note
enc3 = adjust note
alt + enc2 = tempo
key2 = start ball
key3 = stop ball
nb: you can use a grid to control boingg while focused on the other otis pages.
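the crow / JF output choice described above could be dispatched with a small helper along these lines (a sketch, not the actual gist code; `out_mode` and `play_note` are invented, and the v/oct scaling with middle C at 0V is an assumption):

```lua
-- hypothetical note-output helper mirroring the crow-or-JF choice.
-- call crow.output[2].action = "pulse(0.01, 5)" once in init()
-- so output 2 fires a trigger when called.
local out_mode = "crow" -- or "jf"

function play_note(note_num, level)
  local volts = (note_num - 60) / 12 -- v/oct, middle C = 0V (assumption)
  if out_mode == "crow" then
    crow.output[1].volts = volts -- pitch cv on crow out 1
    crow.output[2]()             -- fire the trigger action on out 2
  else
    crow.ii.jf.play_note(volts, level) -- i2c to just friends
  end
end
```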
Edit: just realized that the include lines will only work if the file is saved in your Otis folder. I’ll update the gist with the appropriate path tomorrow!
edit the second: script is now updated with the correct include paths
This is something I’m interested in exploring, especially post-Crow. I’ve been thinking about a project that combines sequencing with the grid alongside tiny utility modules similar to O_c hemispheres, for instance.
I think it would be cool to approach that architecturally as little modular chunks that can be instantiated as needed, so that the individual pieces can see some reuse instead of being mired deep in my script.
My experience so far is that when I try to spin something out into a stand-alone class on Norns, it ends up feeling logical to have one class that handles the real work, and one that deals with the UI-facing stuff like params and drawing to the screen.
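that two-class split might sketch out like this (all names invented; `params`, `controlspec`, and `screen` are the stock norns globals):

```lua
-- hypothetical split: one table does the real work,
-- a second handles the UI-facing side (params + drawing).
local Looper = {}
Looper.__index = Looper

function Looper.new()
  return setmetatable({ level = 1.0, recording = false }, Looper)
end

function Looper:toggle_record()
  self.recording = not self.recording
end

local LooperUI = {}
LooperUI.__index = LooperUI

function LooperUI.new(looper)
  local ui = setmetatable({ looper = looper }, LooperUI)
  -- the UI class owns param registration, delegating state to the logic class
  params:add_control("level", "level", controlspec.new(0, 1, "lin", 0, 1))
  params:set_action("level", function(v) looper.level = v end)
  return ui
end

function LooperUI:redraw()
  screen.clear()
  screen.move(10, 30)
  screen.text(self.looper.recording and "REC" or "stopped")
  screen.update()
end
```

keeping the logic class free of any `screen`/`params` calls is what lets another script instantiate it without dragging in a competing UI.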
Thinking this out a bit more… making these things totally friendly to the non-programmer gets more complicated. We need to make sure each module responds to the same set of functions, for instance. In some ways the hardest part is representing it all through the UI!
Poking around with simultaneous functionality (simple proof of concept below). Not sure why I thought cpu might be an issue, especially for my use cases; with 5 voices of softcut and midi recording/manipulation I’m seeing < 8% cpu (without reverb or engines, etc.).
Working on a linear menu flow for accessing transfer-function params between ‘scripts’ to adjust influence [i.e. script 1 <> transfer fxn 1 <> script 2 <> transfer fxn 2 <> script 3 …], but not sure where I’m going yet. My limited coding ability should help me narrow down the options.
Hi Mat, I’m having a problem loading the oats and earthsea/bounds combo.
I get an init error about a missing Decimator engine. Where is that engine found?
Is there a special place these scripts need to be placed?
WARNING: Cannot call free on a Buffer that has been freed
FAILURE IN SERVER /n_free Node 15929 not found
FAILURE IN SERVER /n_free Node 15930 not found
FAILURE IN SERVER /n_free Node 15931 not found
warning: didn't find engine: Engine_Decimator