Norns: ideas

I doubt it, and I don’t see anything in the API docs, but is there any programmatic access to the TAPE feature? :slight_smile:

Sometimes I use TAPE as my “reel to reel”, if you will, to quickly capture sketches, and it’d be real neat to have a tiny app with Grid control wrapping my workflow… I really love how I can be laptop-less and quickly record with TAPE, then run a quick rsync from my computer to fetch the latest recordings without unplugging or moving anything.


you can call tape functions directly on the Audio class - see

I’ve played around with some sequencer scripts this way to auto start/stop tape recording when I start/stop the sequencer
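for example, something like this (a minimal sketch for the norns lua environment - exact function names come from the audio module, so check your version’s docs):

```lua
-- start/stop TAPE recording from a script (runs on norns, not standalone)
local filename = _path.audio .. "tape/sketch-" .. os.date("%y%m%d-%H%M%S") .. ".wav"

local function tape_start()
  audio.tape_record_open(filename)  -- open a file for the tape recorder
  audio.tape_record_start()         -- begin recording
end

local function tape_stop()
  audio.tape_record_stop()          -- finish and close the file
end
```

hook those two functions into your sequencer’s start/stop and you get automatic capture.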


Awesome! Totally glossed over that module - thanks for the help.


I was thinking that would be nice to have a harmonizer like the one in iZotope Nectar where you can use MIDI in to select the voices. A little box like Norns would really shine as a voice harmonizer.

what about a generic polyphonic synthesizer script?

after playing with dronecaster I was thinking it might be neat to expand on @tyleretter’s idea. dronecaster is a beautiful ui built around selecting user-submitted supercollider patches for drones. instead of just drones, how about full-blown polyphonic synths? it could basically be the same ui (selectable supercollider engines), but with a midi input. the main difference is that instead of just the parameters hz and amp, each supercollider engine would require hz, amp, gate, adsr, and then possibly a few finite, generically named parameters (parm1–parm4) that could be specially defined however the engine’s author wants.
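from the lua side, the command set such a generic engine might expose could look like this (all names here are hypothetical - this is a sketch of the proposed interface, not an existing engine’s API):

```lua
-- hypothetical unified polysynth command set, as called from a norns script
engine.noteOn(60, 261.63, 1.0)  -- voice id, hz, velocity (gate on)
engine.attack(0.01)             -- adsr, in seconds
engine.decay(0.1)
engine.sustain(0.8)
engine.release(0.5)
engine.parm1(0.5)               -- generically named [0,1] params,
engine.parm2(0.25)              -- meaning defined by the engine author
engine.noteOff(60)              -- gate off for voice id
```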

i’ve actually got a primordial version of this working, mainly for learning purposes. if this already exists, please excuse my ignorance :wink: if not, i’d be happy to put mine out in the world.


maybe not as generic as you have in mind, but there’s Molly the Poly

Moln was built using the modular-style engine, R

definitely room for more options!


yeah go for it. i have some half-baked versions of such a thing extending PolySub engine.

i think the reason it’s still half-baked is that i find it clunky and/or restrictive to deal with a rich array of sound parameters (as above) for a given synth structure, and also make their ranges/handles generic: it’s nicer for me to have e.g. detuning hz scriptable in its native unit.

and “philosophically” i also think this leads to deeper consideration of the specific architecture as compositional material. whereas having several generic modulation parameters makes things plug/playable but forces a certain disassociation between musical logic and actual sonic qualities. (like, a lua script can’t actually make intelligent decisions about the beating frequencies effected by use of the detune parameter if it is specified by a generic unit range.)

so, it becomes attractive to make some sort of dynamic parameter discovery / declaration layer. but at that point i start to wonder about the benefit of the whole exercise, since we already have discovery of parameters (or rather, general void functions) for engines in lua, and you can already actually switch engines on the fly, and engines can be subclassed in supercollider, and i don’t really want to build out a bunch of synth-architecture-management stuff in lua anyway. [*]

but, all that said of course there are plenty of very simple polysynth architectures that don’t need a lot of parameters but are still useful.

i also have a half-baked collection of one-shot polysynths, compatible with PolyPerc. this to me is a better fit for a generic interface. not sure i can put my finger on why, except that i guess these synths are usually simpler to write than sustained / continuously-modulated voices, and triggering them in a massively multitimbral way is more fun / natural. (with sustained polysynths i feel like, since it’s more usual to manipulate several identical instances, it is nice for each instance to be able to go in lots of directions.)

anyways i’d be happy to collaborate on something like that, the idea has been marinating for quite a while but has not been executed.

[*] see also this issue, where after discussion we decide that it’s useful for “polysynth engines” to implement a standard interface (subset of params), without necessarily sharing a restricted param set:


i get what you’re saying - it’s useful to know what a parameter is doing. but then i think about the op-1 engine ethos, where there is a forced dissociation between knobs and sonic qualities. that has been a bit freeing rather than restrictive for me personally. i think because the op-1 engine obfuscation forces exploration around a sound, which can be a rewarding (though sometimes frustrating) interaction. i think this is what i was going to lean into with this program - basically an op-1 “engine” selector, but with maybe more than 4 parameters. also, it makes things simpler since you don’t need dynamic parameter discovery :slight_smile:

this is really my personal motivation for creating something like this. there are a lot of collections of supercollider scripts that range in bakedness. there are github repos like SCLOrkHub that have even taken it upon themselves to normalize a lot of them. i guess i’m thinking to normalize these just a little further - making sure they have adsr and gate, and then maybe fitting some other parameters into the [0-1] range that is mapped internally in sc.

however, i have to say with some hesitation that my personal motivation for collecting cool supercollider scripts is stunted a little bit - i’m surprisingly finding that a lot of the supercollider scripts i find on sccode or github are not that good-sounding, and can even sound a little cheap. i’ve got scott wilson’s supercollider book and i’m hoping that will help me understand the beauty of supercollider.

that being said, here is what i have so far:

it is a simple approach but it works pretty well. the adsr is working, the midi input is working, the basic synth layout is working (basically forked from PolySub). just putting it out there in case anyone comes by and is interested. it’s not ready to “release” yet since parameters aren’t hooked up and the engine-selecting part of the ui is not done. i have two more ideas for norns scripts that i want to do first but i will come back to it later (seeing more exciting supercollider synths would help me come back to it sooner) :slight_smile:


@dan_derks + @tehn and I were just talking about this a bit as well - definitely worthy of further discussion here or on github and I think there would be some solid benefits to having an optional standard with the right level of flexibility (a standard could be multi-tiered maybe ? i.e. ISO C > POSIX > SUS)

I’d love for such a thing to be able to address parameters and pitch/gate independently on voices somehow in addition to global + voice allocation format (can’t speak too fluently on the implementation details for that)

<< specifics >>

for me the interface would look something like:

  • x number of voices (we could allow them to be different (i.e. islands), and/or expect them all to be the same)

  • each voice n has P group of parameters

  • each voice has frequency & vel (max style, vel == 0 for note off ?)


  • set param k in P for voice n

  • set frequency & vel for voice n

  • set param k in P for all voices (+ span ?)

  • set freq & vel for all voices

  • set freq & vel for the next free voice

  • free voice n ( /kill the envelope )

  • free all voices


Just throwing out some more tangentially related ideas, since this conversation has been putting thoughts into my head today…

What if there were some kind of meta-earthsea script? It would support MIDI keyboards, Grid as keyboard, looping Grid and MIDI patterns, but also have the ability to change between existing community engines (PolySub, Molly the Poly, Passersby)… Would need to account for the various engine commands for each (perhaps one takes HZ while another takes MIDI note, for example), but I don’t think it’d be a huge deal.

It could use the PARAMS page for editing synths, and just use the existing earthsea UI to visualize notes. I’m not sure how PSETS would play with changing engines, but I’m thinking if you save the selected engine along with the PSET maybe it’d work?

So rather than make all the engine interfaces homogeneous, it would do that to the control mechanism instead. Just spitballing as I really like the looping capabilities of Earthsea but think it’d be cool to sub in other engines too… :slight_smile:
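the per-engine differences could live in a small adapter table - a sketch like this (engine command names here are my best guess at PolySub’s and Molly the Poly’s actual signatures, so treat them as assumptions):

```lua
-- hypothetical adapter layer for a meta-earthsea script
local musicutil = require 'musicutil'

local adapters = {
  -- PolySub takes (id, hz); no velocity
  PolySub = {
    note_on  = function(id, note, vel) engine.start(id, musicutil.note_num_to_freq(note)) end,
    note_off = function(id) engine.stop(id) end,
  },
  -- Molly the Poly takes (id, hz, vel)
  MollyThePoly = {
    note_on  = function(id, note, vel) engine.noteOn(id, musicutil.note_num_to_freq(note), vel) end,
    note_off = function(id) engine.noteOff(id) end,
  },
}

-- the looper then just calls adapters[current_engine].note_on(...)
```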

Edit: now that I’m reading more of the above thread, I guess these two ideas are pretty similar and could coexist/fuse together quite nicely…


with current infrastructure it’s really easy to place most of the code for an interface in a lib directory and fill the base folder with scripts that look more or less like this:

include 'lib/earthsea'
local molly_the_poly = include 'molly_the_poly/lib/molly_the_poly'

that would populate the SELECT screen with like earthsea/molly_the_poly, earthsea/passersby, etc. the main script can even loop over params and build parts of the interface based around the engine params supplied. most of the groundwork is in place I believe, just add a community standard of some sort !


yeah, i agree. making playable and good-sounding softsynths is just not easy, even in a high-level DSL (and cabinetry isn’t easy even with power tools) - details matter. (e.g., aliasing is fairly ubiquitous; one scsynth feature that would help a lot would be an option to oversample a UGen graph.)

anyways, i’d be happy to try and contribute a couple of voice definitions.

@andrew the functions listed do seem like good candidates for unified command -> voice interface. we can get more specific by examining the engines and considering the other points raised in the GH issue:

@cfd90 i agree that many parameterization decisions can and should be left to the scripting layer. for example i strongly favor the practice of the noteOn/start command accepting Hz and unit velocity, rather than MIDI numbers. what is needed is unifying e.g. polysub and molly arguments: the former takes (id, hz) and the latter (id, hz, vel): to complete the example, i’d add vel to polysub but make both take “velocity” as raw [0, 1]. (the control layer can take care of velocity curve/scale, because different controllers will require different tunings of these. the engine could accept curve-tuning parameters but eh… that seems less efficient.)
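concretely, the control-layer side of that might look like this (engine.noteOn here is the proposed unified signature, not an existing command):

```lua
-- sketch: velocity curve lives in lua; the engine only sees raw [0,1]
local vel_curve = 2.0  -- exponent; >1 softens low velocities, tune per controller

local function note_on(id, hz, midi_vel)
  local v = (midi_vel / 127) ^ vel_curve
  engine.noteOn(id, hz, v)  -- hypothetical unified (id, hz, vel) command
end
```

different controllers then just get different values of vel_curve (or a table lookup), with no engine changes.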

(as an aside/example, here’s a point of friction: in a voice architecture with many unique parameters, i think it makes sense to expose a raw per-voice amplitude rather than a generalized velocity, because it gives more freedom to the script to maybe correlate amplitude with other parameters in a velocity map. but with a small and generic parameter set you would probably want to make a larger number of more opinionated synth types which specify their own velocity mapping.)

the best way to enforce these conventions would be to provide a wrapper class. the wrapper would take care of control bus and voice instance management (providing the one voice / all voices mapping,) and common attributes like an ADSR volume envelope, maybe Hz and param smoothing, etc. this would also of course reduce boilerplate code considerably.

(BTW: yes, i believe it is totally possible to switch engines from a parameter action and it should “just work.” with the caveat that lua should keep track of the “loading” state. here’s an example of that. i’d be curious to know what happens if anyone wants to try adapting this to use an engine parameter… and maybe someone already did.)

another under-documented feature of the engine module is that the available commands can be dynamically queried (in the static field engine.commands.)
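e.g. something like this (assuming engine.commands maps command names to descriptors carrying an OSC format string - the exact structure may differ between norns versions, so verify before relying on it):

```lua
-- sketch: inspect the loaded engine's command set at runtime
for name, cmd in pairs(engine.commands) do
  -- cmd.fmt is the argument format, e.g. "if" = int + float
  print(name, cmd.fmt)
end
```

a generic script could use this to decide which commands to wire up to params, instead of hardcoding each engine.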

i would love to promise to implement this, it is straightforward… but i’m underwater on norns development promises already. so i won’t for now.


(paging @jah + @markeats !)

I’m happy to help here (esp. on the lua side), but I’m a big ol’ supercollider baby


I don’t think I have a whole lot to add but I’d love to see this happen. Just glancing at what I drafted in the github issue I think that’s still pretty much where my own thoughts are at for the basics and those were based on some earlier conversations with @zebra. The notes there match what I’ve implemented in my own projects.

At the time I thought it might be good to align to what MPE standards are doing for some modulation params also but now I’m not so sure that belongs at the SC level.

Very happy to make changes to my engines to fit with any standard :slight_smile:


i’m sorry to spam the thread… but i think i can boil it down now:

there are two possible paths.

  1. perform this decoupling on the lua side. the new lua layer would (A) know about the parameter structure of several engines, (B) present a unified and opinionated(/configurable) API for controlling them, (C) manage dynamic engine switching. i believe the development of this layer would be amply supported by the existing core scripting API.

  2. perform the decoupling on SC side by providing wrapper class &c. the different voice types will be necessarily simpler and more opinionated than molly or polysub, &c.

philosophically i lean towards (1). that said, (2) might encourage more high-quality contributions, since with an appropriate wrapper in place one can just home in on a playable range for 3/4 parameters and not worry too much about prioritizing flexibility. probably better for the time-constrained SC developer.

anyways, my opinion has little merit until i actually have the bandwidth to contribute anything. if @andrew is interested in architecting a multi-engine lua layer, that would make sense. i can contribute by cleaning up polysub or whatever, and it could expose bugs in the engine discovery stuff for which i take responsibility.

in the second option i may or may not have time to help with a wrapper class, and can certainly contribute some new voice types.

actually on reflection, i’d like to prioritize the wrapper abstraction. it dovetails with the work that has been (slowly) happening on norns 3.0, which revises, streamlines and extends the supercollider engine API considerably. regardless of what else is created, reducing boilerplate and supporting interpreted code for polysynth definitions is useful.


keeping more in lua seems like a great option for flexibility & I’m happy to write it - my only concern there is keeping the voice allocator in sync with envelopes and freeing synths (but maybe we have enough CPU that we can just keep the voice count high + round-robin things and not worry about it too much). (I bumped into this with some lua voice allocation using R, but I believe that engine was measurably hungrier than the other polysynth engines.)

I need to spend some more time with the engine API before I speak too confidently on anything but I’d be glad to start writing up a spec for how a flexible lua synth API might work


cool. i don’t see an issue. lua has to handle voice allocation only to the extent of specifying arbitrary integer IDs when starting or stopping voices. (typically, ID is a MIDI note number.) the engine deals with assigning, instancing or freeing synths as needed. lua doesn’t have to know about how voice lifecycle is actually managed, nor what strategy is used (static or runtime allocation, self-freed, paused or always-running, etc)
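so the whole lua-side “allocator” can be as small as this (engine.start/stop here follow PolySub-style signatures - an assumption, adjust for your engine):

```lua
-- sketch: lua only names voices by arbitrary integer ids;
-- the engine owns instancing, freeing, and lifecycle strategy
local musicutil = require 'musicutil'
local m = midi.connect()

m.event = function(data)
  local msg = midi.to_msg(data)
  if msg.type == "note_on" then
    -- id = midi note number, so note_off maps back trivially
    engine.start(msg.note, musicutil.note_num_to_freq(msg.note))
  elseif msg.type == "note_off" then
    engine.stop(msg.note)
  end
end
```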

(incidentally, synths in SC can be paused without being freed, so CPU footprint and lifecycle management can be orthogonal. voice mgmt in molly and polysub is probably needlessly complex.)


Load samples from USB. I am travelling with just a recorder and norns. Possible?

@swhic I’ve made use of SAM for this. It’s not loading straight in, rather playing recorded sounds into SAM then trimming, saving and naming.


I’m not an expert at this sort of Linux stuff, but it seems like you could set up a cron job or something that would sync new files from the usb to the audio folder. The benefit of Norns being a computer under the hood.