it’s not currently possible. would need rework to store observer lists instead of simple cc->param slots.

one thing that you can currently do is add meta-parameters to the script. while less immediate, this allows more arbitrary mappings.
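a rough sketch of what that meta-parameter approach can look like in a script — the param ids and ranges here are made up, not from any particular script:

```lua
-- one mappable "meta" control that fans out to several underlying
-- params ("cutoff" and "release" are hypothetical destination ids)
params:add_control("meta", "meta", controlspec.UNIPOLAR)
params:set_action("meta", function(x)
  -- scale the single meta value into each destination's range
  params:set("cutoff", util.linlin(0, 1, 50, 5000, x))
  params:set("release", util.linlin(0, 1, 0.1, 3, x))
end)
```

the meta param can then be midi-mapped from the menu like any other param, and one cc moves everything at once.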

norns makes this sort of thing incredibly easy because the whole system is scriptable.

limitations in the menu UI should be looked at as an opportunity to try some scripting.

if someone wants this, start a “meta mapping” thread and we’ll offer guidance!

Just had a daydream so bear with this abstract half-idea I had:
I don’t know if this exists already. It may be a form of synthesis or sampling that I’m not aware of, but what about a granular sampler that records particles in a burst, spread-out pattern, rather than splitting them afterward? Or, less fantastical, something that fragments the incoming signal, which you can then rearrange from the buffer with encoders that map them around an x/y grid. Doing this with the grid would actually be really cool: random pieces of the incoming signal are recorded and then you can piece them together like a puzzle by selecting either clusters or individual “grains” with the encoders, maybe from a “pool” visualized on the screen, then grouping them with the pads on the grid (one grain or cluster of grains per pad, where increasing the size would group grains rather than increase the grain length). A less linear way of granular sampling.

I suppose there may have to be a sort of timeline, so you aren’t just piling layers in one spot. Maybe page two of the grid, or the right half of the grid, could assign a clock division and “slice/cluster/loop” length per segment, in a similar way to Animator. I don’t know how the variable brightness works, but maybe these smaller chronologically grouped sections could be selected within the larger “puzzle” while piecing it together and distinguished with a different brightness. It might be better to just have the overall “puzzle” laid out in a linear way, with the timeline progressing from top left to bottom right.

I imagine it to be very microsound oriented, but I don’t know how possible it is to do extremely small microscopic pieces of the recorded signal. The main idea would be to stitch together fragmented slices of a chopped signal into a new patchwork in a tactile, visual way with the grid. I’ve never used Slate and Ash Cycles, but this would be a maybe cooler version of what I imagined/hoped that plugin would do.

I love the idea of recording a granularized signal as a starting point!

I would love to see Sapling re-created on the Norns.

It’s possible that this might seem a little redundant between MLR and Cheat Codes, but Sapling has its differences, and grid/arc functionality would take it somewhere really expressive.

Idea: On Norns Shield, use the rpi output jack for an audio click to sync analog devices to. It should follow the global clock at 4 PPQN, or be configurable.
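the clock side of that seems straightforward; here’s a sketch — `engine.click` is hypothetical, and the click sound itself would need an engine (or a softcut voice) routed to the shield’s output jack:

```lua
-- follow the global clock and fire a click 4 times per quarter note
function init()
  clock.run(function()
    while true do
      clock.sync(1/4)  -- 4 pulses per beat = 4 PPQN
      engine.click()   -- hypothetical trigger
    end
  end)
end
```

since this runs on the global clock, it would follow whatever clock source (internal, midi, link) is selected in the menu.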

Doubt it and don’t see anything in the API docs, but is there any programmatic access to the TAPE feature? :slight_smile:

Sometimes I use TAPE as my “reel to reel”, if you will, to quickly capture sketches, and it’d be real neat to have a tiny app with Grid control wrapping my workflow… I really love how I can be laptop-less and quickly record with TAPE, then run a quick rsync from my computer to fetch the latest recordings without unplugging or moving anything.

you can call tape functions directly on the Audio class - see https://github.com/monome/norns/blob/b06682cd3968e0bef8c8832ea90fa73c54dda90c/lua/core/audio.lua#L148

I’ve played around with some sequencer scripts this way to auto start/stop tape recording when I start/stop the sequencer
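for anyone curious, a minimal sketch of that pattern using the tape functions from core/audio.lua (the filename scheme here is just an example):

```lua
-- wrap TAPE recording around a sequencer's transport
local filename = _path.audio .. "tape/sketch-" .. os.date("%y%m%d-%H%M%S") .. ".wav"

function start_seq()
  audio.tape_record_open(filename)  -- open a file for recording
  audio.tape_record_start()
  -- ...start the sequencer...
end

function stop_seq()
  -- ...stop the sequencer...
  audio.tape_record_stop()
end
```

`audio.tape_play_open` / `audio.tape_play_start` / `audio.tape_play_stop` work the same way on the playback side.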

Awesome! Totally glossed over that module - thanks for the help.

I was thinking it would be nice to have a harmonizer like the one in iZotope Nectar, where you can use MIDI in to select the voices. A little box like Norns would really shine as a voice harmonizer.

what about a generic polyphonic synthesizer script?

after playing with dronecaster I was thinking it might be neat to expand on @tyleretter’s idea. dronecaster is a beautiful ui built around selecting user-submitted supercollider patches for drones. instead of just drones, how about full-blown polyphonic synths? it can basically be the same ui (selectable supercollider engines), with a midi input. the main difference is that instead of just the parameters hz and amp, each supercollider engine would require hz, amp, gate, and adsr, and then possibly a few generically named parameters (parm1 through parm4) that could be specially defined however the engine’s author wants.

i’ve actually got a primordial version of this working, mainly for learning purposes. if this already exists, please excuse my ignorance :wink: if not, i’d be happy to put mine out in the world.
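for illustration, the lua side of such a generic engine could look roughly like this — every engine command name here is hypothetical, not an existing engine’s API:

```lua
local musicutil = require 'musicutil'

-- forward midi to a hypothetical generic engine interface
local m = midi.connect()
m.event = function(data)
  local msg = midi.to_msg(data)
  if msg.type == "note_on" then
    engine.noteOn(msg.note, musicutil.note_num_to_freq(msg.note), msg.vel / 127)
  elseif msg.type == "note_off" then
    engine.noteOff(msg.note)
  end
end

-- the generic [0,1] params, forwarded straight through
for i = 1, 4 do
  params:add_control("parm"..i, "parm"..i, controlspec.UNIPOLAR)
  params:set_action("parm"..i, function(x) engine["parm"..i](x) end)
end
```

the point being that the script never needs to know what parm1 does — each engine maps it internally.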

maybe not as generic as you have in mind, but there’s Molly the Poly

Moln was built using the modular-style engine, R

definitely room for more options!

yeah go for it. i have some half-baked versions of such a thing extending PolySub engine.

i think the reason it’s still half-baked is that i find it clunky and/or restrictive to deal with a rich array of sound parameters (as above) for a given synth structure, and also make their ranges/handles generic: it’s nicer for me to have e.g. detuning hz scriptable in its native unit.

and “philosophically” i also think this leads to deeper consideration of the specific architecture as compositional material. whereas having several generic modulation parameters makes things plug/playable but forces a certain disassociation between musical logic and actual sonic qualities. (like, a lua script can’t actually make intelligent decisions about the beating frequencies effected by use of the detune parameter if it is specified by a generic unit range.)

so, it becomes attractive to make some sort of dynamic parameter discovery / declaration layer. but at that point i start to wonder about the benefit of the whole exercise, since we already have discovery of parameters (or rather, general void functions) for engines in lua, and you can already actually switch engines on the fly, and engines can be subclassed in supercollider, and i don’t really want to build out a bunch of synth-architecture-management stuff in lua anyway. [*]

but, all that said of course there are plenty of very simple polysynth architectures that don’t need a lot of parameters but are still useful.


i also have a half-baked collection of one-shot polysynths, compatible with PolyPerc. this to me is a better fit for a generic interface. not sure i can put my finger on why, except that these synths are usually simpler to write than sustained / continuously-modulated voices, and triggering them in a massively multitimbral way is more fun / natural. (with sustained polysynths i feel like, since it’s more usual to manipulate several identical instances, it is nice for each instance to be able to go in lots of directions.)


anyways i’d be happy to collaborate on something like that, the idea has been marinating for quite a while but has not been executed.

[*] see also this issue, where after discussion we decided that it’s useful for “polysynth engines” to implement a standard interface (subset of params), without necessarily sharing a restricted param set:

i get what you’re saying - it’s useful to know what a parameter is doing. but then i think about the op-1 engine ethos, where there is a forced dissociation between knobs and sonic qualities. that has been a bit freeing rather than restrictive for me personally, i think because the op-1 engine obfuscation forces exploration around a sound. which can be a rewarding (though sometimes frustrating) interaction. i think this is what i was going to lean into with this program - basically an op-1 “engine” selector, but with maybe more than 4 parameters. also, it makes things simpler since you don’t need dynamic parameter discovery :)

this is really my personal motivation for creating something like this. there are a lot of collections of supercollider scripts that range in bakedness. there are github repos like SCLOrkHub that have even taken it on themselves to normalize a lot of them. i guess i’m thinking to normalize these just a little further - making sure they have adsr and gate, and then maybe fitting other parameters into a [0-1] range that is mapped internally in sc.
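(that normalization can be undone wherever it’s convenient — inside the SynthDef, or on the lua side; a quick sketch with made-up ranges:)

```lua
-- undoing a [0,1] normalization into native units (ranges are examples)
local norm = 0.5
local cutoff_hz = util.linexp(0, 1, 50, 12000, norm) -- exponential suits frequency
local detune_hz = util.linlin(0, 1, 0, 25, norm)     -- linear suits small offsets
```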

however, i have to say with some hesitation that my personal motivation for collecting cool supercollider scripts is stunted a little bit - i’m surprised to find that a lot of the supercollider scripts on sccode or github are not that good sounding, and can even sound a little cheap. i’ve got scott wilson’s supercollider book and i’m hoping that will help me to understand the beauty of supercollider.

that being said, here is what i have so far:

it’s a simple approach but it works pretty well. the adsr is working, the midi input is working, the basic synth layout is working (basically forked from PolySub). just putting it out there in case anyone comes by and is interested. it’s not ready to “release” yet since parameters aren’t hooked up and the selecting part of the ui is not done. i have two more ideas for norns scripts that i want to do first, but i will come back to it later (seeing more exciting supercollider synths would help me to come back to it sooner) :)

@dan_derks + @tehn and I were just talking about this a bit as well - definitely worthy of further discussion here or on github and I think there would be some solid benefits to having an optional standard with the right level of flexibility (a standard could be multi-tiered maybe ? i.e. ISO C > POSIX > SUS)

I’d love for such a thing to be able to address parameters and pitch/gate independently on voices somehow in addition to global + voice allocation format (can’t speak too fluently on the implementation details for that)

<< specifics >>

for me the interface would look something like:

  • x number of voices (we could allow them to be different (i.e. islands), and/or expect them all to be the same)

  • each voice n has a group of parameters P

  • each voice has frequency & vel (max style, vel == 0 for note off ?)

commands:

  • set param k in P for voice n

  • set frequency & vel for voice n

  • set param k in P for all voices (+ span ?)

  • set freq & vel for all voices

  • set freq & vel for the next free voice

  • free voice n ( /kill the envelope )

  • free all voices
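as a sketch, that command list might surface in lua as something like this (all names hypothetical):

```lua
-- the commands above, as they might look from lua (names made up):
-- engine.set_param(n, k, v)    -- set param k in P for voice n
-- engine.set_note(n, hz, vel)  -- vel == 0 for note off (max style)
-- engine.set_param_all(k, v)   -- set param k for all voices (+ span ?)
-- engine.set_note_all(hz, vel)
-- engine.note_next(hz, vel)    -- allocate the next free voice
-- engine.free_voice(n)         -- kill the envelope
-- engine.free_all()
```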

Just throwing out some more tangentially related ideas, since this conversation has been putting thoughts into my head today…

What if there were some kind of meta-earthsea script? It would support MIDI keyboards, Grid as keyboard, looping Grid and MIDI patterns, but also have the ability to change between existing community engines (PolySub, Molly the Poly, Passersby)… Would need to account for the various engine commands for each (perhaps one takes HZ while another takes MIDI note, for example), but I don’t think it’d be a huge deal.

It could use the PARAMS page for editing synths, and just use the existing earthsea UI to visualize notes. I’m not sure how PSETS would play with changing engines, but I’m thinking if you save the selected engine along with the PSET maybe it’d work?

So rather than making all the engine interfaces homogeneous, it would do that to the control mechanism instead. Just spitballing, as I really like the looping capabilities of Earthsea but think it’d be cool to sub in other engines too… :)

Edit: now that I’m reading more of the above thread, I guess these two ideas are pretty similar and could coexist/fuse together quite nicely…
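one way the control layer could absorb the per-engine differences is a small adapter table — the signatures below are from memory, so treat them as a sketch rather than the engines’ confirmed APIs:

```lua
local musicutil = require 'musicutil'

-- one adapter per supported engine, normalizing note-on to
-- (note, vel) regardless of what each engine actually takes
local adapters = {
  PolySub = function(note, vel)
    engine.start(note, musicutil.note_num_to_freq(note))
  end,
  MollyThePoly = function(note, vel)
    engine.noteOn(note, musicutil.note_num_to_freq(note), vel)
  end,
}

local note_on = adapters[engine.name] -- dispatch on the loaded engine
```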

with current infrastructure it’s really easy to place most of the code for an interface in a lib directory and fill the base folder with scripts that look more or less like this:

include 'lib/earthsea'
include 'molly_the_poly/lib/molly_the_poly'

engine.name = 'molly_the_poly'

that would populate the SELECT screen with entries like earthsea/molly_the_poly, earthsea/passersby, etc. the main script can even loop over params and build parts of the interface based around the engine params supplied. most of the groundwork is in place I believe, just add a community standard of some sort!

yeah, i agree. making playable and good-sounding softsynths is just not easy, even in a high-level DSL (and cabinetry isn’t easy even with power tools); details matter. (e.g., aliasing is fairly ubiquitous; one scsynth feature that would help a lot would be an option to oversample a UGen graph.)

anyways, i’d be happy to try and contribute a couple of voice definitions.

@andrew the functions listed do seem like good candidates for unified command -> voice interface. we can get more specific by examining the engines and considering the other points raised in the GH issue:

@cfd90 i agree that many parameterization decisions can and should be left to the scripting layer. for example i strongly favor the practice of the noteOn/start command accepting Hz and unit velocity, rather than MIDI numbers. what is needed is unifying e.g. polysub and molly arguments: the former takes (id, hz) and the latter (id, hz, vel); to complete the example, i’d add vel to polysub but make both take “velocity” as raw [0, 1]. (the control layer can take care of velocity curve/scale, because different controllers will require different tunings of these. the engine could accept curve-tuning parameters but eh… that seems less efficient.)
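keeping the curve in the control layer is a one-liner; a sketch (the curve value is just an example, tuned per controller):

```lua
-- shape raw midi velocity into unit velocity in the script,
-- so the engine only ever sees raw [0,1]
local function vel_map(midi_vel, curve)
  return (midi_vel / 127) ^ curve
end

engine.noteOn(id, hz, vel_map(msg.vel, 1.6)) -- hypothetical engine command
```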

[as an aside/example, here’s a point of friction: in a voice architecture with many unique parameters, i think it makes sense to expose a raw per-voice amplitude rather than a generalized velocity, because it gives more freedom to the script to maybe correlate amplitude with other parameters in a velocity map. but with a small and generic parameter set you would probably want to make a larger number of more opinionated synth types which specify their own velocity mapping.]

the best way to enforce these conventions would be to provide a wrapper class. the wrapper would take care of control bus and voice instance management (providing the one voice / all voices mapping,) and common attributes like an ADSR volume envelope, maybe Hz and param smoothing, etc. this would also of course reduce boilerplate code considerably.
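the voice-management half of such a wrapper could be as small as this on the lua side — a minimal round-robin allocator, with hypothetical engine command names:

```lua
-- minimal round-robin voice allocator a wrapper might provide
local Voices = {}
Voices.__index = Voices

function Voices.new(n)
  return setmetatable({ n = n, next_id = 1, held = {} }, Voices)
end

function Voices:note_on(note, hz, vel)
  local id = self.next_id
  self.next_id = (self.next_id % self.n) + 1 -- cycle through voice ids
  self.held[note] = id                       -- remember which voice holds this note
  engine.noteOn(id, hz, vel)                 -- hypothetical command
end

function Voices:note_off(note)
  local id = self.held[note]
  if id then
    engine.noteOff(id) -- hypothetical command
    self.held[note] = nil
  end
end
```

(a real wrapper would also want voice stealing when all n voices are held, plus the bus/smoothing management described above.)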

(BTW: yes, i believe it is totally possible to switch engines from a parameter action and it should “just work.” with the caveat that lua should keep track of the “loading” state. here’s an example of that. i’d be curious to know what happens if anyone wants to try adapting this to use an engine parameter… and maybe someone already did.)

another under-documented feature of the engine module is that the available commands can be dynamically queried (in the static field engine.commands.)
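e.g., a quick way to see what the loaded engine exposes:

```lua
-- dump the loaded engine's registered commands using norns'
-- table pretty-printer
tab.print(engine.commands)
```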

i would love to promise to implement this, it is straightforward… but i’m underwater on norns development promises already. so i won’t for now.

(paging @jah + @markeats !)

I’m happy to help here (esp. on the lua side), but I’m a big ol’ supercollider baby

I don’t think I have a whole lot to add but I’d love to see this happen. Just glancing at what I drafted in the github issue I think that’s still pretty much where my own thoughts are at for the basics and those were based on some earlier conversations with @zebra. The notes there match what I’ve implemented in my own projects.

At the time I thought it might be good to align to what MPE standards are doing for some modulation params also but now I’m not so sure that belongs at the SC level.

Very happy to make changes to my engines to fit with any standard :slight_smile:
