Having engine params and bringing those over to the lua side has been discussed. I’m kinda fond of that approach (did a proof of concept of this some time back). The current engine commands are not strictly params in the key-value sense. The ack lua module tried to solve bringing common params to scripts easily, yes.

Also, the formatters module contains general ways to present various controls.

2 Likes

definitely something i’ve thought about and wanted to put time into. you likely discovered that the cairo graphics lib is pretty flexible and easy. it would be great to create some reusable elements; envelopes and waveforms are high on my list.

4 Likes

cool let me know if i can help!

definitely! some of my thoughts:

  • for sure, there is a general category of “polysynths” that share many common attributes.

  • clearly PolySub is just one possible algorithm, a very straightforward subtractive voice. (i saw you made a nice FM synth voice too! excellent.)

  • one thing i was thinking of doing is just extending a single engine to have multiple synthesis algorithms, certainly including FM and a couple physical models. so you have a command like .algo('fm2') that makes subsequently-created voices use the \fm2 synthdef, or whatever.

  • problem is, all parameters would have to be more generically labelled and usability would suffer a bit. (seems worth it though to have a fully polytimbral ensemble.)

  • as @jah mentions, not all engine commands are key-value. if they were, it would be straightforward for a script to generate a parameter list programmatically given the output of engine.commands.

  • we could separate key-value commands into a separate registry of “parameters” or something, and this might solve a lot of things. (they could also declare allowed ranges, default warping specs, &c.) it’s also not hard to do and i’m happy to implement that change.

  • it’s not strictly necessary to have engine commands that aren’t key-value. but it makes some things much cleaner:

    • some commands don’t set a single state value, but just “make something happen” that affects the state in more complex ways - for example in SoftCut there is a command to immediately synchronize the position of one head to the current position of another head.
    • some commands naturally take more than one argument. for example SoftCut and others i’m working on have patch matrices; it’s a lot better to have a single command patch(src, dst, level) than to have [numSrc * numDst] separate commands each taking a single level argument.
  • one thing i’ll point out is that for PolySub the voice allocation is done on the SC side. we could pre-allocate all voices and force lua to deal with them. but it’s maybe a little more flexible to do this in SC. for example i dunno if you noticed but there is a .solo command in PolySub that is just like the .start command - it allocates a new voice - but it doesn’t map the control busses, so that voice is unaffected by subsequent parameter changes.

  • finally: as implemented, parameters in lua scripts don’t have to be mapped to synth parameters at all; they might be more “meta.” tempo comes to mind but i’ve also got some stuff in the cooker where the parameters are pitch lattice elements or other settings for generative structures.
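to make the “separate registry of parameters” idea from above a bit more concrete, here is a rough lua sketch. everything in it - register_param, params_for, the spec fields - is made up for illustration; it just shows how key-value commands with declared ranges/defaults would let a script generate a parameter list programmatically:

```lua
-- hypothetical registry of key-value engine "parameters"
-- (names and fields here are assumptions, not a real norns API)
local registry = {}

local function register_param(engine, name, spec)
  registry[engine] = registry[engine] or {}
  registry[engine][name] = {
    min = spec.min or 0,
    max = spec.max or 1,
    default = spec.default or spec.min or 0,
    warp = spec.warp or "lin",  -- e.g. linear vs exponential warping
  }
end

-- an engine's lua file could declare its key-value commands up front:
register_param("PolySub", "cutoff", { min = 20, max = 20000, default = 1200, warp = "exp" })
register_param("PolySub", "level",  { min = 0, max = 1, default = 0.5 })

-- a script can then build its param list without hard-coding anything:
local function params_for(engine)
  local out = {}
  for name, spec in pairs(registry[engine] or {}) do
    table.insert(out, { id = name, min = spec.min, max = spec.max, default = spec.default })
  end
  return out
end
```

non-key-value commands (like a multi-arg patch(src, dst, level)) would simply live outside this registry, as discussed above.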


so TL;DR: i think both arbitrary engine commands and arbitrary script parameters are basically necessary to do all the neat weird stuff. but explicitly implementing a subset of script params that map 1:1 to synthdef argument values would probably be useful as well.

in any case, super glad to be having this conversation.

re: widget library: oh yes please for sure

7 Likes

…and probably best to get working on this sooner rather than later; by then, a larger library of scripts and engines will be harder to maintain at a shared-services level. IMO.

can’t argue with that in 20ch

1 Like

Great to hear your thoughts!

Seems like even if we had this we’d still want to be able to go rogue sometimes, right?

Yes, totally agree we should hang on to multi-arg commands. I guess the value could always be an array, but that might get messy quickly when specifying ranges etc.

Agreed that voice allocation belongs in SC. Looking at the earthsea/polysub example, though, it seems a little mixed: earthsea keeps track of voice IDs and decides when to allocate, as far as I can tell. It’d be nice to be in a place where a script can just request a freq/gate and the rest is in the SC engine. I’m probably over-simplifying :slight_smile:

I feel like I don’t have enough knowledge of norns (or in general!) to comment much further but was just looking at @jah’s approach with ack some more and it is nice and flexible having an engine-specific lua commands file (maybe would make more sense living in the same location as the engine).

@jah’s approach with ack some more and it is nice and flexible having an engine-specific lua commands file
(maybe would make more sense living in the same location as the engine).

externalized in a separate Lua file associated with the engine.sc

sorry, took a minute for this to sink into my brain, but you are both totally right - having boilerplate in lua for a given engine’s param set is both way less work and more flexible than hard-coding more plumbing into the C code / OSC protocol. :+1:

hm, i had not realized it but yea, my perspective is a little cracked on this. my view being: that’s true, but only in the kind of trivial sense that a midi keyboard performs voice allocation by sending noteon/noteoff. in the .start(id, hz) command, i see id as analogous to midi note. the weird part is that the actual pitch is totally decoupled from the “midi note,” and this is just to allow arbitrary/dynamic tunings in lua - because i am a goofball who likes to sing pitches into my keyboard in real time, and stuff. (it also incidentally allows unisons.)

the engine is responsible for actually deciding when to start a new synth, retrigger an existing one, perform a legato, &c - and would steal voices if voice stealing was implemented :slight_smile: . it uses its knowledge of which voices are actually in their envelope release phase, &c, to make these decisions. (i think for lua to be fully responsible for voice allocation it would be necessary to send notifications when a voice actually finishes - that’s doable too but haven’t seen the need just yet.)

3 Likes

it would be straightforward for a script to generate a parameter list programatically given the output of engine.commands.

We can facilitate interconnection by making norns programs use a common discovery API. If synths (and effects, etc.) are self-describing, then other programs can discover their various parameters. If each synth provided a method that enumerates its parameters (and their units, ranges, etc.), then a user of that synth could automatically populate a UI (for example).
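As a rough lua sketch of what that discovery API could look like (describe(), build_ui, and all the field names are hypothetical, just to illustrate the shape of the idea):

```lua
-- hypothetical self-describing engine: a describe() method enumerates
-- its parameters with units/ranges/defaults (all names are assumptions)
local MySynth = {}

function MySynth.describe()
  return {
    name = "MySynth",
    params = {
      { id = "freq", unit = "Hz", min = 20, max = 20000, default = 440 },
      { id = "amp",  unit = "",   min = 0,  max = 1,     default = 0.8 },
    },
  }
end

-- a generic consumer could populate a UI without knowing the engine:
local function build_ui(engine)
  local widgets = {}
  for _, p in ipairs(engine.describe().params) do
    -- here we just build labels; a real UI would create controls
    table.insert(widgets, p.id .. " [" .. p.min .. ".." .. p.max .. "]")
  end
  return widgets
end
```

Any program that speaks this protocol (a mixer, a sequencer, a patching UI) could then interconnect with any engine that implements describe().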

3 Likes

I’m very much in favor of such an approach. This closely resembles SuperCollider’s SynthDescLib, in which each added SynthDef’s arguments are spec’ed and can be looked up in a dictionary and reused, e.g. by its Pattern framework (such as Pbind).

Something along these lines (self-describing engines, engine parameters) has also been discussed prior to release.

It should also be noted that currently engine commands and metadata are only retrievable after an engine has been loaded. Ideally, I would want to be able to view an engine’s “schema” (so to speak) without having to load it.

3 Likes

whilst forking norns for some dev work on macOS,
I’ve noticed someone has a ‘case’ issue… master has both doc/modules/Log.html and doc/modules/log.html committed.
this will cause issues on platforms that are case-insensitive (Windows) or case-preserving (macOS)

on my fork, I’ve just removed doc/modules/Log.html (it’s the older of the two), and that resolves the issue.

perhaps @tehn as you have access, you could delete Log.html directly on GitHub.com.

2 Likes

ok, so I’ve now gone thru the norns/matron code base in more detail, and have a pretty good understanding of how it ticks… but I’m back to struggling with issues around abstraction.

screen

I don’t think doing this via a pre-processor flag is the best way (though perhaps easy)
I think it should be possible to have alternative ‘screen implementations’ co-exist, and even potentially have multiple screens. (perhaps one marked as ‘primary’ to help make app dev easier)

grid

these also seem to be, to some extent, hard-coded, since weaver calls thru the dev_monome_*
I think a grid should be an abstract concept, again where different implementations can co-exist


so Im wondering, how is it intended for norns to have new devices added/supported?
(excluding midi, which i know could be done in the lua/script layer, though that has its own issues)

having been thru the whole matron stack, it strikes me there are two directions:

  • more abstraction, looser coupling
    my preferred direction, but needs a bit of careful consideration about abstraction/interfaces.
  • adding more specific devices/events
    not a good direction in my opinion; currently doing this means touching lots of ‘core’ code, and would potentially make norns unwieldy mid-term (if many devices were added)

so I guess I’m asking: what are the thoughts of the dev team? is anyone actively working in this area?

p.s. sorry, I’m being ‘coy’ about what I’m actually doing, but that’s because I’m not ready to ‘announce’ it in a public forum - I prefer to only announce once I’ve got something working, rather than vapourware/promises.
(I’d be willing to talk privately to someone about this, if you want more details, and I could walk thru the ‘issues’ I see in norns in more detail)

alsa raw midi: how does this handle midi devices with multiple midi ports? tested?
it seems to me that only one raw device file (/dev/snd/midiCnD0) is created even if a device has multiple ports, so you only have access to one port - whereas the alsa api (as used by, say, amidi -l) includes sub-devices.

vid/pid for usb devices: it would be useful to have vid/pid available, and perhaps use this to help drive ‘device type’, or at least allow you to ‘override’ the default handling, e.g. something might be usb midi class compliant, but you want specific handling for it.

btw: if no one is working in this area of the matron code (abstraction/device identification), I’m thinking of having a bit of a play - perhaps just getting a little more flexibility in there, e.g. a single device override, that we can then look to extend later? of course if you don’t like it, you can always reject a PR.

1 Like

we started with an alsaseq-based implementation, but changed that to rawmidi later because rawmidi can be used with matron device manager out of the box (and it’s much better to have a single device manager than having 2 subsystems for notifying scripts about new devices).

i have a device with 2 sequencer ports myself and apparently it just works with norns/rawmidi.

right, we currently provide device name (as reported by udev) for that as well.

1 Like

i think that is what we are talking about, no?

and we are just working out the best place to put it right now.

i think imposing a descriptor format on every engine command, and pushing those descriptors through OSC, and into lua, is a lot of extra infrastructure that is brittle (in ways we might not foresee right now) and doesn’t accommodate a lot of more experimental structures.

(i’m emphatically not interested in recreating Audio Units, for example - way too prescriptive. if you try to accommodate a broader range of musical control structures you end up with something like LV2 - which has a lot of neat ideas and is super flexible, but also pretty inaccessible. an alternative version of norns would just be an LV2 host, solving many of these questions but requiring tons more work to implement DSP engines.)

what @jah has already done with the ack engine seems like a good template for an immediate pragmatic solution - the “discovery/descriptor” part is implemented in lua, alongside the engine, and just provides defaults for the existing parameter system in lua. it’s not as architecturally elegant as doing everything automagically in the engine API, and it requires the .sc and .lua components to be synchronized, but it’s a lot less work and more flexible - e.g. scripts can easily override or ignore the boilerplate as desired.
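a rough sketch of how that ack-style lua companion file could look (add_params, add_control, and the param ids here are all made up for illustration - this is not the real ack API, just the shape of the idea):

```lua
-- hypothetical engine companion file: lives alongside the .sc engine
-- and adds default params to a script's parameter list
local MyEngineParams = {}

-- `params` is whatever object the script uses to register parameters;
-- `overrides` lets a script skip entries it wants to handle itself
function MyEngineParams.add_params(params, overrides)
  overrides = overrides or {}
  local defaults = {
    { id = "speed",  min = 0.25, max = 4, default = 1 },
    { id = "volume", min = 0,    max = 1, default = 0.8 },
  }
  for _, p in ipairs(defaults) do
    if not overrides[p.id] then
      -- add_control's signature here is simplified, not the real norns API
      params:add_control(p.id, p.min, p.max, p.default)
    end
  end
end

return MyEngineParams
```

the .sc and .lua files still have to be kept in sync by hand, but a script just calls add_params() for the boilerplate and overrides whatever it wants.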


ok… well, this isn’t really compelling for me at this point. we want to work on cleaning up, extending and improving the core features for the norns device - not making it into a video synth (just yet), and not making the software work on hypothetical alternative hardware or on someone else’s product. if you want to do that it’s fine. if you want to, say, use a second screen for development (presumably with a usb-vga adapter), then i don’t see how that relates to supporting other outputs with the cairo drawing commands available to norns scripts.

sure. this sounds like its better done on the lua side to me. but it’s also a little vague what we’re talking about.

i’m assuming you want to say something like - “this script is designed to be used with a grid controller of at least MxN size - but it should work either with a monome TTY device or a MIDI device, and should seamlessly connect to one or the other when plugged.” ok cool - this sounds like a job for a (simple) adapter class in lua, and of course we would welcome such a contribution.
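a minimal sketch of what such an adapter class might look like in lua (GridAdapter, set_led, and the backend table shape are all invented names for illustration, not an existing norns API):

```lua
-- hypothetical adapter presenting one grid interface over different
-- backends (a monome serial device, a midi grid, etc.)
local GridAdapter = {}
GridAdapter.__index = GridAdapter

function GridAdapter.new(backend)
  return setmetatable({ backend = backend }, GridAdapter)
end

function GridAdapter:led(x, y, level)
  -- translate the common call into whatever the backend understands
  self.backend.set_led(x, y, level)
end

function GridAdapter:size()
  return self.backend.cols, self.backend.rows
end

-- scripts write against the adapter; a backend per device type plugs in:
local lit = {}
local fake_monome = {
  cols = 16, rows = 8,
  set_led = function(x, y, z) lit[#lit + 1] = { x, y, z } end,
}
local g = GridAdapter.new(fake_monome)
g:led(3, 2, 15)
```

a midi backend would just provide its own set_led (emitting note/cc messages) and dimensions; the script code stays identical.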

the matron C stack is about providing lower level connections. it’s simple. libevdev for hid, rawmidi for midi (and yes, as far as we can tell this supports multi-port devices just fine), libmonome for monome, OSC for whatever else.

the only device class that’s missing, in my opinion, is alternative TTY-based protocols. but this is such a vague category that i’m waiting for some actual use case to appear before worrying about it.

if it would be useful to pass more descriptor data to lua for each device class, that’s simple enough to do and doesn’t require some huge restructuring.

if you want to make the C device layer super smart and have the weaver layer be more abstracted, i guess i’d be curious to see what you come up with, but not sure i see the engineering advantages.

5 Likes

Apologies if I was just restating the obvious with my comment. I think having a discovery method at the lua level would be sufficient, it needn’t be plumbed through to sc. Maybe I’m naive to some of the potential challenges/applications though.

i tried this and i get this

ERROR: Message ‘lookup’ not understood.
RECEIVER:
nil
ARGS:
Float 0.000000 00000000 00000000
CALL STACK:
DoesNotUnderstandError:reportError
arg this =
Nil:handleError
arg this = nil
arg error =
Thread:handleError
arg this =
arg error =
Thread:handleError
arg this =
arg error =
Object:throw
arg this =
Object:doesNotUnderstand
arg this = nil
arg selector = ‘lookup’
arg args = [*1]
CroneAudioContext:buildVuBlob
arg this =
var ret =
< FunctionDef in Method Meta_Crone:initVu > (no arguments or variables)
Float:do
arg this = inf
arg function =
var i = 0
Routine:prStart
arg this =
arg inval = 3.970006393
^^ The preceding error dump is for ERROR: Message ‘lookup’ not understood.
RECEIVER: nil

but the rest continues on to the

Faust[FaustCompressor]:
Inputs: 8
Outputs: 2
Callback: zero-copy
Faust[FaustZitaVerbLight]:
Inputs: 7
Outputs: 2
Callback: zero-copy

similar errors anyone?

sorry it’s related to this
Execution warning: Class ‘ReverseAudioTaper’ not found
ERROR: Message ‘lookup’ not understood.

so it’s missing AudioTaper.sc, which currently lives in dust/lib/sc/abstractions. it should get copied by the sc install script

(sorry this could all use a little tidying up for general use. like the symlinking procedure is very weird just to avoid some issues i was having during development with editor backups and other non-.sc files.)

i don’t see it in there. i don’t actually see anything in dust/abs except for:
/emb Engine_Ack.sc Engine_Glut.sc Engine_PolyPerc.sc Engine_TestSine.sc Engine_Why.sc