re: screen

i didn’t write this source but i have been going over it this weekend; it def. could use a little cleanup, as you note. the pixel format discrepancy might be an oddity particular to this screen.

my goal was to make an x11 adapter layer. shouldn’t be too bad. if this interests you i’m happy to leave it and turn my attention to other things.

does cairo allow for scaling?

yes, cairo_scale() affects all subsequent drawing


he he, sorry, I should have looked!
(I need to get deeper into the Cairo API now!)

I’m not really trying to do an X11 layer, but perhaps once I’ve gotten more familiar I can look at it. I don’t really see a need for X for something like norns.

for now, I’m going to drive a touchscreen directly, and that seems to be (kind of) working - but this is all just a step, to see norns working. I’ve a much more interesting idea for this :wink: (which will potentially be useful to some of the norns community)


Has anyone been doing development in a non linux env via something like docker or vagrant? I may take a stab at setting up an image in the former (because that’s what I’m familiar with) as an exercise in wrapping my head around how all the pieces play together.


I would be interested to hear what you figure out here. I have been doing some dev work on maiden, but missed the preorders so it will be a while until I can get my hands on the hardware.

Specifically I’d like to be able to “use” the REPL pane. It would be cool if I could “run” scripts within the environment as well.

I use Docker for Mac to run Docker.

Well, we might have different use cases. I’d like to be able to run and work on norns patches on a laptop, and it’s only like 4 lines to make an X window…

I’ll try to push something in the next few days


Definitely agree there’s a lot of value in taking an approach where scripts and engines are less intertwined. It seems at the moment that some of the scripts/engines communicate quite differently with each other. A couple of ideas…

Could we define standard protocols / subclasses for the most common engine types? E.g., a ‘voice engine’ would have to take freq and gate inputs and output sound; an ‘FX engine’ has to support audio I/O. The majority of engines could fit these standards and then be completely interchangeable between control scripts.
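To make that concrete, the check a control script might run against a candidate engine could be sketched like this in Lua (the `VoiceEngine` table and the specific command names are hypothetical — just illustrating the contract idea):

```lua
-- hypothetical contract a 'voice engine' would agree to implement;
-- a control script could then drive any conforming engine interchangeably.
local VoiceEngine = {
  -- required commands: every voice engine exposes these
  required = { "start", "stop", "freq", "gate" },
}

-- check that a loaded engine satisfies the contract, given a list of
-- its command names (e.g. collected from engine.commands)
function VoiceEngine.conforms(command_names)
  local have = {}
  for _, name in ipairs(command_names) do have[name] = true end
  for _, name in ipairs(VoiceEngine.required) do
    if not have[name] then return false, "missing command: " .. name end
  end
  return true
end

-- a script could gate engine selection on the check:
print(VoiceEngine.conforms({ "start", "stop", "freq", "gate", "cutoff" })) --> true
```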

At the moment engine params have to be defined in the lua scripts in order to show up in the menu - could this be automated from the SC, or externalized in a separate Lua file associated with the engine.sc? I guess that’s what you’re going for, @jah?

I’m also curious whether there are further plans for the Parameter screen - it seems like we could do more there to organize and visualize.

Sorry maybe we should be in the dev thread :slight_smile:


@markeats yes! standardizing a subset of engines that are synth/voice engines would be really helpful.

a few ideas for the params screen:

  • i’ve already added the logic for slotted-pset saving (default=0, 1-99 also available)
  • i want to add an interface for midi CC mapping (learn)
  • perhaps psets organized into “subfolders” where a synth pset could be a sub of a main script pset

Cool! MIDI learn and organized presets sounds great.

It feels like some subfolders would be helpful in params; there are often so many related settings (e.g., all the filter params together, or all the params related to one sample). I’m also wondering if there are sensible ways of utilizing the screen more.

When editing envelope settings would it make sense to do so on a dedicated screen with a visual of the envelope? I could see a small library of parameter UI being built out, similar to the pan control that’s already in Ack but perhaps also fullscreen stuff. Graph displays for envs & filters, waveform display and selection, mixer, etc.
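For instance, an envelope graph might only take a handful of drawing calls. A rough sketch of such a widget, assuming the existing `screen.move`/`screen.line`/`screen.level`/`screen.stroke` primitives (the widget itself is hypothetical):

```lua
-- sketch of a reusable ADSR-graph widget for the norns screen (128x64).
-- assumes the standard screen.move/line/level/stroke drawing API.
local function draw_adsr(x, y, w, h, a, d, s, r)
  local total = a + d + r + 0.25           -- show 0.25s of sustain
  local px = function(t) return x + (t / total) * w end
  screen.level(15)
  screen.move(x, y + h)                    -- start at zero
  screen.line(px(a), y)                    -- attack to peak
  screen.line(px(a + d), y + h * (1 - s))  -- decay to sustain level
  screen.line(px(a + d + 0.25), y + h * (1 - s)) -- hold sustain
  screen.line(px(total), y + h)            -- release to zero
  screen.stroke()
end

-- e.g. inside a script's redraw():
-- draw_adsr(10, 10, 100, 40, 0.01, 0.2, 0.7, 1.0)
```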

I’d be happy to help out with the front-end work for some UI pieces!


Having engine params and bringing those over to the lua side has been discussed. I’m kinda fond of that approach (I did a proof of concept of this some time back). The current engine commands are not strictly params in the key-value sense. The ack lua module did try to make it easy to bring common params into scripts, yes.

Also, the formatters module contains general ways to present various controls.


definitely something i’ve thought about and wanted to put time into. you likely discovered how the cairo graphics lib is pretty flexible and easy — would be great to create some reusable elements — envelopes and waveforms are high on my list.


cool let me know if i can help!

definitely! some of my thoughts:

  • for sure, there is a general category of “polysynths” that share many common attributes.

  • clearly PolySub is just one possible algorithm, a very straightforward subtractive voice. (i saw you made a nice FM synth voice too! excellent.)

  • one thing i was thinking of doing is just extending a single engine to have multiple synthesis algorithms, certainly including FM and a couple physical models. so you have a command like .algo('fm2') that makes subsequently-created voices use the \fm2 synthdef, or whatever.

  • problem is, all parameters would have to be more generically labelled and usability would suffer a bit. (seems worth it though to have a fully polytimbral ensemble.)

  • as @jah mentions, not all engine commands are key-value. if they were, it would be straightforward for a script to generate a parameter list programmatically given the output of engine.commands.

  • we could separate key-value commands into a separate registry of “parameters” or something, and this might solve a lot of things. (they could also declare allowed ranges, default warping specs, &c.) it’s also not hard to do and i’m happy to implement that change.

  • it’s not strictly necessary to have engine commands that aren’t key-value. but it makes some things much cleaner:

    • some commands don’t set a single state value, but just “make something happen” that affects the state in more complex ways - for example in SoftCut there is a command to immediately synchronize the position of one head to the current position of another head.
    • some commands naturally take more than one argument. for example SoftCut and others i’m working on have patch matrices; it’s a lot better to have a single command patch(src, dst, level) than to have [numSrc * numDst] separate commands each taking a single level argument.
  • one thing i’ll point out is that for PolySub the voice allocation is done on the SC side. we could pre-allocate all voices and force lua to deal with them. but it’s maybe a little more flexible to do this in SC. for example, i dunno if you noticed, but there is a .solo command in PolySub that is just like the .start command - it allocates a new voice - but it doesn’t map the control busses, so that voice is unaffected by subsequent parameter changes.

  • finally: as implemented, parameters in lua scripts don’t have to be mapped to synth parameters at all; they might be more “meta.” tempo comes to mind, but i’ve also got some stuff in the cooker where the parameters are pitch lattice elements or other settings for generative structures.
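a sketch of what that split might look like from the lua side (all names hypothetical — just illustrating a declarative param registry alongside free-form commands):

```lua
-- hypothetical split between key-value "params" and arbitrary commands.
-- params are declarative (id, range, default) so scripts can enumerate them;
-- commands stay free-form, e.g. patch(src, dst, level) or head-sync actions.
local engine_spec = {
  params = {
    cutoff  = { min = 20,   max = 20000, default = 1200 },
    release = { min = 0.01, max = 10,    default = 0.5 },
  },
  commands = { "patch", "sync_heads", "start", "stop" },
}

-- a script can generate a parameter list programmatically from the
-- registry, while leaving commands like patch() alone:
local names = {}
for id, spec in pairs(engine_spec.params) do
  names[#names + 1] = string.format("%s (%g..%g)", id, spec.min, spec.max)
end
table.sort(names)
for _, n in ipairs(names) do print(n) end
```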


so TL;DR: i think both arbitrary engine commands and arbitrary script parameters are basically necessary to do all the neat weird stuff. but explicitly implementing a subset of script params that map 1-1 to synthdef argument values would probably be useful as well.

in any case, super glad to be having this conversation.

re: widget library: oh yes please for sure


…and it’s probably best to get working on this sooner rather than later, when the (by then larger) library of scripts and engines will be harder to maintain at a shared-services level. IMO.

can’t argue with that in 20ch


Great to hear your thoughts!

Seems like even if we had this we’d still want to be able to go rogue sometimes, right?

Yes, totally agree we should hang on to multi-arg commands - I guess the value can always be an array, but that might get messy quickly with specifying ranges etc.

Agreed that voice allocation belongs in SC. Looking at the earthsea/polysub example, though, it seems a little mixed between the two - earthsea keeps track of voice IDs and decides when to allocate, as far as I can tell. It’d be nice to be in a place where a script can just request a freq/gate and the rest is in the SC engine. I’m probably over-simplifying :slight_smile:

I feel like I don’t have enough knowledge of norns (or in general!) to comment much further but was just looking at @jah’s approach with ack some more and it is nice and flexible having an engine-specific lua commands file (maybe would make more sense living in the same location as the engine).

@jah’s approach with ack some more and it is nice and flexible having an engine-specific lua commands file
(maybe would make more sense living in the same location as the engine).

externalized in a separate Lua file associated with the engine.sc

sorry, took a minute for this to sink into my brain, but you are both totally right - having boilerplate in lua for a given engine’s param set is both way less work and more flexible than hard-coding more plumbing into the C code / OSC protocol. :+1:

hm, i had not realized it, but yeah, my perspective is a little cracked on this. my view being: that’s true, but only in the kind of trivial sense that a midi keyboard performs voice allocation by sending note-on/note-off. in the .start(id, hz) command, i see id as analogous to a midi note. the weird part is that the actual pitch is totally decoupled from the “midi note,” and this is just to allow arbitrary/dynamic tunings in lua - because i am a goofball who likes to sing pitches into my keyboard in real time, and stuff. (it also incidentally allows unisons.)

the engine is responsible for actually deciding when to start a new synth, retrigger an existing one, perform a legato, &c - and would steal voices if voice stealing was implemented :slight_smile: . it uses its knowledge of which voices are actually in their envelope release phase, &c, to make these decisions. (i think for lua to be fully responsible for voice allocation it would be necessary to send notifications when a voice actually finishes - that’s doable too but haven’t seen the need just yet.)


it would be straightforward for a script to generate a parameter list programmatically given the output of engine.commands.

We can facilitate interconnection by making norns programs use a common discovery API. If synths (and effects, etc.) are self-describing, other programs can discover their various parameters. If each synth provided a method that enumerates its parameters (and their units, ranges, etc.), then a user of that synth could automatically populate a UI (for example).
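e.g., something like this (method and field names entirely hypothetical):

```lua
-- hypothetical self-describing engine: a describe() method returns a
-- schema of parameters with units and ranges, which a UI can consume.
local function describe()
  return {
    name = "PolySub",
    params = {
      { id = "cutoff", unit = "Hz", min = 20,  max = 20000 },
      { id = "gain",   unit = "dB", min = -60, max = 12 },
    },
  }
end

-- any client (menu, maiden, another script) could auto-populate controls:
for _, p in ipairs(describe().params) do
  print(string.format("%s: %g..%g %s", p.id, p.min, p.max, p.unit))
end
```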


I’m very much in favor of such an approach. This closely resembles SuperCollider’s SynthDescLib, in which each added SynthDef’s arguments are spec’d and can be looked up in a dictionary and reused, e.g. by its Pattern framework (such as Pbind).

Something along these lines (self-describing engines, engine parameters) has also been discussed prior to release.

It should also be noted that currently engine commands and metadata are only retrievable after an engine has been loaded. Ideally, I would want to be able to view an engine’s “schema” (so to speak) without having to load it.


whilst forking norns for some dev work on macOS,
I’ve noticed someone has a ‘case’ issue… master has both doc/modules/Log.html and doc/modules/log.html committed.
this will cause issues on platforms that are case-insensitive (Windows) or case-preserving (macOS)

on my fork, I’ve just removed doc/modules/Log.html (it’s the older of the two), and that resolves the issue.

perhaps @tehn, as you have access, could delete Log.html directly on GitHub.com.


ok, so I’ve now gone thru the norns/matron code base in more detail, and have a pretty good understanding of how it ticks… but I’m back to struggling with issues around abstraction.

screen

I don’t think doing this via a pre-processor flag is the best way (though it is perhaps the easiest).
I think it should be possible to have alternative ‘screen implementations’ co-exist, and even potentially to have multiple screens (perhaps with one marked as ‘primary’ to help make app dev easier).

grid

these also seem to be, to some extent, hard-coded, since weaver calls thru the dev_monome_* functions.
I think a grid should be an abstract concept, again where different implementations can co-exist.


so I’m wondering: how is it intended for norns to have new devices added/supported?
(excluding MIDI, which I know could be done in the lua/script layer, though that has its own issues)

having been thru the whole matron stack, it strikes me there are two directions:

  • more abstraction, looser coupling
    my preferred direction, but needs a bit of careful consideration about abstraction/interfaces.
  • adding more specific devices/events
    not a good direction in my opinion, currently doing this means touching lots of ‘core’ code, and would potentially make norns unwieldy mid-term (if many devices were added)

so I guess I’m asking: what are the thoughts of the dev team? is anyone actively working in this area?

p.s. sorry, I’m being ‘coy’ about what I’m actually doing, but that’s because I’m not ready to ‘announce’ it in a public forum - I prefer to only announce once I’ve got something working, rather than vapourware/promises.
(I’d be willing to talk privately to someone about this if you want more details, and I could walk thru the ‘issues’ I see in norns in more detail)