with current infrastructure it’s really easy to place most of the code for an interface in a lib directory and fill the base folder with scripts that look more or less like this:

include 'lib/earthsea'
include 'molly_the_poly/lib/molly_the_poly'

engine.name = 'molly_the_poly'

that would populate the SELECT screen with like earthsea/molly_the_poly, earthsea/passersby, etc. the main script can even loop over params and build parts of the interface based around the engine params supplied. most of the groundwork is in place I believe, just add a community standard of some sort !
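to make the "loop over params" idea concrete, here's a hypothetical sketch: after the engine loads, walk its reported commands and register a norns param for each single-float command. (the exact shape of the `engine.commands` entries — `name` and `fmt` fields — is an assumption here, as is mapping every float command onto a 0–1 control.)

```lua
-- hypothetical: auto-build params from whatever the loaded engine reports.
-- assumes each entry in engine.commands has .name and .fmt fields.
local function build_engine_params()
  for _, cmd in pairs(engine.commands) do
    if cmd.fmt == "f" then -- single float argument
      params:add_control(cmd.name, cmd.name,
        controlspec.new(0, 1, "lin", 0, 0.5))
      params:set_action(cmd.name, function(v)
        engine[cmd.name](v) -- forward the param value to the engine command
      end)
    end
  end
end
```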


yeah, i agree. making playable and good-sounding softsynths is just not easy, even in a high-level DSL (and cabinetry isn’t easy even with power tools). details matter: e.g., aliasing is fairly ubiquitous; one scsynth feature that would help a lot would be an option to oversample a UGen graph.

anyways, i’d be happy to try and contribute a couple of voice definitions.

@andrew the functions listed do seem like good candidates for unified command -> voice interface. we can get more specific by examining the engines and considering the other points raised in the GH issue:

@cfd90 i agree that many parameterization decisions can and should be left to the scripting layer. for example, i strongly favor the practice of the noteOn/start command accepting Hz and unit velocity rather than MIDI numbers. what is needed is unifying e.g. polysub and molly arguments: the former takes (id, hz) and the latter (id, hz, vel). to complete the example, i’d add vel to polysub but make both take “velocity” as a raw [0, 1] value. (the control layer can take care of velocity curve/scale, because different controllers will require different tunings of these. the engine could accept curve-tuning parameters but eh… that seems less efficient.)
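a minimal sketch of that convention, lua-side: the script owns the MIDI-to-Hz and velocity-curve conversion and hands the engine only Hz and a [0, 1] velocity. (an `engine.noteOn(id, hz, vel)` command with this signature is the proposal being discussed, not an existing API.)

```lua
-- proposed convention: engines take Hz + unit velocity; the script
-- does all MIDI conversion and any per-controller velocity curving.
local function midi_to_hz(note)
  return 440 * 2 ^ ((note - 69) / 12)
end

local function note_on(note, midi_vel)
  local vel = midi_vel / 127 -- raw [0, 1]; apply curve/scale here,
                             -- tuned per controller, not in the engine
  engine.noteOn(note, midi_to_hz(note), vel) -- (id, hz, velocity)
end
```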

(as an aside/example, here’s a point of friction: in a voice architecture with many unique parameters, i think it makes sense to expose a raw per-voice amplitude rather than a generalized velocity, because it gives more freedom to the script to maybe correlate amplitude with other parameters in a velocity map. but with a small and generic parameter set you would probably want to make a larger number of more opinionated synth types which specify their own velocity mapping.)

the best way to enforce these conventions would be to provide a wrapper class. the wrapper would take care of control bus and voice instance management (providing the one voice / all voices mapping,) and common attributes like an ADSR volume envelope, maybe Hz and param smoothing, etc. this would also of course reduce boilerplate code considerably.

(BTW: yes, i believe it is totally possible to switch engines from a parameter action and it should “just work.” with the caveat that lua should keep track of the “loading” state. here’s an example of that. i’d be curious to know what happens if anyone wants to try adapting this to use an engine parameter… and maybe someone already did.)

another under-documented feature of the engine module is that the available commands can be dynamically queried (in the static field engine.commands.)
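for instance, something like this could dump what the current engine supports (again, the per-entry field names are an assumption about the engine module's internals):

```lua
-- quick sketch: list the commands the loaded engine reports.
-- assumes each entry carries a .name and an OSC-style .fmt string.
for _, cmd in pairs(engine.commands) do
  print(cmd.name, cmd.fmt)
end
```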

i would love to promise to implement this, it is straightforward… but i’m underwater on norns development promises already. so i won’t for now.


(paging @jah + @markeats !)

I’m happy to help here (esp. on the lua side), but I’m a big ol’ supercollider baby


I don’t think I have a whole lot to add but I’d love to see this happen. Just glancing at what I drafted in the github issue I think that’s still pretty much where my own thoughts are at for the basics and those were based on some earlier conversations with @zebra. The notes there match what I’ve implemented in my own projects.

At the time I thought it might be good to align to what MPE standards are doing for some modulation params also but now I’m not so sure that belongs at the SC level.

Very happy to make changes to my engines to fit with any standard :slight_smile:


i’m sorry to spam the thread… but i think i can boil it down now:

there are two possible paths.

  1. perform this decoupling on the lua side. the new lua layer would (A) know about the parameter structure of several engines, (B) present a unified and opinionated(/configurable) API for controlling them, (C) manage dynamic engine switching. i believe the development of this layer would be amply supported by the existing core scripting API.

  2. perform the decoupling on SC side by providing wrapper class &c. the different voice types will be necessarily simpler and more opinionated than molly or polysub, &c.

philosophically i lean towards (1). that said, (2) might encourage more high-quality contributions, since with an appropriate wrapper in place one can just home in on a playable range for 3 or 4 parameters and not worry too much about prioritizing flexibility. probably better for the time-constrained SC developer.
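option (1) could start as simply as an adapter table: each entry translates a unified note_on/note_off into a specific engine's commands. (the engine class names and argument shapes below follow the earlier posts — polysub's `start(id, hz)`/`stop(id)`, and a molly-style three-argument noteOn — and should be treated as assumptions.)

```lua
-- hypothetical lua-side adapter layer for option (1):
-- one unified interface, per-engine translation tables.
local adapters = {
  PolySub = {
    note_on  = function(id, hz, vel) engine.start(id, hz) end, -- vel dropped
    note_off = function(id) engine.stop(id) end,
  },
  MollyThePoly = {
    note_on  = function(id, hz, vel) engine.noteOn(id, hz, vel) end,
    note_off = function(id) engine.noteOff(id) end,
  },
}

local voice = adapters.PolySub     -- swapped when the engine changes
voice.note_on(60, 261.63, 0.8)    -- same call regardless of engine
```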

anyways, my opinion has little merit until i actually have the bandwidth to contribute anything. if @andrew is interested in architecting a multi-engine lua layer, that would make sense. i can contribute by cleaning up polysub or whatever, and it could expose bugs in the engine discovery stuff for which i take responsibility.

in the second option i may or may not have time to help with a wrapper class, and can certainly contribute some new voice types.


actually on reflection, i’d like to prioritize the wrapper abstraction. it dovetails with the work that has been (slowly) happening on norns 3.0, which revises, streamlines and extends the supercollider engine API considerably. regardless of what else is created, reducing boilerplate and supporting interpreted code for polysynth definitions is useful.


keeping more in lua seems like a great option for flexibility & I’m happy to write it. my only concern there is keeping the voice allocator in sync with envelopes and freeing synths (but maybe we have enough CPU that we can just keep the voice count high, round-robin things, and not worry about that too much). (I bumped into this with some lua voice allocation using R, but I believe that was measurably hungrier than the other polysynth engines.)

I need to spend some more time with the engine API before I speak too confidently on anything but I’d be glad to start writing up a spec for how a flexible lua synth API might work


cool. i don’t see an issue. lua has to handle voice allocation only to the extent of specifying arbitrary integer IDs when starting or stopping voices. (typically, ID is a MIDI note number.) the engine deals with assigning, instancing or freeing synths as needed. lua doesn’t have to know about how voice lifecycle is actually managed, nor what strategy is used (static or runtime allocation, self-freed, paused or always-running, etc)
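in practice that division of labor looks like this minimal sketch: the script just keys voices by MIDI note number and the engine owns the actual synth lifecycle. (polysub-style `start`/`stop` commands per the earlier posts; treat the exact commands as assumptions.)

```lua
-- minimal sketch of the ID convention: MIDI note number as voice ID,
-- engine decides how synths are actually allocated/freed/paused.
local m = midi.connect()
m.event = function(data)
  local msg = midi.to_msg(data)
  if msg.type == "note_on" then
    engine.start(msg.note, 440 * 2 ^ ((msg.note - 69) / 12))
  elseif msg.type == "note_off" then
    engine.stop(msg.note) -- same ID releases the voice; no lua bookkeeping
  end
end
```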

(incidentally, synths in SC can be paused without being freed, so CPU footprint and lifecycle management can be orthogonal. voice mgmt in molly and polysub is probably needlessly complex.)


Load samples from USB. I am travelling with just a recorder and norns. Possible?

@swhic I’ve made use of SAM for this. It’s not loading straight in; rather, you play the recorded sounds into SAM, then trim, save and name them.


I’m not an expert at this sort of Linux stuff, but it seems like you could set up a cron job or something that would sync new files from the usb to the audio folder. The benefit of Norns being a computer under the hood.
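a naive sketch of that sync, callable from a script instead of cron (paths are assumptions: norns mounts USB media under `/media/usb*`, and sample audio lives under `~/dust/audio`):

```lua
-- hypothetical one-shot sync: copy new files from a USB stick into dust.
-- mount point and destination are assumptions; adjust to taste.
local function sync_usb()
  local src = "/media/usb0/"
  local dst = _path.audio .. "usb/"
  util.make_dir(dst)
  -- --ignore-existing: only pull files we don't already have
  os.execute("rsync -a --ignore-existing '" .. src .. "' '" .. dst .. "'")
end
```

this could be wired to a key press, or the same rsync line dropped into a cron job as suggested above.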

i wish I could mate :slight_smile:

How hard would it be to port this free M4L device to Norns?
It’s too much fun to play with 16n controlling Attractor strength and listening how the system evolves.


Has anyone thought about porting a polygonal synth to norns? I think with Norns interface it would be such a great tool!

Has anyone got Doom running on norns?


If it can be run on a digital pregnancy test… :joy:


That is literally the reason I was wondering about it! lol


if anyone ever wants to build some chiptune stuff for norns this would be the place to look !


I’ve already compiled these for norns and have a couple of them converted to engines. :grin:

(work in progress stuff, but contact me if interested)


this is really interesting. I found it while googling to see whether any fixed filter bank had already been implemented in supercollider. I’ve been hesitant to do any coding on my norns even though I have some experience with other languages in the past. if somebody would be willing to offer some intellectual support on the project, I would value the opportunity to learn and contribute.


what is the project?

there is nothing magical about a fixed filter bank. it is a bunch of filters. it may be interesting to try this in a totally naive way and see what happens - depends on your application maybe.

for near-linear-phase reconstruction, there are a few other considerations.

@scztt has implemented a linear-phase crossover filter bank for SC:

if i understand correctly, it is the same algo used here:
https://faust.grame.fr/doc/libraries/#mth-octave-filter-banks

that algo is what i’d call a “dyadic” structure, which is a bunch of highpass filters in series. additionally, an allpass filter attempts to compensate for the phase shift introduced by each HPF stage.

the other main approach i’d consider is the phase vocoder.

happy to assist more concretely if you have specific questions.
