Composition of Control (instrument design)

That is indeed an interesting topic, being very interested in adaptive/reactive interfaces myself: this could be a pretty good example of how one could be implemented.


Machine learning (e.g. Wekinator), used to map parameters in the design phase, has yielded really beautiful and intuitive results for me. I can map a controller (slider, flexi, touchpad, or thermistor) to multiple parameters and set the bounds for each one in an n-dimensional array. The interpolation aspect often yields interesting results as well.
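To make the interpolation idea concrete, here is a tiny hand-rolled sketch in Lua (not Wekinator itself; all names and values are made up for illustration): one normalized controller value interpolates between a few stored "poses", each holding a full parameter vector.

-- minimal sketch: one control interpolating across several parameter
-- "poses"; a stand-in for the idea, not for Wekinator's actual models
local poses = {
  { at = 0.0, params = { cutoff = 200,  res = 0.1, depth = 0.0 } },
  { at = 0.5, params = { cutoff = 1200, res = 0.6, depth = 0.3 } },
  { at = 1.0, params = { cutoff = 8000, res = 0.2, depth = 1.0 } },
}

-- map a normalized controller value (0..1) to a full parameter set by
-- interpolating between the two poses that bracket it
local function map_control(x)
  for i = 1, #poses - 1 do
    local a, b = poses[i], poses[i + 1]
    if x >= a.at and x <= b.at then
      local t = (x - a.at) / (b.at - a.at)
      local out = {}
      for k, va in pairs(a.params) do
        out[k] = va + t * (b.params[k] - va)
      end
      return out
    end
  end
end

local p = map_control(0.25)   -- cutoff 700, res 0.35, depth 0.15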

I also need to make sure that, whatever my mapping scheme is, it helps me make better music. I can build nifty things all day, but they’ve got to be effective mappings for what I need to do, or it misses the whole point.

My personal approach is often:

  1. What is the desired musical event?
  2. How much work do I want the machine to do?
  3. How do I want to feel while I do it? Is the physical element important to effective performance of that particular event?

Then the tools and mapping tend to make themselves pretty clear to me.


I’ve been resistant to going the Wekinator route (even though it seems powerful/appropriate), as I don’t like the idea of middleware and of developing funky dependencies like that.

(Foolishly) my hope is to train some kind of ML network which I can then ‘export’ into a native context that lives outside of a heavy-lifting ML environment… but it doesn’t really work that way.

Do you have any audio/video of some of this stuff? It would be great to check out.

Historical note: Functional Reactive Programming was pioneered and described by Conal Elliott and Paul Hudak in the ’90s, long before the current hype. Interestingly, their domain was interactive animation.

The essence of FRP is that it is a cohesive way of combining both continuous and event-based streams over time in a way that can be formally reasoned about. It lets you express what you want your model (instrument) to do, rather than only being able to code the details of how.
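As a toy illustration of the “continuous plus discrete” part (not a real FRP library; everything here is invented): a behavior is just a function of time, an event stream is a list of timestamped values, and a combinator like snapshot joins the two.

-- a behavior: a continuous function of time
local function lfo(t) return math.sin(2 * math.pi * 0.5 * t) end

-- an event stream: discrete, timestamped values
local notes = { {0.0, 60}, {1.2, 64}, {2.5, 67} }

-- "snapshot": tag each event with the behavior's value at that moment
local function snapshot(behavior, events)
  local out = {}
  for _, e in ipairs(events) do
    out[#out + 1] = { time = e[1], note = e[2], mod = behavior(e[1]) }
  end
  return out
end

local tagged = snapshot(lfo, notes)  -- each note now carries the LFO value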

Also note, Elm isn’t FRP and doesn’t claim to be. (Aside, I love Elm!) Most (all?) of the hip JavaScript stuff isn’t FRP, either: Attaching event handlers does not make something FRP, despite some appropriating the term “reactive”.


While there is an interesting discussion forming here about interactive performance and music structure… My initial concerns are far more prosaic:

Here are some situations a framework for control composition should be able to address:

  • Independently, I have two voice engines in SC, and each has a Lua script that maps some controls to it. Combining the voices in SC is easy: a Mix UGen. How do I combine the Lua scripts? …So I can play both at once? …So I can play them in parallel? …So I can flip between them?

  • I have a MIDI looper in Lua: it has controls for its operations, and it reads other control input (notes & CCs) to record and play back. How can I place control effects after the sequencer output (perhaps an arpeggiator), or before its input (like some algorithmic melody generator, or perhaps a “corrector”)? These seem like common, approachable cases. Now what if I want to place a “conductor” script before the controls of the sequencer?

  • I have a complex effects chain built up. It is (of course) a single large engine, but chaining and mixing the parts up via UGens is clear and easy. Assuming that the first example has been tackled, there is now a nice combined mapping so that I have control over these effects from a giant MIDI knob box or two. So far, so good. Now I want to go on the road, and use a small MIDI knob box. How do I replace one with the other? What if the MIDI controller doesn’t support banking, so the banking needs to be done in Lua?

In each of these situations, it is clear that with some amount of editing of the original Lua scripts the combined or altered setup would be possible. The quest is to see how non-invasive that can be. Can those original scripts be written in such a way that, while being “natural” and working standalone, they are suitable for being combined, in the various ways above, by another control script? (One possible shape for this is sketched below.)
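As a sketch of that shape (hypothetical handler names, nothing like an actual norns API): if each voice script exposed its handlers as a table instead of installing them globally, a small wrapper could layer the scripts or flip between them without editing either one.

-- two voice scripts, each returning its control handlers as a table
local voice_a = { cc = function(num, val) --[[ map to engine A ]] end }
local voice_b = { cc = function(num, val) --[[ map to engine B ]] end }

local scripts = { voice_a, voice_b }
local mode = "layer"          -- or "flip"
local current = 1             -- active script in flip mode

-- the one handler actually wired to incoming control input
local function on_cc(num, val)
  if mode == "layer" then
    for _, s in ipairs(scripts) do s.cc(num, val) end  -- both at once
  else
    scripts[current].cc(num, val)                      -- only the active one
  end
end

local function flip() current = (current % #scripts) + 1 end

The banking question from the last bullet could live in the same kind of wrapper (a bank offset applied to num before dispatch), but that is the easy part; the hard part is agreeing on the table-of-handlers shape in the first place.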


Tiny thought: I like the idiomatic way Python modules can both provide an API for other modules to use and operate as standalone programs. See §29.4, __main__.
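In Lua the rough analogue (a crude heuristic, not anything norns defines) leans on the fact that require passes the module name to the chunk as its first vararg:

-- mapping.lua -- usable as a library, or run directly for a quick demo
local M = {}

function M.scale(x, lo, hi)
  -- map a normalized 0..1 value into the range [lo, hi]
  return lo + x * (hi - lo)
end

-- require("mapping") passes the module name as the chunk's first
-- vararg; run directly with no arguments, it is nil.  crude: it breaks
-- if the file is run with command-line arguments.
local modname = ...
if modname == nil then
  print(M.scale(0.5, 100, 2000))  -- standalone demo
end

return M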


Huh? Not anymore, but its creator certainly claimed Elm to be FRP when it was the new thing: http://elm-lang.org/assets/papers/concurrent-frp.pdf

I’d say your use cases are fairly well supported by sclang with its declarative Streams & Patterns classes and its focus on metadata, despite it being an OOP language. The idiomatic approach to control in the norns world, though, is Lua. At this stage that revolves mostly around how rather than what. I’m all for declarative approaches, though, as I think they map well to music.

Btw - I’m still interested in hearing about Kyma.


More fodder for the horse:

  • When talking about control surfaces, remember that they are often bidirectional: some encoders, pads, grids, and even keyboards display control-setting feedback. In the case of banked encoders or knobs, this is more than just output: it sets the point from which input will pick up.

  • Control might loosely fall into three categories: note controls (e.g. keyboards), parameter controls (e.g. a knob for cutoff frequency), and state controls (e.g. a looper’s record arm). For the first, it is fairly easy to see how control routing might work. The other two get progressively more complex. And an instrument’s controls might mix these: imagine a voice that takes notes and two knobs to control… and a second voice that takes notes, three knobs, and a button. If I compose these, the options for notes are clear: split, layer, or toggle between them. But what happens to the controls? If my controller has eight knobs and buttons, can I have all controls for both voices active all the time? (A rough sketch follows this list.)

  • The display is part of the control system. Two layered instruments vie for control of the display. Is there a separate whole-display toggle? From the examples given, display output is scripted as a global: “Position here, write this text.” But that doesn’t give us the option of splitting the screen… though I don’t know if norns will need that level of complexity. Even without that, it will need a way for composed systems to manage the screen: in our layered-voice example, if I move a knob for voice B, I expect it to take over the display… twiddle voice A and it takes over.

  • There was mention of a parameter system… perhaps for a uniform patch-saving concept. If so, that’ll need to be composable as well.
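To make the layered-voices and shared-screen questions a bit more concrete, here is a rough sketch (function names invented, not norns code): split the physical knobs between two voices, and hand the screen to whichever voice was touched last.

-- two voices share eight knobs and one screen: voice A takes knobs 1-2,
-- voice B takes knobs 3-5; the last voice touched takes over the display
local voices = {
  a = { knobs = { [1] = "cutoff", [2] = "res" },
        params = {}, draw = function(p) --[[ draw A's page ]] end },
  b = { knobs = { [3] = "rate", [4] = "depth", [5] = "mix" },
        params = {}, draw = function(p) --[[ draw B's page ]] end },
}

local focus = "a"   -- which voice currently owns the screen

local function redraw()
  local v = voices[focus]
  v.draw(v.params)
end

local function on_knob(n, val)
  for name, v in pairs(voices) do
    local param = v.knobs[n]
    if param then
      v.params[param] = val
      focus = name          -- last-touched voice grabs the display
      redraw()
      return
    end
  end
  -- knobs 6-8 remain free for global controls
end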


Yes - the Elm of that paper is a distant relative of Elm, the web programming system. The latter retains a hint of FRP only in its top-level construction (essentially Event -> Model -> (Model, Action)), but offers no library or mechanisms for using FRP to build up that function.

I’m going to have to dig to find any specifics on Kyma: I was an alpha tester (had one of the very first Capybara units) and user back in the 90s!

@mzero are you at a point where you could write pseudocode or diagram an architecture yet? Or do we need to discuss goals in more detail, or gather some other kind of information first?

@jasonw22: I think I’d like to wait and see the norns source first. There are a number of systems whose architecture was only hinted at: parameters, interaction with the engine (via @zebra’s magic UGen?), and the controller-processing pathway is still unseen. Also, getting something like this to work well depends deeply on doing the “right” integration with the language runtime and core libraries. I’ll need more experience with Lua and the code base to get a feel for direction.


One more complex, but very real scenario:

  • Imagine I’ve got two different grid applications running. The buttons at the top right switch between them. I’m playing on the first app, which perhaps lets me play looped samples as long as I hold a pad down. So I start to play a “chord” of samples with my left hand, holding down the pads… Now I use my right hand to switch to the other app (or page of the same app) and press pads to do things - all the while holding down the pads with my left hand. Now here’s the hard part: I release the pads with my left hand. Those pad up events have to go to the first app, not the second, even though the second app is now “in focus”.

Many controllers get this wrong. Take a keyboard controller that has octave-shift buttons: hold down a chord, then octave shift, then release the chord. Do the notes keep playing? I’m always amazed at how many controllers fail this! Novation is a notable exception: every device of theirs I’ve ever used has paid very careful attention to this detail and gets it spot on. Korg does too. The Ableton remote script framework in Python also does this right - though if I remember correctly, it is a bit of a snarly mess inside.
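One way a Lua router could get this right (a sketch, with made-up app and handler names): remember which app owned each key-down, and deliver the matching key-up there, regardless of which app currently has focus.

-- two stub apps, each with a key(x, y, z) handler
local app1 = { key = function(self, x, y, z) --[[ sample pads ]] end }
local app2 = { key = function(self, x, y, z) --[[ other page ]] end }

local apps = { app1, app2 }
local focused = 1            -- which app the top-right buttons selected
local owner = {}             -- "x,y" -> the app that received the key-down

local function grid_key(x, y, z)
  local id = x .. "," .. y
  if z == 1 then
    owner[id] = apps[focused]          -- remember who got the press
    owner[id]:key(x, y, 1)
  else
    local app = owner[id] or apps[focused]
    app:key(x, y, 0)                   -- release goes to the original owner
    owner[id] = nil
  end
end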


(sorry to post so much… hope this is all still interesting!)

A quick look at SuperCollider’s MIDIFunc offers a good example of an API giving the wrong affordance, and thus making composition of control hard:

An example of its use (from SC’s help):

notes = Array.newClear(128);    // array has one slot per possible MIDI note

on = MIDIFunc.noteOn({ |veloc, num, chan, src|
    notes[num] = Synth(\default, [\freq, num.midicps,
        \amp, veloc * 0.00315]);
});

off = MIDIFunc.noteOff({ |veloc, num, chan, src|
    notes[num].release;
});

Notice that the user of MIDIFunc has to do the matching of noteOff to noteOn itself. To compose with this interface (imagine we have two such users), the composition has to do a lot of work, and do it somewhat invasively.

Had the API been defined to be used like this, it would have been much easier:

on = MIDIFunc.note({ |velocOn, num, chan, src|
    var note = Synth(\default, [\freq, num.midicps,
        \amp, velocOn * 0.00315]);

    // and return the function to call when the note is off
    { |velocOff| note.release; }
});

This is less code for the client, it frees the client from relying on a global(-ish) array, and it is much easier to compose.
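The same shape falls out naturally from Lua closures (again a sketch with invented names, not an existing norns or SC API): the note-on callback returns the function to call on the matching note-off, so the matching lives in the router, once, for every client.

-- the router does the note-off matching; clients never see the table
local held = {}   -- note number -> the off-handler returned by the client

local function make_note_handler(on_note)
  return function(msg)                      -- msg = { type, note, vel }
    if msg.type == "note_on" then
      held[msg.note] = on_note(msg.note, msg.vel)
    elseif msg.type == "note_off" and held[msg.note] then
      held[msg.note](msg.vel)               -- call the returned closure
      held[msg.note] = nil
    end
  end
end

-- a client only describes what a note is, not how releases are matched
local handle = make_note_handler(function(note, vel)
  -- start a voice here ...
  return function(off_vel)
    -- ... and this closure releases exactly that voice
  end
end)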


This is a really common problem in JavaScript (e.g. handling a mouseup event whose associated mousedown handler is attached to an object that has been transitioned out of view).

When I get home tonight, I’ll study the Lua event model and try to write some sample code that demonstrates how this would work.

I’m going to ignore MIDI and SuperCollider for now and focus on the Lua event model, with special attention to object hierarchy and association.


For SC you might want to check out Modality (http://www.3dmin.org/wp-content/uploads/2014/03/Baalman_2014.pdf) and FPLib (referred to in the Modality PDF and, iirc, maintained here: https://github.com/miguel-negrao/FPLib/blob/master/README.md).
(FRP hype alert - I haven’t used either of these libs myself.)

I made Grrr (http://github.com/antonhornquist/Grrr-sc), a grid-based SuperCollider UI lib that handles widgets on grids the way the regular SC GUI framework handles widgets. Composable? To an extent - that is not its main focus, but it was conceived with multiple-modes-per-grid key up/down tracking in mind.

After dabbling with C#/WPF at work I also toyed with an MVVM framework in SuperCollider, potentially applicable to both SC GUI and Grrr: https://github.com/antonhornquist/DeclarativeViewBuilder-sc

The last one is at the crude-hack stage. I lost motivation after realizing how many anonymous functions had to be spawned, and I got worried about performance.

This is just for inspiration. Again: idiomatic norns control is Lua. I haven’t had time to port any of my SC control libs to Lua since getting involved in the project. And, frankly, I’m not sure it’s worth it. :slight_smile:


Also, while I thoroughly enjoy reading ideas on pure FRP, I’ve yet to see any of it in practical use.

Though, again, I see the reactive programming inherent in dataflow languages as a good example of mapping asynchronous data flows, very much like FRP.


in my day we called it “event-based programming”.

I’m being cheeky, but I guess I don’t see a huge difference between what I see being described as “reactive” and what we always had to do in JavaScript (when we were writing JavaScript well). Maybe someone can enlighten me.


If I understand FRP correctly, in practice it means operating on continuous or discrete streams of events (immutable data - values rather than mutable objects in the OOP sense) with conventional functional-programming mechanisms: select / map / etc. There’s more to it than this, of course, but that is part of it.

I reckon there’s an overlap here with dataflow languages, in which messages are sent around in streams between boxes that combine, filter, and join streams.

I mean - how often in Pd do you attach a callback handler? It’s rather a spec of a graph to execute - as is FRP, to a certain extent.


Cool, that makes the distinction clearer. That being said, I never found attaching a callback handler to be quite the unnatural thing that so many others appear to see it as. :man_shrugging:

The core of why FRP is not event handlers is in the Conal Elliott & Paul Hudak paper I referenced above, in the section “The Essence of Modeling”, in particular the part named “3. Declarative reactivity.” I know, the paper takes some time to unpack…

To expand: attaching a callback handler gets complex as you compose meaning onto the events: the key handler has to “know” that in one state the Y key should insert the letter ‘y’, whereas in another state it should choose the yes option. As you build higher constructs (the grid is now a sequencer, now a tape strip, now a preset memory grid…) you keep having to pile more into the handler… or worse, globally manage installing and removing handlers. FRP provides tools for safely and cleanly combining just behavior over events - and for building up higher event streams. Sure, you could build a library of event-handler combinator functions… but then you’re building FRP over your event handling system! :slight_smile:
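For instance (purely illustrative Lua, not a real library): once you wrap a callback source as a “stream” and give it map and filter, you are halfway to FRP whether you meant to be or not.

-- tiny stream combinators built over plain callbacks
local function new_stream()
  local subs = {}
  return {
    subscribe = function(f) subs[#subs + 1] = f end,
    emit = function(v) for _, f in ipairs(subs) do f(v) end end,
  }
end

local function map(stream, f)
  local out = new_stream()
  stream.subscribe(function(v) out.emit(f(v)) end)
  return out
end

local function filter(stream, pred)
  local out = new_stream()
  stream.subscribe(function(v) if pred(v) then out.emit(v) end end)
  return out
end

-- e.g. raw grid presses -> only the top row -> note numbers
local presses = new_stream()
local top_row = filter(presses, function(e) return e.y == 1 end)
local notes   = map(top_row, function(e) return 60 + e.x end)
notes.subscribe(print)
presses.emit({ x = 3, y = 1 })   -- prints 63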


I’ll look into that tonight! I have a sneaking suspicion that I just got used to

and also that such a thing may happen again in the context of Lua for norns.

But all the more reason to read and absorb the paper.

Three cheers for closures!

don’t you have to do that in FRP as well? a declared handler still has to know what to do. but if i understand correctly it makes it easy to declare new events, so instead of a general handler that has to maintain the overall state of the UI you can have handlers be closer to the events they should be reacting to, knowing only the minimal scope they need to know. but you could implement it in a procedural language as well. but yeah, it’s nice to have the language itself provide some scaffolding (it feels like C# has been trying to employ some FRP techniques as well…)
