Many, many deep thanks for resurrecting this. I've recently felt the need to double down on these sorts of thoughts and explorations, and have been planning to put together a few blog posts along these lines.
To start, I'm really glad that Bret Victor was already mentioned here, because a lot of comments made me think of his talk Inventing on Principle. His primary position in the talk, that "creators need an immediate connection to what they create," cuts to the heart of what Dan_Derks said earlier in the thread: an instrument should respond with sound immediately, allowing you to respond immediately to the sound in a feedback loop.
Jumping into the rant format a bit: I've been thinking a lot in terms of dimensions. Immediacy, form and affordances, flux… How you can look at a guitar for the first time in your life, think "I'll tap the string," and hear a sound. And how you might then imagine ways to play it. Some might be non-traditional, but none would really be incorrect.
This has kind of led me down the path of thinking "How can we design electronic/computer music instruments that afford abuse?" Like, the piano was not designed to be prepared, but it affords it.
With physical instruments, physical preparations are relatively straightforward. With electronic instruments, you mostly have circuit-bending and parameter tweaking. With software, there's not really much space beyond the parameters that are exposed and whatever quirks may or may not be easy to identify.
Another dimension is static vs. dynamic, similar to the discussion about modal pixel interfaces. Most traditional instruments are played while they're static, which is to say, the underlying constitution of the instrument doesn't usually change as you play it. There are some exceptions to this: a guitar can go out of tune as you play it, or you can break a string, but these are usually error states rather than deliberate musical choices (unless you're detuning a guitar string while holding an EBow over it or something).
In these cases, the subtle variations come from dancing around a fixed object. Contrast that with something like modular synthesis, where the instrument is dynamic or modal, and only a specific patch may be considered static. Which brings us to another dimension: excitation. How does one excite a modular? The distinction between playing and exciting is important here: you can "play" with sequence programming on a modular, or design systems that self-excite with Dirac deltas or gates, but facilitating human excitation of these systems requires some sort of sensor interface: turn the knob to sweep the filter, push the button to open or close the gate.
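To make the excitation idea concrete, here's a minimal sketch in Python/NumPy (my own illustration, not any particular module's behavior) of "pinging" a two-pole resonator with a single-sample impulse, the digital cousin of hitting a modular filter with a trigger:

```python
import numpy as np

SR = 48_000  # sample rate in Hz

def ping(freq_hz, decay=0.9995, dur_s=1.0):
    """Excite a two-pole resonator with a single-sample impulse (a Dirac delta).

    The excitation carries no pitch of its own; the resonator rings at
    its own frequency, like pinging a filter with a trigger.
    """
    n = int(SR * dur_s)
    theta = 2 * np.pi * freq_hz / SR
    a1, a2 = 2 * decay * np.cos(theta), -decay**2

    x = np.zeros(n)
    x[0] = 1.0  # the Dirac delta / gate edge
    y = np.zeros(n)
    for i in range(n):
        # negative indices hit the still-zero tail of y,
        # acting as silent initial conditions
        y[i] = x[i] + a1 * y[i - 1] + a2 * y[i - 2]
    return y

# Self-excitation: a clock writing impulses into x, no human in the loop.
# Human excitation: set x[i] = 1.0 whenever a sensor reports a press.
samples = ping(440.0)
```

All the musical information here lives in when and how the excitation arrives, which is exactly where the sensor interface question comes in.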
With general-purpose control surfaces, we can very frequently change the entire essence of the sound generator and the way it's excited at the push of a button. Plug the grid into a different module. Chord correctly to change OSC prefixes to a different Max patch in Pages.
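As a sketch of that kind of retargeting (assuming monome's serialosc conventions and the python-osc library; Pages' own chording and message scheme may differ), pointing a grid at a different application is a single message:

```python
from pythonosc.udp_client import SimpleUDPClient

# Assumed values: serialosc-style host/port and the /sys/prefix message.
# Your device's port will differ; this is illustrative, not Pages' exact API.
client = SimpleUDPClient("127.0.0.1", 12002)

# After this, every key press the grid emits arrives under /sampler/...
# instead of /seq/..., effectively plugging the same surface into a
# different instrument without touching the hardware.
client.send_message("/sys/prefix", "/sampler")
```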
It's late and I'm struggling to bring this all back together, but I suspect we'll find that our human-computer interactions feel more musical when how we push the button matters as much as the button we choose to push. Immediate feedback is necessary, but not sufficient, for that.
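For what "how we push the button" might look like in code, here's a hypothetical sketch (the names and mappings are mine) where the handler keys off hold duration rather than just the button's identity; velocity or pressure could be folded in the same way:

```python
import time

class ExpressiveButton:
    """Hypothetical handler where the gesture, not just the button id,
    shapes the sound: hold time maps to an envelope's release."""

    def __init__(self, button_id):
        self.button_id = button_id
        self._pressed_at = None

    def press(self):
        self._pressed_at = time.monotonic()

    def release(self):
        held = time.monotonic() - self._pressed_at
        # A quick tap plucks; a long hold swells. Same button, different gesture.
        release_s = min(held * 2.0, 4.0)
        print(f"button {self.button_id}: held {held:.3f}s -> release {release_s:.2f}s")

b = ExpressiveButton(1)
b.press()
time.sleep(0.2)
b.release()
```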
The recent five-episode mini-series on "expressive controllers" from the Art+Music+Technology podcast is also extremely interesting in this context, especially around the coupling of interface to sound generator.