Many, many deep thanks for resurrecting this. I've recently felt the need to double down on these sorts of thoughts and explorations, and have been planning to put together a few blog posts on these ideas.
To start, I'm really glad that Bret Victor was already mentioned here, because a lot of comments made me think of his talk Inventing on Principle. His primary position in the talk, that "creators need an immediate connection to what they create" cuts to the heart of what Dan_Derks said earlier in the thread: an instrument should respond with sound immediately, allowing you to immediately respond to the sound in a feedback loop.
Jumping into the rant format a bit -- I've been thinking a lot in terms of dimensions. Immediacy, form and affordances, flux... Imagine looking at a guitar for the first time in your life, thinking "I'll tap the string," and hearing a sound. From there you might imagine other ways to play it. Some might be non-traditional, but none would really be incorrect.
This has kind of led me down the path of thinking "How can we design electronic/computer music instruments that afford abuse?" Like, the piano was not designed to be prepared, but it affords it.
With physical instruments, physical preparations are relatively straightforward. With electronic instruments, you mostly have circuit-bending and parameter tweaking. With software, there's not much space beyond the parameters exposed and whatever quirks may or may not be easy to identify.
Another dimension is static vs. dynamic, similar to the discussion about modal pixel interfaces. Most traditional instruments are played while they're static, which is to say, the underlying constitution of the instrument doesn't usually change as you play it. There are some exceptions -- a guitar can go out of tune as you play it, or you can break a string -- but these are usually error states rather than deliberate musical choices (unless you're detuning a guitar string while you're holding an ebow over it or something).
In these cases, the subtle variations come from dancing around a fixed object. Contrast that with something like modular synthesis, where the instrument is dynamic or modal, and only a specific patch may be considered static. Which brings us to another dimension: excitation. How does one excite a modular? The distinction between playing and exciting here is important: you can "play" with sequence programming on a modular, or design systems that self-excite with Dirac deltas or gates, but facilitating human excitation of these systems requires some sort of sensor interface: turn the knob to sweep the filter, push the button to open or close the gate.
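To make the excitation idea concrete: the simplest digital analogue of "hitting" a patch is feeding a single-sample impulse (a discrete Dirac delta) into a resonator and letting it ring. Here's a minimal sketch of that idea -- the function name and parameters are mine, not from any particular module or library:

```python
import math

def resonator_impulse_response(freq_hz, decay, sample_rate, n_samples):
    """Excite a two-pole resonator with a single-sample impulse (a
    discrete Dirac delta) and return its ringing output."""
    # Pole radius sets how slowly the ring decays; pole angle sets pitch.
    r = decay
    theta = 2 * math.pi * freq_hz / sample_rate
    a1 = -2 * r * math.cos(theta)
    a2 = r * r
    y1 = y2 = 0.0
    out = []
    for n in range(n_samples):
        x = 1.0 if n == 0 else 0.0  # the excitation: all energy at t=0
        y = x - a1 * y1 - a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

# A 440 Hz ring at 48 kHz; decay near 1.0 means a long, slow fade.
ring = resonator_impulse_response(440.0, 0.999, 48000, 1000)
```

The musical point is that everything interesting happens in that one `x = 1.0` sample -- the rest is the system responding on its own, which is exactly the "self-exciting" character a gate or trigger gives a modular patch.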
With general-purpose control surfaces, we're very frequently able to completely change the essence of the sound generator and the way it's excited at the push of a button. Plug the grid into a different module. Chord the right keys to point OSC prefixes at a different Max patch in Pages.
It's late and I'm struggling to bring this all back together, but I suspect we'll find that our human-computer interactions feel more musical when how we push the button matters as much as the button we choose to push. The immediate feedback is necessary, but not sufficient, for that.
The recent five-episode miniseries on "expressive controllers" from the Art + Music + Technology podcast is also extremely interesting in this context, especially around the coupling of interface to sound generator.