Totally agree! Grids can mitigate this somewhat by lighting up octaves and/or other intervals, but this still requires eyeballs in addition to muscle memory.
But for isomorphic layouts this shouldn’t really matter. For example, when playing guitar or LinnStrument I rarely think about notes, only intervals, and those are always the same finger pattern. It doesn’t really matter what the absolute position on the instrument is, only the relative positions of the fingers. The inlays on a guitar or the lights on the LinnStrument are useful for finding the starting point, but not really necessary otherwise.
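To make the “only intervals matter” point concrete, here is a minimal sketch of an isomorphic grid in the LinnStrument’s default all-fourths style, where each row sits a fourth (5 semitones) above the one below. The base note of 30 is an arbitrary assumption, not the instrument’s actual default:

```python
# Sketch of an isomorphic all-fourths grid (LinnStrument-style).
# Pitches are MIDI note numbers; BASE is an assumed bottom-left pad.

BASE = 30          # assumed MIDI note of the bottom-left pad
ROW_INTERVAL = 5   # a fourth (5 semitones) between rows
COL_INTERVAL = 1   # one semitone per column

def pitch(row, col):
    """MIDI note at a given pad position."""
    return BASE + ROW_INTERVAL * row + COL_INTERVAL * col

# An interval is the same finger offset anywhere on the grid:
# a major third (4 semitones) is always (0, +4), or equivalently (+1, -1).
assert pitch(0, 4) - pitch(0, 0) == 4
assert pitch(3, 1) - pitch(2, 2) == 4  # same shape, different position
```

Because `pitch` is linear in row and column, any interval corresponds to a fixed (row, column) offset, which is exactly why a shape learned in one place works everywhere.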
Also true! And thus I will forever be stuck in the land between piano and guitar. Oh wait, that’s actually a good thing.
Yeah, this was from a RBMA interview he did, right? I remember it since it was along the lines of UI ideas from Jef Raskin I was reading at the time. If you haven’t read it I recommend https://en.wikipedia.org/wiki/The_Humane_Interface
To me it’s like you’re trading off usability for customizability with a generic design. I think the end result of a generic design is more of a tool than an instrument.
Generic touch screens, as well as generic grids, are inherently modal. In usability terms, modality is a no-no because it defeats habituation. With tactile grids you at least have kinaesthetic feedback, which IMO is crucial.
I have never played the linnstrument but are guitars really isomorphic? To me the relative distances are different depending on where you are on the fretboard.
Depends on the tuning. A bass guitar is; the B string on standard six-string tuning is what stops that being isomorphic.
Some kind of visual indication is helpful when learning shapes/patterns … so I guess the piano layout helps this, but at the cost of requiring you to learn more patterns?
but once ‘learnt’, is it not down to muscle memory/spatial positioning with reference to the body of the instrument?
the piano layout is perhaps the exception for musical instruments. we have grid layouts (stringed instruments), linear layouts (e.g. harp), and many others (e.g. wind instruments), then of course there are fretless instruments (or the likes of the theremin)… there’s no lack of skilled players in any of these.
I’ve mused on this a bit in the past… I’ve noticed, quite often, I come back to the piano for inspiration… or to work out a melody. I’ve put this down to having a synth when I was a kid, so perhaps it’s just ‘in-grained’?
on the flip side, with the Eigenharp (which allows arbitrary note layouts), I’ve also played with many different layouts on the grid, and it’s fun/interesting/inspiring switching between different ones. whilst doing this I found two things:
a) the layout influenced the kind of things I’d play quite strongly
b) your brain adapts to different layouts quite easily
I’d assume a & b are down to the pros/cons of the mind’s great abilities with patterns… pro: we can learn new ones; con: we can fall into pattern ruts.
perhaps when used as a control surface (e.g. menus, option selection), but they aren’t modal when used as a playing surface… at that point surely the layout is ‘fixed’, and the spatial relationship between notes is the same?
(the option selection modes usually need some kind of visual help, because of the modality… the Eigenharp interestingly formed visual shapes with its LEDs - I think, later, the LinnStrument adopted something similar)
Standard guitar tuning is non-isomorphic in favor of easy barre chords. If you can limit yourself to 4 note chords, or just get used to really difficult chord shapes, you can tune a guitar isomorphically. Or just play a 4 string instrument like bass or ukulele.
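The point about standard tuning being non-isomorphic can be shown directly: the open-string intervals in standard tuning are 5, 5, 5, 4, 5 semitones, and it’s that single major third between the G and B strings that breaks the symmetry. An all-fourths tuning makes every cross-string shape identical. A small sketch (open-string pitches as MIDI note numbers):

```python
# Compare standard guitar tuning with an all-fourths tuning.
# Open strings as MIDI notes; standard tuning is E2 A2 D3 G3 B3 E4.

STANDARD = [40, 45, 50, 55, 59, 64]     # E A D G B E
ALL_FOURTHS = [40, 45, 50, 55, 60, 65]  # E A D G C F

def intervals(tuning):
    """Semitone gaps between adjacent open strings."""
    return [hi - lo for lo, hi in zip(tuning, tuning[1:])]

print(intervals(STANDARD))     # [5, 5, 5, 4, 5] -- the G->B major third
print(intervals(ALL_FOURTHS))  # [5, 5, 5, 5, 5] -- same shape everywhere
```

With uniform gaps, any chord or scale shape transfers unchanged across all string pairs, which is why the post above notes that barre-chord-friendly standard tuning trades that property away.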
I think many piano players just really love the piano layout because of the way it influences their playing. I think perhaps the contorted chord shapes are building a neural relationship between scale, mode, key, and melody that would be more uniform (less distinct, fewer distinguishing characteristics) on an isomorphic layout. Ever notice the hardcore theory nerds are usually piano players? (Yes, many play other instruments, but pick a random theory nerd at a random moment and look at what instrument their hands are on. More often than not, a piano.)
Or you can use different tunings. I play a Schecter A-5X with five strings tuned in fifths. That gives me the range of a guitar and most of a bass. I started out playing bass, and when I tried to learn guitar, the broken isomorphism of the two higher strings stopped me dead in my tracks.
But you still have to start somewhere. I am always fascinated by the isomorphic keyboards that have row upon row upon row of identical white keys. Eventually somebody learns to put a texture or something on all the C notes.
I’m with @jah in suggesting, regardless of tuning, that guitar-type instruments aren’t fully isomorphic. Perhaps that goes against the definition, but the family of instruments avoids the problem that was discussed above regarding spatial awareness & the relation between performer & instrument.
The piano’s layout is perhaps most important because it covers such a wide range; it would be very difficult to navigate so many notes without the repeating pattern of black notes. The guitar, conversely, is small enough, has variable note spacing, and sits in a fixed enough position relative to the body that it’s possible to learn ‘absolute’ muscle memory.
The issue I see (and the one that restarted this conversation) is in using an isomorphic keyboard on an instrument that 1: isn’t in a fixed relationship to the body (rests on a table), and 2: doesn’t have (physical) guidelines allowing for muscle-memory to be developed in a relational way. A physical grid of buttons is obviously at least capable of having note-to-note muscle memory developed (though absolute locations are incredibly difficult), while touch-screens also lose the guidance of physical note boundaries.
Bringing it back to the UX design discussion - I guess my point is that much care needs to be taken to give physical cues to the user, and these need to be stronger as the instrument becomes larger & more complex. If you’re building an instrument for a touchscreen, then it’s probably best not to use large equal-spaced grids as the input method. Then again, why would you?
I’ve noticed that when Geert Bevin plays the LinnStrument, he puts it on his lap. He says this is because he feels the pressure from his hands on his legs, but I suspect it also has to do with arm extension.
Many, many deep thanks for resurrecting this. I’ve recently felt the need to double-down on these sorts of thoughts and explorations, and have been planning to put together a few blog posts on these sorts of things.
To start, I’m really glad that Bret Victor was already mentioned here, because a lot of comments made me think of his talk Inventing on Principle. His primary position in the talk, that “creators need an immediate connection to what they create” cuts to the heart of what Dan_Derks said earlier in the thread: an instrument should respond with sound immediately, allowing you to immediately respond to the sound in a feedback loop.
Jumping into the rant format a bit – I’ve been thinking a lot in terms of dimensions. Immediacy, form and affordances, flux… How you can imagine looking at a guitar for the first time in your life and think “I’ll tap the string” and you hear a sound. And how you might then imagine ways to play it. Some might be non-traditional, but none would really be incorrect.
This has kind of led me down the path of thinking “How can we design electronic/computer music instruments that afford abuse?” Like, the piano was not designed to be prepared, but it affords it.
With physical instruments, physical preparations are relatively straightforward. With electronic instruments, you mostly have circuit-bending and parameter tweaking. With software, there’s not really much space beyond the parameters exposed and quirks that may or may not be easy to identify.
Another dimension is static vs. dynamic, similar to the discussion about modal pixel interfaces. Most traditional instruments are played while they’re static, which is to say, the underlying constitution of the instrument doesn’t usually change as you play it. There are some exceptions to this – a guitar can go out of tune as you play it, or you can break a string, but these are usually error states rather than musical deliberation (unless you’re detuning a guitar string while you’re holding an ebow over it or something).
In these cases, the subtle variations come from dancing around a fixed object. Contrast this with something like modular synthesis, where the instrument is dynamic or modal, and only a specific patch may be considered static. Which brings us to another dimension: excitation. How does one excite a modular? The distinction between playing and exciting here is important: you can “play” with sequence programming on a modular, or design systems that self-excite with Dirac deltas or gates, but facilitating human excitation of these systems requires some sort of sensor interface: turn the knob to sweep the filter, push the button to open or close the gate.
With general-purpose control surfaces, we’re very frequently able to completely change the full essence of the sound generator and the way it’s excited at the push of a button. Plug the grid into a different module. Chord correctly to change OSC prefixes to a different Max patch in Pages.
It’s late and I’m struggling to bring this all back together, but I suspect we’ll find that our human-computer interactions feel more musical when how we push the button matters as much as the button we choose to push. The immediate feedback is necessary, but not sufficient, for that.
The recent five-episode mini-series on “expressive controllers” from the Art+Music+Technology podcast is also extremely interesting in this context, especially around the coupling of interface to sound generator.
Amen. Thanks for joining! Been saving a seat for you.
maybe you heard the piano players, back from touring in Asia, saying that the precise mathematical tuning of the pianos at the venues was ‘weird’, made it harder to play… ‘the notes should be bent out at the edges a little’…
protools, ableton, audacity, monolase…
aalto, parc, mlr…
mabalhabla, dj64, mlrv (and max apps that have a ‘save to audiofile’ option)
thanks again @jasonw22 this thread is cool
I feel like I’m missing something here… what’s hidden in the Television album?
UX/UI has been on my mind a lot lately, since the nonprofit I work for is currently in the midst of designing a new website and a custom online grants application and reporting system. I’ve been spending a lot of time critiquing websites and thinking about how design choices minimize or maximize the utility of the site since ultimately our future website has to be useful as a resource. While aesthetics are really important to me, prioritizing form over function is a pitfall I want to avoid.
In the context of music instruments and tools, I think of times when software designers make aesthetic decisions over considerations of utility, such as adopting simulations of real world control interfaces for a computer screen when it doesn’t make sense (this has been covered more eloquently by others ITT).
Ableton Live has been mentioned and I want to highlight one thing: from my perspective it is really easy to navigate with a mouse and computer keyboard. The designers considered how people would use it and consciously made choices to maximize functionality.
In terms of hardware I think the OP-1 is a masterpiece of both industrial and UX design. It has both immediate accessibility and hidden depths, and is also incredibly fun to use.
it’s like this…
just 'cause they pushed it (production and marketing) a certain way (darkness)
listen to her voice (brilliance) and she’s playing the drum kit
that’s all anybody wants…transcendence
why do we act like that can be bought and sold?