This is a topic that's been occupying a lot of my brainspace recently. It seems very lines-ey, and I didn't see a thread on it so far, so let's talk about it!
What exactly? Taking acoustic instruments and using electronics and software to augment them, specifically in ways which are playable in real-time, use the acoustic sound somehow, and are controlled mainly by interfaces built into the instrument itself (although that definition is flexible).
I just found this rather nice example of augmented sax promoted by one of my favourite Icelandic venues:
And here’s my latest work on the topic, using electronics and software to create new playing techniques for a hybrid electro-acoustic hurdy-gurdy:
This thesis mentions a lot of interesting stuff to look into, and captures quite nicely what I feel the difference is between an augmented instrument and a “prepared” instrument, or an instrument plus a bunch of controllers:
The motivation to create a complete, self‐contained instrument is deeper than mere ergonomics, however. It is also psychological and emotional. A self‐ contained instrument deserves respect. It suggests longevity. It demands commitment if it is to be mastered. Sukandar Kartadinata, creator of the Gluion range of sensor interfaces, shares this motivation: “The goal of the gluiph project [is] to derive at more mature instruments with a stronger identity. Integrating the defining components into a single physical entity is . . . not just a matter of aesthetics” (2003:181).
I was at that Tom Manoury performance at Mengi! How strange to see it pop up here. I think of Mengi as my home away from home… some of my peak musical experiences (as an audience member) have occurred there.
The limitations of 7-bit MIDI and dealing with latency have always stopped me from pursuing ideas in this area. Advances in physical modeling (e.g. the amazing ways to create an “extended piano” with Pianoteq) further quenched my thirst, but there’s something about the idea of fusing truly acoustic sound with expanded expression through electronics in one cohesive instrument that is so appealing.
I recently purchased Hollyhock (which I’m pretty sure is what Tom Manoury used in this performance) in part to explore this idea again. So following this with great interest.
The acoustic and electronic elements are exceptionally well balanced; listening to those pieces, I often can’t tell what is acoustic and what is processed. I see his approach of using pre-recorded gestural automation as a slightly different thing, a precursor perhaps, to a realtime-playable augmented instrument. But this idea of creating an uncanny atmosphere into which acoustic instruments are played is an interesting way of thinking about it, and it seems very characteristic of psychedelic Subotnick.
Exciter speakers, and their potential to remove or reduce the need for a PA, are something which have interested me for a while, but I’ve never experimented with them. I think @papernoise did an interesting piece a while back with an exciter-driven string quartet, they’re probably a good person to ask about how to achieve that technically. Placing a contact microphone and an exciter on the same acoustic body seems like a recipe for feedback, but that in itself could be an interesting thing to experiment with.
Yeah, 7-bit MIDI is a pain, although in my experimentation so far I’ve found that its steppiness is less noticeable when using it to process acoustic material than when controlling synthesized sound, as the variations and imperfections in the acoustic sound can smooth out and hide the steps — but of course it depends on what you’re using the data for.
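Another common way to hide the steps is to low-pass the control stream itself before it touches a parameter. Here’s a minimal sketch in Python (purely illustrative — the function name and coefficient are mine, and in practice you’d do this in your Pd/Max patch):

```python
def smooth_cc(values, coeff=0.9):
    """One-pole low-pass over incoming 7-bit CC values (0-127),
    returning smoothed parameters in 0.0-1.0.
    coeff closer to 1.0 = heavier smoothing, but more lag."""
    y = values[0] / 127.0  # start at the first value to avoid a ramp-in
    smoothed = []
    for v in values:
        x = v / 127.0
        y = coeff * y + (1.0 - coeff) * x  # exponential moving average
        smoothed.append(y)
    return smoothed

# a coarsely stepped CC ramp, as a 7-bit controller might send it
steps = [0, 16, 32, 48, 64, 80, 96, 112]
print([round(s, 3) for s in smooth_cc(steps, coeff=0.5)])
```

The trade-off is the usual one: the heavier the smoothing, the softer the steps, but the more the parameter lags behind your hand.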
The augmented trumpet thesis mentions that, if you’re using any sort of DSP for the audio processing, latency introduced by the controller system is likely to be dwarfed by the latency introduced by the interface, computer and DSP algorithms.
Hollyhock looks interesting! Between that, Bidule, and Reaktor, I feel like my current approach of trying to implement all custom audio processing and control software in Pure Data might be a dead end.
This is a big area of interest for me, though these days less in terms of a controller/sensor type thing and more about things driven by audio input and analysis. That, along with some high-level control (generally via a monome and a Softstep), is how I approach “augmenting” an instrument.
That being said, I am working on trying to get more information out of a drum that can then be used to drive audio processing:
One thing I commented on in another thread is that it’s a shame to see really powerful sensor and/or synthesis technology locked behind software walled gardens, where all that’s available externally is “regular” MIDI.
Latency-wise, you’d be surprised what you can get away with. Even doing percussion-heavy music, with processes triggered by onsets detected in the incoming audio stream and attack-based sounds that fade quickly, you can’t tell if I have my I/O vector size set to 64. Even 128 is passable, really.
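For context, the arithmetic behind those vector sizes (a back-of-the-envelope sketch; the 44.1 kHz sample rate is my assumption, and real round-trip figures add driver and converter overhead on top):

```python
SAMPLE_RATE = 44100  # Hz, assumed

def buffer_latency_ms(vector_size, sample_rate=SAMPLE_RATE):
    """Time one I/O vector of audio represents, in milliseconds."""
    return 1000.0 * vector_size / sample_rate

for size in (64, 128, 256):
    one_way = buffer_latency_ms(size)
    # input + output buffering roughly doubles it, before driver overhead
    print(f"{size:4d} samples: {one_way:.2f} ms per vector, "
          f"~{2 * one_way:.2f} ms round trip")
```

So a 64-sample vector is under 1.5 ms per buffer, comfortably below the few-milliseconds range where percussive timing starts to feel off.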
For me the bigger issue is the resolution (and meaningfulness) of the data you have coming in. So 7-bit MIDI is shitty, as is uncalibrated/unsmoothed/noisy data.
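The calibration half of that is cheap to do in software. A minimal sketch (function name, bounds, and deadband are all my own illustrative choices): map the raw sensor range you actually observe onto 0–1, and snap a small deadband at each end so jitter near the extremes reads as a clean 0 or 1:

```python
def calibrate(raw, lo, hi, deadband=0.02):
    """Map a raw sensor reading into 0.0-1.0 given measured
    calibration bounds, clamping out-of-range values and snapping
    a small deadband at each end to kill jitter at the extremes."""
    if hi <= lo:
        raise ValueError("hi must exceed lo")
    x = (raw - lo) / (hi - lo)
    x = min(1.0, max(0.0, x))  # clamp out-of-range readings
    if x < deadband:
        return 0.0
    if x > 1.0 - deadband:
        return 1.0
    return x

# e.g. a 10-bit sensor that only ever reads between ~80 and ~930
print(calibrate(512, lo=80, hi=930))
```

Pair that with smoothing and the data starts being musically usable; without both, even high-resolution sensors feel bad to play.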
There’s tons of interesting stuff going on in this area, off the top of my head here are two super-modified trumpets:
The idea of augmenting instruments through electronics is the focus of my composition work. I’m an oboist and use Max/MSP in my pieces. My teacher Mari Kimura always believed that these types of works are most effective when the laptop and performer are on equal footing: basically, write a program that acts as a second performer and supports the instrument rather than steals the spotlight. I don’t use any pre-recorded sounds in my work either; I like the idea that every piece will be different.