Electronically augmented acoustic instruments

This is a topic that’s been occupying a lot of my brainspace recently. It seems very lines-ey, and I didn’t see a thread on it so far, so let’s talk about it!

What exactly? Taking acoustic instruments and using electronics and software to augment them, specifically in ways which are playable in real-time, use the acoustic sound somehow, and are controlled mainly by interfaces built into the instrument itself (although that definition is flexible).

I just found this rather nice example of augmented sax promoted by one of my favourite Icelandic venues:

And here’s my latest work on the topic, using electronics and software to create new playing techniques for a hybrid electro-acoustic hurdy-gurdy:

This thesis mentions a lot of interesting stuff to look into, and captures quite nicely what I feel the difference is between an augmented instrument and a “prepared” instrument, or an instrument plus a bunch of controllers:

The motivation to create a complete, self-contained instrument is deeper than mere ergonomics, however. It is also psychological and emotional. A self-contained instrument deserves respect. It suggests longevity. It demands commitment if it is to be mastered. Sukandar Kartadinata, creator of the Gluion range of sensor interfaces, shares this motivation: “The goal of the gluiph project [is] to derive at more mature instruments with a stronger identity. Integrating the defining components into a single physical entity is . . . not just a matter of aesthetics” (2003:181).

The Keith McMillen K-Bow looks like an interesting possibility for gestural string playing.

Post your own experiments, ideas, or nice examples of hybrid instrument techniques!


I was at that Tom Manoury performance at Mengi! How strange to see it pop up here. I think of Mengi as my home away from home… some of my peak musical experiences (as an audience member) have occurred there.

The limitations of 7-bit MIDI and dealing with latency have always stopped me from pursuing ideas in this area. Advances in physical modeling (e.g. the amazing ways to create “extended piano” with Pianoteq) further quenched my thirst, but there’s something about the idea of fusing truly acoustic sound with expanded expression through electronics in one cohesive instrument that is so appealing.

I recently purchased Hollyhock (which I’m pretty sure is what Tom Manoury used in this performance) in part to explore this idea again. So following this with great interest.


Great topic!

Mario Davidovsky explores this in his synchronisms series: https://en.wikipedia.org/wiki/Synchronisms_(Davidovsky)

They’re mostly tape-and-live-instrument, but the point is they are meant to sound like a single expanded instrument, rather than a duet. It’s a really cool series.

Morton Subotnick also has a series of “ghost” pieces for expanded instruments: https://www.eamdc.com/psny/blog/new-releases-of-morton-subotnicks-works-for-ghost-electronics/


I’ve thought of making an acoustic guitar with a MIDI pickup, Pisound, contact microphone, and exciter speaker. Maybe I’ll get around to it someday. Should be decent possibilities there.

1 Like

I can confirm!
Tom is a Usine Hollyhock user :wink:

I didn’t know about the Morton Subotnick ghost pieces, that’s really interesting stuff. Here’s a master’s thesis I found which goes into quite a lot of detail describing the historical, technical and artistic context of the works, as well as how they actually work: https://scholarworks.sjsu.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=4861&context=etd_theses

The acoustic and electronic elements are exceptionally well balanced; listening to those pieces, I often can’t tell what is acoustic and what is processed. I see his approach of using pre-recorded gestural automation as a slightly different thing, a precursor perhaps, to a realtime-playable augmented instrument, but this idea of creating an uncanny atmosphere into which acoustic instruments are played is an interesting way of thinking about it, and it seems very characteristic of psychedelic Subotnick :slight_smile:

Exciter speakers, and their potential to remove or reduce the need for a PA, are something that has interested me for a while, but I’ve never experimented with them. I think @papernoise did an interesting piece a while back with an exciter-driven string quartet; they’re probably a good person to ask about how to achieve that technically. Placing a contact microphone and an exciter on the same acoustic body seems like a recipe for feedback, but that in itself could be an interesting thing to experiment with.

Yeah, 7-bit MIDI is a pain, although in my experimentation so far I’ve found that its steppiness is less noticeable when using it to process acoustic material than when controlling synthesized sound, as the variations and imperfections in the acoustic sound can smooth out and hide the steps, though of course it depends on what you’re using the data for.
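For what it’s worth, a cheap way to hide those steps further is to lowpass-filter the CC stream before mapping it onto a parameter. A minimal sketch (my own, not from any of the projects above; class and parameter names are made up):

```python
# Smooth stepped 7-bit MIDI CC data with a one-pole lowpass
# before using it as a continuous control signal.

class CCSmoother:
    """Turns raw 0-127 CC values into a smoothed 0.0-1.0 control signal."""

    def __init__(self, coeff=0.9):
        # coeff near 1.0 = heavier smoothing (more lag);
        # coeff near 0.0 = tracks the raw steps closely.
        self.coeff = coeff
        self.value = 0.0

    def process(self, cc_value):
        target = cc_value / 127.0
        self.value = self.coeff * self.value + (1.0 - self.coeff) * target
        return self.value


smoother = CCSmoother(coeff=0.9)
smoothed = [smoother.process(cc) for cc in [0, 64, 64, 64, 127]]
```

The trade-off is lag: heavier smoothing rounds off fast gestures, so percussive control data wants a lower coefficient than slow expressive sweeps.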

The augmented trumpet thesis mentions that, if you’re using any sort of DSP for the audio processing, latency introduced by the controller system is likely to be dwarfed by the latency introduced by the interface, computer and DSP algorithms.

Hollyhock looks interesting! Between that, Bidule and Reaktor, I feel like my current approach of trying to implement all custom audio processing and control software in Pure Data might be a dead end :confused:

1 Like

I’m sort of obsessed with the Vo-96, despite not having any hope of ever owning one, nor even being a guitar player.


This is a big area of interest for me, though these days less in terms of a controller/sensor type thing, and more things driven by audio input and analysis. That, along with some high-level control (generally via a monome and a SoftStep), is how I approach “augmenting” an instrument.

That being said, I am working on trying to get more information out of a drum that can then be used to drive audio processing:

One thing I commented on in another thread is that it’s a shame to see really powerful sensor and/or synthesis technology locked behind software walled gardens, where all that’s available is “regular” MIDI.

Latency-wise, you’d be surprised what you can get away with. Even doing percussion-heavy music, using processes triggered by onsets detected in the incoming audio stream, with attack-based sounds that fade quickly: if I have my I/O vector size set to 64, you can’t tell. Even 128 is passable, really.
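For a rough sense of scale, the latency one I/O buffer contributes is just vector size divided by sample rate. A quick sketch (44.1 kHz assumed; the function name is mine):

```python
def buffer_latency_ms(vector_size, sample_rate=44100):
    """Milliseconds of latency contributed by one I/O buffer."""
    return 1000.0 * vector_size / sample_rate

# A 64-sample vector at 44.1 kHz is roughly 1.45 ms per buffer,
# and 128 samples roughly 2.9 ms.
```

In practice the round trip includes at least an input buffer, an output buffer and converter latency, so real figures are a few times higher, but still in the low single-digit milliseconds at these vector sizes.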

For me the bigger issue is the resolution (and meaningfulness) of the data you have coming in. So 7-bit MIDI is shitty, as is uncalibrated/unsmoothed/noisy data.

There’s tons of interesting stuff going on in this area, off the top of my head here are two super-modified trumpets:




I assume a lot of lines folks may also be Ableton folks, but if not, there is a cool video from last year’s Loop conference that they just posted the other day…



I could see this turning into a cool delay… or maybe it will just be really noisy

1 Like

The technology behind the Vo-96 was released as an EBow-like exciter called the Wond. Paul Vo is working on the next generation of it, called the empick. https://www.paulvo.com

RPI is a little beehive of electroacoustic research:

1 Like

I thought this was how plate reverbs work?


How do I find the podcast?

Sounds very cool, are they available on Apple podcasts?

I don’t know how I missed this thread the first time around… One of the neater things I saw at NIME a few years ago was the Feedback Cello from Alice Eldridge and Chris Kiefer

They do the “resonant transducer on the body of the instrument” thing, which is part of a feedback path to the strings and pickups… Some lovely sounds there.


Understood, but you’d have at least one devoted listener here if you do!

1 Like

The idea of augmenting instruments through electronics is my focus in composition. I’m an oboist and use Max/MSP in my work. My teacher Mari Kimura always believed that these types of works are most effective when the laptop and performer are on equal footing: basically, write a program that acts as a second performer and supports the instrument rather than stealing the spotlight. I don’t use any prerecorded sounds in my work either - I like the idea that every piece will be different.

My two newest works for oboe and live interactive electronics are here: