I didn’t know about the Morton Subotnick ghost pieces, that’s really interesting stuff. Here’s a master’s thesis I found which goes into quite a lot of detail describing the historical, technical and artistic context of the works, as well as how they actually work: https://scholarworks.sjsu.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=4861&context=etd_theses
The acoustic and electronic elements are exceptionally well balanced; listening to those pieces, I often can't tell what is acoustic and what is processed. I see his approach of using pre-recorded gestural automation as a slightly different thing, perhaps a precursor, to a realtime-playable augmented instrument. But this idea of creating an uncanny atmosphere into which acoustic instruments are played is an interesting way of thinking about it, and it seems very characteristic of psychedelic Subotnick.
Exciter speakers, and their potential to remove or reduce the need for a PA, are something which have interested me for a while, but I’ve never experimented with them. I think @papernoise did an interesting piece a while back with an exciter-driven string quartet, they’re probably a good person to ask about how to achieve that technically. Placing a contact microphone and an exciter on the same acoustic body seems like a recipe for feedback, but that in itself could be an interesting thing to experiment with.
Yeah, 7-bit MIDI is a pain, although in my experimentation so far I've found that its steppiness is less noticeable when using it to process acoustic material than when controlling synthesized sound, as the variations and imperfections in the acoustic sound can smooth out and hide the steps. But of course it depends on what you're using the data for.
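For anyone who does run into audible stepping, a common trick (my own sketch here, not something from the thesis) is to slew each incoming CC value with a one-pole lowpass rather than applying it directly, so the control signal glides between the 128 quantized levels instead of jumping. The class name and parameters below are hypothetical, just to illustrate the idea:

```python
class CCSmoother:
    """One-pole lowpass to hide the 128-step grid of 7-bit MIDI CCs.

    coeff near 0.0 gives a slow glide; near 1.0, near-instant jumps.
    tick() would be called once per control block in a real patch.
    """

    def __init__(self, coeff=0.05):
        self.coeff = coeff
        self.value = 0.0   # smoothed output in the unit range
        self.target = 0.0

    def set_cc(self, cc):
        # map the raw 7-bit value (0-127) into 0.0-1.0
        self.target = cc / 127.0

    def tick(self):
        # ease the output toward the latest CC target
        self.value += self.coeff * (self.target - self.value)
        return self.value


s = CCSmoother(coeff=0.5)
s.set_cc(127)          # a full-scale CC jump arrives...
for _ in range(10):
    s.tick()           # ...and the output glides up over several ticks
```

In Pd the same thing is usually done by feeding the CC value into `[line~]` or `[slide]` with a short ramp time; the trade-off either way is a little added control latency in exchange for hiding the steps.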
The augmented trumpet thesis mentions that, if you’re using any sort of DSP for the audio processing, latency introduced by the controller system is likely to be dwarfed by the latency introduced by the interface, computer and DSP algorithms.
Hollyhock looks interesting! Between that, Bidule and Reaktor, I feel like my current approach of trying to implement all custom audio processing and control software in Puredata might be a dead end.