I wouldn’t use a Shnth via USB, I’d go with piezos as direct voltage (perhaps with something to clamp it to a sensible range).
Not tangential at all! I’ve been playing with my onset detection stuff again after coming across the Sensory Percussion stuff and seeing how responsive and accurate that was. What I had before worked really well, but didn’t handle rolls and super quiet attacks, and was sometimes susceptible to false triggers from quiet/ambient noise. So reading this stuff is suuper fascinating, and super relevant, as this is tied into improving my machine listening stuff with a massive emphasis on latency (more on this below).
The wand is super badass, especially for its time (hell, even for now!). A friend (Richard Scott) used one, along with LiSa, as the centerpiece of his setup for years.
Interesting! I guess it calculated the velocity/acceleration at the start and presumed an appropriate ‘end’? I’m currently waiting a fixed period of time (based on the volume differential enveloping) and taking the maximum value in that window, which serves as my velocity. For my current setup that means 15ms of latency after the initial onset detection before the velocity is available.
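(If it helps to see the idea outside of Max, here’s a tiny Python sketch of that fixed-window velocity approach. The function name, the mono buffer, and the 15ms default are just illustrative, not lifted from my patch.)

```python
import numpy as np

def velocity_after_onset(signal, onset_index, sr, window_ms=15.0):
    """Peak absolute amplitude in a fixed window following a detected onset.

    The window length (15 ms here) is also the latency you pay before
    the velocity value becomes available.
    """
    window_len = int(sr * window_ms / 1000.0)
    window = signal[onset_index:onset_index + window_len]
    if len(window) == 0:
        return 0.0
    return float(np.max(np.abs(window)))
```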
Also super fascinating, and incredibly creative solutions to that ‘problem’ too. Especially with the added perk of ‘negative latency’ here, allowing for more latency-heavy processes down the line that could then be compensated out (more complex/detailed fft stuff). God, negative latency, what a luxury!
Heh, yeah not MIDI-based at all, and in fact that’s the big downside to the Sensory Percussion stuff. I sent them an email with a bunch of questions (which they were super helpful and quick in responding to), but the bottom line is that there’s no lower-level access to the machine learning/training stuff, and it only spits out MIDI (no OSC or 14-bit MIDI). So it wouldn’t be possible to do any descriptor-based stuff, or 2nd-order feature extraction, on the data from the system (unless I do it on the ‘raw’ sensor output, which I’d happily do; I would just lose out on the dsp side of their stuff).
Back to the dsp stuff: I’ve been reworking my onset detection based on a bunch of info in this thesis (section 2.4 specifically), essentially trying to recreate the system they describe, which works like this:
Audio -> (DC-removal) -> (lowpass filter) -> (moving median) -> (moving mean) -> (thresholding) -> (peak picking)
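For anyone who thinks in code more easily than in Max objects, here’s a rough offline sketch of that chain in Python/NumPy. All the cutoffs, window lengths and the (for now fixed) threshold are placeholder values I’ve made up for illustration, not the ones from the thesis:

```python
import numpy as np
from scipy.signal import butter, lfilter, medfilt

def detect_onsets(x, sr, lp_cutoff=100.0, median_len=11, mean_len=11, thresh=0.01):
    """DC-removal -> lowpass -> moving median -> moving mean -> threshold -> peak picking."""
    # Crude DC removal: subtract a slow moving average (acts like a gentle highpass)
    x = x - np.convolve(x, np.ones(1024) / 1024.0, mode="same")
    # Rectify and lowpass to get a smooth amplitude envelope
    b, a = butter(2, lp_cutoff / (sr / 2.0), btype="low")
    env = lfilter(b, a, np.abs(x))
    # Moving median knocks out short spikes, moving mean smooths what's left
    env = medfilt(env, kernel_size=median_len)
    env = np.convolve(env, np.ones(mean_len) / mean_len, mode="same")
    # Fixed threshold plus a crude 20 ms debounce, then pick local maxima
    min_gap = int(0.02 * sr)
    onsets, last = [], -min_gap
    for i in range(1, len(env) - 1):
        is_peak = env[i] >= env[i - 1] and env[i] > env[i + 1]
        if env[i] > thresh and is_peak and (i - last) >= min_gap:
            onsets.append(i)
            last = i
    return onsets
```

(Obviously this is block/offline rather than the sample-by-sample, low-latency version you’d want in Max; it’s just to make the chain concrete.)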
I’m limited in that I’m working in Max and my ‘real’ dsp chops are pretty low, but incorporating the highpass/lowpass/median stuff into the mix has helped quite a bit. I still have to implement more (dynamic thresholding in particular, as what I have is super glitchy/nervous in quieter bits; there’s a rough sketch of what I mean at the bottom of this post). Here is what I have if anyone is curious.
onset.maxpat (18.1 KB)
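On the dynamic thresholding front, what I’m imagining is roughly this (again a Python sketch rather than Max, with made-up window/offset values): the envelope only counts as ‘active’ where it exceeds its own local median by some margin, so quiet/ambient passages stop tripping the detector.

```python
import numpy as np
from scipy.ndimage import median_filter

def dynamic_threshold(env, window=2048, delta=0.05):
    """Adaptive threshold: compare the envelope against its own local median
    plus a fixed offset `delta`. Both values need tuning by ear."""
    local_median = median_filter(env, size=window)
    return env > (local_median + delta)
```

In Max terms that would presumably be a long median/slide of the envelope feeding one side of a comparison instead of a fixed number, but I haven’t built that part yet.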