Not a proper video, but a quick test I made with my Sensory Percussion trigger sending MIDI over to Max, then feeding a generative synth patch (based on the ciat-lonbarde Fourses, via this gen-based implementation of it) along with some BEAP modules for Filter/ADR/VCA.
Position on the drum (center-to-edge) controls filter cutoff, dynamics controls amplitude, and hitting the rim of the drum generates a new/random synth setting.
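The actual mapping lives in the Max patch, but the logic is simple enough to sketch in Python. The CC/note numbers and the frequency range here are placeholders I've made up for illustration; Sensory Percussion's MIDI assignments are configurable, so the real ones depend on the setup.

```python
import random

# Hypothetical assignments -- the real Sensory Percussion MIDI mapping
# is user-configurable, so these numbers are just placeholders.
POSITION_CC = 1   # center-to-edge position arrives as a CC
RIM_NOTE = 38     # rim hits arrive as a separate note

def cc_to_cutoff(cc_value, low_hz=80.0, high_hz=8000.0):
    """Map a 0-127 CC value to a filter cutoff in Hz, scaled
    exponentially so equal CC steps sound like equal pitch steps."""
    t = cc_value / 127.0
    return low_hz * (high_hz / low_hz) ** t

def velocity_to_amp(velocity):
    """Map note-on velocity (1-127) to a 0-1 amplitude."""
    return velocity / 127.0

def random_synth_setting(n_params=4):
    """Stand-in for 'hit the rim, get a new random patch':
    return a fresh set of normalized synth parameters."""
    return [random.random() for _ in range(n_params)]
```

Center of the drum (CC 0) sits at the low end of the cutoff range and the edge (CC 127) at the top; the exponential curve is a choice, not something dictated by the hardware.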
I have to say that it works pretty well, very sensitive/dynamic. The “Timbre” controller is a bit cruder than I expected (in terms of determining position on the drum), but it’s still crazy good that it can do that through audio alone.
In doing some further tests I also determined that the SP triggering takes (on average) 11ms longer than my native onset detection algorithm (if I don’t care about velocity), which makes sense: the fancy-pants machine learning it's doing takes actual time. That said, it's still 4ms faster than the version of my onset detection that also determines the velocity of the strike, so that’s good.
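Spelling out the arithmetic in those timing numbers: only the differences were measured, so taking the plain onset detector as a zero baseline, the implied cost of velocity detection falls out directly.

```python
# Relative latencies from the timing test, in milliseconds (averages).
# The absolute latency of the plain onset detector wasn't stated, so
# treat it as a 0.0 baseline and express everything relative to it.
onset_only = 0.0
sensory_percussion = onset_only + 11.0       # SP averaged 11ms slower
onset_with_velocity = sensory_percussion + 4.0  # SP is 4ms faster than this

# Implication: detecting velocity costs ~15ms over plain onset detection.
print(onset_with_velocity - onset_only)  # prints 15.0
```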
And in doing all that I also made a native Max UI for the MIDI data that matches the Sensory Percussion (as much as possible without circles/curves).
edit:
Not worth making another post, but I did a test with sample playback too:
Position on the drum (center-to-edge) controls the position in the sample to play and dynamics controls both amplitude and playback duration.
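The same kind of sketch works for the sample-playback version. Again this is an illustration in Python rather than the actual Max patch, and the duration range is an invented parameter:

```python
def map_hit(position_cc, velocity, sample_len_s,
            min_dur_s=0.05, max_dur_s=1.0):
    """Map one drum hit to sample-playback parameters.

    position_cc:  0-127, center-to-edge -> start point within the sample
    velocity:     1-127 -> both amplitude and how long the sample plays
    sample_len_s: length of the loaded sample in seconds
    (min/max duration bounds are made-up values for this sketch)
    """
    start_s = (position_cc / 127.0) * sample_len_s
    amp = velocity / 127.0
    duration_s = min_dur_s + amp * (max_dur_s - min_dur_s)
    return start_s, amp, duration_s
```

So a soft hit at the center plays a short, quiet snippet from the start of the sample, while a hard hit at the edge plays a longer, louder chunk from near the end.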