SP-Tools is a set of machine learning tools optimized for low latency and real-time performance. The tools can be used with Sensory Percussion sensors, ordinary drum triggers, or any audio input.
SP-Tools includes low latency onset detection, onset-based descriptor analysis, classification and clustering, corpus analysis and querying, neural network predictive regression, and a slew of other abstractions that are optimized for drum and percussion sounds.
SP-Tools is built around the FluCoMa Toolkit and requires v1.0 to be installed for this package to work.
Requirements
Max 8.3 or higher or Live/M4L (Mac/Windows).
FluCoMa v1.0 or higher.
All abstractions work in 64-bit and M1/Universal Binary.
Thanks so much for this Rodrigo! I remember having a conversation with you about this a while ago, as you were developing it: super interesting work. Going to have a look through the tools today and see about using them for an upcoming show with a classical singer. Thanks for your continued contributions and support!
From what I remember in the contact mic thread they are super duper sensitive, so it may be hard to dial in the gain and thresholds so it doesn't false trigger all the time.
That being said, it depends on what you will be putting it on. If it's going to be a drum/head, I'd recommend getting (or making) one of those foam cone-style contact mics as they isolate quite well and are fairly rugged. If it's for an arbitrary surface, then something like the marshmallow would work just fine. I've not actually tested that yet, but I think you can do all the training/classification stuff too.
That reminds me, does anyone remember the name of that product/iPhone app that was a contact mic and could do attacks, or if you did friction/rubbing it would do another process? I've not seen/heard about that in years, but it might be nice to revisit that and perhaps try to implement something like that too. (It was Mogees.)
Thanks, you are correct with reference to the gain and thresholds. I'll consider this when I set up the drum, which will probably be a Simmons or Techstar electronic trigger. Should be interesting!
The broad strokes are the ability to filter corpora by descriptors, a completely revamped core sampler/playback engine, and real-time descriptor analysis (laying the groundwork for future updates).
I am sincerely hoping that machine learning and A.I. can make drum apps' reliance on huge audio libraries for acoustic, real-world drum and drumming sounds a thing of the past.
Machine-learning specific drummers' styles and their specific kits could lead to smaller apps and some great features not currently available in drum apps. Add in envelope followers and the app "listening" to the music and creating drum lines… Yeah, the future is gonna sound really fun.
Indeed. Or at least change the way they are navigated. For me one of the biggest motivations (beyond interesting/weird sounds) was how to scale that up. If you have 3k+ sounds, you can't map them without it being a huge pain. Not to mention things like setting up splits/round-robins/etc. are super tedious. So I wanted to be able to do it via descriptors/analysis.
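The descriptor-based navigation idea boils down to a nearest-neighbour lookup over per-sample analysis values. Here's a toy Python sketch of the concept (made-up descriptor names and values; the real SP-Tools/FluCoMa implementation does its analysis and matching inside Max):

```python
import math

# Toy corpus: each sample described by a couple of onset descriptors.
# (Hypothetical values for illustration only.)
corpus = {
    "kick_01.wav":  {"loudness": -6.0, "centroid": 180.0},
    "snare_03.wav": {"loudness": -9.0, "centroid": 1900.0},
    "hat_07.wav":   {"loudness": -18.0, "centroid": 6500.0},
}

def nearest(query, corpus):
    """Return the corpus entry whose descriptors are closest to the query."""
    def dist(desc):
        return math.sqrt(sum((query[k] - desc[k]) ** 2 for k in query))
    return min(corpus, key=lambda name: dist(corpus[name]))

# An incoming hit analyzed as loud and dark should land on the kick.
hit = {"loudness": -7.0, "centroid": 300.0}
print(nearest(hit, corpus))  # → kick_01.wav
```

In practice you'd normalize or weight each descriptor dimension before measuring distance, since raw ranges (dB vs. Hz) differ wildly, but the lookup itself is this simple, which is why it scales to thousands of samples where manual mapping wouldn't.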
I still want to try adding more on the machine learning side of things, but the issue with most of that is speed/latency. There's some cool signal decomposition stuff, but it has audible latency, and although I've gotten cool/interesting results with pre-decomposing corpora, the amount of HD/memory space gets out of hand. (I made a test corpus a while ago that, although it was only a couple hundred samples, ended up being almost 10GB due to the pre-decomposed layers being saved as different channels in audio files.)
Indeed! You can use it with vanilla contact mics, or regular air mics too. Your mileage may vary with regards to accuracy (particularly when it comes to the classification/clustering stuff), but all the algorithms/abstractions don't care about what the input is.
I've fired up the native Sensory Percussion software only a handful of times. The rest of the time has been spent in Max just using the "audio" from the pickup directly.
Yeah, it sounds way more complicated than I could hope to understand
I'm using Toontrack's drum apps sometimes atm, and their way of (I assume) "listening" to the frequency bands and using that to get beat detection, and then adding kick/snare etc. using all that info with probability and "randomness", sets my imagination on fire. That combined with machine-learned sounds, rather than the audio libraries… I'm ready for all that already!
added "concat" objects for real-time mosaicking and concatenative synthesis (sp.concatanalysis~, sp.concatcreate, sp.concatmatch, sp.concatplayer~, sp.concatsynth~)
added ability to apply filtering to any descriptor list (via sp.filter)
improved filtering to allow for multiple chained criteria (using "and" and "or" joiners)
updated/improved pitch and loudness analysis algorithms slightly (you should reanalyze corpora/setups/etc…)
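The chained-criteria filtering in the list above can be sketched roughly like this. This is a hypothetical Python illustration of the idea (evaluating criteria left to right with "and"/"or" joiners); sp.filter itself is a Max abstraction and its actual message syntax will differ:

```python
# Hypothetical sketch of descriptor filtering with chained and/or criteria,
# in the spirit of sp.filter (not its real syntax, which lives in Max).
def match(desc, criteria):
    """criteria: [(key, op, value, joiner), ...] evaluated left to right;
    the joiner on the first criterion is ignored."""
    ops = {"<": lambda a, b: a < b,
           ">": lambda a, b: a > b,
           "==": lambda a, b: a == b}
    result = None
    for key, op, value, joiner in criteria:
        ok = ops[op](desc[key], value)
        if result is None:
            result = ok
        elif joiner == "and":
            result = result and ok
        else:  # "or"
            result = result or ok
    return result

corpus = [
    {"name": "a", "pitch": 60.0, "loudness": -12.0},
    {"name": "b", "pitch": 72.0, "loudness": -3.0},
    {"name": "c", "pitch": 40.0, "loudness": -30.0},
]

# keep entries where pitch > 50 AND loudness > -20
crit = [("pitch", ">", 50.0, None), ("loudness", ">", -20.0, "and")]
kept = [d["name"] for d in corpus if match(d, crit)]
print(kept)  # → ['a', 'b']
```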
The concat stuff is something I've wanted to add for a bit (and is why v0.3 added realtime descriptor stuff). It's not as clean/robust sounding as C-C-Combine, but that's completely its own thing.
From here forward I'll add some new/big features, but one of the biggest changes going forward (which I talk about in the last bit) will be that I will start adding/including some M4L devices. It won't be a gigantic number of them, but there will be flagship ones for the main functionality ("Corpus Match", "Descriptors", "Controllers", etc.). I just need to figure out the best way to implement them, as most of what SP-Tools does falls in between being an "audio effect" and an "instrument", which are unfortunately the only archetypal M4L devices you can make.
Really looking forward to M4L devices! Even just a simple MIDI effect for now, so I can use the zones/sp.filter and controllers to send MIDI to my M8. (I'm also really into the idea of controlling the M8/Ableton Live so that only the drum is required for interaction, however:) Only having Live Suite means I'll need to wait till then to have a ready way to use SP-Tools without making a temporary (no-save) patch, correct? My understanding is that in its present state I would need to make a temporary patch, or modify an M4L device that I'd be unable to save, calling the various sp. functions.
Been dying for a way to use my SP sensor without the DRM, and even having dealt with Sensory's customer support to get my license back, I really just want to wait and use what you've been working on instead. Looks and sounds great. Hearing the concat example is wild.
Finishing up some testing/tidying, then will make a vid for the release.
In the first push of M4L devices there won't be a MIDI effect thing, as that's unfortunately the faffier bit: due to Live limitations I'll have to make an audio effect that sends stuff to a separate track that's a MIDI effect. Not difficult, but a bit fussier to set up (well).
So my focus was on just getting the core functionality up and running with these.
At the moment there aren't any tools for creating your own corpora, so you can only load the included examples, but I am thinking about adding that in a future update. Though I'm not sure if that's really a "Live" thing or more of a "Max" thing.
Edit: Had time to try sp.gridmatch and it works great with the Sensel!! I will try to create an M4L device with it as soon as I can. I will also try to implement pressure or velocity for each triggered sample inside the poly object.
Thanks so much Rodrigo for touching on how you want to make SP-Tools accessible for people near the end of the video, and for working on the M4L devices. It's very inspiring, and I strongly believe many beginner/enthusiast-type Live users will develop a deeper understanding of the potential in Max and more abstract concepts such as the various descriptors and machine learning as a whole. I know for sure this project is changing how I'm using Live.
ps That Erae controller looks amazing. This has made me aware of it and it's at the top of my watchlist now. Support for striking with sticks and whatnot is something I assume would be a terrible idea on the Roli Lightpad.
A question you have probably looked into: are there contact-mic-level pickups/sensors that can be used raw, in-DAW, for SP-Tools? What I'm asking is whether there is some low-latency kind of input you could use when interacting with the Erae that's even remotely close to a Sensory Percussion sensor. Something like attaching a contact mic to the pad is what I imagine, though actually doing that seems kind of "barbaric" for what looks like such an impressive interface. Either way, a very tempting trigger-pull; even watching you adjust the contrast on the corpus visualization was very, very impressive.
You can get very usable results with a generic drum trigger contact-mic-type thing (DDrum etc.). And there's no shortage of DIY solutions, ranging from simply taping a piezo disc to the head to using those foam cones and making a little mount.
Having a preamp/buffer of some kind helps, but honestly, piezos (when used with percussion) produce a really strong spike either way, so it will likely work well with all the onset-based stuff. Less so for the classification/descriptors.
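The onset-based stuff mentioned above essentially comes down to catching that spike. A minimal amplitude-threshold onset detector with a refractory period might look like this (a toy Python sketch with made-up parameters, not the actual SP-Tools/FluCoMa onset algorithm, which is tuned for much lower latency and robustness):

```python
# Toy amplitude-threshold onset detector with a refractory period,
# illustrating why a strong piezo spike is easy to catch.
# (Made-up parameters; not the actual SP-Tools onset algorithm.)
def detect_onsets(samples, threshold=0.5, refractory=4):
    onsets = []
    cooldown = 0
    for i, s in enumerate(samples):
        if cooldown > 0:
            cooldown -= 1
            continue
        if abs(s) >= threshold:
            onsets.append(i)        # spike crossed the threshold
            cooldown = refractory   # ignore the ringing that follows
    return onsets

# A quiet signal with two sharp piezo-style spikes:
signal = [0.01, 0.02, 0.9, 0.4, 0.2, 0.05, 0.01, 0.02, 0.8, 0.3, 0.1]
print(detect_onsets(signal))  # → [2, 8]
```

The strong piezo spike means the threshold can sit well above the noise floor, which is why unbuffered piezos still trigger reliably; classification and descriptor analysis, by contrast, depend on the spectral content after the onset, where the piezo's uneven frequency response hurts more.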
What the Erae Touch offers, and is miles ahead of anything else in that regard, is super accurate position resolution. I bought a KMI BopPad off the Kickstarter years ago and was super disappointed with the resolution. People talked up the Sensel Morph, though it has to be struck delicately, and it's been abandoned now.
That being said, most of what's in SP-Tools lets you browse the corpora of samples using descriptors, so the input could be anything (SP sensor, contact mic, air mic), including now XY grid things.
I think I'll add separate devices for Cluster Training and Cluster Matching (even though this is done with sp.classmatch under the hood just the same), as it will better differentiate the UI while in Live.