Ok, that’s pretty heavy. I’ll limit the upper bound of the rate and the lower bound of the “all off” descriptor matching, so it can’t get that high.
If you really want to fry it, the absolute worst would be having the rate AND size maxed, as that will play really fast and have tons of overlapping grains…
That makes sense too. The bigger the corpus, or the more corpora you have loaded (when in ALL mode), the more CPU it takes to look through, since you’re querying the entire database per grain.
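Just to put a rough number on why that adds up, here’s a back-of-envelope sketch in Python (purely illustrative figures, not the actual engine or its real numbers):

```python
# Back-of-envelope only: in ALL mode, every grain is matched against
# every entry of every loaded corpus.
def comparisons_per_second(grains_per_sec: float, corpus_sizes: list[int]) -> float:
    return grains_per_sec * sum(corpus_sizes)

# e.g. 50 grains/sec against two corpora of 8,000 and 12,000 entries
print(comparisons_per_second(50, [8_000, 12_000]))  # 1,000,000 entry checks per second
```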
If you crank the ‘select’ knob up too, that should help. That controls the overall multiplier for each of the individual descriptors; think of it as a master “more” or “less” knob in terms of matching. Even just cranking that knob with the default descriptor matching settings should reduce the number of blips to near zero.
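Conceptually it’s something like this (a toy Python sketch with made-up names and 0–1 scaling, not the actual code):

```python
# Toy sketch of a master 'select' multiplier: each descriptor has its own
# matching amount (0 = off, higher = more selective), and 'select' scales
# all of them at once, like a master "more"/"less" matching knob.
def scaled_settings(descriptor_amounts: dict[str, float], select: float) -> dict[str, float]:
    return {name: min(1.0, amount * select) for name, amount in descriptor_amounts.items()}

# e.g. cranking 'select' tightens every descriptor's matching at once
defaults = {"centroid": 0.3, "loudness": 0.3, "noise": 0.2}
print(scaled_settings(defaults, select=2.5))  # {'centroid': 0.75, 'loudness': 0.75, 'noise': 0.5}
```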
That’s exactly it, and that’s what was causing crashes initially. With low knob settings it’s almost literally checking the entire database per grain of playback, so I needed a way to override that. At the moment, whichever descriptor is set the highest gets checked first, to rule out as many matches as possible, then it whittles down from there. Low (but non-zero) settings are the most CPU-intensive, and turning a descriptor off completely just stops checking it, which uses less CPU. Let me know if you find those lower edges useful, as I may dial the limits on that back as well to reduce the overall footprint.
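The per-grain matching order looks roughly like this (a loose Python sketch with made-up names and a made-up knob-to-tolerance mapping, just to show the ordering and early exits, not the actual code):

```python
# Descriptors set to 0 are never checked; the highest-set descriptor is checked
# first so it rules out as many candidates as possible, then the rest whittle down.
def match_candidates(candidates: list[dict], target: dict, amounts: dict[str, float],
                     full_range: float = 1.0) -> list[dict]:
    active = [(name, amt) for name, amt in amounts.items() if amt > 0]   # skip "off" descriptors
    active.sort(key=lambda pair: pair[1], reverse=True)                  # strictest first

    remaining = candidates
    for name, amount in active:
        # Made-up mapping: a higher amount means a narrower acceptance window.
        # Low (but non-zero) amounts prune almost nothing, so every later pass
        # still scans most of the corpus -- the worst case for CPU.
        tolerance = full_range * (1.0 - amount)
        remaining = [c for c in remaining if abs(c[name] - target[name]) <= tolerance]
        if not remaining:
            break  # nothing left to whittle down
    return remaining
```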
The centroid is where the bulk of the energy sits in the spectrum; it’s essentially perceived as “brightness”. What I label as “noise” is actually a measurement of spectral flux, which is perceived as noisiness.
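If it’s useful, here’s one common way those two measurements are computed, as a quick Python sketch (a standard formulation, not necessarily the exact analysis used here):

```python
import numpy as np

def spectral_centroid(frame: np.ndarray, sr: float) -> float:
    """Where the bulk of the spectral energy sits; perceived roughly as brightness."""
    mag = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    return float(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))

def spectral_flux(prev_frame: np.ndarray, frame: np.ndarray) -> float:
    """How much the spectrum changes from one frame to the next; perceived roughly as noisiness."""
    prev_mag = np.abs(np.fft.rfft(prev_frame))
    mag = np.abs(np.fft.rfft(frame))
    diff = np.maximum(mag - prev_mag, 0.0)  # common variant: count only energy increases
    return float(np.sqrt(np.sum(diff ** 2)))
```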