Yeah, some corpora take forever (the sax and plexure ones are particularly huge).
Hmm, what might be happening is that it’s always matching, so if there is hiss coming or something, it will match from the quietest sounds in the corpus. (Are you sending it digital silence, or something plugged in but not playing?)
The noise knob (and the rest of the matching knobs, actually) only affects the weight of that descriptor when matching, i.e. how fussy it is about each component. So turning the noise knob down will just mean it ignores noise when matching grains. That’s not to say you won’t get noise at all, but rather that noise won’t be considered in the matching.
Maybe I did not explain it right - it is not that there is some sound/noise even when I don’t feed any audio in. It is that the audio I feed in is distorted even with the output blend turned fully CCW. And the distortion fits with the grain selection as shown in the small play display, but is not connected to the sound of the loaded corpus.
BTW it is pretty hard to turn the knobs to zero sometimes - they tend to get stuck somewhere in the last 5%.
The knob thing I’ve encountered too. It’s not ideal, but it’s how I’m “faking” the look of those knobs. There are regular (but invisible) Ableton-style knobs on top, and those are what you are turning/controlling, but if you grab them towards the edge (for whatever reason), that happens. The same happens with the actual Ableton knobs, but not if you grab the center.
I’m not getting any of those. Could you make a little recording of this? Or even better, a little phone video of the screen showing what’s happening?
For the next iteration I’ll line up the ‘invisible’ knobs a bit better. I erred on the side of the square bracket thing lining up right, rather than the actual ‘knob’ UI element, but that makes it a bit harder to grab.
I actually thought about going with a regular Live knob for the M4L version of Combine, but I didn’t want to have to develop two separate UIs for the Party-based stuff I make. So I wanted to keep it consistent with the TPV2 look/feel.
While doing a phone video of the cracking behaviour I found that it is hard to replicate. It seems to be a mix of knob positions, actually moving knobs, and the movements on the play screen. Don’t know if you can make it out in the video, but there are graphic glitches too, where the whole GUI disappears in a very short flash (around 00:34).
Before I did the recording it was a bit different, in that the centroid knob had a bigger impact on the cracking than the noise knob. It is a bit inconsistent, but looks like this:
Damn, I did not expect the sun to come out - sorry for the inappropriate clothing…
Thanks, that helps a lot actually. So those little crunching sounds are what you are referring to yeah?
What does the CPU usage read in Live (or in activity monitor)? It sounds a bit like the cpu not being able to keep up. Also try this, turn one of the single matching knobs all the way up (so the blue lines in the display stop) and see if the crunching still continues.
Basically, when you turn the knobs all the way down, in order to still match something (as in, you are telling it to “find whatever”), it looks for stuff based only on loudness, with a really high tolerance. What this means is that it’s scanning through a HUGE database, a lot, for every grain, so it’s quite intensive. In an earlier version of this I was getting crashing as a result, so I refactored stuff to avoid searching for “whatever”.
If that fixes the problem, I’ll just reduce the aperture on the loudness query when everything is turned down.
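For anyone curious, the “find whatever” fallback described above can be sketched roughly like this. This is a toy Python sketch with made-up names and a simple weighted-distance match - the actual device is a Max patch, and its descriptors and tolerance handling may well differ:

```python
import numpy as np

def match_grain(target, corpus, weights, tolerance):
    """Toy sketch of descriptor matching (hypothetical, not the actual patch).

    target:    descriptor vector for the incoming grain, e.g. [loudness, centroid, flux]
    corpus:    (n_grains, n_descriptors) array of analysed corpus grains
    weights:   per-descriptor weights (the matching knobs); 0 means "ignore"
    tolerance: how far a candidate may deviate and still count as a match
    """
    w = np.asarray(weights, dtype=float)
    # With every knob at zero, fall back to loudness-only with an unbounded
    # tolerance, i.e. "find whatever" -- this is the expensive full-database scan.
    if not w.any():
        w = np.array([1.0] + [0.0] * (corpus.shape[1] - 1))
        tolerance = np.inf
    dist = np.sqrt(((corpus - target) ** 2 * w).sum(axis=1))
    candidates = np.flatnonzero(dist <= tolerance)
    if candidates.size == 0:
        return None
    return candidates[np.argmin(dist[candidates])]
```

Reducing the aperture as suggested would just mean swapping that `np.inf` for a finite value, so the fallback scan rejects most of the database early.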
And the flashing is a weird thing I’m seeing too on mine. On mine it happens even more than in yours. I’m not sure why it’s doing this, and it seems to be M4L-specific behavior (as in, it doesn’t happen in Max at all), so I need to test more to find out what’s up.
When I turn the knobs down with a higher rate, both the action of turning the knobs and their turned-down state increase Live’s CPU display up to 200-300% and the overall CPU usage of the MacBook (Pro, 2013) up to 90%. Bringing up the match knobs and reducing the rate a bit brings Live’s own CPU usage display down to 70% and the system one down to 35%. This is all in Smooth mode.
When I am in Chunky mode I still get cracks, with Live’s CPU display jumping between 3 and 30% and system CPU usage for Live at about 30-40%.
Normally Live’s CPU display is under 10% with all knobs set somewhere around the middle, and I have no cracks then.
All this seems to depend on the select knob position, which corpus I use, and whether I use more than one.
Though I did not find a way to make the blue lines disappear by setting one of the matches to max.
What might have an influence too is that often, when the knobs look fully turned CCW, they actually are not. Low settings here seem to be more problematic than off settings. I understand that limiting the choice via the match parameters makes it more stable.
Ok, that’s pretty heavy. I’ll limit the upper bounds of the rate, and the lower bounds of the “all off” descriptor matching, so it can’t get that high.
If you really want to fry it, the absolute worst would be having the rate AND size maxed, as that will play really fast, and have tons of overlapping grains…
That makes sense too. The bigger the corpus, or when you have multiple corpora loaded (and are in ALL mode), the more CPU it takes to look through, as you are querying the entire database per grain.
If you crank the ‘select’ knob up too, that should help. That controls the overall multiplier for each of the individual descriptors. Think of it as a master “more” or “less” knob, in terms of matching. Even just cranking that knob with default descriptor matching settings should reduce the amount of blips to near zero.
That’s exactly it, and that’s what was causing crashes initially. With low knob settings, it’s almost literally checking the entire database per grain of playback, so I needed a way to override that. At the moment, whichever descriptor is set the highest gets checked first, to rule out as many matches as possible, then it whittles down from there - but having low (but non-zero) settings is the most CPU-intensive. Turning a descriptor off completely just stops checking it, which uses less CPU. Let me know if you find those lower edges useful, as I may dial the limits on that back as well, to reduce the overall footprint.
The centroid is where the bulk of the energy sits in the spectrum, and it’s essentially perceived as “brightness”. What I label as “noise” is actually a measurement of spectral flux, which is perceived as noisiness.
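For reference, the textbook definitions of those two descriptors look like this - the device’s actual analysis (window, hop, normalisation) may well differ:

```python
import numpy as np

def centroid_and_flux(frame, prev_frame, sr=44100):
    """Textbook spectral centroid and flux for one analysis frame.

    - centroid: magnitude-weighted mean frequency ("brightness")
    - flux: frame-to-frame change in the magnitude spectrum ("noisiness")
    """
    mag = np.abs(np.fft.rfft(frame))
    prev_mag = np.abs(np.fft.rfft(prev_frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    # Tiny epsilon avoids a divide-by-zero on silent frames.
    centroid = (freqs * mag).sum() / (mag.sum() + 1e-12)
    flux = np.sqrt(((mag - prev_mag) ** 2).sum())
    return centroid, flux
```

A steady sine wave has its centroid at its own frequency and near-zero flux; a noisy, changing signal pushes both up.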
Same! Just make sure to label your Live and Max instances as 32 or 64, haha. I think I also had to manually map the 32-bit Live to the 32-bit Max, and then both the 64-bit and 32-bit Live loaded the correct corresponding Max version when switching back and forth.
There’s a separate patch or subpatch that makes them. At the moment that part of the patch is a bit of a mess, and may change a lot soon. If there’s an audio file you want corporized, send it along and I’ll convert it over.
(I will probably change the database format again so that it will have the audio file, a metadata .json, and then a separate analysis file, that loads much faster, as iterating through a dict structure in Max is slow as shit, as it turns out)
It’s been fun learning about data structures in general, and if you have any thoughts/input on the data structure in the .jsons I’d be all ears, as I’m sure you’re waaaaaaay better at that shit than me, hehe.
As I mentioned, it will change again, in moving the actual analysis data outside of that, but the rest is just what seemed sensible, and encompasses all the information I’m likely to need from the corpora. (for now)
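As a rough illustration of the three-file layout described above - audio, metadata .json, and a separate analysis file - here is a minimal sketch. All field names here are hypothetical, not the real corpus format:

```python
import json
from pathlib import Path

def write_corpus(folder, audio_name, grains):
    """Hypothetical sketch of a three-file corpus: the source audio stored
    alongside a small metadata .json and a separate analysis file."""
    folder = Path(folder)
    folder.mkdir(parents=True, exist_ok=True)
    meta = {
        "audio_file": audio_name,  # source audio lives next to these files
        "num_grains": len(grains),
        "descriptors": ["loudness", "centroid", "flux"],
    }
    (folder / "meta.json").write_text(json.dumps(meta, indent=2))
    # A flat list-of-lists instead of a nested dict: much cheaper to iterate,
    # which is the point of splitting the analysis out of the metadata .json.
    (folder / "analysis.json").write_text(json.dumps(grains))
    return meta
```

The design point is just that per-grain analysis data is a big flat array you scan constantly, while the metadata is small and read once, so keeping them in separate files lets the hot path skip the dict traversal entirely.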