Honestly? Probably not. But I’m guessing you’ll end up with something that isn’t text based. You want your format to be as compact as possible with as little brackety fluff as possible. Usually this translates to some non-human-readable binary format.
Or the other option is not a file at all. A database could be a lot faster. There are all sorts of tiny open source databases intended for this sort of thing. I’ll dig around a bit and see what I can find.
EDIT: here we go:
SQL is kind of a thing for me, so let me know if I can help.
Heh, “brackety fluff”. The dict output is definitely syntax heavy, and the file size ends up a touch bigger as a result, just from all that punctuation.
In the context here it’s handy that the metadata is human readable (and editable), though the analysis data itself, not so much.
I’ll have a look at the SQL, though the nice thing with the native dicts is that you can pass around references to dictionaries, as well as use a whole bunch of native tools to manipulate/iterate through the data.
Looking into it a bit further, it can (import/)export YAML files too, which have less fluff, though more escape characters:
edit: (pasting here ignores the indentations, which I imagine are significant for the syntax)
artist: "Rodrigo Constanzo"
title: "Accordion Noise"
description: "Mechanical noises produced by manipulating a robotic accordion."
date_analyzed: "2017-04-17 / 20:09:31"
comment: "Created using one of Patrick Saint-Denis’ robotic accordions with the assistance of Pierre Alexandre Tremblay for the Black Box project."
loudness: "loudness 1 max"
pitch: "pitch 0.6600 median"
centroid: "log_centroid 10 20000 mean"
spectral_flux: "sfm 10 20000 mean"
Don’t know if “fewer characters” is make-or-break here, but that’s another (native) option.
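Whether the character savings actually matter is easy to check. Here’s a rough sketch using only Python’s stdlib `json`, with field names copied from the metadata pasted above, comparing the “syntax heavy” pretty-printed dict output against the most compact serialization of the same data:

```python
import json

# Metadata fields copied from the pasted example above
meta = {
    "artist": "Rodrigo Constanzo",
    "title": "Accordion Noise",
    "date_analyzed": "2017-04-17 / 20:09:31",
    "loudness": "loudness 1 max",
    "pitch": "pitch 0.6600 median",
}

pretty = json.dumps(meta, indent=4)                # indented, human-readable output
compact = json.dumps(meta, separators=(",", ":"))  # same data, minimal punctuation

print(len(pretty), len(compact))  # the difference is all "brackety fluff"
```

Same exercise would work for a YAML dump; for metadata this small the savings are a handful of bytes either way, which suggests the real cost is in the analysis data itself, not the header.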
YAML might help over JSON. Worth a try.
I think the idea is that you would pass around smaller dicts created on the fly by querying a database. Yeah, maybe less convenient. Eventually a necessity as your haystack grows and your needles stay the same size.
Another way of looking at it is: try to keep your haystacks small. Eliminate anything unnecessary or excessive.
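The query-a-database-for-small-dicts idea above can be sketched with Python’s built-in `sqlite3`. The table and column names here are purely illustrative (modeled on the metadata fields in the thread), not anything from the actual patch:

```python
import sqlite3

# Hypothetical schema: one row per analyzed file.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE corpus (
    artist TEXT, title TEXT, loudness REAL, pitch REAL)""")
conn.executemany(
    "INSERT INTO corpus VALUES (?, ?, ?, ?)",
    [
        ("Rodrigo Constanzo", "Accordion Noise", 1.0, 0.66),
        ("Someone Else", "Snare Hits", 0.8, 0.31),
    ],
)

# Build a small dict on the fly from a query, instead of holding the
# whole haystack in memory as one giant dict.
conn.row_factory = sqlite3.Row
row = conn.execute(
    "SELECT * FROM corpus WHERE pitch > ? ORDER BY pitch LIMIT 1", (0.5,)
).fetchone()
needle = dict(row)
print(needle)
```

The point being: the database holds the haystack, and each query hands back a needle-sized dict you can pass around as usual.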
LOL, so now I’m trying to define “unnecessary or excessive” in a musical context. Not having a lot of luck!
Thank you for the in depth explanation and sorry for the delay!
Regarding the lower edges: personally I don’t see myself using that part of the matching range much, because I don’t hear a lot of qualitative difference there soundwise. It just sounds a bit different - the step from “far away from a recognisable/good match” to “very far away” doesn’t seem very substantial.
What I find more interesting is the upper matching range to play with how recognisable a timbre or melodic structure still is and moving back and forth from there.
After all and given the choice I always would prefer stability over a wider range.
Of course this also depends on the file size of the corpora. Maybe when creating a corpus it should be made very dense - differentiated in timbre and dynamics over a short time - to keep the file small yet the sound as complex and nuanced as possible.
Yeah I’ve tightened up stuff some. In fact, yeah, here is the most recent version which fixes a bunch of things. (less CPU overall, wider grabbing areas, colors all dim when bypassed, etc…)
Hmm, that’s an interesting thought. I’ve scaled things back, as mentioned, but perhaps I can clamp things even further when the loaded corpora are quite big: if you have just one, or a smaller one, it will allow a wider range, but once the database gets bigger, the queries are limited (internally) to avoid further problems.
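Just to make the clamping idea concrete, here’s a minimal sketch in Python. The function name, the frame budget, and the linear scaling are all hypothetical - the real device is a Max patch and would pick its own numbers:

```python
def clamp_query_width(requested_width, loaded_frames, budget_frames=100_000):
    """Hypothetical sketch: shrink the allowed matching range as the total
    size of the loaded corpora grows, so big databases stay responsive."""
    if loaded_frames <= budget_frames:
        return requested_width  # small corpus: allow the full range
    scale = budget_frames / loaded_frames
    return requested_width * scale

# One small corpus: full width. Several big corpora: internally limited.
print(clamp_query_width(1.0, 50_000))   # -> 1.0
print(clamp_query_width(1.0, 400_000))  # -> 0.25
```

So a single small corpus gets the wide grabbing areas, while stacking three big ones quietly narrows the queries.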
Yeah, seems to be far more stable now! Knobs are working fine, and while the CPU usage is still going up on the more extreme edges with 3 corpora (as expected - but not that extreme anymore), I get now cracks.
Now that we are in Live, would it be possible to tie the rate to Ableton’s clock speed somehow?
And I like the idea of the normalization function, though I wondered if it would be possible to split it off for the four parameters/loaded corpora. Sometimes when I use it I hardly get any hits anymore.
Still great Fun!
Hm, just found out that Kontakt 5 has no 32-Bit version for Mac.
So no coco-combine with Kontakt instruments - does anyone have an idea how to solve this?
I take it you mean “no cracks”? In which case, that’s great news!
I will still try to optimize it further, limiting the width in extreme cases, but if you had no cracks with 3 corpora going, that’s working worlds better already.
Hmm, interesting. Like a sync thing? I guess that would fundamentally change the approach, in that it would be an audio analysis-informed grain sequencer. I could see that being cool, but that then brings a whole lot of features along with it (selecting subdivisions etc…)
Not to mention that this would only really work in SMOOTH mode, as in CHUNKY the grain playback is aperiodic, so it couldn’t really sync to a stable tempo.
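The SMOOTH/CHUNKY distinction above can be sketched quickly: periodic grain onsets can quantize to a host clock, jittered ones can’t. The mode names are borrowed from the thread, but the jitter model here is invented for illustration:

```python
import random

def grain_onsets(duration_s, rate_hz, mode="smooth", jitter=0.5, seed=1):
    """Sketch: SMOOTH-style playback fires grains at a fixed interval
    (sync-able to a tempo clock); CHUNKY-style onsets are randomly
    jittered around the interval, so there is no stable period to sync."""
    period = 1.0 / rate_hz
    rng = random.Random(seed)
    onsets, t = [], 0.0
    while t < duration_s:
        onsets.append(t)
        if mode == "smooth":
            t += period  # fixed interval
        else:
            t += period * (1.0 + rng.uniform(-jitter, jitter))  # aperiodic
    return onsets

smooth = grain_onsets(1.0, 10, "smooth")
chunky = grain_onsets(1.0, 10, "chunky")
```

Every gap in `smooth` is the same 100 ms, so it could lock to Live’s clock; the gaps in `chunky` wander, which is exactly why syncing it to a stable tempo doesn’t really make sense.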
I may change how that works further in the future. Originally I had a bunch of presets you could load, and allowed for the option to create and upload your own ‘profile’ to it, but for the sake of simplicity, I added a ‘Learn’ button instead. It is very picky in that it really wants to have 5 seconds of audio similar to what you will be doing. So you mean being able to ‘learn’ but for each individual parameter? That could be interesting.
This is the kind of thing that would benefit from a much more complex/robust UI though, as the idea of normalization is kind of abstract, with the change in the UI just giving a nod to what’s happening (but isn’t accurate at all). But then that gets a lot more complex, and a lot more out of my skillset too…
Oh snap, that sucks. I recently got Kontakt too, but I’m still in 32-bit for most of my stuff due to a few lingering externals and such.
When the real version comes out it will be 32/64 mac/win.
I guess a klunky workaround would be to use the standalone Kontakt and rewire out to it from Live? (actually can you rewire from 32 to 64? I wouldn’t see why not, but I’ve never tried)
After a blizzard of a marking season at my job, I’m back to working on this.
The new version loads super fast (the smaller corpora are almost instant on my computer), and there’s been a bunch of other improvements, but the main thing I’m revamping now is how corpora are loaded.
So I’ve got a question about what people think makes more sense.
Scenario 1:
There’s a single menu which shows all of the corpora installed on your system (by checking for them at startup), and the ones that are actually loaded into memory get a tickmark next to them.
In order to load a corpus you select it in the menu, it takes a moment to load, and now that item has a tickmark next to it.
You can load external corpora and clear loaded ones from the same menu.
Current/All toggle selects between the currently selected one and all the loaded ones (i.e. ones with tickmarks).
This option is compact and makes more sense with regard to showing what’s available and loaded in the same place. The downside is you have to click on the menu to find that out.
Scenario 2:
There are two separate menus, one for the available corpora and one for loaded corpora. The available menu would work like in Scenario 1 in that it auto-populates from what is on your system.
When you select an item in the available menu it gets added to the loaded menu.
To load external corpora you would do this through the available menu, and to clear currently loaded ones, you would do this from the loaded menu.
Current/All toggle selects between the currently selected one and all the items in the loaded menu.
This was my initial idea for this, but after thinking about it further, the UX seems weird (selecting one menu to populate a different one etc…), but at the same time the first option doesn’t seem perfect either.
So if anyone has any thoughts on which option is better, or a different suggestion altogether, I’m all ears!
Definitely prefer option 1. I think you just need a bit of hint text/legend somehow saying that the checkmark means “loaded”.
After a busy marking period I’ve gotten back to this, and it’s just about done. Still some minor issues with the parameters not always being recalled correctly by the Live set, but it works for the most part. Tons of CPU and under-the-hood optimizations as well, so it should run smoother/leaner overall. And the corpora are all loaded by default in the menu, so it should work straight out of the box.
UI looking tip top, so thanks for input on that front.
Here’s a lil vid before the main visualizer was put in:
Yeah! Looks and sounds great - I can hopefully give it a try over the weekend…
Works like a charm!
Beautifully organized. I’m interested in the load custom corpus option.
Still tidying up that part of the patch, but there are more corpora that you can load from here:
it sounds incredible
I’m loving cccombine. Is TPV2 an active project? I really like the idea of the sections of tpv reinvented as separate max patches like cccombine for production purposes.
I’m still drooling over the idea of tpv in m4l form synced to tempo for live sampling and performing. I love what you do! Keep it up @Rodrigo
Yup, still an active project. Currently nearing the end of the reworking of C-C-Combine, which is also my first M4L device, so lots of learning happened there.
Once that’s done, there will be more M4L bits coming out, as well as a larger TPV2 patch.
(it’s less likely that there will be (sample-accurate) tempo-synced stuff, due to how karma~ is built)