Ok!
Here is an alpha version of the C-C-Combine M4L device.

Here’s the download link:
http://rodrigoconstanzo.com/party/combinev0.8_beta1.zip
And here are the initial corpora to use with it:
http://rodrigoconstanzo.com/party/corpora_v3.zip
(If you are unfamiliar with C-C-Combine or what concatenative synthesis is in general you can have a look at the explanation of the original Max version of it here.)
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////
There are still some UI/UX things to muck about with, and some other existing issues which I’ll mention below, but I just wanted to get a version of this out there for people to start playing around with.
So some initial bits:
*It’s 32-bit only at the moment (though it should run fine on Mac/Win). You shouldn’t need to do anything except open the device.
*Some of the bigger corpora take a long time to load (pinwheel and all). This is normal, and just due to the file sizes involved.
*Automation and mapping should all work fine.
*As far as I know, you will likely need to clear/reload/relearn all of the loaded corpora each time you open Live. This is one of the more complicated parts of the device (and the newest to me, having never made an M4L device before), but it’s a known issue/limitation at the moment.
*I will likely change the CORPUS section so that it auto-detects the included corpora and populates a menu from that (there’s a little sketch of the idea after this list), but for now you have to select the corpora manually.
*Still tidying up the patch that lets you make your own corpora.
*Everything has descriptive annotations, so you can hover over things to find out what they do (quick start guide below).
*There will be a bunch of ‘special corpora’ that will come with the initial release, like a curated “release” of sorts where people create/donate custom sound sets for others to play around with. There’s one included in the download above from Sam Andreae.
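For what it’s worth, the auto-detect idea mentioned above is nothing fancy. Here’s a hypothetical Python sketch (not what the device actually does internally, and the folder name is made up):

```python
# Hypothetical sketch of the planned auto-detect: scan a corpora folder for
# .json analysis files and build menu entries from the file names.
from pathlib import Path

def list_corpora(folder="corpora_v3"):
    """Return the corpus names found in the folder (one .json per corpus)."""
    path = Path(folder)
    if not path.is_dir():
        return []
    return sorted(p.stem for p in path.glob("*.json"))

print(list_corpora())  # whatever corpora happen to be in the folder
```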
Things to check out:
- General usability. Does it make sense? Is it confusing? etc…
- General mapping/naming. Do things work, automate, and map/behave as expected?
- Does anything crash or otherwise act funny?
In general I welcome any/all comments/suggestions, from UI/interface/color, to bugs/crashes, to general workflow improvements etc…
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////
Now when it’s released-released I’m going to put it out with a nice blog post explaining shit in detail, some performance videos, a couple of tutorial videos, etc… all trying to help with the concept and how to go about using the patch. But here is a simple ‘getting started’ guide.
- First up is loading a corpus, by pressing the flashing “load corpus” menu at the top of the CORPUS section. Point it to any of the .json files (the corresponding audio file is loaded automatically).
- Once a corpus is loaded, the device is ‘running’. If you play audio through it, you’ll hear it resynthesized by the loaded corpus by default (there’s a rough sketch of the matching idea after this list).
- You can load additional corpora using the same menu, so you can easily switch between them. (You can also select the current corpus, or all loaded corpora, so you can get really interesting results depending on which corpora you choose to load.)
- Most of the variation in the sounding results will come from the PLAY section. Play around with the smooth / chunky toggle, and smear. Pitch and glitch have more extreme impacts on the sound, as do the various envelopes you can apply to the grains.
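(To be clear, none of what follows is how the device is actually built; it’s all Max/MSP under the hood. But if it helps to picture what “resynthesized by the loaded corpus” means, here’s a very rough Python sketch of the concatenative idea, with made-up descriptors and a toy corpus:)

```python
# Rough sketch of the concatenative idea (NOT the device's actual internals):
# describe each incoming grain, find the closest-matching grain in the corpus,
# and play that corpus grain back instead of the input.
import numpy as np

def describe(grain, sr=44100):
    """Crude stand-ins for two descriptors: loudness (RMS) and brightness (spectral centroid)."""
    loudness = float(np.sqrt(np.mean(grain ** 2)))
    spectrum = np.abs(np.fft.rfft(grain))
    freqs = np.fft.rfftfreq(len(grain), 1.0 / sr)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))
    return np.array([loudness, centroid])

def nearest_grain(input_grain, corpus_grains, corpus_descriptors):
    """Return the corpus grain whose descriptors are closest to the input grain's."""
    target = describe(input_grain)
    distances = np.linalg.norm(corpus_descriptors - target, axis=1)
    return corpus_grains[int(np.argmin(distances))]

# toy corpus: a few random grains, analysed up front (the .json files play this role)
corpus_grains = [np.random.randn(1024) * amp for amp in (0.1, 0.5, 1.0)]
corpus_descriptors = np.array([describe(g) for g in corpus_grains])

live_input = np.random.randn(1024) * 0.4      # one grain of incoming audio
matched = nearest_grain(live_input, corpus_grains, corpus_descriptors)
```

Roughly speaking, something like that is happening continuously on the incoming audio, which is why the PLAY section settings change the results so much.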
More advanced bits:
- Once you are playing audio through it (at least 5 seconds’ worth), press the LEARN button in the CORPUS section. This will match the input audio to the corpus so that you can access more of the corpus based on your input (e.g. if you only have high-pitched sounds, you can still trigger low-pitched sounds in the corpus). You can then press the NORM button to enable the normalization (there’s a rough sketch of this after the list).
- The MATCH section tunes the matching engine. You can tweak the select knob to get more or less matching overall.
- The individual descriptor knobs in the MATCH section let you dial in really specific queries, since the underlying code reorders the search based on how the knobs are set. For example, turning a descriptor knob all the way down stops searching for that descriptor altogether, so play with these knobs to find some quirky settings (e.g. centroid and pitch turned all the way down, with loudness/noise turned up 3/4 of the way).
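Again, this isn’t the actual implementation, just a loose Python sketch of what LEARN/NORM and the descriptor knobs are doing conceptually (the descriptor names, ordering, and numbers are all made up): remap the input’s descriptor range onto the corpus’s range, then weight each descriptor in the matching.

```python
# Loose sketch of LEARN/NORM + descriptor weighting (NOT the device's actual internals).
import numpy as np

def learn_stats(descriptor_frames):
    """Gather per-descriptor mean/std from a batch of analysed frames (what LEARN does with ~5s of input)."""
    frames = np.asarray(descriptor_frames)
    return frames.mean(axis=0), frames.std(axis=0) + 1e-9

def normalize(target, input_stats, corpus_stats):
    """Remap an input descriptor frame into the corpus's descriptor range (what NORM enables)."""
    in_mean, in_std = input_stats
    c_mean, c_std = corpus_stats
    return (target - in_mean) / in_std * c_std + c_mean

def weighted_match(target, corpus_descriptors, weights):
    """Weighted nearest neighbour: a knob turned to 0 drops that descriptor from the search."""
    diffs = (corpus_descriptors - target) * weights
    return int(np.argmin(np.linalg.norm(diffs, axis=1)))

# toy data: 4 hypothetical descriptors per grain [loudness, centroid, pitch, noisiness]
rng = np.random.default_rng(0)
corpus_desc = rng.random((200, 4))            # analysed corpus grains
input_desc = rng.random((100, 4)) * 0.3       # a few seconds of quiet, dull input

input_stats = learn_stats(input_desc)         # LEARN on the input
corpus_stats = learn_stats(corpus_desc)       # stats from the corpus analysis
target = normalize(input_desc[0], input_stats, corpus_stats)   # NORM on

weights = np.array([0.75, 0.0, 0.0, 0.75])    # the 'quirky' knob example from above
best = weighted_match(target, corpus_desc, weights)
```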
So yeah, have a play and let me know how you get on.
///////////////////////////////////////////////////////////////////////////////////////////////////////////////////
The main thing that will change is the recalling/reloading of stuff, so that reloading a Live set recalls all the loaded corpora along with the learn/normalization settings. This is a pain in the butt since these are large data sets, but it should be doable.
I do plan on changing the UI a bit more as well, but I’m open to general suggestions/improvements, so let me know what you think.