C-C-Combine (M4L) beta


I shared an early version of this in The Party Van thread, but it’s now ready for primetime (beta testing).

I give you C-C-Combine:

(this zip contains the main device, with 4 built-in corpora, and a ‘Create Corpus’ device, for making your own corpora)

Extra corpora to test with:

Here’s a little demo video of what it looks/sounds like (this is before the main MATCH visualizer was added):

It’s a concatenative synthesis Max for Live device that is a MASSIVE improvement over the Max patch of the same name that I initially shared about 5 years ago (crazy!). The core concepts are the same, so if you’re interested you can have a look at the original post about it, but the current implementation is better in every conceivable way.

It has gone through a bit of testing (special thanks to @Leverkusen and @jasonw22) and iteration, and is now at a point where it’s ready for broader testing.

When it’s done it will be shared openly/freely, come with a manual, tutorial videos, blog post, etc… so for now I’ll just include some text instructions.


So some initial bits:
*It’s 32bit only at the moment (though it should play fine on Mac/Win). This means you will have to use it in 32bit Live and 32bit Max for Live. This will change by the time the full release is out.
*Some of the bigger corpora take a long time to load (pinwheel and all). This is normal, and just due to the file sizes involved.
*Automation and mapping should all work fine.
*The Live set should store/recall everything just fine, including externally loaded corpora.
*Everything has descriptive annotations, so you can hover over things to find out what they do (quick start guide below).
*There are lots of checks and limits built in around CPU usage, but you can still drive it to really high CPU usage (at really fast/smooth settings, with lots of corpora loaded), so be aware of this.

Things to check out:
*General usability. Does it make sense? Is it confusing? etc…
*General mapping/naming. Do things work, automate, and map/behave as expected?
*Does anything crash or otherwise act funny?
*Does the corpus creation device make sense?

In general I welcome any/all comments/suggestions, from UI/interface/color, to bugs/crashes, to general workflow improvements etc…


Here is a simple ‘getting started’ guide.
*There are 4 built-in corpora that you can load via the CORPUS loading section. You can load more via the load custom entry in the corpus select menu (either ones from the additional corpus zip or ones you’ve created).
*Once a corpus is loaded, the device is ‘running’. If you play audio through it, you’ll hear it resynthesized by the loaded corpus, by default.
*You can also select between the current, or all loaded, corpora using the current/all toggle. With this you can get some really interesting results by varying which corpora you load/combine.

*Most of the change in the sounding results will come from the PLAY section. Play around with the smooth / chunky toggle, and smear. Pitch and glitch have more extreme impacts on the sound, as do the various envelopes you can apply to the grains.
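For the curious, the grain envelopes are just windows that shape each grain so it fades in and out without clicks. A tiny Python sketch of the idea (not the device’s actual DSP, just an illustration; the Hann window is my assumption, the device offers several envelope shapes):

```python
import math

def hann(n):
    """Hann window of length n: smooth fade-in/out to avoid clicks at grain edges."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def apply_envelope(grain, env):
    """Multiply each grain sample by the corresponding envelope value."""
    return [s * e for s, e in zip(grain, env)]

# A constant 'grain' of 8 samples, shaped by a Hann envelope:
shaped = apply_envelope([1.0] * 8, hann(8))
```

Different window shapes (Hann, triangle, rectangle, etc…) are what make the grains sound smoother or chunkier.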

More advanced bits:
*Once you are playing audio through it (at least 5 seconds’ worth), select the learn input entry in the normalization section. This will match the input audio to the corpus so that you can access more of the corpus based on your input (e.g. if you only have high-pitched sounds, you can still trigger low-pitched sounds in the corpus). You can then press the NORM button to enable the normalization.
*The MATCH section tunes the matching engine. You can tweak the select knob to get more or less matching overall.
*The individual descriptor knobs in the MATCH section let you dial in really specific queries, since the underlying code will reorder the search based on how the knobs are set. For example, turning a descriptor knob all the way down stops searching on that descriptor, so play with these knobs to find some quirky settings (e.g. centroid and pitch turned all the way down, with loudness/noise turned up 3/4 of the way).
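To give a rough idea of what those MATCH knobs are doing under the hood, here’s a toy Python sketch of a weighted descriptor match (my own illustration, not the device’s actual code; the descriptor names and the simple weighted-distance scheme are assumptions):

```python
def match(target, corpus, weights):
    """Return the corpus entry whose descriptors best match the target.

    target / corpus entries are dicts of descriptor -> value
    (e.g. 'loudness', 'pitch', 'centroid', 'noise').
    weights maps each descriptor to a knob value in 0..1;
    a weight of 0 removes that descriptor from the search entirely,
    mirroring a MATCH knob turned all the way down.
    """
    def distance(entry):
        return sum(
            w * (entry[d] - target[d]) ** 2
            for d, w in weights.items()
            if w > 0
        )
    return min(corpus, key=distance)

corpus = [
    {"loudness": 0.2, "pitch": 0.9, "centroid": 0.8, "noise": 0.1},
    {"loudness": 0.7, "pitch": 0.3, "centroid": 0.4, "noise": 0.6},
]
target = {"loudness": 0.8, "pitch": 0.9, "centroid": 0.8, "noise": 0.5}

# Pitch/centroid knobs down, loudness/noise up 3/4: pitch and centroid
# are ignored, so the "wrong-pitched" but loud/noisy entry wins.
weights = {"loudness": 0.75, "pitch": 0.0, "centroid": 0.0, "noise": 0.75}
best = match(target, corpus, weights)
```

With all four knobs up equally, the first entry (closest in pitch/centroid) would win instead, which is why zeroing knobs produces such different, quirky results.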


So yeah, have a play and let me know how you get on.


So many thank yous, so few words.


Ditto all above (and wow AMAZING). thank you for making this happen. I’ve been fooling around with IRCAM’s CataRT and always wished there was a more elegant/user friendly option. prayers answered.

Any chance this can be run in 64bit Max, or using something like jBridge for 64bit Ableton/Max for Live?


Awesome, glad you guys dig it!

It will be straight-up 32/64bit once it’s done done. It uses 2 externals that will be updated before it’s fully released.


I wish I had more significant feedback to give, but all I can offer is a big thank you. This is a lot of fun to play with. Hoping to use it in a live-coding show this week!


Lots of little updates since the last version I put out, but I’m nearing a proper 1.0 release, so I’ll post that here first once it’s ready (before putting it out via blog/etc…)


Looking forward to 1.0!

One potential bug I’ve found… the width parameter seems to be binary. It changes dramatically when it finally gets to 100%, but I can’t perceive changes to the stereo image going through the rest of the range.


Hmm, as in the value jumps, or the audible perception of it?


The audible perception jumps, but the visual feedback from the knob is smooth.



If you click on the ‘edit’ button, do you get any errors in the Max Console? (there’s nothing critical in the panning patch).

Does the ‘gain’ knob have a similar binary response?

Could you make a little audio recording of you going from 0. to 1. in width?


Here it is working as expected - just with a slightly exponential feel. It seems to make a step forward (or outwards in this case) when I reach fully CW, but there is increasing width over the whole range of the knob.


Here’s a video:


That does sound pretty binary. Does it act/sound that way with other corpora? And with other sound material? (like something more consistent, such as human speech)

Internally, it works by randomly panning each grain it plays back left or right; the ‘width’ parameter then controls whether that’s all collapsed to the center or opened up, but the panner input is only ever a purely L or R signal per grain.
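Roughly, in Python terms (a sketch of the idea, not the actual patch; the equal-power gain curve is my assumption):

```python
import random

def pan_grain(width, rng=random):
    """Return (left_gain, right_gain) for one grain.

    Each grain is randomly assigned hard left or hard right;
    'width' (0..1) then blends that position back toward center:
    width=0 collapses everything to mono, width=1 keeps full L/R.
    """
    side = rng.choice([-1.0, 1.0])   # hard L or hard R per grain
    pos = side * width               # pulled toward center as width drops
    left = ((1.0 - pos) / 2.0) ** 0.5
    right = ((1.0 + pos) / 2.0) ** 0.5
    return left, right
```

If it behaves like this, the width should sound continuous across the whole range, so an audible step only at 100% suggests something else is going on.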


I’m missing something, I’m not able to get resynthesized sound at the “end” of the M4L device…
Any help, please?


Are you using 32bit Live/Max? I don’t know how Live handles it (whether it lets you load a 32bit device into 64bit Live), but it would still pass dry audio if that’s the case.


Ok, thank you Rodrigo:
I’m using 64 bit Live…
Will try to change that tomorrow .


Yup, that’ll be the issue!


early flavor of the next TPV…


chiming in due to extreme excitement for this (and been holding off on the 32 bit downgrade)

any chance the full version is on the near horizon? enthusiastic + godspeed in any case!


Yup, been working on it a lot. Loooots of little changes and improvements since the last one, including a compensation/correction feature (sounds AMAZING), better gain staging, better windows, significantly faster loading and matching times (no pinwheel at all when loading anything), native Push support, and automation looking/working well.

Still working out the main corpus loading menu so you can load/unload individual corpora and still be able to automate everything well.

Current screenshot: