App - The Party Van


Hmm, I’ll make a note of it and take a look, though I can’t really promise a fix for BP since I will likely completely refactor the pattern recording in TPV2.

I haven't tested the pattern super thoroughly, but are you recording a pattern, stopping the loop, and THEN triggering the pattern playback?

What I suspect is happening is that when you send karma~ a play message from a stopped state, it plays from the start of the loop by default.


Ok, been working on the C-C-Combine part of TPV2 pretty hardcore since moving, as it’s relatively lightweight thinking, and as part of that, I’m going to release a standalone C-C-Combine2.0, as a patch, but also as an M4L device!

Here’s a little sneak peek of what it looks/sounds like:

That’s something I’ve wanted to do for a while now, and this would be something relatively easy to implement with, I think.

The functional code is done, and I’m just going through it all and cleaning it up, then I have to polish off the UI and stuff, then look into doing any M4L-specific shit, as that’s all new to me.

Any seasoned M4Lers have any guides or important things to consider that I should know about before getting into that?

Once I do this, I will probably package up a few more of the ‘blocks’ into M4L devices, mainly the Tape and Skipping CD player ‘modes’ thing I posted in this thread a while back.

In the case of all of these, I will post them here first for a quick beta test, especially for C-C-Combine2, as I will probably make a couple of videos and a blog post to go along with it, so it will be a bit before it’s done done and actually released.


As usual, that looks and sounds beautiful, @Rodrigo.
Can’t be of any help with the M4L and, in fact, just recently moved out of Ableton, but happy to help beta testing any Max patches you’d want to break out and have people play with.


This, Sir, is really beautiful and triggers once again my curiosity for TPV2.
Big up from Berlin!


i can certainly test within ableton m4l! this sounds amazing rodrigo!!


Yess! I can’t help with m4l programming but I certainly plan on using these in Live. Thanks for doing the work to make that possible :slight_smile: As with dude, I can do my best to give feedback as these blocks come out.


Cool, I may post a version sooner than later, while I finish hammering out the UI stuff.

The main thing is finding a balance between TPV2’s look and ‘acting like an M4L object’, so there are advanced controls that will unfold, etc… but I don’t want to just use generic live.objects, since that would mean redoing how it looks in TPV2. What you see in the UI actually is live.objects, just dressed up and layered.


Still working on other parts of the UI, and not quite happy with the SMOOTH/CHUNKY toggle*, but the UI for C-C-Combine2 is coming along nicely.

(I’ll probably post what I have of the full UI on here soon, to see what people think)

*Had some really useful feedback from a programmer friend on the opaqueness of a button that only changes words vs this kind of toggle setup. I had never really thought about it that way, but this is more legible as you don’t have to interact with the object in order to see what the options are.

I played with showing the differences in modes iconographically, but couldn’t come up with something satisfying.


Maybe something like this?

Might be a bit confusing vs. the waveforms above. Maybe you can describe a bit about what smooth and chunky does? I might come up with a different metaphor.


Man, that looks much better already! Just using that whole vertical space.

Smooth is faster and periodic grains.
Chunky is slower and aperiodic grains.

I tried playing with visual descriptions of that, but it ended up looking like morse code (". . . . ." and “_ . . _ _”).
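For anyone wanting to poke at the distinction concretely, here’s how I’d sketch the two grain-timing behaviors in Python. The interval values are purely illustrative (not the actual TPV2 settings), and the function name is hypothetical:

```python
import random

def grain_onsets(mode, duration_ms=1000.0, seed=None):
    """Generate grain onset times (ms) for the two modes.

    'smooth': fast, periodic grains (fixed inter-onset interval).
    'chunky': slower, aperiodic grains (randomly jittered intervals).
    Interval values here are illustrative, not TPV2's actual settings.
    """
    rng = random.Random(seed)
    onsets, t = [], 0.0
    while t < duration_ms:
        onsets.append(t)
        if mode == "smooth":
            t += 20.0                      # short, constant interval
        else:  # "chunky"
            t += rng.uniform(40.0, 160.0)  # longer, irregular interval
    return onsets

smooth = grain_onsets("smooth")
chunky = grain_onsets("chunky", seed=1)
print(len(smooth), len(chunky))  # smooth yields many more grains
```

Plotted as tick marks, the smooth sequence is an even dotted line and the chunky one is the irregular “morse code” look.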


I’m really wanting to see peanut butter in that part of the UI.


I mentioned salsa, as a tip of the hat to Peter Blasser, which was rejected. HOWEVER, peanut butter has a better chance as it’s one of Rod’s favorite flavors.


this makes me very happy


might still work if you have a small field of dots and dashes representing each (rather than one line)


I tried something like that, but ‘inside’ the space of a button. I didn’t think of taking it “full width” like @jasonw22 did. That really uses the space well, and gives sufficient weight to a parameter that changes the sound a lot.


I also welcome any input on how to visualise the data in the MATCH section:

In this section I want to display the incoming audio descriptor stream visually, along with what the parameters are controlling.

There are 5 parameters here, with ‘select’ being the macro version of the individual knobs (as in, turning ‘select’ turns all of the smaller knobs), and what’s being controlled is the weight attached to each parameter for matching. So turning the ‘loud’ knob up means that descriptor is given more weight when searching the database.

I’ve tried a few different approaches for how to best visualise this, and thought about a few more, and I don’t necessarily want to add anything flashy that communicates very little (or nothing). Beyond just having the 5 knobs, I want to help communicate what the underlying process of database searching/retrieval is doing.
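To make the ‘select’-as-macro relationship concrete, here’s one way to sketch it in Python. This is a guess at the mapping (a simple scaling); the actual patch may combine the macro and the individual knobs differently, and the function name is hypothetical:

```python
def apply_select(select, knobs):
    """Scale each individual descriptor weight by the 'select' macro.

    `select` and each knob value are assumed to be 0..1; the effective
    weight for a descriptor is its knob value scaled by the macro.
    (Hypothetical mapping -- the patch may combine them differently.)
    """
    return {name: select * value for name, value in knobs.items()}

weights = apply_select(0.5, {"loud": 1.0, "pitch": 0.8, "cent": 0.2, "noise": 0.0})
print(weights)  # {'loud': 0.5, 'pitch': 0.4, 'cent': 0.1, 'noise': 0.0}
```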

So I thought about doing a polygon thing where the distance of each point from the centre corresponds to a descriptor, so the ‘shape’ of the polygon would be representative of the incoming descriptors, something like this:

I haven’t tried it in context, as I’m not handy with jsui, but this doesn’t really communicate what’s important about this section of the patch, in that it is mainly trying to communicate the state of 4 descriptors, and more importantly, the weight/tolerance of the matching algorithm with respect to those parameters. And ideally, over time.
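The polygon geometry itself could be prototyped outside jsui first. A minimal sketch, assuming the descriptor values are normalized to 0..1 (the normalization and the function name are my assumptions):

```python
import math

def radar_polygon(values, radius=100.0):
    """Vertex coordinates for a radar/polygon descriptor display.

    `values` are descriptor values assumed normalized to 0..1 (e.g.
    loud, pitch, cent, noise); each vertex sits on its own axis, at a
    distance from the centre proportional to that descriptor's value.
    """
    n = len(values)
    verts = []
    for i, v in enumerate(values):
        # Spread the axes evenly; offset so axis 0 points straight up
        # in screen coordinates (where y grows downward).
        angle = 2.0 * math.pi * i / n - math.pi / 2.0
        verts.append((v * radius * math.cos(angle),
                      v * radius * math.sin(angle)))
    return verts

# A loud, low-pitched, dark, non-noisy frame:
print(radar_polygon([0.9, 0.2, 0.3, 0.1]))
```

In jsui the same math would feed a `moveto`/`lineto` loop each frame; the Python version is just for checking that the shape reads well before committing to that.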

What could potentially solve the first of those issues is something like this:

Where there’s still the underlying polygon showing the state of the descriptors, but then a separate geometric mesh that gets bigger/smaller around the polygon. That communicates more, but without a time dimension, it would just be a winamp visualiser.

To bring time in, I’d have to devote an axis to it(?), so that brings me back to a scrolling line/multislider type thing:

I could use the center line to show the descriptor value, and then a (lighter) colored bar around that center line, but this gets a bit noisy…

It’s also not straightforward to implement, due to pointscroll/linescroll in multislider looking like shit and not allowing for a transparent background.

I can also approach it very differently, and not visually represent the incoming descriptors, and rather show something more lookup/retrieval oriented or something like that.

So yeah, not really sure how to go about displaying this stuff, so any input welcome! (even if it’s just “have only 5 knobs”).


have you gotten into detailed functionality of match elsewhere? (are there any audio examples previewing its usage?)

I’m re-reading your post ’cause I don’t think I understand its purpose well enough to critique how it’s being represented in the current UI


maybe a silly idea, but something again inspired by Peter Blasser’s nume/deno oscillators might work

I think this is similar to the red polygon you had posted above but different enough to post although it doesn’t do anything with time

loudness - mapped to the X of a shape: quieter sounds have less width, louder sounds have more width. The shape grows out from around the "cent"er point

Pitch - mapped to the Y of a shape: higher pitch has less height, lower pitch has more height. The shape grows out from around the "cent"er point

cent - mapped to the shape’s overall position along an x axis of 20Hz-20kHz (not sure of the range it looks at, or I possibly have the intention of cent wrong), so that the whole shape moves along the frequency range as its cent changes

so far we have what would be a “+” that moves along an axis

noise - could be diamonding the “+”, and by that I mean filling in the 4 quadrants that are empty in the “+”, from the center of the “+” (least noisy) out to the edges (a full diamond, or even filled out to a square perhaps) being most noisy


Not explicitly. The original C-C-Combine page explains the overall idea, but I’ll explain more clearly what’s happening here.

There are a couple of things happening. First, there’s the corpus: a database of pre-analyzed grains, where each grain has a value for loudness, pitch, centroid, and noise (spectral flux, really). The database looks something like this:

0, -30.743082 0. 110.99221 -14.119069;
10, -29.847477 0. 102.78936 -18.857525;
20, -30.850266 0. 106.913391 -14.339486;
30, -34.505981 0. 100.441933 -19.840559;
40, -37.735123 0. 96.140182 -22.903746;
50, -45.966511 0. 90.787933 -27.842623;
60, -38.871426 124.842621 103.400925 -15.41413;
70, -35.512058 0. 105.463936 -14.856118;
80, -31.557777 0. 114.881561 -7.335074;
90, -24.237932 0. 106.483566 -12.101012;
100, -24.299706 0. 101.199158 -17.09067;

So the first number is the position of that grain in milliseconds, and the next 4 numbers are those descriptors.

What the MATCH section is doing is analyzing the incoming live audio for the same 4 descriptors (loudness/pitch/centroid/noise), and then looking for the nearest matching grain in the database. Since it is suuuuper unlikely that there will be a perfectly matching grain, what the select knob (along with the other 4 knobs) does is control the tolerance for each of these parameters.

So if I set the loud knob to max, and turn the other 3 knobs down, it will look for a perfect match on loudness, ignoring the other parameters. Or if I set pitch to max and loud to 75%, and turn the other two down, it will find a grain that is the same exact pitch, that is close in loudness, but the brightness/noisiness of it won’t necessarily match at all.

In a more general sense, the select knob controls how precisely the audio coming out will match the audio going in, with the tradeoff being precision vs amount, in terms of the grains matched.
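Using a few entries from the database dump above, the lookup could be sketched like this in Python. The weighted squared-difference metric is my guess at the idea; the actual patch may weight and normalize the descriptors differently:

```python
def parse_corpus(coll_text):
    """Parse coll-style corpus data: 'index, loud pitch cent noise;'.

    Returns a dict mapping grain position (ms) to its 4 descriptors.
    """
    corpus = {}
    for entry in coll_text.split(";"):
        entry = entry.strip()
        if not entry:
            continue
        index, rest = entry.split(",", 1)
        corpus[int(index)] = [float(x) for x in rest.split()]
    return corpus

def best_match(target, corpus, weights):
    """Find the grain whose descriptors are nearest to `target`.

    Each dimension's squared difference is scaled by its weight, so a
    weight of 0 ignores that descriptor entirely (hypothetical metric).
    """
    def cost(descs):
        return sum(w * (t - d) ** 2 for w, t, d in zip(weights, target, descs))
    return min(corpus, key=lambda pos: cost(corpus[pos]))

data = ("0, -30.743082 0. 110.99221 -14.119069; "
        "10, -29.847477 0. 102.78936 -18.857525; "
        "90, -24.237932 0. 106.483566 -12.101012;")
corpus = parse_corpus(data)
# Match on loudness only: weights are [loud, pitch, cent, noise]
print(best_match([-24.0, 0.0, 0.0, 0.0], corpus, [1.0, 0.0, 0.0, 0.0]))  # 90
```

With the loudness weight at max and the rest at zero, the grain at 90ms wins because its -24.24dB is closest to the incoming -24dB, regardless of how different its centroid or noise values are.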

It would be cool to visualize the descriptors in the corpus too, but since you can load multiple corpora at once, that wouldn’t make much sense. But the data/visualization is used elsewhere in the patch (when you make your own corpus).

This is what the descriptor spread in one of the corpora looks like:


In the context of my previous post, that means visualizing the incoming stream of numbers associated with each descriptor (as in getting a readout, or some kind of display over the grain-by-grain loudness or pitch, etc…) AND then somehow showing how tolerant the MATCH-ing will be. So if the current loudness is -34.553dB, I want to allow anything between -24.553dB and -44.553dB.
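That ±10dB loudness window could be expressed as a simple band filter over the corpus. How the knob value maps to a dB width is an assumption here, as is the function name:

```python
def within_tolerance(current, tolerance, corpus, descriptor_index):
    """Return grain positions whose descriptor lies within the band.

    E.g. with current loudness -34.553 dB and tolerance 10 dB, any grain
    between -44.553 and -24.553 dB is an acceptable match (the knob-to-dB
    mapping here is hypothetical).
    """
    lo, hi = current - tolerance, current + tolerance
    return [pos for pos, descs in corpus.items()
            if lo <= descs[descriptor_index] <= hi]

# Loudness values only, from three grains in the earlier database dump:
corpus = {0: [-30.743082], 50: [-45.966511], 90: [-24.237932]}
print(within_tolerance(-34.553, 10.0, corpus, 0))  # [0] -> only -30.74 fits
```

Visually, that band is exactly the “(lighter) colored bar around the center line” idea: the line is the incoming descriptor value, the bar width is the tolerance.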


Is there a time axis in the descriptor spread charts? So the match visualization, if it displayed a time dimension, would have a similar shape for each of the four descriptors?

How might having that visible time series influence your playing, over the radar graph you were describing or @wednesdayayay’s intriguing visualization idea?


Alternatively, could make each one a bit tall and narrow. Wider and evenly spaced for smooth, narrower/irregular and unevenly spaced for chunky.

Kinda tight on the bus right now, but might have more elbow room on the train, in which case I’ll pull my laptop out…

Happy Monday!