Dear all

The Fluid Corpus Manipulation project may already be on some of y’all’s radar, because @Rodrigo posted a few good words about it. To make a long story short, we have various machine listening and machine learning tools to help tackle programmatic sound-bank mining in Max and SuperCollider (and Pd in progress), on macOS, Windows and Linux, with all the code open source…

As part of the project design, we commissioned works to get first-user feedback on interface decisions, directions, and many other things. Rodrigo was part of the first cohort (you can see who else was on the roster here).

Now, in early July we have 8 more pieces coming. We asked all composers to tell us how they (ab)used the tools. You can follow the saga here and I’ll post one per week, which leads us to the gigs. This week, Hans Tutschku explains his approach to polyphonic piano processing, informed by 30 years of practice.

I hope you enjoy!



Dear all

I didn’t want to pollute the forum with updates, but today is the fourth geeky video presentation, and there are 4 more to come. Here is the publication schedule:

18 May: Hans Tutschku
25 May: (yours truly)
1 June: Sam Pluta
8 June: Richard Devine
15 June: Owen Green
22 June: Alex Harker
29 June: Gerard Roma
6 July: Alice Eldridge

All here:

The gigs are live on the 7th, 8th and 9th of July, with 2 keynotes on the 10th. I’ll post about them when I have the official link from the festival.

I hope you enjoy! I’ve certainly been inspired by them!


This looks absolutely incredible - and utterly fascinating. Thank you for sharing!


Thanks! As I was saying elsewhere when someone asked if there were any tutorials:

The help files and examples are a starting point for now, and the forum too… but the project is dedicated full-time for the next year to designing more enabling material, so more people can (ab)use the tools towards a more inclusive and subversive programmatic data mining. So I’d say: try now, and if the learning curve is too steep, either ask on the forum or wait for the next versions with more inviting knowledge exchange.


The FluCoMa team is devising online tutorials. The Max ones are up now (MLP classifier and MLP regressor), and we plan to have them in SuperCollider and Pure Data too.

It might be of interest to some people here? I wouldn’t want to pollute this forum with ‘promotional’ stuff, but this is a fun, free, relaxed opportunity, so I’ll post, and maybe add more stuff if there are a few interested people?


I’d be up for it. Ultimately depends on how much time I’ve got at the time.


Yes definitely. I can find my way around with the package, but some more in-depth stuff would be great.


I’d happily do testing for Pure Data!


OK, for now we have Max, but with Pd and SC patches attached. The SC ones are next: the exact same tutorial in SC. There is also a draft of a dimension-reduction one in Max coming. Feedback welcome here, or on discourse.flucoma.org

In the pipeline there is also a slightly more advanced ‘how to train/tame your MLP (meta)parameters’ one, which I reckon would be good for stage 2.

warning: squeely oboe polyphonics vs trombone debate:

warning: feedback fm farty sounds included

More soon.



I have interest in testing, but time will be scarce for a few weeks.
Otherwise, yes to all of this, especially the Max + SC tutorials.

+1 for a SuperCollider tutorial!


Looks very interesting (just watched the ‘squeely oboe’ tutorial).
I’ll download the Max package to give it a try and patch along with the tutorials soon, hopefully today.
Am I reading you correctly that you would prefer feedback on your own Discourse? Or is there maybe some value for you in seeing what thoughts and/or discussions develop over here?


wherever… for me, anywhere good ideas fly by is good :smiley: There is an SC version of the same task being rendered now, so I’ll post it here too.

Here we go, as promised, classifier in SuperCollider:


I went through the Max ‘controlling a synth’ tutorial this afternoon.

First of all: my compliments! I found it to be a really clear, well-paced, well-laid-out tutorial. I played the whole thing once and then a second time while patching along, which was easy to do. I had to pause the video every now and then to catch up, but there weren’t any parts where I needed to rewind in order to understand what I was doing.
I also think it was very helpful, in that I feel I now have a decent grasp of how the externals need to be set up and how they work together. I’m sure I could have figured that out eventually from looking at examples and help patches, but the tutorial made learning this part of using the package both easier and quicker.

Of course, there are some things you gloss over, like the arguments for the fluid.mlpclassifier, but I guess that makes sense for a beginner tutorial.

The only thing that raised some questions for me was the retraining of the model on the same data points. As someone with only superficial knowledge of machine learning algorithms (I did part of Andrew Ng’s Coursera course some years back), this remains somewhat counterintuitive. How can the model really “get better” if you’re not actually giving it new input? What does “better” mean in this case, and how can we best think about it? Also, how does “better” in modelling terms relate to “better” in terms of what a musician is aiming for? In the use case you’re presenting here – morphing between snapshots of parameter values – there could be a number of criteria a musician might have for whether the model performs well: smoothness of the morphing, whether or not the original snapshots can be recalled exactly when morphing, whether or not the parameter values exceed the min and max values of the snapshots, etc.

I played around with this a little bit, to see if I could figure out what difference the extra retraining (and thus lower error values) makes in practice. It’s not so easy to get a handle on. With less training, parameter values seemed to be more averaged out, but apart from that…
As a side note here: for these explorations I did change the synth to a very basic additive synth patch (with the parameters being the relative strength of the individual partials). This made it easier to hear what was going on. I actually quite liked the farty synth in the original patcher, but it is quite chaotic and quite sensitive, which makes it a bit harder to hear what’s actually happening with the parameter changes.

Hope this is in some way useful for you. Looking forward to other tutorials.


Ted is great indeed :slight_smile: (it is not me in the video) Thanks for the props, I’ve forwarded.

Indeed, that is the plan: this is a teaser, and there will be a tutorial on what the parameters are, how to think about them musically, with further links to more mathy explanations should people want to go that way.

That is a good point, and this question is gold. I’ll make sure we explain this, but for now the next paragraph should cover it. If that is not enough, try this amazing two-part video explanation: But what is a neural network? | Chapter 1, Deep learning - YouTube

The machine gets better at being less wrong. The process is a bit dumb but works well: you start with random settings for your network’s states (parameters), and you check how wrong the output is against the target for each input you give it (for A->1 and B->2 in your training set, you run inputs A and B and check what answers you get). Depending on how wrong it is, you correct in the inverse direction.

The best example is bow-and-arrow shooting. The first arrow helps you gauge the strength, the distance, the wind: depending on how wrong you are (how far you are from the target), you correct accordingly: too far and you’ll relax tension, too far left and you’ll pull right, etc.

So training on the same set helps the network build a model where, when you feed in a known input, you get the known output, and which should then (in theory, with many caveats pending) behave sensibly for an unknown input.
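To make that concrete: here is a minimal sketch of the idea in plain Python (an illustration of gradient-descent-style error correction, not FluCoMa code and not the MLP objects themselves). It fits the same two input->output pairs over and over; each pass nudges the parameters against the error, so the loss keeps shrinking even though no new data ever arrives.

```python
# Toy training set standing in for A -> 1, B -> 2
data = [(0.2, 1.0), (0.8, 2.0)]

w, b = 0.0, 0.0   # arbitrary starting parameters ("random settings")
lr = 0.5          # learning rate: how hard we correct each time

def loss():
    # mean squared error of the model y = w*x + b on the training set
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

losses = [loss()]
for epoch in range(200):          # many passes over the SAME data
    for x, y in data:
        err = (w * x + b) - y     # how wrong is the output?
        w -= lr * err * x         # correct in the inverse direction
        b -= lr * err
    losses.append(loss())

print(f"loss before: {losses[0]:.3f}  after: {losses[-1]:.6f}")
```

The model just gets "less wrong" on what it already knows, which is exactly why the reported error keeps dropping when you hit "fit" again on the same points.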

Does that help?

That is the whole project’s argument: this answer is temporal, local, and personal. We offer tools and thoughts to show how subjective all these assessment criteria are in musicking with technology, and how the answers/models/instruments you will get should pollute/inspire/transform your musical questions.

Thanks! That is my synth :slight_smile: It is chaotic and unwieldy exactly for that reason. You could explore a very small subset of its sonic space, and if you find it musically performative, save the state of the neural net as a ‘performing preset’. You can do that many times and get many performable smooth spaces from a single patch, which is fun. This is roughly what Sam Pluta is doing in his piece with a much more complex synth: Traces of Fluid Manipulations: Sam Pluta - YouTube

Incredible! I hope my answers are too!


Hi @tremblap!

Thanks a lot for sharing this. I have always wanted to understand machine learning / “real world” AI on a practical level but never found the right source.

I started watching the tutorial/streaming that you posted below, amazing! :exploding_head:


This is great! I’ve been working with machine learning in Max for a bit now, really looking forward to checking out your implementations and tutorials!


Great, please share your findings! We’re taking a slightly more open-box approach than other solutions, to enable more mobile experiments, but we know that it might not be the best workflow for every setting. I take a very ecosystemic perspective on all this: what matters to me are the art and the questions that a community of (ab)users of such tools will bring to the fore!


Yes, very helpful, indeed, thank you! The archery metaphor is insightful: good image to keep in mind.
Thanks for the links, also. I’m a big fan of the 3Blue1Brown videos, so I’ll definitely watch those (need to find some distraction-free time, though, these are generally not for casual viewing).
Watched and enjoyed the Sam Pluta video (some good farty and R2D2-y sounds there as well). Having several distinct models trained on different training sets, and being able to switch between those models during a performance, is great! I was thinking this might be a great topic for a future tutorial: showing how to set that up in Max/SuperCollider, and talking about some approaches to finding interesting ‘pockets’ in parameter space and then training the model.