These look brilliant, thank you! I’m massively excited about it all, but I also don’t really know what to do with the excitement!

EDIT: having read through the Morphagene thread, it appears there is an M4L device which already does some very interesting Morphagene-like functions. Furthermore, the people behind it are involved in some of my favourite iOS apps, which means they have my attention and trust immediately!


Max and Max/MSP are pretty much the same thing; technically, MSP refers to the part of Max that deals with audio signals. Max For Live allows Max patches to be packaged as devices that can be used within Live: they become units that can be added to effect chains and look (more or less) just like the native Live effects. They can also do some cool stuff that native Live effects don’t, like launching clips, controlling parameters within Live, and other fancy things. The only difference from standard Max patches is that Live requires them to work as either MIDI instruments, MIDI effects, or audio effects, which dictates the set of inputs and outputs the patch has. The size of the interface is also restricted by the standard device height. But generally you can convert any standard Max patch to a Max For Live device, provided it works appropriately as a MIDI instrument, MIDI effect, or audio effect.


That’s excellent, thank you for taking the time to explain. It’s probably helpful that, at this stage at least, I can’t completely wrap my head around objects which are neither MIDI nor audio, so that doesn’t seem like a limitation I can even conceive of, whereas being able to add functionality to Live is something I can absolutely appreciate. In my head, M4L is basically a way to access more devices like the custom Puremagnetik effects racks I have, for example, so it’s great to hear that I’m not massively off the mark on that one.

In theory, can Max For Live be used to make a hardware controller interact with Live in a different way? For example, I have a Native Instruments Maschine MK1 which I’ve always loved as a controller but never really used as more than a set of drum pads. Would there be a way of utilising the built-in screens via M4L (or full Max MSP) to allow them to reflect changes being made, for instance?

Whether Max can let a controller do new things depends on what kind of control information can be exchanged with the device, which is decided by the manufacturer of the device. I know nothing about Maschine, but if it has an API (Application Programming Interface, essentially a protocol dictating how to get data in and out of it) that allows accessing LEDs and displays you should be able to make something interesting with it.


I shall investigate! On one hand, I can imagine that Native Instruments might guard something like an API and not release it to the public, but on the other hand, I surely can’t be one of the first people to want to do this.

Maschine has a User Mode which transforms it into a generic MIDI controller like Push. You can assign the encoders and pads in Live or Max. There might be a CC map available somewhere, and (I’m not sure) I think I remember there is a program that lets you reassign the CC and/or MIDI notes per encoder. AFAIK, it’s not possible to access the LEDs and display. It’s not possible on Push either, sadly… especially since Ableton and Cycling ’74 (who make Max) are almost the same company. I wish Push 2 had better integration with Max, but that’s another topic.
Something else: according to Native Instruments, the Maschine Mk1 doesn’t work on macOS Catalina. I don’t know which OS you’re using; that’s just FYI :wink:


Thanks - that’s a shame about the displays, but I’m not exactly surprised. As much as anything else, it’s a really well-made piece of hardware that, despite being “outdated” (by NI’s thinking) thanks to subsequent versions, is still really impressive. It feels a bit irresponsible to consign it to the dustier recesses of my shelves just because it’s not the latest thing. You can assign CC values via the Controller Editor though, so it’s far from useless… especially as I’m on PC :wink:


Yes, I still have my Maschine Mk1. I broke an encoder a long time ago and should repair it; I really liked the pads on this hardware. Considering the price of these devices, it’s actually sad that the displays are dedicated solely to the factory software.

About Max: I think they have improved MIDI mapping in the latest version (it was already quite easy to map MIDI controllers in previous versions, but it’s even better now), and in Live it’s always been easy, of course :slight_smile:


I agree completely. I actually got it to use with the MPC 2.0 software because I liked the Maschine pads so much more than the ones on any of the Akai controllers, but ultimately I just couldn’t get along with the MPC software (and should probably sell it one of these days, as it sits unauthorized on my iLok account).


A question about Jitter and shaders:

I want to learn OpenGL in depth, and especially GLSL, so I can create my own shaders and use them with Jitter. I have the impression that learning GLSL is essential in order to achieve what I want to do graphically with Jitter :wink:

Would you recommend the official books (the Red Book, the Orange Book, or the OpenGL SuperBible)? And would you recommend buying the most recent edition of these OpenGL books (covering 4.5), or a second-hand/older edition, considering OpenGL 3 is the latest OpenGL version supported (in beta) in Max 8? I don’t know how much differs between OpenGL 3 and 4.5 in terms of syntax etc., which is why I’m asking. (For example, I recently learned the latest JavaScript from Eloquent JavaScript, but Max uses an older version and the differences are sometimes noticeable ^^)

Or would you recommend a source other than a book for learning GLSL?

Thanks.

Actually David Butler has written a Max external that allows sending Jitter matrices to the Push 2 display.


I found that learnopengl.com was a good resource for learning through tutorials. Once you understand the basics of the OpenGL state machine (actually you might not need that for Jitter), the way vertex and fragment shaders operate, and the way all of these things communicate together, you’re set to create just about anything, given enough tinkering.
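
If it helps to see the moving parts together, here’s a minimal sketch of a vertex/fragment shader pair rendered offscreen. I’m using Python’s moderngl package as the host purely for illustration (an assumption on my part; in Jitter you’d wrap the GLSL in a .jxs file for jit.gl.shader or jit.gl.slab instead), but the GLSL concepts are the same:

```python
import numpy as np
import moderngl  # assumed installed: pip install moderngl

# Offscreen GL context; no window required.
ctx = moderngl.create_standalone_context()

# A vertex shader positions geometry; a fragment shader colours each pixel.
# Data flows from the vertex to the fragment stage through the "uv" varying.
prog = ctx.program(
    vertex_shader="""
        #version 330
        in vec2 in_pos;
        out vec2 uv;
        void main() {
            uv = in_pos * 0.5 + 0.5;             // map clip space to 0..1
            gl_Position = vec4(in_pos, 0.0, 1.0);
        }
    """,
    fragment_shader="""
        #version 330
        in vec2 uv;
        out vec4 frag_color;
        void main() {
            frag_color = vec4(uv, 0.0, 1.0);     // simple UV gradient
        }
    """,
)

# One oversized triangle that covers the whole viewport.
vbo = ctx.buffer(np.array([-1, -1, 3, -1, -1, 3], dtype="f4").tobytes())
vao = ctx.simple_vertex_array(prog, vbo, "in_pos")

# Render one frame into a 256x256 offscreen framebuffer and read it back.
fbo = ctx.simple_framebuffer((256, 256))
fbo.use()
vao.render(moderngl.TRIANGLES)
pixels = fbo.read(components=3)  # raw RGB bytes
```

For Jitter-style effects, the fragment shader is where nearly all of the action is; the vertex shader usually just passes the geometry through.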


Oh! I didn’t know about this external for the Push display; that’s good news, thanks a lot. I will visit learnopengl.com tomorrow; hopefully the sections about fragment shaders, geometry shaders, etc. cover what I want to do. Thanks again! After posting the previous message I also found thebookofshaders.com: let’s learn GLSL!


I’m curious as to whether anyone here has attempted to measure acoustic indices (Acoustic Complexity Index, Acoustic Diversity Index, etc.) using Max. If that sounds unfamiliar, it’s basically a means of consolidating acoustic information (for example, a field recording) and deriving acoustic indices for data-set analysis.

A prime example would be taking a one-hour field recording and running it through an algorithm to count the number of vocalisations from a given species of bird.

I made some rudimentary attempts at patches, using much shorter audio samples stored in a buffer~ and a combination of biquad~ filters to narrow down the frequency range of a vocalisation (and/or the loudest parts of its constituent frequencies). This would then go through a peakamp~ set at a given threshold to trigger a bang every time an event is picked up. From there, a counter, a list dump, etc.

The problem with this approach is that in order to get the desired result, I would need to play the buffer~ through every time. This is fine for 2-3 minutes, but certainly not an hour or longer!

So, first question: has anybody attempted to run similar processes in Max?

Secondly: if you have (or even if you haven’t), can you recommend or think of an alternative approach to derive this information that doesn’t rely on playing through buffers in real time (e.g. spectral analysis)?

Thanks in advance!


max is not a great environment for non-realtime feature analysis of large amounts of audio. (which is something i do for work.)

there is a non-realtime driver, so i guess it is doable, but you’d have to structure all your I/O as soundfiles and do all computation with signal objects.

i would look at a scientific computing environment like matlab, R or scipy. all can easily work with audio data. it is likely that you’ll find existing implementations of ACI computation with a quick search…

like this R library
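
for a flavour of the scipy route, here’s a minimal offline sketch of the biquad~ → peakamp~ → counter chain described above (file name, frequency band, window size, and threshold are all placeholders to tune per recording):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

# load the recording as an array; no realtime playback involved
rate, audio = wavfile.read("field_recording.wav")  # placeholder path
if audio.ndim > 1:
    audio = audio.mean(axis=1)                # mix to mono
audio = audio.astype(np.float64)
audio /= max(np.abs(audio).max(), 1e-12)      # normalise to -1..1

# band-pass around the species' vocalisation range (offline biquad~)
sos = butter(4, [2000.0, 6000.0], btype="bandpass", fs=rate, output="sos")
band = sosfilt(sos, audio)

# peak amplitude per 50 ms window (offline peakamp~)
win = int(0.05 * rate)
n = len(band) // win
peaks = np.abs(band[: n * win]).reshape(n, win).max(axis=1)

# count rising edges above the threshold (the bang + counter stage)
threshold = 0.1
above = peaks > threshold
events = int(np.count_nonzero(above[1:] & ~above[:-1]) + above[0])
print(events, "candidate vocalisations")
```

since the whole file is just an array, an hour-long recording processes in seconds rather than in real time.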


Thanks for this! I was actually hoping someone would mention R or Python since I’m currently learning to code for work. I work in finance, but I took up the opportunity to learn this stuff so that I might transfer it into other areas. :slight_smile:

here’s another, smaller R package just for sound ecology


Marvelous - thank you again!

I’ve been working with Ableton’s looper as part of an ongoing performance process.

I have an old Launchpad and was trying to find some way of utilising the ‘User 2’ mode to control the ‘Speed’ of the Looper, e.g. hit a button and it sends a CC message at a fixed value, pitching the loop up a fifth, etc.

Not wanting to get into using Dummy Clips, I found an M4L Snapshot Manager which makes it really simple: just set the parameters in Looper, hit store, and MIDI-map the recall button.

What I’m finding is that everything is fine using the trackpad, but when mapped to the Launchpad the MIDI CC message hangs, as if it’s constantly sending the same message. If I hit another button, it highlights another recall and just fires more MIDI data on top of the other… then it crashes :slight_smile:

The M4L device is old, but it made me think: what’s the best way to store and recall M4L device states?

Bit of a ramble!


Pattr objects with parameter mode enabled (and parameter visibility set to “stored” or “automated and stored”), or native M4L controls, which effectively default to those settings already. The values of those objects will then be saved with the Live set, and can be saved/loaded in Live as a preset.