I think jitter questions are fine here.

1 Like

Also if you (or anyone in the future reading this post) do not feel comfortable publicly asking jitter beginner questions, these days I primarily use Max for Jitter and would be happy to answer questions over DMs :wave:

4 Likes

Thanks @_mark and @ellips_s, I really appreciate it.

Fortunately, I found a solution today to achieve what I wanted to do with Jitter (I read the various object references again and re-watched the excellent tutorials by Armando Gonzales and Federico Foderaro)

(patch posted below).

I was struggling with jit.gl.multiple and I didn’t understand how to assign individual parameters to a gridshape duplicated by jit.gl.multiple. Now that I understand the concept of jit.matrix (and when to use it) a bit better, that opens up more possibilities. Even after reading the official tutorials and watching some great ones on YouTube, some things were not really clear, possibly because there are often many ways to achieve the same thing and each tutorial demonstrates a different method.
I’m still experimenting with jit.gen though…

For info, here’s the very basic thing I was trying to do: creating a grid of shapes and assigning a variable alpha channel to each cell/gridshape :slight_smile:

Code

----------begin_max5_patcher----------
1472.3oc4Zs0bqZCD9YmeEZ3wNtdj3ls6St+NNSGOxfBQoxRTPj3bNy4+d0E
f.1.gXvm3N8E73UW1c+1aRr7iGV3bPbhj6.9Cv2.KV7iGVrvPRSXQ4+W3bDe
JhgyMSygSdUb3Ymk1gjjSRCYI3.3PEUQgjQjx2RI1c14.lm3rr7WveUNsTrL
5IJOYeFIRZmIB4sBtD35En+wy27G2Uv50vKNR4pM2HLnRhzXiLnjqe2asy6y
zJGlo5VRMW9FyHUNNZB+7gGzOVNRkWJRRTKuO0jxkiQ6PgVkzu5430tPm5cO
CejHIY6Ib7AqJA6TyQyhl2iY+eTxPl.ffvgs8iwlCM+rFMrM2sCTw24Wul+L
UthKn4DfG3Ql.K8bUFVvldwA0B1eDKynmzQBChHaMdFdggFuE20e5n.zl+CD
Er8JCBPAeUAAGJjRAeBd5asg+atVM26KxQ+UQFKF7DgwDfcjLbNYejfIx.vU
v0azO25qdh7C5EbpS++QN+HOKJAst8Aa97d+ntgIuaMLkP3iL7en+MH53iVE
rD3qwC0Ode5bkgcCM9WBMMkARVIfThHKbdjxHuPxxop3g2sBKbvooMHunwRz
v3yByFsdYMIJ2RxqlTF4EZ05CqohyTHgTACEYV46TXUVe81HhIY7BpQTrDUF
zGp1vJvaSXf8zEFepPHz.hAv2wuENjXpTC4GRLd31EBWscYiGnlK3PBSD82j
3FIeTl+TBmxSyH4DtDKKUm5giIOhKXx8OJ3xb52MZDRaR6X7GwQjdWLWk+y.
G+YFEyp.DmjLZrfqEhVFGM4J1oBzLfg946JiYFbbZGKV48n.xdFLWojE4GvY
ZaWYlX2pAkBAq8P0qiQdTVNbJkyOCEkhz9GLil7z.q8fPM3wg1ayH46K31Q2
qhvk6ywuzFskXFqLju81eByopfVhjZMAtv5AsUidJOJSvXszW6HuzwHwJ29H
xqzX4SFF0zYPMcZZkSjSsUNllPxksoIwI4sobQHshTwgxv58RxwTlRKZOgV2
JnYLbyjisnOTRx1IJyek9cvolCcYshZGxdxB54hr9tvtSC1aYhKJUr0o8Jtr
pZWkLVTU1nLSy7iR+lxCHXZfzVSATzFS4T6EptNLZycJF8BIRGmD79ioAXU9
Q1icC29wHlWOHleSAoYUflbyZWrdxvxGmyp6En1Dz91r3OF3N0XVOm6XP5zb
3CNCfj6ccP6Zj9VCgdaUO8BCldxNuPq20V6kG7GQvqeOHG5NE4TxAnkqe+EF
cCM9OdquZ+mApKB+JAAJG3NOmcv9VH7feLDA6Fh9+RYQzlP6slfSst3563HK
WmwgBSNxJ7NMxJWcS7iyQwqM1ZW9WcfUvcpahJ2CZZ.j89JH+Il44dtBExYT
GxdxgQ9yQXz6u3CFk22MdMJgd7tAmbQQVTkGP0cJAsUH0U3kTd86A5a00X0y
aT1lOqPrYjx.5VJD5iR8kiDnQCEJWJ.5VIEdiUJ1bKwB2OCVbyjB+QJE2RgH
XrHwkFNQVr8EhC+ZkL29kLzMQxVORIKrW6VIQKk4qwK1lmbC5DqscKdPSEaj
G5S2KJ+aYiXGpUTrUGKXRZJiT12NevtDloio4fTQNUay.4QX0Dr8xamDmkPL
sT.vEwjAak2XZRUn4DNdacGF076nCdvuNXyzBjmvoDvNqShh+opC1.A6rjSY
XtZvZbBrCWHEp4RizSho6KgFOruqeUEkcV7EspiW7wN0T3w0ysRnzAkQhBtr
oVOcG4.6kdC8CpdmviuopWiEwr6sOak0.cdlmRyzkkJZkLrmzMdgslT6jfW3
aLVVe1t1CqaTh554j+X3z54fSqGAm7mKFg9.F07nNWMmZe7tAJHglJi7FAiN
SZZeRgql0Zw28i3LzNoooiiIbq8o5mScDNFcbpNLgiwyDNCNL9izwD9qvwz+
iSQZSVe1Gvglom8gab1GswkevF8+wZb9Gpghy+7g+E.F0LzO
-----------end_max5_patcher-----------

Now I just have to fill the alpha channels with values stored in a dictionary (instead of using jit.noise for the test), but that should be fairly simple; I’ll do that next year ^^

I will probably have more questions about Jitter and jit.gl.multiple soon…

Thanks again :wink:

2 Likes

I’m soon to be upgrading to Live 10 Suite (having never accessed Max in any format before) and I’m very excited to play with M4L devices, except… I have zero concept of where to even start. I don’t even mean “how do I program” questions; I mean I literally don’t understand what the difference between Max, MSP and Max For Live is, and whether something created in one environment will run in another.

Essentially I have this very lofty idea whereby the possibility exists to create something along the lines of what the Morphagene is capable of, but using M4L devices, eventually expanding to include more of the functionality offered by the MN Tape & Microsound Music Machine system.

Wayyyyyyy before all of that, though, I need somewhere to start comprehending the differences, similarities and connections between the different forms of Max.

I’m in the same boat - just got Live 10 Suite last week and I’ve started dipping my toe back into Max (I haven’t used it in almost 20 years so it’s basically new to me). One nifty resource is this free pack by Cycling '74:
https://www.ableton.com/en/packs/max-live-building-tools/
Although the website doesn’t mention it, this pack has lots of Max tutorials you can go through inside of Live. I’ve started using them and they’re really handy. There’s more info at this link:

3 Likes

These look brilliant, thank you! I’m massively excited about it all, but I also don’t really know what to do with the excitement!

EDIT: having read through the Morphagene thread it appears that there is an M4L device which does some very interesting Morphagene-like functions already and furthermore the people behind it are involved in some of my favourite iOS apps, which means they have my attention and trust immediately!

2 Likes

Max and Max MSP are pretty much the same thing — technically MSP refers to the part of Max that deals with audio signals. Max For Live allows Max patches to be made into devices that can be used within Live — they become units that can be added to effect chains, and look (more or less) just like the native Live effects. They can also do some cool stuff that native Live effects can’t, like launching clips, controlling parameters within Live, and other fancy things. The only difference from standard Max patches is that Live requires them to work as either MIDI instruments, MIDI effects, or audio effects, which dictates the set of inputs and outputs the patch has. The size of the interface is also restricted by the standard device height. But generally you can convert any standard Max patch to a Max For Live device, provided it works appropriately as a MIDI instrument, MIDI effect, or audio effect.

5 Likes

That’s excellent, thank you for taking the time to explain. It’s probably helpful that, at this stage at least, I can’t completely wrap my head around objects which are not either MIDI or audio, so that doesn’t seem like a limitation I can conceive of, whereas being able to add extra functionality to Live is something I can absolutely appreciate. In my head, M4L is basically a way to access more devices like the custom Puremagnetik effect racks I have, for example, so it’s great to hear that I’m not massively off the mark on that one.

In theory, can Max For Live be used to make a hardware controller interact with Live in a different way? For example, I have a Native Instruments Maschine MK1 which I’ve always loved as a controller but never really used as more than a set of drum pads. Would there be a way of utilising the built-in screens via M4L (or full Max MSP) to allow them to reflect changes being made, for instance?

Whether Max can let a controller do new things depends on what kind of control information can be exchanged with the device, which is decided by the manufacturer of the device. I know nothing about Maschine, but if it has an API (Application Programming Interface, essentially a protocol dictating how to get data in and out of it) that allows accessing LEDs and displays you should be able to make something interesting with it.
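As a tiny illustration of what “getting data in and out” means at the lowest level (this is not Maschine-specific, and it assumes the device exposes a generic MIDI mode): a MIDI Control Change message is just three bytes, and you can build one by hand:

```python
# Hypothetical sketch, not Maschine-specific: if a controller exposes a
# generic MIDI mode, you talk to it with plain MIDI messages. A Control
# Change message is three bytes: status (0xB0 | channel), controller
# number, value (data bytes are 0-127).
def control_change(channel: int, control: int, value: int) -> bytes:
    if not (0 <= channel < 16 and 0 <= control < 128 and 0 <= value < 128):
        raise ValueError("out of range for a MIDI Control Change")
    return bytes([0xB0 | channel, control, value])

# e.g. set controller 20 on channel 1 to full value:
print(control_change(0, 20, 127).hex())  # → 'b0147f'
```

In practice you would send these bytes through a library like mido or python-rtmidi, or from Max via a midiout object; whether the device actually reacts (for example, by lighting an LED) depends entirely on what the manufacturer exposes, as noted above.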

2 Likes

I shall investigate! On one hand I can imagine that Native Instruments might guard something like an API and not release it to the public, but on the other hand I surely can’t be one of the first people to want to do this.

Maschine has a User Mode which turns it into a generic MIDI controller, like Push. You can assign the encoders and pads in Live or Max. There might be a CC map available somewhere, and (I’m not sure, but) I think I remember there being a program that lets you re-assign the CC and/or MIDI notes per encoder. AFAIK, it’s not possible to access the LEDs and display. It’s not possible on Push either, sadly… especially surprising since Ableton and Cycling ’74 are almost the same company. I wish Push 2 had better integration with Max, but that’s another topic.
Something else: according to Native Instruments, the Maschine Mk1 doesn’t work on macOS Catalina. I don’t know which OS you’re using. That’s just FYI :wink:

1 Like

Thanks - that’s a shame about the displays but I’m not exactly surprised. As much as anything else it’s a really well made piece of hardware that, despite being “outdated” (by NI’s thinking) thanks to subsequent versions, is still really impressive. It feels a bit irresponsible to consign it to the dustier recesses of my shelves just because it’s not the latest thing. You can assign CC values via the Controller Editor though so it’s far from useless… especially as I’m on PC :wink:

1 Like

Yes, I still have my Maschine Mk1; I broke an encoder a long time ago but I should repair it, plus I really liked the pads on this hardware. Considering the price of these devices, it’s actually sad that the displays are dedicated only to the factory software.

About Max, I think they have improved MIDI mapping in the latest version (it was already quite easy to map MIDI controllers in previous versions, but it’s even better now), and in Live it’s always been easy of course :slight_smile:

1 Like

I agree completely. I actually got it to use with the MPC 2.0 software because I liked the Maschine pads so much more than the ones on any of the Akai controllers, but ultimately I just couldn’t get along with the MPC software (and should probably sell it one of these days, as it sits unauthorized on my iLok account).

1 Like

A question about Jitter and shaders:

I want to learn OpenGL in depth, and especially GLSL, to create my own shaders and use them with Jitter. I have the impression that learning GLSL is essential in order to achieve what I want to do graphically with Jitter :wink:

Would you recommend the official books (the Red Book, the Orange Book, or the OpenGL SuperBible)? And would you recommend buying the most recent edition of these OpenGL books (4.5), or a second-hand/older edition, considering OpenGL 3 is the latest OpenGL version supported (in beta) in Max 8? I don’t know how much differs between OpenGL 3 and 4.5 in terms of syntax etc., which is why I’m asking. (E.g. I recently learned the latest JS from Eloquent JavaScript, but Max uses an older version and the differences are sometimes noticeable ^^)

Or would you recommend a source other than a book to learn GLSL?

Thanks.

Actually David Butler has written a Max external that allows sending Jitter matrices to the Push 2 display.

3 Likes

I found that learnopengl.com was a good resource for learning through tutorials. Once you understand the basics of the OpenGL state machine (actually you might not need that for Jitter), the way vertex and fragment shaders operate, and the way all of these things communicate together, you’re set to create just about anything, given enough tinkering.

4 Likes

Oh! I didn’t know about this external for the Push display. That’s good news, thanks a lot. I will visit learnopengl.com tomorrow; hopefully the sections about fragment shaders, geometry shaders, etc. cover what I want to do. Thanks again! After posting the previous message I also found thebookofshaders.com: let’s learn GLSL

1 Like

I’m curious as to whether anyone here has attempted to measure acoustic indices (Acoustic Complexity Index, Acoustic Diversity Index, etc.) using Max? If that sounds unfamiliar: basically it’s a means of consolidating acoustic information (for example, a field recording) and deriving acoustic indices for data-set analysis.

A prime example would be taking a one-hour field recording and running it through an algorithm to identify the number of vocalisations from a given species of bird.

I made some rudimentary attempts at patches using much shorter audio samples stored in a buffer~, with a combination of biquad~ filters to narrow down the given frequency range (and/or the loudest parts of its constituent frequencies) of a vocalisation. This would then go through a peakamp~ set at a given threshold to trigger a bang every time an event is picked up; from there, a counter, a list dump, etc.

The problem with this approach is that, in order to get the desired result, I would need to play the buffer~ through in real time every time. This is fine for 2-3 minutes, but certainly not for an hour or longer!

So, first question: has anybody attempted to run similar processes in Max?

Secondly: if you have (or even if you haven’t) can you recommend or think of an alternative approach to derive this information that doesn’t rely on playing through buffers - i.e. spectral analysis, etc?

Thanks in advance!

3 Likes

max is not a great environment for non-realtime feature analysis of large amounts of audio. (which is something i do for work.)

there is a non-realtime driver, so i guess it is doable, but you’d have to structure all your I/O as soundfiles and do all computation with signal objects.

i would look at a scientific computing environment like matlab, R or scipy. all can easily work with audio data. it is likely that you’ll find existing implementations of ACI computation with a quick search…

like this R library

7 Likes