I definitely think the openness of MAX is exciting and appealing. Honestly, it’s always been super intimidating and overwhelming, but slowly but surely I’m settling in on how I might approach it.

I think, as you point out, the biggest variable is latency. I’ve been using MAX primarily to get away from the grid-based sequencing offered by my Octatrack, so I’ve been doing pretty fast-paced/staccato triggering of my modular. I’m about to tinker with running the modular audio into MAX for some filtering and other effects; that way I can totally bypass my Octatrack (which lately I’m just using as a master output and effects hub anyway). If you start doing multiple runs back and forth, I think it’ll get dicey, but that also depends on the type of music you’re making. If you’re doing slower-paced, softer-edged music, I’d bet you’d be fine; if you’re pushing it, I’d just be nervous about committing to a specific approach until you try one out. I’ve had my vector sizes set super small, and a relatively simple patch with (hopefully) a lot of precision is running at about 15% CPU. I’ll be curious how much my CPU usage will spike at that ultra-low vector size once I drop some reverb in the mix.

Much like you, I’ve been pondering more of a fixed architecture for my inputs and outputs. Right now, I’ve been working on a patch with three trigs outputting to three inputs of my Just Friends (in that LPG run mode) and a fourth output sending the loose tempo to my Zularic Repetitor percussion generator. That leaves four outputs for possible quad sound, or two more for other control while still keeping stereo for the master out. Plus, all four inputs can take audio for processing from my mini matrix mixer, which is how I’ve been approaching things since I got the Xiwi routed into my Octatrack. That limitation of ‘only’ four voices has been a big help.

Anyway, curious to hear what others have to say, but the hybrid system is super exciting and it’s fun taking advantage of the openness of MAX and the tactility of the modular. Hope we can share experiences. I know Karl is doing something very different and using audio in MAX to control his modular.

Not sure why that’s not working. Anyway:

One idea on the latency thing is using delays in Max to re-sync everything (since you could calculate the latency). I’m not expecting to ‘perform’ anything, but more to have generative stuff doing the sequencing in Max.


I’d be curious how you’d set something up to measure latency anyway. When it gets to a certain point, I find it hard to detect by just listening. Timer object?

I addressed the latency issue here:

Essentially, one-way communication is fine.

To calculate latency, you need to know your sampling rate (samples per second) and target samples per buffer.


Would you mind expanding on that a bit? I know how to find target samples per buffer in Live, but Max uses “vector” terminology that currently confuses me. It would be great to find an automated way to compensate for latency wherever it crops up.

I’ve been using my ES-8 with CVToolkit mostly, sometimes with Silent Way or Reaktor in Ableton. The idea of getting started with Max has been daunting and I haven’t made the leap yet, but I’m getting increasingly interested in doing so. I was wondering what recommendations (tutorials, resources, etc.) people might have for getting started in Max with the modular in mind? :raised_hands:


That’s a good point. I may even turn up at a shop with my laptop one day. Does anyone know if the London Modular people are on here? They’re my local dealer.

Thanks for sharing your approach. I remember one of the Art+Music+Tech podcasts referring to someone as being an ‘expert explorer’ of a system, logging and finding possibilities. I think that would be the appeal - working out the potential interplay between the two systems.


You can quote any thread in any other thread. Highlight the part you want to quote, click the “Quote” button that appears, then, while the edit pane is still open, navigate to a new post (or any other post), finish your thought, and click reply. The forum software (Discourse) will ask you if you want to reply in the location you’re currently looking at, or in the original thread. You want the new thread, not the old thread.


Cool, I think I found a workaround to do that in a new topic. Thanks Jason!


Yeah - sending a ping through the system and measuring how long it takes to come back. I bet it would be possible using [count~], and then you could get closer to sample accuracy. Maybe even keep it all in MSP.

I’m imagining that if you could achieve that, you could then add up the round-trip time and use [delay~] to re-sync everything at the output…
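Not Max, but the round-trip idea can be sketched in plain Python. This is a hypothetical illustration: `recorded` stands in for whatever capture buffer comes back from the modular, and the threshold check stands in for however [count~] would catch the returning ping.

```python
def round_trip_samples(recorded, threshold=0.5):
    """Return the index of the first sample whose absolute value crosses
    the threshold -- i.e. how many samples the ping took to come back."""
    for i, s in enumerate(recorded):
        if abs(s) >= threshold:
            return i
    return None  # ping never came back


def compensation_ms(recorded, sample_rate=44100, threshold=0.5):
    """Convert the round-trip sample count into a delay time (ms)
    you could hand to something like [delay~] to re-sync the output."""
    n = round_trip_samples(recorded, threshold)
    return None if n is None else n / sample_rate * 1000.0


# Fake capture: silence for 1024 samples, then the ping arrives.
capture = [0.0] * 1024 + [1.0] + [0.0] * 100
print(compensation_ms(capture))  # ~23.2 ms at 44.1 kHz
```

The function names and threshold value are made up for the sketch; in practice you’d measure on the actual recorded signal and probably average several pings.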

Good info on understanding vectors here:


Yep, think of it like this:

A typical sampling rate is 44.1 kHz, meaning 44,100 samples of audio data are processed every second. A typical buffer/vector size is 1024 samples. Processing occurs on blocks of data: it’s much more efficient to run calculations on 1024 samples at a time than on, say, one sample at a time. So what’s happening with latency is that the computer is outputting the previously processed block of samples.

To calculate the rough latency, divide the buffer size by the sampling rate: 1024 / 44100 ≈ 0.02322 seconds.
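As a quick sanity check, that arithmetic in plain Python (the function name is mine; the numbers are from the post):

```python
def buffer_latency_seconds(buffer_size, sample_rate):
    """One block of latency: buffer_size samples at sample_rate samples/sec."""
    return buffer_size / sample_rate


# The numbers from above: a 1024-sample buffer at 44.1 kHz.
print(buffer_latency_seconds(1024, 44100))  # ~0.02322 seconds
```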

You won’t be able to do latency compensation in every scenario, but that will help with a lot of basic patches.


0.02322 seconds ≈ 23 ms of latency?

I believe so. Again, I say rough latency because that’s just the buffer-size latency. Individual plug-ins, certain Max objects, and other processing callbacks can add more latency to the equation. It’s easiest not to attempt perfect calculations and just use a delay control to dial it in by ear.


[quote="sellanraa, post:3, topic:7061"]
I’ve been working on a patch with three trigs outputting to 3 inputs of my Just Friends
[/quote]
I’d like to peek under the hood.

probably a million ways to make this simple trigger output but I’d like to see your method


Glia, you can see my patch in another thread about envelope generation in MAX.


aw nice I did see that pop up but hadn’t read the thread
I’m a week behind on lines stuff

Somehow I missed that OWL now supports running Max’s gen~ code directly on the module.


Is anyone here using the RT Owl?

Seems like it could be super handy for trying experiments.

Is there any reason that karma~ wouldn’t work with signals that change at CV rates rather than audio rates? I’m feeling more drawn to CV loopers than to sequencers as the core of a setup right now…
