I’m coming at this as a long-time Max user, thinking about an approach for combining the two. The idea of a combined system (thereby not giving up the Max stuff I love) is very appealing.
I’m thinking of a system where sound generation (synthesis and sampling) and the final mix/effects are done in Max, with an intermediary phase at the modular. I really like the idea of developing a system where the inputs to, and outputs from, the modular are always the same (e.g. always using outs 1–4 for audio, always using outs 4–6 for CV, etc). You could then have a bank of generator/mix combinations in Max, and if you patched the modular up in a new and interesting way it would be possible to recall older Max patches and have the ins/outs still work as expected.
This is the kind of system I had in mind (adapted from the Make Noise System Concrete):
Another variant swapped Cold Mac and the 3VCA for Veils and Kinks.
I’m trying to focus on interesting analog filtering (as the sound sources will come from Max - hence no oscs etc) and a range of ways to manipulate/generate CV. I’ve no idea if I’m missing anything in terms of essential utilities…
I’m also wondering if it would be possible to have more than one stage back and forth from the modular, or if this would break down because of latency (though a lot of what I do is ambient/soundscape, so maybe not such an issue?). One idea was to build a Phonogene-style patch in Max, you see…
That modulargrid link isn’t working for me at the moment.
Many people with hybrid systems seem to reserve ITB components for “end of chain” (effects, etc) because of latency concerns. I’d love to hear more about actual experiences with this. I would love to be able to use ITB components anywhere in the signal chain. Because even the largest case is always going to bump into the “if only I had another _____” eventually.
One idea on the latency thing is using delays in Max to re-sync everything (since you could calculate the latency). I’m not expecting to ‘perform’ anything, but rather to have generative stuff doing the sequencing in Max.
Would you mind expanding on that a bit? I know how to find the target samples-per-buffer in Live, but Max uses “vector” terminology that currently confuses me. It would be great to find an automated way to compensate for latency wherever it crops up.
I’ve been using my ES-8 with CV Toolkit mostly, sometimes with Silent Way or Reaktor in Ableton. The idea of getting started with Max has been daunting and I haven’t made the leap yet, but I’m getting increasingly interested in doing so. I was wondering what recommendations (tutorials, resources, etc) people might have for getting started in Max with the modular in mind?
That’s a good point. I may even turn up at a shop with my laptop one day. Does anyone know if the London Modular people are on here? They’re my local dealer.
Thanks for sharing your approach. I remember one of the Art+Music+Tech podcasts referring to someone as being an ‘expert explorer’ of a system, logging and finding possibilities. I think that would be the appeal - working out the potential interplay between the two systems.
You can quote any thread in any other thread. Highlight the part you want to quote, click the “Quote” button that appears, then, while the edit pane is still open, navigate to a new post (or any other post), finish your thought, and click reply. The forum software (Discourse) will ask you if you want to reply in the location you’re currently looking at, or in the original thread. You want the new thread, not the old thread.
Yeah - sending a ping through the system and measuring how long it takes to come back. I bet it would be possible using [count~]; that way you could get close to sample accuracy. Maybe even keep it all in MSP.
I’m imagining that if you could achieve that, you could then add up the round-trip time and use [delay~] to re-sync everything at the output…
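The ping idea can be sketched outside Max too. This is a hypothetical illustration, not a working MSP patch: it simulates a round trip by placing an impulse at a known offset in a “recorded” input, then scans for the first sample above a threshold, the way you might scan a buffer~ after sending a click out through the interface. The delay value and threshold are made-up assumptions.

```python
# Hypothetical sketch: estimate round-trip latency by sending an impulse
# out of the interface and finding where it reappears in the recorded input.
# Here the recording is simulated with a known, made-up delay.

def find_impulse_delay(recorded, threshold=0.5):
    """Return the index of the first sample whose magnitude crosses threshold."""
    for i, sample in enumerate(recorded):
        if abs(sample) >= threshold:
            return i
    return None  # no ping found

SAMPLE_RATE = 44100
true_delay = 1024 + 93  # pretend round trip: one vector plus converter latency
recorded = [0.0] * true_delay + [1.0] + [0.0] * 100  # simulated input buffer

delay_samples = find_impulse_delay(recorded)
print(delay_samples)                         # 1117 samples
print(delay_samples / SAMPLE_RATE * 1000.0)  # ~25.3 ms to feed a delay line
```

In a real patch the measured sample count would become the delay time for [delay~] on every other signal, so everything lines back up at the output.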
A typical sampling rate is 44.1k. This means that there are 44,100 samples of audio data processed every second. A typical buffer/vector size is 1024 samples. Processing occurs on blocks of data. It’s much more efficient to run calculations on 1024 samples at a time than, say, one sample at a time. So, what’s happening with latency is that the computer is outputting the previously processed block of samples.
To calculate the rough latency, you would divide the buffer size by the sampling rate: 1024/44100 = 0.0232 seconds, i.e. about 23 ms per block.
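The arithmetic above, as a tiny sketch (the function name is just mine):

```python
# Rough block latency: the time it takes to fill one processing buffer.
def buffer_latency_ms(buffer_size, sample_rate):
    """Latency of one buffer/vector, in milliseconds."""
    return buffer_size / sample_rate * 1000.0

print(buffer_latency_ms(1024, 44100))  # ~23.2 ms
print(buffer_latency_ms(256, 44100))   # ~5.8 ms - smaller vectors, less latency
```

Halving the vector size halves the block latency, which is the usual trade-off against CPU efficiency.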
You won’t be able to do latency compensation in every scenario, but that will help with a lot of basic patches.
I believe so. Again, I say rough latency. That’s just the buffer-size latency. Individual plug-ins, certain Max objects, and additional processing callbacks can add more latency to the equation. It’s easiest not to attempt perfect calculations and just use a delay control to dial it in by ear.
I built something sort of like the Intellijel Shifty in CV Toolkit. It’s really just a series of sample & holds sampling a MN Wogglebug. It’s not a CV recorder, but it’s pretty fun to play around with. I was planning on adapting it to Max so I could use more S&H chains as a sort of shift register / CV looper. I don’t know karma~, so I can’t speak to it, but maybe the S&H approach will get you what you want?
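The S&H-chain idea works like an analog shift register: on every clock, each stage takes over the value the previous stage was holding, and the first stage samples the live input. A minimal sketch of that behavior (the class name and stage count are my own, not from any existing patch):

```python
class ShiftRegisterSH:
    """Chain of sample & holds clocked as a shift register: on each clock,
    held values move one stage down the chain and stage 0 samples the input."""

    def __init__(self, stages=4):
        self.values = [0.0] * stages  # one held value per S&H stage

    def clock(self, live_input):
        # New sample enters stage 0; older values shift along; the last drops off.
        self.values = [live_input] + self.values[:-1]
        return list(self.values)

sr = ShiftRegisterSH(stages=4)
for cv in [1.0, 2.0, 3.0]:
    print(sr.clock(cv))
# [1.0, 0.0, 0.0, 0.0]
# [2.0, 1.0, 0.0, 0.0]
# [3.0, 2.0, 1.0, 0.0]
```

Feed it random CV (the Wogglebug role) and send each stage to a different destination, and you get delayed, de-correlated copies of the same stepped voltage, which is most of what Shifty-style patches are doing.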