M4L :: Designing a MIDI-controlled effect plugin - need help

I’m building an M4L plugin for my Eventide H9 (and yes, I’ll be delighted to share it with the world once I get it working). I’ve got the core functionality working in Max already - it can query the device, display the currently selected preset by name and module type, set its 10 knobs to the current values of the effect parameters, and send/follow the knobs as they are updated, either from the unit’s front panel or directly via the Max interface. So far so good.

I’ll also make it clear that V1 of this plugin is intended as a playback/automation tool only - in other words, I’m not presently designing it to figure out which algorithms you have loaded and control them independently of your preset list. It will just be able to observe the current algo/knob positions, switch to another preset (by preset number), and either set knobs to positions or follow the knobs as you change them on the front panel. The expression pedal and hotswitch settings will also be tracked and can be controlled.

What I’d like to do is turn this into an M4L plugin so that effect parameters can be automated. I’m not sure how best to do this: should loading the file slam the H9 to the preset it was last saved with, and set the knobs from there? Should there be an explicit “sync” action (either to or from the device)? How do I best integrate this with Live’s data storage? How do I connect the Max pots and sliders to the Live datastream/automation params?

I’m wading through the M4L automation docs right now, but it all seems rather disconnected - there doesn’t seem to be a “best practice” guide for this situation, so I’m hoping some of you with more experience can weigh in on a good strategy.

My other question: should this be an audio plugin? A MIDI plugin? Neither? What’s the best “object” to encapsulate this?

I was hoping to do something like the “External Audio Effect” plugin: audio in, sent to some output; MIDI control via a sideband MIDI setting (e.g. a drop-down in the plugin params choosing interface and channel); a handful of knobs, an expression slider, and the left, right, and centre buttons; then audio back in from some other external input, sent out the plugin’s audio outputs onwards to the next device in the chain. I’d expect automation on the knobs to be the main form of control, not a MIDI track or anything. Does this make sense?

I can only speak to the Live storage/automation part. You need to use the live.* versions of the graphical elements - live.dial, live.slider, live.numbox, etc. - instead of the vanilla Max objects. Then you can rename their automation names and set whether they should be automated or only stored.

Oh, and regarding the MIDI question, it will need to be a MIDI plugin, as you’ll have to send Track MIDI data.

But the first question is: have you checked whether this already exists?
At least take a look at this one, which might give you some clues: http://www.maxforlive.com/library/device/2131/eventide-pitchfactor-controller


Good points.

My concern (which may be unfounded) about the Live versions of the controls: when you first insert the plugin, maybe you have a nice patch all set up on the hardware and you want to capture it. If I use the Live objects, will they initialize to some default state, forcing the controls to output those values? If so, I’m concerned that either you’ll lose your preset before capturing it, or that the values will be out of sync with the hardware’s idea of them and you’ll get a CC “jump” when starting playback.

Basically, you need to send SysEx at some point to query the device - “hey, what are you currently loaded with, and what are your knob/expression pedal settings?” - and then you need to send the configuration you want. During playback, my assumption is that the automation would be “absolute”: e.g. if I have chosen patch 42 and knob 3 is at 64, I expect the device to actually be on patch 42 with knob 3 at value 64. But when I first insert the plugin, or before I record automation, I expect it to track and reflect the device’s actual configuration. So that’s my main concern: I don’t want dropping the plugin in to change any settings initially - in fact, I want it to query the device and reflect the actual settings - and I’m not sure whether, when you load the Live set, it should “slam” the saved config or should again re-query the device until playback starts. I’m leaning towards “query on insert, slam on reload”… maybe in the future I can make it an option.

So, with that in mind, is there a way to determine whether the plugin has just been inserted or loaded from a saved set?
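
One idea I’m toying with for telling those cases apart (untested, so treat it as a sketch): give the device a hidden flag parameter whose initial value is 0 and whose current value gets saved with the set, then set it to 1 once the device has loaded. On a fresh insert the flag should come up 0 (the initial value); on a reload of a saved set it should come back as 1 (the saved value). Something like this in a [js], fed by the flag parameter and by live.thisdevice’s bang (the “query”/“slam”/“setflag” messages are just my own routing names, not any real API):

```
// Sketch only: decide between "query the H9" and "slam saved state".
// Left inlet: the value of a hidden "loaded before" flag parameter
// (initial value 0, current value saved with the set).
// Right inlet: bang from live.thisdevice once the device has loaded.

inlets = 2;
outlets = 1;

var loadedBefore = 0;

function msg_int(v) {
    if (inlet == 0) loadedBefore = v;  // flag parameter reporting in
}

function bang() {
    if (inlet != 1) return;            // only act on live.thisdevice
    if (loadedBefore == 0) {
        outlet(0, "query");            // fresh insert: ask the device
    } else {
        outlet(0, "slam");             // reloaded set: push saved state
    }
    outlet(0, "setflag", 1);           // so the next load of this set slams
}
```

Outside the [js] I’d [route query slam setflag], sending the first two to the appropriate subpatchers and the last back into the flag parameter. Whether the flag parameter is guaranteed to be restored before live.thisdevice bangs is exactly the kind of thing I’d want to verify first.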


As for being on a MIDI track, it’s not really necessary: I can use the imp.midi devices (https://www.theimpersonalstereo.com/blog/blog/2016/3/impmidi-cross) to access MIDI connections that are not directly in/out for the plugin. Thus it can, in fact, be a MIDI-controlled audio processor. To me, this makes more conceptual sense, since I don’t want to store MIDI clips of notes or whatever; I want to store automation (e.g. on a bus or on top of the channel’s other data, which is likely to be audio). This makes it more flexible than requiring it to sit on a MIDI track separate from the audio it’s processing.


As for searching, I didn’t find that plugin, but it’s different enough from what I’m trying to do that I think we’re not crossing paths too much. Mine uses the native SysEx to query the device and read the algorithm (which isn’t supported in the same way as on a Factor pedal) and the preset name directly from the config - this information isn’t transmitted by the Program Change / Bank data. Mine also listens for preset changes and then re-queries the native state of the device to resync the knobs and display the correct “Factor” box that the algorithm belongs to (e.g. if you switch from a TimeFactor algo to a Space algo, the colours of the device change, the names of the knobs change, etc.). That said, I’ll take theirs apart and see if I can learn anything useful from it, thanks!

I’m close to releasing the beta for this plugin, and I’d like to thank @chapelierfou for the pointers!

I’d like to record what I learned in case it helps anybody else:

This document (https://docs.cycling74.com/max7/vignettes/live_parameters) was most helpful, but still confusing at first.

  1. pattr/pattrstorage/autopattr are not necessary in Live, but neither is using the live.* objects. In the M4L environment, any object can store state to Live if you add a pattr object bound to it and check the “Parameter Mode Enable” box in the inspector. Basically, M4L has a sort of “autopattr” built in, which slurps up all of the objects with parameter mode enabled. It also works in a somewhat different way with pattr/autopattr/pattrstorage, but that appears to be more for legacy support, or for making it easier to use the same patch code in an M4L device and a standalone at the same time. If you’re just making an M4L patcher, don’t worry about those objects: either use the live.* objects, or slap a pattr in front and bind it with parameter mode enabled.

All objects with this mode enabled make up the “parameters” of the patch, which can easily be viewed as a group in the Parameters window, just as you would with a pattrstorage and autopattr. (For a quick way to list them from js, see the first sketch after this list.)

  2. When you add an M4L effect or instrument to a track, the initial values are set first (if you specified them in the parameter mode options), and then the loadbang fires. Parameters load according to their load order, with 0 being the first to load and higher numbers (8, 10, 20) loading later.

  3. Live saves the current values of the parameters in the Live session file, and will restore those values, rather than the initial values, when the session is loaded. The only time Live uses the initial values is when you add a blank (i.e. not from a preset) copy of the patcher to a track.

  4. Presets are basically just collections of those current values, saved in a sidecar file and loaded (again, in the order given by each parameter’s load-order setting) back into the objects.

  5. Having a MIDI remote object in an audio timeline is super easy with those imp.midi* objects, and it REALLY makes a MIDI-controllable external effect feel like a plugin. It’s remarkable.

  6. Generating pedantically formatted ASCII from a Max patch is a huge pain in the ass. Turning that into legit SysEx - while ensuring that values from literally 50+ control objects are actually present in the message (and not blank or zeroed out), without turning your patch into spaghetti or having a zillion signals pinging everywhere each time a control is tweaked - is even harder. Gates and sequential triggers are your friends! (The second sketch after this list shows the shape of my approach.)

  7. There are bugs - subtle ones - that will force you into really annoying workarounds. For example, I’ve found a case where banging an object to get its value works fine, unless I bang it from a subpatcher using a remote send and send that value on to another subpatcher. Even then it’s reliably broken only in my full patch; it doesn’t break if I simplify it. I’ve traced the signal using breakpoints, and it just disappears as soon as it bangs the numbox, but only in this case. The more complex your patcher is, the more likely you’ll find these.

  8. While it’s nice to let Max handle the complexity of the signal flow, it really pays to block signals from going down complex paths when you don’t need them to at that moment. This can really improve the responsiveness of your device. I’ll reiterate: gates and sequential trigger objects are your friends!

  9. Encapsulation is great, until you need to debug it. Try not to hide too many receivers and senders inside your encapsulations.
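
Two small [js] sketches to make the above concrete. First, on the parameters point (items 1 and 2): once the device has loaded, you can list what actually ended up parameter-enabled using the LiveAPI object that M4L exposes to js. A minimal sketch - bang it from live.thisdevice, since the LiveAPI isn’t available until the device has finished loading:

```
// Print this device's Live-visible parameters to the Max window,
// to sanity-check which objects ended up parameter-enabled.

function bang() {
    var dev = new LiveAPI("this_device");
    var count = dev.getcount("parameters");
    post("device has", count, "parameters\n");
    for (var i = 0; i < count; i++) {
        var p = new LiveAPI("this_device parameters " + i);
        post(i, ":", p.get("name"), "=", p.get("value"), "\n");
    }
}
```

Second, on the SysEx point (item 6): the shape that kept my patch from turning into spaghetti was “collect quietly, emit once” - controls just update a stored array, and a single trigger assembles and sends the whole message in one place. A stripped-down illustration; note that the header bytes and the ASCII payload layout here are placeholders of my own, not the real H9 format:

```
// Collect knob values silently; on "dump", build one sysex message
// and stream its bytes to a [midiout] (which takes raw bytes as ints).

inlets = 1;
outlets = 1;

var knobs = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]; // last known value per knob

function knob(i, v) {      // e.g. the message "knob 3 64" from a gate
    knobs[i] = v;          // no output here - just remember the value
}

function dump() {
    var bytes = [0xF0, 0x00, 0x00]; // placeholder header, NOT the H9's
    var ascii = knobs.join(" ");    // pedantically formatted payload
    for (var i = 0; i < ascii.length; i++) {
        bytes.push(ascii.charCodeAt(i));
    }
    bytes.push(0xF7);               // end of sysex
    for (var j = 0; j < bytes.length; j++) {
        outlet(0, bytes[j]);        // raw byte stream -> [midiout]
    }
}
```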

====

So, that’s what I’ve learned so far. I’d be glad to have comments and advice (even/especially contrary ones - with details, of course!) that can help me patch better. And of course, I’ll share the patch soon. (Speaking of which, where’s the proper place to post a beta patch for comment?)
