I was sending to an MPE device.
Re the MPE implementation: I've only tested it for a couple of hours with my Kijimi (external hardware, worked great) and a couple of plugins (Surge didn't work so well; the ROLI one did work fine, Quanta as well). All tested with a Seaboard Rise 25.
I guess what I'm talking about might be a gray area between "multichannel MIDI" and the finalized MPE spec. There are times when multichannel MIDI is useful (even with poly pitchbend and CCs) but you don't need to conform to the MPE spec, because it will be used for specific purposes. A simple example is something like the Vermona PerFOURmer, which has 4 voices that can be addressed on 4 assignable MIDI channels. Each voice can respond to pitchbend and aftertouch, so it's close to the MPE spec, but it can use channel 1 (there is no global channel) and it only has 4 channels. Each voice can have individual pan, pitch, filter, etc., so being able to send specific notes to specific voices is sometimes desirable. I guess in Live the way this would need to be handled is by using 4 different tracks? That's kind of a drag in terms of workflow and editing. There are workarounds, no doubt, but my point is that not allowing manual channel assignments can be limiting.
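To make that concrete, here's a rough sketch of what addressing one specific voice by channel looks like, written in Python with mido rather than anything Live-specific. The port name is just a placeholder for whatever your interface reports.

```python
import mido

# Placeholder port name -- use whatever your MIDI interface shows up as.
out = mido.open_output('MIDI Interface Out')

# Address voice 3 of a synth listening on channels 1-4.
# mido numbers channels 0-15, so hardware channel 3 is channel=2 here.
out.send(mido.Message('note_on', channel=2, note=60, velocity=100))
out.send(mido.Message('pitchwheel', channel=2, pitch=2048))  # bends only that voice
out.send(mido.Message('note_off', channel=2, note=60))
```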
I also often send MIDI out of a DAW into Max/MSP, where I decode the MIDI by channel and can route it as needed. This all gets a bit specific, but lmk if you'd like me to explain.
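For what it's worth, the gist of that channel-based routing looks something like this as a quick Python/mido sketch (rather than a Max patch); the port names are made up, so adjust them to your setup.

```python
import mido

# Placeholder port names.
inport = mido.open_input('From DAW')
outports = {ch: mido.open_output(f'Voice {ch + 1}') for ch in range(4)}

for msg in inport:
    # Channel voice messages (notes, CCs, pitchbend, aftertouch) carry .channel;
    # system messages (clock, sysex) don't, so skip those.
    if hasattr(msg, 'channel') and msg.channel in outports:
        outports[msg.channel].send(msg)
```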
There are a few videos of Live 11 and MPE, as well as the website, showing the features.
If you've seen any in-depth ones that talk about the channel assignments, lmk!
PS: From my testing, Ableton seems to strip incoming MIDI notes of their channel data and reassign each note's channel in ascending order. So the same MIDI clip will have different channel assignments on every playback. This is unfortunate imo.