Well shucks, I had no idea this was even in the works. Are there more technical details available somewhere? It seems kind of lame the whole spec is being developed in private among these large member companies, but I suppose that’s the only way to push something like this through.
Very curious to see how much or how little this deviates from the current spec! (Especially with regard to the recent discussion here re: the usefulness of something like OSC for dealing with events as param clusters, streams, or in other non-note-centric ways…)
This talk clarified a whole lot for me. The basic idea that Mike Kent came up with 2 years ago (MIDI-CI), which makes all the new extensions possible, is pretty clever & straightforward – the new bidirectionality in MIDI-CI is like a handshake devices can do to decide whether all these new features can be used; otherwise they just fall back to MIDI 1.0…
The dude himself:
JSON over the property exchange API is pretty crazy exciting!
…protocol negotiation sounds like it could support even something crazy like OSC over MIDI…
Thanks for the video. This could potentially be a new era for electronic music gear. They are talking about 32-bit resolution (!!!) and much higher bandwidth and clocking rates too. This means MIDI could probably even approach audio rates, and the old argument about the resolution of MIDI vs. continuous CV voltages would also be obsolete. Tight sync would never be a problem again, either.
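To give a sense of what "higher resolution" means in practice: widening a 7-bit MIDI 1.0 value to 32 bits can't just be a left shift, because 127 would no longer map to full scale. A common trick is bit replication. This is only an illustrative sketch (the function name is mine, and the official MIDI 2.0 translation rules treat centered controllers like pitch bend differently):

```c
#include <assert.h>
#include <stdint.h>

/* Upscale a 7-bit MIDI 1.0 value to 32 bits by bit replication:
 * place the 7 source bits at the top of the word, then OR in
 * right-shifted copies until all 32 bits are filled. This maps
 * 0 -> 0x00000000 and 127 -> 0xFFFFFFFF, preserving full scale.
 * Note: a naive scheme like this puts the MIDI center value 64
 * slightly above 0x80000000, which is one reason the official
 * translation rules are more nuanced. */
static uint32_t upscale7to32(uint8_t v7)
{
    uint32_t v = (uint32_t)(v7 & 0x7F) << 25; /* top 7 bits */
    v |= v >> 7;   /* top 14 bits filled */
    v |= v >> 14;  /* top 28 bits filled */
    v |= v >> 28;  /* all 32 bits filled */
    return v;
}
```

The same replication idea works for any source/destination width pair (7-to-16, 14-to-32, etc.), which is handy when a device has to juggle both 1.0 and 2.0 value ranges.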
The downside, of course, is that the simplicity and elegance of the MIDI protocol would also be gone. All this talk about protocol negotiation kind of makes my head spin.
I think the protocol negotiation part will be easily encapsulated into a library or other set of functions, and the ability to debug/diagnose the protocol will still be fairly simple. No, we won’t be reading the actual bytes on the wire as easily (but if it stays more-or-less human-readable ASCII in JSON format, that’s a plus), but since they still translate to direct messages it won’t be hard to chart/plot/observe them discretely. It’s certainly a lot better than trying to decode each manufacturer’s custom sysex for a lot of the extended parameters!
I would like them to settle on a physical transport though - ideally Ethernet or USB - because I see that becoming the next big issue once we need bidirectionality and higher bandwidth.
It is a testament to MIDI that it supports a whole world of use cases that the likes of us have no connection with… But it's sad that 90% of the spec is about those use cases.
Prediction: None of the profile and property exchange parts of the spec (which are the bulk of it) will have any impact on electronic musicians - we’ll never use any of it. Sort of the same way that General MIDI has little impact on us.
The only parts that will get used are the Universal MIDI Packet (UMP) format, the MIDI 2.0 voice messages (higher resolution), and the lovely 8-bit-clean SysEx message.
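To make the UMP part concrete, here's a sketch of packing a MIDI 2.0 Note On into a 64-bit UMP packet, based on my reading of the published layout (message type 0x4 for MIDI 2.0 channel voice, 16-bit velocity in the second word). Treat the field positions and the helper name as illustrative, not authoritative:

```c
#include <assert.h>
#include <stdint.h>

/* A 64-bit UMP packet is two 32-bit words. */
typedef struct { uint32_t word0, word1; } ump64;

/* MIDI 2.0 Note On, as I read the UMP layout:
 *   word0: [msg type 0x4][group][status 0x9 | channel][note][attr type]
 *   word1: [16-bit velocity][16-bit attribute data]
 * Attribute type 0 (none) and attribute data 0 are used here. */
static ump64 ump_note_on(uint8_t group, uint8_t channel,
                         uint8_t note, uint16_t velocity)
{
    ump64 p;
    p.word0 = ((uint32_t)0x4 << 28)                      /* MIDI 2.0 voice */
            | ((uint32_t)(group & 0xF) << 24)            /* UMP group */
            | ((uint32_t)(0x90 | (channel & 0xF)) << 16) /* Note On + channel */
            | ((uint32_t)(note & 0x7F) << 8);            /* note number */
    p.word1 = (uint32_t)velocity << 16;                  /* 16-bit velocity */
    return p;
}
```

Compare that to the 1.0 message: same status nibble and note number, but velocity gets 16 bits instead of 7, and the group nibble lets one stream carry 16 "virtual cables".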
As an implementer, the spec is a pain, especially for small devices: it defines the 4th and 5th encodings of MIDI messages, so implementations will now have to handle:
MIDI 1.0 over serial (the original stream encoding)
MIDI 1.0 over USB (which is a packet encoding)
MIDI 1.0 over BLE (which is a stream encoding, but with timestamps and different running status rules)
MIDI 1.0 over UMP (the new packet format defined by MIDI 2.0, different from USB's packet format, with optional timestamps that differ from BLE's)
MIDI 2.0 over UMP (finally, the actual new messages)
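One way implementers cope with that zoo of encodings is to normalize everything into a single internal event as early as possible. A rough sketch, where the struct and function names are hypothetical, and proper 7-to-16-bit value scaling is omitted for brevity:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical internal event a device might normalize to, so the
 * five wire encodings collapse into one representation downstream. */
typedef struct {
    uint8_t  channel;
    uint8_t  note;
    uint16_t velocity;  /* always 16-bit internally */
} note_event;

/* From a MIDI 1.0 byte stream: 0x9n, note, velocity (all 7-bit data).
 * The 7-bit velocity is shifted into the top of the 16-bit field;
 * a real implementation would use proper upscaling instead. */
static note_event from_midi1(const uint8_t bytes[3])
{
    note_event e;
    e.channel  = bytes[0] & 0x0F;
    e.note     = bytes[1] & 0x7F;
    e.velocity = (uint16_t)(bytes[2] & 0x7F) << 9;
    return e;
}

/* From a MIDI 2.0 UMP 64-bit note-on: velocity is already 16-bit,
 * so no scaling is needed (field positions per my reading of UMP). */
static note_event from_ump(uint32_t word0, uint32_t word1)
{
    note_event e;
    e.channel  = (word0 >> 16) & 0x0F;
    e.note     = (word0 >> 8) & 0x7F;
    e.velocity = (uint16_t)(word1 >> 16);
    return e;
}
```

The USB and BLE framings would each get their own small adapter feeding the same struct; the per-transport quirks (packetization, timestamps, running status) stay quarantined at the edges.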
MIDI 2.0 messages will be a bit of a mixed bag: the extended resolution of data values (velocity, controller, pitch bend, etc.) in 2.0 will be very appreciated, as will the per-note control values. On the other hand, things like per-profile attributes and complex per-note management are likely to get glossed over in many implementations.
Coming up with a uniform API for representing all this, so that applications (and musicians who work with tools like Max, Pd, SuperCollider, etc.) can easily code and manipulate it, is going to be quite a challenge.