I read the entire MIDI 2.0 spec set last night…
It is a testament to MIDI that it supports a whole world of use cases that the likes of us have no connection with… But it's a shame that 90% of the spec is about those use cases.
Prediction: None of the profile and property exchange parts of the spec (which are the bulk of it) will have any impact on electronic musicians - we’ll never use any of it. Sort of the same way that General MIDI has little impact on us.
The only parts that will get used are the Universal MIDI Packet format, the MIDI 2.0 voice messages (higher resolution), and the lovely 8-bit-clean SysEx messages.
As an implementer, the spec is a pain, especially for small devices: it defines the 4th and 5th encodings of MIDI messages, so implementations will now have to handle:
- MIDI 1.0 over serial (the original stream encoding)
- MIDI 1.0 over USB (which is a packet encoding)
- MIDI 1.0 over BLE (which is a stream encoding, but with timestamps and different running status rules)
- MIDI 1.0 over UMP (the new packet format defined by MIDI 2.0; different from USB's packet format, and with optional timestamps that work differently from BLE's)
- MIDI 2.0 over UMP (finally, the actual new messages)
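To make the last two items concrete, here's a sketch of the same Note On packed both ways into UMP words. The field layouts follow the UMP spec (message type 0x2 for MIDI 1.0 channel voice in one 32-bit word; message type 0x4 for MIDI 2.0 channel voice in two words); the helper names are mine, not from any real library.

```c
#include <stdint.h>

/* MIDI 1.0 Note On carried in a UMP packet: message type 0x2,
   one 32-bit word, still only a 7-bit velocity. */
uint32_t ump_midi1_note_on(uint8_t group, uint8_t channel,
                           uint8_t note, uint8_t velocity7)
{
    return (0x2u << 28)
         | ((uint32_t)(group & 0xF) << 24)
         | ((uint32_t)(0x90 | (channel & 0xF)) << 16)
         | ((uint32_t)(note & 0x7F) << 8)
         | (velocity7 & 0x7F);
}

/* MIDI 2.0 Note On: message type 0x4, two 32-bit words, with a
   16-bit velocity and room for a per-note attribute (unused here). */
void ump_midi2_note_on(uint8_t group, uint8_t channel, uint8_t note,
                       uint16_t velocity16, uint32_t out[2])
{
    out[0] = (0x4u << 28)
           | ((uint32_t)(group & 0xF) << 24)
           | (0x9u << 20)
           | ((uint32_t)(channel & 0xF) << 16)
           | ((uint32_t)(note & 0x7F) << 8); /* attribute type = 0 */
    out[1] = (uint32_t)velocity16 << 16;     /* attribute data = 0 */
}
```

Same musical event, two encodings in the same packet stream, and a device still has to speak the serial, USB, and BLE forms on top of these.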
MIDI 2.0 messages will be a bit of a mixed bag: the extended resolution of data values (velocity, controller, pitch bend, etc.) in 2.0 will be very appreciated, as will the per-note control values. On the other hand, things like per-profile attributes and complex per-note management are likely to get glossed over in many implementations.
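That extended resolution also means every implementation ends up translating between 7-bit MIDI 1.0 values and wider MIDI 2.0 fields. A plain left shift can't reach the top of the range (127 << 9 is 0xFE00, not 0xFFFF), so the MIDI 2.0 translation guidance describes a center-preserving bit-repeat scheme instead. A sketch of that idea (function name mine):

```c
#include <stdint.h>

/* Upscale an n-bit MIDI value to a wider field, in the spirit of the
   bit-repeat scheme in the MIDI 2.0 translation guidance: values at or
   below center are simply shifted, so center maps to center; above
   center, the low bits are repeated downward into the new low bits so
   the maximum input maps to the maximum output. */
uint32_t scale_up(uint32_t src, uint8_t src_bits, uint8_t dst_bits)
{
    uint8_t  scale_bits = dst_bits - src_bits;
    uint32_t shifted    = src << scale_bits;
    uint32_t center     = 1u << (src_bits - 1);
    if (src <= center)
        return shifted;

    uint8_t  repeat_bits = src_bits - 1;
    uint32_t repeat      = src & ((1u << repeat_bits) - 1);
    if (scale_bits > repeat_bits)
        repeat <<= scale_bits - repeat_bits;
    else
        repeat >>= repeat_bits - scale_bits;
    while (repeat) {
        shifted |= repeat;     /* fill the new low bits */
        repeat >>= repeat_bits;
    }
    return shifted;
}
```

With this, a 7-bit 127 becomes a 16-bit 0xFFFF and a centered 64 becomes 0x8000, which is exactly the kind of fiddly detail small implementations will have to get right in both directions.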
Coming up with a uniform API for representing all of this, so that applications, and musicians who work with things like Max, Pd, SuperCollider, etc., can easily code and manipulate it, is going to be quite a challenge.