yeah, I think being client/server, there's always room for a discussion on what functionality resides on the server, and what on the client - true for norns (scsynth/lua) as it is for SC (scsynth/sclang).
as you say though, the more functionality on the scsynth side, the more ‘conventions’ (protocols/api) there are - much as the norns lua api is extending.
it’s kind of inevitable if you want reuse, which of course is only one possible model!
which kind of leads to the next ‘question’ - how generic should it be?
you could bake into SC how a particular ‘engine’ responds to (e.g.) a timbre message, and consider this a characteristic of the engine, and how it sounds.
or
you could just have a mapping which says timbre maps to (e.g.) the cutoff parameter, but which the user can change.
the latter at first seems the ‘obvious’ choice, but is it?
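the mapping option could be as small as a lua-side lookup table - a minimal sketch, where the attribute names, parameter names, and the `engine[p](...)` call are all hypothetical, not the actual norns api:

```lua
-- user-editable mapping from mpe attribute to engine parameter
-- (both sides of the table are illustrative names, not a real engine's)
local mpe_map = {
  timbre   = "cutoff",  -- mpe timbre (cc74) -> filter cutoff
  pressure = "amp",     -- channel pressure -> amplitude
}

-- resolve an incoming mpe attribute to an engine parameter name,
-- or nil if the user hasn't mapped it
local function resolve(attr)
  return mpe_map[attr]
end

-- on an incoming timbre message for a voice, something like:
--   local p = resolve("timbre")
--   if p then engine[p](voice, value) end  -- per-engine convention, hypothetical
```

the point being that `mpe_map` is just data, so the user (or a script) can rewrite it at any time without touching the engine.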
one of the principles of SuperCollider is that it is quick to code, so if you want a variation, just code it.
(why build ease of use on top of ‘ease of use’?)
this way you can really ‘tune’ the SC code to make it feel right…
also, the idea that any parameter can be mapped to any mpe attribute is an ideal that cannot be attained, for the very simple reason that not all parameters will be ‘per voice’, or they may be too processor-intensive to realistically alter with continuous control. (so this ‘suitability’ would need to be fed back to lua somehow)
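that ‘suitability’ could be as simple as a per-parameter flag the engine reports back, which the lua side checks before offering a mapping - a sketch with made-up parameter names and metadata, not an existing api:

```lua
-- hypothetical per-parameter metadata, reported from the engine to lua
local param_info = {
  cutoff      = { per_voice = true  },  -- cheap, can follow continuous control
  reverb_size = { per_voice = false },  -- global, too heavy to modulate per voice
}

-- only offer an mpe mapping for parameters flagged as per-voice
local function mappable(name)
  local info = param_info[name]
  return info ~= nil and info.per_voice
end
```

unknown or global parameters simply come back as not mappable, so the ui can grey them out rather than let the user wire up something that can't respond per voice.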
anyway, the ‘best’ solution is not entirely clear to me.
I think my approach (as it's close to what I have now) will be to try the direct osc route, and perhaps initially hard-code the SC engines - as this is also in line with my desire to have SC as a flexible part of the solution, rather than some black box with lots of layers on it that I don't understand
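for the direct osc route, the lua side can talk straight to scsynth with the standard server commands (`/n_set` and udp port 57110 are scsynth conventions; the `osc.send` call is norns-specific, so it's only shown in a comment) - a sketch with a made-up node id and parameter:

```lua
-- scsynth's default udp port
local SCSYNTH = { "localhost", 57110 }

-- build a raw /n_set message: set a named control on a running synth node
-- (node id and parameter name below are hypothetical)
local function n_set(node_id, param, value)
  return "/n_set", { node_id, param, value }
end

-- usage on norns would be something like:
--   local path, args = n_set(1000, "cutoff", 800)
--   osc.send(SCSYNTH, path, args)
```

the appeal here is that there's nothing between the script and the server except the documented osc protocol, which fits the ‘no black box’ goal.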