"If it sounds good, it is good."
– Duke Ellington
I think the logical conclusion is a music AI that has a sense of style.
I'd like to point out that music is so much more than pitch, and that the musical decisions typically associated with quantizers (scales, modes and such) can also be applied to timbre, accent, duration, tempo, etc. A simple approach is to apply pitch CV to, say, envelope decay, so that higher pitches get more and more legato. A more advanced approach is to send a quantizer's trigger out to an envelope that applies a little glissando chirp to an oscillator's aux CV input: new notes get a gliss, repeated notes don't. Run that trigger through a clock divider to get rhythmic variation.
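The new-note-gets-a-gliss idea can be sketched in a few lines. This is a hypothetical illustration, not any real module's API: the function name, the ramp shape and the parameter values are all my own assumptions about one way the patch could behave.

```python
# Hypothetical sketch: emit a short glissando chirp only when the
# quantized note changes, and nothing on repeated notes.

def gliss_events(pitches, ramp_semitones=0.5, ramp_steps=4):
    """For each note, return a list of CV offsets over time:
    a decaying chirp on new notes, a flat 0.0 on repeats."""
    events = []
    prev = None
    for p in pitches:
        if p != prev:
            # New note: ramp from ramp_semitones down toward zero
            ramp = [ramp_semitones * (1 - i / ramp_steps)
                    for i in range(ramp_steps)]
            events.append(ramp)
        else:
            # Repeated note: no chirp
            events.append([0.0])
        prev = p
    return events
```

Feeding the same note twice in a row yields a chirp, then a flat event, which is exactly the new-vs-repeated distinction the trigger-driven envelope gives you in hardware.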
The strength of modular synthesis is that these kinds of constructs are easy to implement. Time-domain decisions can be made with slews, comparators, sample & hold and their brethren (ASRs and the like), and their effects can easily be added, subtracted, multiplied and otherwise mangled.
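Two of those building blocks are simple enough to sketch as plain functions. Again a hypothetical illustration, assuming sample-accurate lists of values rather than real CV:

```python
# Hypothetical sketches of two time-domain building blocks:
# a sample & hold and a per-step slew limiter (glide/portamento).

def sample_and_hold(signal, triggers):
    """Hold the last sampled value; update only where the trigger fires."""
    out, held = [], 0.0
    for x, trig in zip(signal, triggers):
        if trig:
            held = x
        out.append(held)
    return out

def slew(signal, rate):
    """Limit how far the output may move per step toward the input."""
    out, y = [], signal[0]
    for x in signal:
        step = max(-rate, min(rate, x - y))
        y += step
        out.append(y)
    return out
```

Chaining them is where the "mangling" comes in: slew the output of a sample & hold and you get stepped random glides; compare the slewed and unslewed signals and you get a gate that fires while the glide is still in motion.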
But I preach to the choir. 