there’s also the question of whether the error is a salient feature of the overall structure of a given performance. it might be useful to generalize ‘error’ here as (weighted? biased?) noise, i.e. as a banal factor of any analog system (and thus logically trivial). another example might be the imputation of vibrato when modeling a string or wind instrument. vibrato does not usually entail a non-trivial change in pitch-class or timbre, which implies that its structural function is perhaps merely rhetorical rather than systematic or formal. this is demonstrated by the conventional absence of notational specification for things like vibrato in music scores, insofar as such things function on a sub-ornamental level. thus, does one risk anthropomorphization when modeling microlevel rhythmic variability as statistical human performance error? perhaps, by definition, yes. this anticipates the question of whether the ‘human factor’ of musical performance operates at the sub-ornamental or rhetorical level… Randall (1967) made a similar argument about emergent electronic music in response to psychoacousticians and their analysis and modeling of musical performance… I wonder if the same could be applied, in this case, to physicists’ analyses of music…
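if ‘error’ really is reducible to weighted, biased noise, then the model is almost trivially small. a minimal sketch, assuming hypothetical parameter values (the names `humanize`, `sigma_ms`, and `bias_ms` are my own, not anyone’s published model):

```python
import random

def humanize(onsets_ms, sigma_ms=8.0, bias_ms=2.0, seed=None):
    """treat performance 'error' as banal noise: perturb each nominal
    onset time by a Gaussian draw with standard deviation sigma_ms
    (the weighting) and mean bias_ms (the bias, e.g. a tendency to
    play slightly behind the beat). values here are illustrative."""
    rng = random.Random(seed)
    return [t + rng.gauss(bias_ms, sigma_ms) for t in onsets_ms]

# a nominal 16th-note grid at quarter = 60 bpm (250 ms per 16th):
grid = [0.0, 250.0, 500.0, 750.0]
jittered = humanize(grid, seed=1)
```

the point of the sketch is its triviality: two parameters exhaust the model, which is exactly what makes it feel logically sub-ornamental rather than structural.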
there’s also the common conflation of ‘humanization’ with complex division, e.g. ‘un-quantized’ versus ‘quantized’. it’s not a question of ‘humanization’ but rather of complex temporal subdivision: still quantizable, but not necessarily in a manner that treats subdivision as globally symmetrical or uniform (e.g. different sizes of polyrhythmic divisions throughout a composition). unfortunately, many commercial electronic music devices either don’t account for this or do so in a rather obtuse, arbitrary manner (e.g. preemptive division specifications: quantizing ‘up to 32nd-note triplets’ and ignoring any finer or irrational values), perpetuating the aforementioned conflation.
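to make concrete the claim that complex subdivision is still quantizable, here is a hypothetical sketch of snapping onsets to a mixed grid built as the union of several concurrent divisions of a bar, rather than to one uniform step with a preemptive cap; all names and the normalized bar length are assumptions of mine:

```python
from fractions import Fraction

def build_grid(divisions, bar_len=1):
    """union of grid points from several concurrent subdivisions of
    one bar, e.g. divisions=[4, 3]: the bar split into four equal
    parts and, simultaneously, into three. no single uniform step
    is assumed; adding an entry extends the grid instead of capping it."""
    points = set()
    for d in divisions:
        step = Fraction(bar_len, d)
        points.update(step * i for i in range(d + 1))
    return sorted(points)

def quantize(t, grid):
    """snap an onset (as a fraction of the bar) to the nearest grid
    point, whichever subdivision it happens to belong to."""
    return min(grid, key=lambda p: abs(p - Fraction(t)))

mixed = build_grid([4, 3])   # 0, 1/4, 1/3, 1/2, 2/3, 3/4, 1
```

nothing here privileges one subdivision globally: an onset near 1/3 snaps to the triplet point even though the ‘straight’ grid is also present, which is precisely what the ‘up to 32nd-note-triplet’ style of preemptive specification forecloses.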