Stochastic Fractal Linkage

Or computers playing human.

I recently went down a bit of a rabbit hole after being pointed to this paper by Holger Hennig describing a method to “humanize” computer rhythms using a stochastic model of synchronization. I don’t fully grasp the technique as a whole from the paper, but it led me to some broader questions about computer music:

What do you do to humanize your electronic rhythms (especially curious about teletype scripts)? Is such a thing convincing or even possible using programmatic systems? Part of me wants to believe that introducing a convincing method of error or randomness could lead to believable grooves. Another part of me thinks this is the musical equivalent of a “bad TV filter” in video.


This reminded me of this article by James Holden:

It comes with a Max for Live patch designed to add human feeling to MIDI sequences. It’s a “group humanizer”: an effect you can put on multiple tracks and synchronize to a master instance, so that they feel like a band playing as loosely as you like around a general tempo.
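As I understand it, the patch builds on Hennig’s idea that players in a group continually correct toward each other rather than drifting independently. A toy sketch of that coupling idea (this is illustrative only, not the actual patch’s algorithm; all names and numbers are made up):

```python
import random

def coupled_deviations(n_beats, coupling=0.3, noise=5.0, seed=1):
    """Two 'players' whose timing errors (in ms) each partially
    correct toward the other on every beat: a toy version of the
    mutual-synchronization idea behind a group humanizer."""
    rng = random.Random(seed)
    a, b = 0.0, 0.0
    out = []
    for _ in range(n_beats):
        # each player drifts a little on its own...
        a += rng.gauss(0, noise)
        b += rng.gauss(0, noise)
        # ...then corrects part-way toward the other player
        a, b = a - coupling * (a - b), b - coupling * (b - a)
        out.append((a, b))
    return out

# the two tracks wander, but they wander together
devs = coupled_deviations(16)
```

With `coupling=0` you get two independent random walks; crank it toward 0.5 and the players lock together, which is roughly the “as loosely as you would like” knob.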


This is a very interesting topic. @dianus thanks for posting this paper.

On a meta level I start to ask myself why I should use an algorithm to add human feeling to a computer groove. From a composer’s point of view, I choose human players for a human kind of playing and a computer for a non-human kind, or for things that humans can’t play. This gives me a thing or two to reflect on while sitting in my traffic jam. …

2nd thought: How do I get humans to play more like a computer?


Human rhythmic error and inconsistency are not random, though. If you want to model them, introducing randomness is not the way to go.

But I agree with mheton: If you want a human feel, then the easiest way is to play the rhythm in yourself via a controller.


there’s also the question of whether or not the error is a salient feature of the overall structure of a given performance. it might be useful to generalize “error” here as (weighted? biased?) noise, as a banal factor of any analog system (and thus logically trivial). another example might be the imputation of vibrato when modeling a string or wind instrument. vibrato does not usually entail a non-trivial change in pitch-class or timbre, which implies that its structural function is perhaps merely rhetorical rather than systematic or formal. this is demonstrated by the conventional lack of notational specification for things like vibrato in music scores, insofar as such things function on a sub-ornamental level. thus, does one risk anthropomorphization when modeling microlevel rhythmic variability as statistical human performance error? perhaps, by definition, yes. which anticipates the question of whether or not the ‘human factor’ of musical performance operates at the sub-ornamental or rhetorical level… Randall (1967) made a similar argument when discussing emergent electronic music in response to psychoacousticians and their analysis and modeling of musical performance … I wonder if the same could be applied to, in this case, physicists’ analyses of music …

there’s also the common conflation of ‘humanization’ with complex division. e.g. ‘un-quantized’ versus ‘quantized’. it’s not a question of ‘humanization’, but rather a question of complex temporal subdivision—still quantizable, but not necessarily in a manner that treats subdivision as globally symmetrical or uniform (e.g. different sizes of polyrhythmic divisions throughout some composition). unfortunately many commercial electronic music devices either don’t account for this or do so in a rather obtuse, arbitrary manner (e.g. preemptive division specifications: quantizing ‘up to 32nd-note-triplet’, ignoring any further irrational values), perpetuating the aforementioned conflation.
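a rough sketch of what non-uniform quantization could look like: snapping onsets to a grid built from several subdivisions at once (triplets alongside quintuplets, etc.) rather than one global “up to 32nd-note-triplet” resolution. the function names and division choices here are arbitrary, just to make the idea concrete:

```python
def grid(beat_len, divisions):
    """All grid points within one beat for a set of subdivisions,
    e.g. divisions=(3, 5) mixes triplet and quintuplet points."""
    pts = {beat_len * k / d for d in divisions for k in range(d + 1)}
    return sorted(pts)

def quantize(onsets, beat_len, divisions=(3, 4, 5)):
    """Snap each onset (within one beat) to the nearest point of a
    mixed-subdivision grid: still quantized, just not uniformly."""
    g = grid(beat_len, divisions)
    return [min(g, key=lambda p: abs(p - t)) for t in onsets]

# an onset near a quintuplet point snaps to 0.2, not to the
# nearest triplet or sixteenth
quantize([0.21, 0.52], 1.0)  # [0.2, 0.5]
```

nothing stops the `divisions` tuple from changing per beat, which is exactly the non-global, non-symmetrical subdivision most commercial boxes won’t let you have.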


I’m rather fond of the Humanize and Evolve functions in the Numerology sequencers. Apologies for not having any more in-depth analysis than that. I just like them.

This is an interesting discussion of the whole idea of “error” in a musical system. Seems to me that a big strand of recent musical history has been exploring errors that are unique to the machine and not at all human: from distortion to various kinds of noise, low bit rates, etc. It’s fascinating how these highly machine-specific “errors” are precisely the things we tend to think make the sound more organic, or characterful in some way. We prize them so highly that we build machines that can emulate the errors of other machines. (Modelling amplifiers, for example.)

This all tends to be at the auditory timescale rather than, say, sequencing/timing. But maybe we can look out for ways of allowing machines to make their own mistakes at this level too? I’ve built several robotic musical installations and have been fascinated by the way they fail - the rhythms the robot drummers play when their motors are wearing out for example. It’s not human, and yet it feels somehow more approachable - more flawed.

I noticed the other day that my Teletype stuttered a bit when I was switching scenes. The clock it was sending to clouds slowed just slightly, which meant my delay wobbled and the drum patterns slid against each other. I was bothered by this at the time - now I’ll be looking for ways to recreate it!
