Stochastic Fractal Linkage

Or computers playing human.

I recently went down a bit of a rabbit hole after being pointed to this paper by Holger Hennig describing a method to “humanize” computer rhythms using a stochastic model of synchronization. I don’t fully grasp the technique as a whole from the paper, but it led me to some broader questions about computer music:
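For anyone who wants to play with the general idea without digesting the whole paper: one ingredient Hennig emphasizes is that human timing errors aren’t white noise but have long-range (1/f, “fractal”) correlations. Here’s a minimal, hedged sketch of that one piece, using the Voss–McCartney trick to approximate 1/f noise and nudge a rigid grid by a few milliseconds. The tempo, depth, and parameter choices are my own assumptions, not values from the paper.

```python
import random

def pink_noise(n_steps, n_sources=8, seed=None):
    """Approximate 1/f ("pink") noise with the Voss-McCartney
    algorithm: sum several random sources, where source k is
    refreshed only every 2**k steps, so slow sources add the
    long-range correlation white noise lacks."""
    rng = random.Random(seed)
    sources = [rng.uniform(-1.0, 1.0) for _ in range(n_sources)]
    out = []
    for step in range(n_steps):
        for k in range(n_sources):
            if step % (2 ** k) == 0:
                sources[k] = rng.uniform(-1.0, 1.0)
        out.append(sum(sources) / n_sources)
    return out

# Nudge a rigid 16th-note grid (0.125 s per step at 120 BPM)
# by up to ~10 ms of fractally correlated deviation.
GRID = 0.125    # seconds per step (assumed tempo)
DEPTH = 0.010   # max deviation in seconds (assumed taste)
deviations = pink_noise(32, seed=1)
humanized = [i * GRID + DEPTH * d for i, d in enumerate(deviations)]
```

On a Teletype you obviously can’t run Python, but the same idea — a slowly wandering offset added to each clock tick, rather than fresh random jitter per tick — should translate to any scriptable sequencer.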

What do you do to humanize your electronic rhythms (I’m especially curious about Teletype scripts)? Is such a thing convincing, or even possible, using programmatic systems? Part of me wants to believe that introducing a convincing model of error or randomness could lead to believable grooves. Another part thinks this is the musical equivalent of a “bad TV” filter in video.


This reminded me of this article by James Holden:

It comes with a Max for Live patch designed to add human feeling to MIDI sequences. It’s a “group humanizer”: an effect you can put on multiple tracks and synchronize to a master instance, so they feel like a band playing as loosely as you like around a common tempo.
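The “group” part is the interesting bit: each player drifts, but they also listen to each other and pull back toward one another. Here’s a toy sketch of that coupling for two tracks, loosely in the spirit of Hennig’s mutually coupled model — the `coupling` and `noise` knobs are made-up illustrative values, not anything from the paper or the patch.

```python
import random

def coupled_humanize(n_steps, coupling=0.3, noise=0.004, seed=None):
    """Two 'players' whose per-beat timing deviations (seconds)
    drift randomly but are pulled toward each other every beat.
    Returns one deviation series per player."""
    rng = random.Random(seed)
    d_a, d_b = 0.0, 0.0
    track_a, track_b = [], []
    for _ in range(n_steps):
        err = d_a - d_b                       # how far apart they are
        d_a = d_a - coupling * err + rng.gauss(0.0, noise)
        d_b = d_b + coupling * err + rng.gauss(0.0, noise)
        track_a.append(d_a)
        track_b.append(d_b)
    return track_a, track_b

a, b = coupled_humanize(64, seed=2)
```

With the coupling at 0, the two tracks wander apart like uncorrected random walks; turn it up and they stay loose but together, which is roughly the “band feel” the patch is going for.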


This is a very interesting topic. @dianus thanks for posting this paper.

On a meta level I start to question why I should use an algorithm to add human feeling to a computer groove. From a composer’s point of view, I choose human players for a human kind of playing, and a computer for a non-human kind, or for things that humans can’t play. This gives me a thing or two to reflect on while sitting in my traffic jam. …

2nd thought: How do I get humans to play more like a computer?


I’m rather fond of the Humanize and Evolve functions in the Numerology sequencers. Apologies for not having any more in-depth analysis than that. I just like them.

This is an interesting discussion of the whole idea of “error” in a musical system. Seems to me that a big strand of recent musical history has been exploring errors that are unique to the machine and not at all human: from distortion to various kind of noise, low bit rates etc. It’s fascinating how these highly machine-specific “errors” are precisely the things we tend to think make the sound more organic, or characterful in some way. We prize them so highly that we build machines that can emulate the errors of other machines. (Modelling amplifiers for example.)

This all tends to be at the auditory timescale rather than, say, sequencing/timing. But maybe we can look out for ways of allowing machines to make their own mistakes at this level too? I’ve built several robotic musical installations and have been fascinated by the way they fail - the rhythms the robot drummers play when their motors are wearing out for example. It’s not human, and yet it feels somehow more approachable - more flawed.

I noticed the other day that my Teletype stuttered a bit when I was switching scenes. The clock it was sending to Clouds slowed just slightly, which meant my delay wobbled and the drum patterns slid against each other. I was bothered by this at the time - now I’ll be looking for ways to recreate it!
