an introspective drum machine that looks into itself and produces rhythm from its own self-examination.

(info on what is happening in video)


this drum machine introspects by looking at any of the current drum patterns and generating a new drum pattern based specifically on that pattern and the instrument it belongs to (e.g. a snare rhythm based on the kick pattern).

these generative rhythms are accomplished using Google’s “variational autoencoder” for drum performances. their blog post explains it best (and their paper explains it better), but essentially they had professional drummers play an electronic drum set for 12+ hours and fed those performances into a special kind of neural network. I took their trained model and sampled it randomly to produce “new” groups of drum rhythms (more than ~1,000,000 of them). then I created probability distributions by calculating bayesian conditional probabilities from each instrument to each other instrument, within each rhythm grouping. this probability table can then generate a snare drum pattern based on a kick drum pattern, generate a hihat pattern based on a snare drum pattern, etc.
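as a rough illustration of the idea (a hypothetical python sketch, not the script’s actual code — the real table lives in a sqlite3 database): count, across all sampled rhythm groupings, how often each snare step is on given a particular kick pattern, then sample a new snare pattern from those conditional probabilities:

```python
import random
from collections import defaultdict

def build_table(rhythms):
    """rhythms: list of dicts mapping instrument name -> 16-step tuple of 0/1.
    Returns counts[kick_pattern][step] = [on_count, total] for the snare."""
    table = defaultdict(lambda: [[0, 0] for _ in range(16)])
    for r in rhythms:
        kick, snare = tuple(r["kick"]), r["snare"]
        for step, hit in enumerate(snare):
            table[kick][step][0] += hit  # how often this snare step was on
            table[kick][step][1] += 1    # how often this kick pattern occurred
    return table

def generate_snare(table, kick_pattern, rng=random.random):
    """Sample a snare pattern from the conditional distribution for this kick."""
    counts = table.get(tuple(kick_pattern))
    if counts is None:
        return None  # kick pattern not in the database -> nothing to generate
    return [1 if total and rng() < on / total else 0 for on, total in counts]
```

note the `None` case — it mirrors the behavior mentioned later: a basis pattern that doesn’t exist in the database produces no new pattern.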


the sounds for this drum machine come from a new engine which I call “supertonic” because it is an as-close-as-I-can port of the microtonic VST by SonicCharge.

the act of porting is not straightforward, and the experience itself was a motivation for this script - it helped me learn how to use SuperCollider as I tried to match up sounds between the VST and SuperCollider by ear. I learned there is a lot of beautiful magic in microtonic that makes it sound wonderful, and I doubt I captured half of the magic that’s in the actual VST (so this is by no means a replacement). looking at the resulting engine you might notice some weird equations that attempt to approximate the magic behavior of the true microtonic. this script also includes a standalone SuperCollider drum machine to use with this engine (and conversion scripts to convert microtonic patches).

here is a demo comparing microtonic and this engine.

drummer in a box

in the end, this script is a little drum machine in a box and also a new drum machine engine for norns, a little like @21echoes’s cyrene, @pangrus’s hachi, or @justmat’s foulplay.

for me personally, this script is an experiment, trying to answer the question: what is it like to perform with an AI-generated rhythm section (i.e. paralleling what it’s like to play with an AI-generated piano)? is it good? surprisingly so, sometimes.


  • norns


all the parameters for the engine are in the PARAM menu, as well as preset loading.

on the main screen:

  • K2 starts/stops
  • K3 toggles hit
  • E2 changes track (current is bright)
  • E3 changes position in track
  • hold K1 and turn E3 to erase

this script automatically detects all midi keyboards and will start/stop based on midi start/stop events.

you can hold K3 and move E2 to lay down a lot of beats.


introspection requires downloading a prior probability table (~100 mb, not included in the repo) and sqlite3. both of these can be installed by running this command in maiden:

os.execute("sudo apt install -y sqlite3; mkdir -p /home/we/dust/data/supertonic/; curl -L --progress-bar > /home/we/dust/data/supertonic/drum_ai_patterns.db")

once installed, you have some new button combos available:

  • hold K1, then press K3 to generate a new pattern based on the highlighted pattern
  • hold K1 and turn E2 to change the highlighted pattern (basis of the generation)
  • hold K1 and press K2 to generate beats 17-32 based on beats 1-16 for current instrument

using your own microtonic presets

if you have microtonic you can easily use your own microtonic presets. simply copy your microtonic preset file (something like <name>.mtpreset) into the /home/we/dust/data/supertonic/presets directory. you can then load these presets via the PARAM > SUPERTONIC > preset menu.

converting microtonic presets for use with SuperCollider

you can also use the engine directly with SuperCollider. the engine file is synced with a SuperCollider script, lib/supertonic.scd. an example drumset is in lib/supertonic_drumset.scd. you can easily get a SuperCollider file with your microtonic presets by running this lua script:

lua ~/dust/code/supertonic/lib/mtpreset2sc.lua /location/to/your/<name>.mtpreset ~/dust/data/supertonic/default.preset >

known bugs

the supertonic engine is pretty cpu-intensive, so if you have 4-5 instruments all doing fast patterns (or fast tempo) you will hit cpu walls and hear crunching. any ideas to improve cpu usage are welcome :slight_smile:

the pattern generation (K1+K3 or K1+K2) runs asynchronously, but I’ve noticed that it can sometimes cause a little latency when generating patterns while performing.

if you aren’t seeing any new randomly generated patterns when pressing K1+K3/K2, it could be that the pattern you’re using as a basis doesn’t exist in the database (and therefore won’t produce any new patterns).


the ex-dash patterning functions are from @license from the collaborative song project. the flying confetti is from @eigen’s brilliant pico-8 wrapper. also thanks @dan_derks, our little discussion helped me figure out the beginnings of this thing. finally, big big thanks to @midouest who shared their microtonic supercollider project which showed me some tricks I had missed and also showed me I was on the right track (because our implementations had a lot of parallels).


install via maiden or


make sure to restart after installing because it includes a new engine.

note, that to use the introspection you must also install the probability database (instructions here).


YES!! This is what I’ve been waiting for! Thanks so much, looking forward to trying it out <3


You can use microtonic presets??? You are a damn wizard.


wow. you just continue to amaze! :exploding_head: :exploding_head: :exploding_head:

can’t wait to dig in :cowboy_hat_face:


yep! as long as it’s “MicrotonicPresetV3” it should work. if you have a different version / it doesn’t work, lmk and I can patch the preset reader. there’s also a lua script to convert microtonic parameters to supercollider synths.

enjoy :slight_smile: lmk if you find :bug: :bug:


there should be a super-zack category in here


I had a few minutes to try this out and a couple thoughts:

  • I definitely ran into CPU limitations very quickly. I wonder if making each track monophonic could limit these glitches? i.e. sounds don’t overlap themselves? or otherwise limiting polyphony?

  • could use a “clear row” function

  • Definitely definitely wants grid implementation.

All 3 of these things are things I am super down to take a stab at if you’re interested, but I’m quite a beginner at scripting norns!

Also, sidenote, the sound itself of the script is awesome. Nice work!


this is like a dream
how did you even find it?


definitely a :bug: here! I just pushed a fix for it - v0.1.1.

turns out I had implemented polyphony limiting but at some point I forgot to change a variable and it wasn’t actually doing anything lol. I added it back in; the limit is 5 voices, after which it will start to free synths, and it should be better on the cpu front.
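for the curious, the voice-limiting logic is roughly this shape (a hypothetical python sketch of the idea, not the actual SuperCollider code): keep a queue of active voices and steal the oldest when a new hit would exceed the limit:

```python
from collections import deque

MAX_VOICES = 5  # the limit mentioned above

class VoicePool:
    def __init__(self, limit=MAX_VOICES):
        self.limit = limit
        self.active = deque()  # oldest voice sits at the left

    def trigger(self, synth_id):
        """Register a new voice; returns the id of a freed voice, or None."""
        freed = None
        if len(self.active) >= self.limit:
            freed = self.active.popleft()  # steal the oldest voice
            # in the real engine this would free the synth on the server
        self.active.append(synth_id)
        return freed
```

the oldest-first choice is an assumption for illustration; any stealing policy would keep the voice count bounded.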

I also have a parameter that “tunes” freeing - each synth checks its amplitude and frees itself if the amplitude drops below a certain threshold (this only kicks in after 30ms so there is time for slow attacks). I’m not sure this is optimal yet, so it could use more fiddling.
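the freeing rule amounts to something like this (hypothetical sketch; the threshold value here is illustrative, only the 30ms grace period is from the description above):

```python
def should_free(amplitude, age_ms, threshold=0.001, grace_ms=30):
    """A synth is freed only once it is past the grace period (so slow
    attacks aren't cut off) and its amplitude has fallen below the
    threshold."""
    return age_ms > grace_ms and amplitude < threshold
```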

in general, this is an expensive engine. I made five tracks available, but that was probably a little eager on my part - it’s probably better to use just four (or five tracks with sparse instruments).

in my next phase of SuperCollider education I will look into how to optimize, I haven’t done much of this yet. I have ideas of how (maybe creating a WhiteNoise bus that all the synths can share???) but not too many (and maybe no good ones) so could use any help with ideas to reduce cpu burden.

noted :slight_smile:, it used to be there but I dropped it for another thing with a button combo. I could try to put it back using a “hold” combo or something.

yeah! would be cool, it’ll probably happen.

the Google magenta project is doing some cool stuff with music AI - I periodically check in on their project but somehow I overlooked their groovae generator until now! btw the code for getting all these drum patterns into a sqlite3 database is here: GitHub - schollz/drummy: ai drums (not really documented, but check the Makefile for basics).


the single biggest thing that pops out is all these usages of

etc. each channel in the selected signal is computed in parallel. unfortunately it’s not trivial to actually swap these in and out with replacement.

it would involve breaking up the signal flow into multiple synthdefs, and replacing synths for e.g. different filter types. (or i guess generating a combinatorial number of monolithic synthdefs, but that sounds more weird/hard to me.)


Nice idea, I really appreciate this “outside the box” thinking. I’ll be looking for the least human drum patterns possible :wink:

I might be hallucinating it, but shortening noise and tone decay seems to help with CPU crackles – trying that was based on my (meager) knowledge of how SuperCollider envelopes work, and how they are freed.


Thanks for the amazingly fast patch and response!

maybe creating a WhiteNoise bus that all the synths can share???

This seems wise. That’s how I would patch this kind of thing on a modular.

For “clear row” - top key + both bottom keys is what I intuitively tried, which could be worth something :laughing:

On the CPU front, I would gladly take 4 or even 3 voice polyphony. I’m a big fan of sparser drums though so maybe my use is an edge case. I would also take a low sample or bit rate, though I’m not sure how much of a difference that makes. I’ll come back if/when I have more thoughts as I experiment with the script more!


Mmmmm spending a bit of time sculpting sound this engine has some pretty awesome sounds on its own.

Spectacular work @infinitedigits and a load of fun.


Oooooh. I do love Microtonic, and the Pocket Operator version.

Fantastic stuff. Thanks for coding this! :slight_smile:


Super cool drum machine you’ve created. I’ve been having fun with it. I’ve noticed that depending on whether clock is internal or external midi there is a change in the feel of the playback of the pattern timing. Internal clock is like a robot, but clocked from external midi it sounds like a drunk robot; it kind of speeds up and slows down a tiny bit. I tested external midi clock to awake with 16th notes just to be sure it wasn’t my external source that’s had one too many, but no it is super tight and solid.


yeah I was afraid of that. breaking up the signal flow seems good, I’ll experiment with that.

yeah this will help save CPU. synths are freed either at the end of their longest envelope (attack + decay for noise or oscillator) or when they become silent. if a synth is not silent (but very soft) and has a long decay then it will still run up the cpu (until it gets freed by the voice allocation).

yeah! the “extremes” of the engine have some interesting artifacts that are fun to use for sculpting cool sounds. if you get microtonic running you can use their interface to design sounds and then save the preset to the ~/dust/data/supertonic/presets folder to load it.

I’m not 100% sure, but I’m pretty sure this is not the script but a small deficiency in norns external clocking, an issue brought up a while ago that has since been fixed (thank you @artfwo !!!). but the fixes have not yet made it into a norns update…yet. to be sure, I will try to see if I can reproduce the wonky external clocking behavior on the latest norns branch with this script (the latest branch fixed this type of thing for my other scripts).


Welp, that settles it… I now need multiple Norns thanks to your amazing work.


Very cool - can’t wait to check this. I follow William Fields stuff on Twitter and he does something similar - he auto generates rhythmic drum stuff which then triggers auto generated bass / melodies at the same time. I think he imports samples for drums and uses some FM synth for bass/melodies.

I know this is primarily a self contained drum machine and It’s just an idea - but would love to see some sort of auto generated synth lines being made and triggered by what’s happening in the drums lanes. Have the random synth lane output to midi or crow / w/synth or just friends.


so my thinking along this line is to do the following:

  • make a SynthDef for each control, e.g. one for SinOsc
  • replace with input bus in the main logic
  • build the synth by stapling SynthDefs together with buses and then making sure the last piece frees all of them

is that the gist (no pun intended) of how to break up the signal flow into SynthDefs?

edit: I worked out a version that uses buses and it decreases peak CPU usage by ~50%. cool!!!

that is really really cool. thanks for sharing that. in fact, the Groovae dataset from Google actually includes a model for 16-bar melodies with drums and bass, so it’s not a big stretch to add melodies. generating melodies in realtime via some AI has been on my mind for a long time, but I’ve yet to find impressive results for it.

double the norns, double the fun :slight_smile: :slight_smile:


this is super awesome. thanks for sharing with the community. One question: if I wanted to have my external synth sync to this by sending out the same clock via crow, would I need to do any coding, or is this already “baked in” via the norns global clock settings, which I believe have clock-out settings?
