Euro stuff triggered by drums (advice wanted)

(I originally posted this on muffs, but didn’t get many useful suggestions there; most of the replies leaned towards either drum sound modules, or things that could be done decently well in the computer. So I’m reposting here for some different takes!)

So I’m planning and refining my eventual plunge into eurorack and would like to get some opinions/advice.

I’m a big ciat-lonbarde fan, and would lean largely towards the IFM modules (primarily Fourses/Sprott/Swoop) and perhaps an ERD/SIR, as that’s the kind of soundworld I’m hoping to explore more of. Basically expanding on what I’ve been doing with augmented hardware synths (whisks, onset analysis, descriptor analysis).

What I want to do is build a somewhat compact system that is focused on using Expert Sleepers stuff to trigger/control some noisy sources (primarily IFM?). I plan on doing all the control/triggering stuff from the computer, so I wouldn’t need any clock stuff, and I generally dislike filters, so that rules out tons of stuff right there!

Other than a few (illuminating) conversations with PATremblay, I’ve not really gotten too deep into how to go about this. Based on my chats with PA I’ve put together a really rough modulargrid sketch:

I’d probably go with an ES-3/ES-6 combo for now, but based on the talk in this thread, it looks like the ES-8 might eventually work as a standalone thing, letting you use it with an existing soundcard.


What modules/makers would people suggest here?
(with the caveat that a computer is involved, so it can handle a lot of processes/functions)

What are some other good (noisy!) sound sources?

Would I need much(/any?) utility stuff? (An ES3/ES6 would give me 8out/6in)


Weird process/fx? (again, that wouldn’t be better served in the computer)

Any other thoughts/suggestions?

A side note(/car)

Depending on how things go I may eventually make a little sidecar or something with a few ‘manual control’ modules, for when I’m not strictly using the computer to control things. So would welcome some suggestions there too. At the moment I like the IFM Grassi and Meng Qi Lines, but I haven’t thought about this stuff too much yet.


Thinking out loud - and trying to be more useful than MW:

  • I think utilities are still handy because they become things you can fettle in the rack, rather than on the computer. Something like an offset/attenuation/mix module - a Mutable Shades, an Intellijel Triatt, that sort of thing - gives you some attenuation or inversion zuhanden, which can be handy to tame modules that prefer narrower or wider ranges of input. But the Blasser modules seem to have decent amounts of attenuation already on them, so it might not be a priority.
  • If you’re not into filters, waveshapers/folders might be more interesting for timbral control beyond what the oscillators can offer - I’m afraid I don’t know the IFM modules well enough to determine if that’s necessary or not.
  • I don’t know if the IFM stuff has any VCAs built in; if not, they’re probably a must if you want anything vaguely percussive beyond drones. It does look like Swoop will give you some kind of CV/triggered shaping, but otherwise, you could do quite well with something like a dual or quad VCA (say, a Doepfer 132-3, or an MI Veils) along with some kind of more traditional shaper than Swoop - something like an Intellijel Quadra or a Maths. Given the dense features of the IFM modules, I don’t know if a Maths is overkill, but it’d give you two channels of attenuation or offset, two looping/triggered functions (“LFOs” or “envelopes”), and some cross-patchable weirdness. A Maths plus some VCAs might be enough to go with what you drew on MG. Just enough to control/shape the IFM stuff, and just enough utilities so that you can reach over and attenuate something.
  • Beyond that: most ‘weird processes’ tend towards the digital, which is all doable in the computer as well (albeit not with the same modulation inputs: I know my MI Clouds is not far off anything I could learn to do on my laptop, but the immediacy of modulation options available - as well as the constraint - makes it feel different).
  • You might like some of the Noise Engineering sound sources - the Loquelic Iteritas in particular - for noisy weirdness.

I guess the thing I find hard is where you draw the line about what’s going on in the computer and what isn’t - I know your work a little from here, but only a little. So, you know, you could send a voltage trigger out of the ES box - but you could also send an envelope out of a CV output. I just wonder how much you want to commit to making this instrument a playable part of your setup in its own right, versus something more like an external sound module for the computer.

I think, though, given your suggestion that its triggers/timing/sequencing will emerge from the computer, some kind of envelope/function generator in the skiff, plus some VCAs to shape sound, is sensible. You’ll discover what you need to make it playable as you build it out, I imagine; it’s much easier to acquire the utilities you discover you need in due course.


If you’re into weird noisy percussive stuff, certainly take a look at the folktek line of eurorack that’s out now, particularly Matter

Also highly recommend a Maths, albeit with the caveats @infovore mentioned.

You mentioned you don’t love filters, does that include Low Pass Gates as well? My Optomix is the module that made me decide to stick w/ eurorack and I find it pretty different from what you’d get from a computer.

A gozinta wouldn’t be a bad idea if you wanted to use the eurorack for processing line level audio and primitive signal following which can be fun.

From there you’ll probably just have to start building and find your own way, but it seems like you’re already well on the road :slight_smile:


Actually, yeah, +1 on LPGs, especially if you’re doing percussive things - they’re not just percussive in sound, but respond nicely to slightly varying lengths of gate, IIRC.


Oooh, tons of great info @infovore/@rbxbx!

[quote=“infovore, post:2, topic:4719”]
I guess the thing I find hard is where you draw the line about what’s going on in the computer and what isn’t[/quote]

At the moment I don’t have a clear delineation as to what’s inside/outside the box(computer). I guess it comes down to what would work best in each domain. I know the IFM stuff is quite analog/wild, and is stuff that wouldn’t be possible to do in the computer. This gets fuzzier for things like envelopes/VCAs as potentially that could happen ‘in the box’ too, for at least bread-and-butter stuff.

Coming from a place of inexperience, I suppose the same idea would apply to envelopes/VCAs, in that more unusual ones would be harder(/impossible) to replicate in the computer. So envelope/function generators with character, perhaps the same goes with VCAs.

Thankfully PATremblay has oodles of modules, and has done some extensive testing/comparisons of VCAs/VCOs and such, so once I have some core modules, I can likely play with some of his collection to see what works well with what I have in mind. So I wouldn’t have to play the buy/sell game to find something useful.

That’s a good question as well, hence my last little bit. The idea for now is that it would always be tethered to the computer, but perhaps that will change. If it does, I know the conversation then becomes a different one, as questions of control/mapping/etc… all become significantly more important. I guess the beauty of the hybrid approach is that I’ll never run out of ‘hp’ in the laptop.

From what I envision at the moment, I would be sending triggers from the computer (based on descriptor/onset detection/audio analysis) to generate sounds. It’s clearer now that there needs to be a shaping side of the equation, either as envelope/function generators + VCAs, or as computer-based envelopes. Perhaps a hybrid of both there as well? Using digital envelopes and analog VCAs, or vice versa. Hmmm.

That was suggested on MW as well. I’ll have another look but for whatever reason it doesn’t really excite me (especially given the size).

Hadn’t really thought about it, but LPGs would definitely fall into the ‘analog’ category when it comes to what would be best served outside-the-box.

Starting to narrow things down slowly. I’ve been thinking about it in the background for a while, but figuring out that first jumping in point is tricky.

I think the obvious thing to do is consider the modular as an instrument or discrete object that’s triggered by your trigger-detectors - a bit like an activated snare or something.

So to that end: I think VCAs and envelopes/functions live in the modular. If you want to manipulate the sounds the IFM modules are making, you’d lean over and adjust them; the VCAs, LPGs, function generators are all part of that process, so they feel like they ought to be controllable from there too.

I also will say: I don’t have any experience with the Expert Sleepers stuff, and whilst everybody who has it seems very positive (and Silent Way does masses), it’s a lot of extra windows/UI to manage. I dunno, I think making the instrument an instrument sounds like a good plan.

(To the extent that: if, one day, you wanted to take the euro box out and replace the computer triggers with a contact mic/comparator… that’d be totally doable! The more that lives in the laptop, the harder that kind of action becomes.)

have u considered mounting piezo triggers like the rimSHOT to your kit and then Mikrophonie modules to interface with eurorack? that could be pretty cool…


Yeah that’s a sound way of thinking about it. Kind of a control rate vs audio rate type thing. Use the computer for analysis and triggering, modular stuff for sound generation and sculpting, then back to the computer for output.

I would largely use custom software to handle the computer side of things, with some BEAP modules in the mix as well, for bread-and-butter stuff, especially since it’s designed specifically for this kind of usage.

And for the most part I’d probably have a few ‘presets’ I would use, and just “play” them instrument-style, from the drums. So I don’t really see myself pulling/plugging cables in a performance at all.

Oh man have I ever!
I’ve used an SPD-S since it first came out, and have had a setup based on DDrum triggers for a bunch of years, mainly triggering samples.
I’ve wanted to expand that to synthesis for a long time too, along the lines of MoHa!'s drummer (though I believe his stuff is done with SuperCollider), but I’ve found it difficult to get excited about computer-based synthesis.

Recently I’ve been really amped about revisiting the idea since I saw two new things. The first was the new KMI BopPad. It’s still at the Kickstarter stage, but having a compact trigger which could replace my modified MPD18 is really appealing:

What’s exciting about the BopPad is that it has quadrants and then regions within those quadrants, so I could have 24 different samples on the pad, and be able to play them consistently. Not to mention being able to use the concentric circle positioning as a control source. Like a much simpler Mandala drum. (The lack of quadrants on the Mandala makes it useless for my purposes, since I can’t map clear sample areas that aren’t tiny rings.)

More recently I’ve been amped to shit about Sensory Percussion. Basically a newschool drum trigger based on fancy machine learning and dsp algorithms that lets you define 10 different trigger areas from an acoustic drum (center, edge, rimshot, etc…), without impeding the sound of the drum itself (it uses a tiny magnet on the head, along with the sensor).

This video in particular got me excited about the possibilities:

So to put things in clearer context, the idea is to use the data from these sources (audio analysis, BopPad regions/velocities, Sensory Percussion machine learning) to go hogwild in a modular!


Not a drummer by any means, and this probably goes a little off-topic, but I’ve been pointing a dynamic mic at pots & pans, then running this kind of hack percussion through aleph with different processing, boomerang looping and things. I guess my question is: what kinds of acousto-electric sound-sources can we use to build a ‘desktop percussion set’?

Pretty sure someone out there has taken the concept of contact-mic-on-pyrex-food-container to more interesting places than I could dream up in an afternoon. I guess acoustic guitar body is one obvious answer to this question!

What would really appeal would be DIY solutions that have little acoustic response, but an interesting/rich amplified tone, potential for rimshots & all that ‘drummer’ stuff without introducing the inherent complexity of software for gesture-detection & drum synthesis.

Would also be kind of cool to do an aleph karplus-strong module designed to be triggered by this kind of inherently expressive analogue source…
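The Karplus-Strong idea above is tiny at its core: a delay line seeded with noise, with a two-point average and a little damping in the feedback path. A minimal Python sketch, purely illustrative (the sample rate, damping value and list-based buffer are arbitrary choices, not aleph specifics):

```python
import random

def karplus_strong(freq, sr=48000, dur=1.0, damping=0.996):
    """Basic Karplus-Strong pluck: noise-filled delay line with a
    two-point average (lowpass) and slight damping in the feedback."""
    n = int(sr / freq)                                # delay length sets pitch
    buf = [random.uniform(-1, 1) for _ in range(n)]   # the 'pluck' is a noise burst
    out = []
    for _ in range(int(sr * dur)):
        s = buf.pop(0)
        out.append(s)
        buf.append(damping * 0.5 * (s + buf[0]))      # average + damp, recirculate
    return out

tone = karplus_strong(110.0, dur=0.5)  # half a second of a 110 Hz 'string'
```

The averaging acts as a gentle lowpass each time the buffer recirculates, so the noise burst decays into a pitched, string-like tone - which is exactly why an impulse-like analogue excitation suits it so well.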


There’s this:

Which does the same kind of machine learning the Sensory Percussion sensors do, but is contact mic based. The cool thing with these systems is that you can train them on the specific sounds you want to use, then use that to trigger whatever sounds you would like.

Once you get some kind of analysis/trigger, then you can do all sorts. Most of what I’ve done has been using something like Kontakt and tweaked sample libraries to give me the samples I want in the range I want them (for the MPD18 above, the first 16 MIDI notes, ala typical MPC/drum type mappings). Percussive sounds work well, as they tend to blend in well with the original sources (if that’s something you want).

What I’m more excited about these days, however, is using descriptor analysis to play back samples based on audio analysis, rather than individually mapping samples to ‘notes’. So if I hit a different object, it plays back the sample that most closely matches its characteristics (loudness, timbre, pitch, etc…). This is the kind of shit that having a computer in the mix is good for, as something like that isn’t possible any other way.
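That descriptor matching is basically a nearest-neighbour lookup: describe every library sample with a few features, describe the incoming hit the same way, and play whichever sample lands closest. A toy Python sketch - the sample names, descriptor values and normalisation scales are all invented for illustration:

```python
import math

# toy 'library': each sample described by (loudness, centroid_hz, pitch_hz)
library = {
    "snare_tight": (0.8, 3200.0, 180.0),
    "tom_low":     (0.9, 900.0, 95.0),
    "rim_click":   (0.4, 5100.0, 400.0),
}

def nearest_sample(hit, library):
    """Return the library entry whose descriptors are closest
    (Euclidean distance, after crude per-dimension normalisation)."""
    def dist(a, b):
        # scale each dimension so loudness (0-1) and Hz values compare sanely
        scales = (1.0, 5000.0, 500.0)
        return math.sqrt(sum(((x - y) / s) ** 2 for x, y, s in zip(a, b, scales)))
    return min(library, key=lambda name: dist(hit, library[name]))

print(nearest_sample((0.85, 1000.0, 100.0), library))  # a loud, low hit
```

A loud, low hit picks "tom_low" here; a real version would use proper feature extraction and far more samples, but the lookup itself stays this simple.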


[quote=“Rodrigo, post:10, topic:4719”]
train them on the specific sounds you want to use
[/quote]

pretty impressive to see this kind of machine-hearing technology making it into affordable products now! But yea, pretty much the opposite of what I’m thinking of.

From what I see, mogees belong in the same category as something like axon midi guitar. I played around with axon for a bit but quickly lost interest due to many inflections & guitar habits that simply don’t translate to midi, mis-triggering etc.

currently focussed on a different route to explore synthetic sounds - i.e. ‘synths’ that operate on acoustic sound sources. In other words, patchable effects pedals I guess, hence my strong recent interest in developing software for the aleph. The results are somehow subtly tactile and satisfying to play, despite being rather lo-fi & almost glitchy at times.

From the point of view of expanding my horizons and exploring sequenced & live-looped percussion, it’s simpler to stick in the same ‘effect pedal’ paradigm, and I’d like to know what less well-known electronic analogue percussion inventions would be most similar in spirit to the solid-body electric guitar. Maybe @glia knows something about this?

reading through everyone’s posts & links a bit more thoroughly, I find this thing has been invented & exists:

still kind of wonder about other ways to skin that particular cat - should probably go buy some piezos & bits of wood and just have a go! So yea, seems that many people have had some success gluing piezos to wood:

The latency issue is mentioned there; this is where the no-latency aleph stands apart from a regular computer. E.g. wood-DSP hybrid resonators - plenty to get started on there.

Wonder what transducers were used in those giant old plate reverbs? Maybe there are some interesting hacks on a regular spring reverb tank (I mean subtler than just kicking the bugger)


re: IFM barre - you could take advantage of shnth, which to my understanding is kinda like an HID barre + sound engine. been thinking about doing something with shnth as a controller for trilogy modules. i think @zebra at one point mentioned something along these lines with regards to aleph?

and if i remember correctly @wednesdayayay did some experiments using shnth as a controller with Max?

edit: HID, not OSC…


on aleph, you can use shnth as a controller with the generic HID operator in BEES. this is pretty low-level, you just get 8- or 16-bit values at specific offsets in the HID packet, but it will do the job. i never did make a shnth-specific operator (my shnth is in a box somewhere at the moment) but that is a simple exercise.
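For what it’s worth, pulling fixed-width values out of a raw HID report at known offsets is a one-liner in most languages. A Python sketch - the 8-byte, 16-bit big-endian layout and the field names are assumptions for illustration, not the shnth’s actual report format:

```python
import struct

def parse_report(packet):
    """Read four 16-bit big-endian values from a raw HID report.
    The layout/offsets here are invented; the real shnth report
    format may well differ."""
    barre1, barre2, barre3, barre4 = struct.unpack_from(">4H", packet, 0)
    return {"barre1": barre1, "barre2": barre2,
            "barre3": barre3, "barre4": barre4}

report = parse_report(bytes([0x01, 0x00, 0x7f, 0xff, 0x00, 0x00, 0x80, 0x00]))
```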

be advised that shnth/barre output is kind of weird, maybe not what you expect. it is raw voltage from piezo transducer. that means when you press, there is a positive spike, it decays to near-zero as you maintain pressure, then there is a negative spike and decay when you release. if you modulate pressure you get more bipolar spikes / wiggles. (IIRC)
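That bipolar spike behaviour can be turned into clean press/release events with a simple thresholded latch; a rough Python sketch (the threshold value and toy signal are arbitrary):

```python
def piezo_events(samples, threshold=0.3):
    """Turn raw bipolar piezo values into press/release events:
    a positive spike above threshold = press, a negative spike
    below -threshold = release. A crude latch avoids retriggering
    while the signal decays back toward zero under held pressure."""
    events, armed = [], True
    for i, v in enumerate(samples):
        if armed and v > threshold:
            events.append(("press", i))
            armed = False
        elif not armed and v < -threshold:
            events.append(("release", i))
            armed = True
    return events

sig = [0.0, 0.9, 0.5, 0.1, 0.0, 0.0, -0.8, -0.2, 0.0]
print(piezo_events(sig))  # [('press', 1), ('release', 6)]
```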


The mogees definitely goes that route in that the sound playing back is pretty basic. Actual concat resynthesis would yield much richer output (as it wouldn’t be so 1:1; or rather, it would be more 1:1, but that “1” would be significantly smaller (grain sized)).

But I do get what you mean. Maybe just building up on what you’re doing with the Aleph (I’m really unfamiliar with the specifics of what it does), and maybe adding some envelope following with other parameters/synthesis in the mix.

I had made a few patches for the Shnth (a fourses recreation with some people on the c74 forum, as well as Samplazzi, which is a sample playback engine). Having a quick google about, it looks like that stuff might have died with the Shbobo forum, so I’ve attached them here just in case. (32.6 KB)

That’s exactly what I find most exciting about them as interface things. Works especially well on a drum head too! If I do add some ‘manual’ control to the Euro stuff, stuff like this would definitely be in the mix.


totally, it is a natural kind of excitation signal. but, it’s just a piezo. those are cheap! kind of crazy to use a shnth through USB if you are ultimately going into the analog domain…

aleph dsyn module is an exercise in generating tones and noise from raw impulses and SVF chains. it could be taken in other directions…

forgive the following tangent, but my dad has been on my mind a lot lately. he was very interested in percussion gesture detection and ways to reduce latency back in the 90’s (when you often had to deal with many milliseconds of latency added to the receiving end.) there are two instruments that do very clever things: ‘lightning’ and ‘marimba lumina.’

the former is an IR detection system that uses some sweet analog tricks to make a very elegant x/y/z sensor. it is very much like the wii, but backwards (wand is transmitter, not camera) and much more clever. (the nintendo guys actually came to a buchla demo at AES, way back when… i wouldn’t be surprised if there was a direct influence.) there is also some nice DSP on the receiver, which helps to eliminate latency by calculating velocities on the fly, before the completion of a “strike” gesture.

the marimba lumina exploits a very cool nearfield radio effect. the “bars” are radio transceivers and the “mallets” are passive radio resonators. this is even better because you can sense the mallet 6 inches or so away from the bar, and if all you want is proximity and velocity, you can actually have negative latency.

i think both of these systems are worth revisiting, as sources of future design ideas. i guess they came to mind because ML especially was developed as a better / more interesting alternative to contemporary FSR-based systems like the drumkat and malletkat… (rodrigo, i know that your specific performance practice has little to do with MIDI controllers! as far as i can tell.)

don was actually so obsessed with latency that he thought the speaker should be situated as close as possible to the gesture sensor… because on a big stage the acoustic delay would mess up your rhythm. funny guy! but i appreciate the attitude. now a lot of controller designs assume that 1khz is acceptable. (thanks, Max! :slight_smile: ) maybe it is, maybe it isn’t…


I wouldn’t use a Shnth via usb, I’d go with piezos as direct voltage (perhaps with something to clamp it to a sensible range).

Not tangential at all! I’ve been playing with my onset detection stuff again after coming across the Sensory Percussion stuff and seeing how responsive and accurate that was. What I had before worked really well, but didn’t handle rolls and super quiet attacks, and was sometimes susceptible to false triggers from quiet/ambient noise. So reading this stuff is suuper fascinating, and super relevant, as this is tied into improving my machine listening stuff, with a massive emphasis on latency. (more on this below)

The wand is super badass, especially for its time (hell, even for now!). A friend (Richard Scott) used one, along with LiSa, as the centerpiece of his setup for years.

Interesting! I guess it calculated the velocity/acceleration at the start and presumed an appropriate ‘end’? I’m currently waiting a fixed period of time (based on the volume differential enveloping) to take a maximum value, which serves as my velocity. For my current setup it’s 15ms latency after the initial onset detection, for the velocity to be calculated.
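That fixed-window peak-pick is simple to sketch; assuming 44.1 kHz and the 15 ms window described above (the function name and toy signal are just for illustration):

```python
def velocity_after_onset(samples, onset_index, sr=44100, window_ms=15):
    """After an onset fires, take the peak absolute value over a fixed
    window following it as the hit's velocity."""
    n = int(sr * window_ms / 1000)             # 15 ms = 661 samples at 44.1k
    return max(abs(s) for s in samples[onset_index:onset_index + n])

# toy signal: onset detected at index 3, peak of 0.7 shortly after
sig = [0.0, 0.01, 0.02, 0.3, 0.7, 0.5, 0.2, 0.05]
print(velocity_after_onset(sig, 3))  # 0.7
```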

Also super fascinating, and incredibly creative solutions to that ‘problem’ too. Especially with the added perk of ‘negative latency’ here, allowing for more latency heavy processes down the line that could be then compensated out (more complex/detailed fft stuff). God, negative latency, what a luxury!

Heh, yeah not MIDI-based at all, and in fact that’s the big downside to the Sensory Percussion stuff. I sent them an email with a bunch of questions (which they were super helpful and quick with responding), but the bottom line is that there’s no lower level access to the machine learning/training stuff, and it only spits out MIDI (no OSC or 14bit MIDI), so it wouldn’t be possible to do any descriptor-based stuff, or 2nd order feature extraction on the data from the system (unless I do it on the ‘raw’ sensor output, which I’d happily do, I would just lose out on the dsp side of their stuff).

Back to the dsp stuff, I’ve been reworking my onset detection based on a bunch of info in this thesis (section 2.4 specifically). Essentially trying to recreate the system they are describing which works like this:

Audio -> (DC-removal) -> (lowpass filter) -> (moving median) -> (moving mean) -> (thresholding) -> (peak picking)

I’m limited in that I’m working in Max and my ‘real’ dsp chops are pretty low, but incorporating the hipass/lowpass/median stuff in the mix has helped quite a bit. I still have to implement more stuff (dynamic thresholding, as what I have is super glitchy/nervous in quieter bits). Here is what I have if anyone is curious.
onset.maxpat (18.1 KB)
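For anyone curious what that chain looks like outside Max, here is a rough Python sketch in the same spirit; the filter coefficients, window sizes and the threshold scaling k are my guesses, not the values from the thesis:

```python
import statistics

def detect_onsets(x, median_win=7, mean_win=7, k=1.5, floor=0.01):
    """Rough sketch of: DC-removal -> lowpass -> moving median ->
    moving mean -> adaptive threshold -> peak picking.
    Coefficients and window sizes are guesses, not the thesis's values."""
    # DC removal: subtract a slow one-pole running mean
    dc, y = 0.0, []
    for v in x:
        dc += 0.005 * (v - dc)
        y.append(v - dc)
    # rectify + one-pole lowpass as a crude amplitude envelope
    env, e = [], 0.0
    for v in y:
        e += 0.1 * (abs(v) - e)
        env.append(e)
    onsets = []
    for i in range(1, len(env) - 1):
        med = statistics.median(env[max(0, i - median_win):i + 1])
        win = env[max(0, i - mean_win):i + 1]
        mean = sum(win) / len(win)
        thresh = k * max(mean, med) + floor   # adaptive threshold
        # peak picking: local maximum that clears the threshold
        if env[i] > thresh and env[i] >= env[i - 1] and env[i] > env[i + 1]:
            onsets.append(i)
    return onsets
```

On a toy signal (silence, a short burst, silence) this picks out a single onset near the attack; the moving median keeps isolated spikes from dragging the adaptive threshold around.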

[quote=“rick_monster, post:9, topic:4719”]
Would also be kind of cool to do an aleph karplus-strong module designed to be triggered by this kind of inherently expressive analogue source…
[/quote]
you familiar with the ios app impaktor?

sounds like a similar idea

[quote=“rick_monster, post:11, topic:4719”]
From the point of view expanding my horizons, exploring sequenced & live-looped percussion it’s simpler to stick in the same ‘effect pedal’ paradigm and would like to know what less well-known electronic analogue percussion inventions would be most similar in spirit to solid-body electric guitar. Maybe @glia knows something about this?
[/quote]
please elaborate on the italicized part

i might know but don’t fully understand what you’re comparing

it’s funny, i was going to comment on @Rodrigo’s quote above - i agree computers are more suitable for such analysis but i like doing it in modular as well, using filters / envelope followers / comparators (or dedicated modules like ADDAC Filterbank or 4ms SMR). it just feels more intuitive experimenting with translating some sound characteristic into a completely unrelated property - say, higher frequency content into a faster clock. just plug envelope followers into whatever and see what happens. of course there are other ways to analyze audio, and to extract additional information from envelope followers - something that detects rising or falling voltage, for instance. doing it in modular as opposed to the computer could also be beneficial if you’re planning for more live control.
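That “brighter content -> faster clock” mapping is easy to prototype in software too; here is a toy sketch where ‘brightness’ is just the energy of the sample-to-sample difference relative to total energy (a crude stand-in for a real filter + envelope-follower pair, with made-up ranges):

```python
def hf_ratio(frame):
    """Crude 'brightness': energy of the sample-to-sample difference
    (which emphasises high frequencies) relative to total energy."""
    total = sum(v * v for v in frame) or 1e-9
    diff = sum((frame[i] - frame[i - 1]) ** 2 for i in range(1, len(frame)))
    return diff / total

def clock_hz(frame, base=2.0, span=14.0):
    """Map brightness onto a clock rate (clamped): brighter -> faster."""
    return base + span * min(hf_ratio(frame) / 2.0, 1.0)

dark = [1.0] * 64                              # DC: no high-frequency energy
bright = [(-1.0) ** i for i in range(64)]      # alternating: all high-frequency
print(clock_hz(dark), clock_hz(bright))        # 2.0 16.0
```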

with shnth what i had in mind was a specific application with something like the trilogy modules as then you can use their microprocessors to do something more interesting than just generating a voltage from piezo. something like a firmware that could be trained for specific gestures. it might be easier to write something that would analyze HID, especially considering you have other controls on shnth so could have more complex gestures. also it’d be interesting to use shnth both as a sound generator and a controller, i imagine shnth+aleph combo could provide for some really interesting results, processing shnth with aleph while controlling aleph with shnth as well.

i guess both cases are really just ways to extract meta information from a data stream - and now i wonder what would happen if HID output was converted directly to voltage and then you run audio analysis on that…

it’d be pretty funny, just as a concept to use shnth to control white whale module which would then be used to trigger gieskes vu-perc, so you’d have piezo -> microprocessor -> usb -> microprocessor -> cv -> piezo! :slight_smile:

That’s an interesting thought, and although I know very little about machine learning algorithms, I would imagine that there would have to be a time-sensitive component to the learning as the piezo would always produce the same “s curve”, just over different periods of time.

…which of course drives the next microprocessor!
A conceptual and hardware feedback loop!

yeah, you would need to accumulate some data over time. like say, you would train it to respond differently to faster/slower pushes, or how often they happen. with shnth you can also combine it with other controls, so, say, generate a trigger when 2 piezos are pushed at the same time etc etc
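A toy version of that time-sensitive idea in Python: measure the width of the positive spike and label the gesture, with the fast/slow boundary standing in for something actually learned from examples (all values invented):

```python
def press_width(samples, threshold=0.2):
    """Width (in samples) of the positive spike: a fast tap gives a
    narrow spike, a slow push a wide one."""
    return sum(1 for v in samples if v > threshold)

def classify_push(samples, fast_max=4):
    """Toy time-sensitive 'training': label by spike width.
    fast_max is an assumed boundary learned from examples."""
    return "fast" if press_width(samples) <= fast_max else "slow"

tap = [0.0, 0.9, 0.6, 0.1, 0.0]
push = [0.0, 0.3, 0.5, 0.6, 0.6, 0.5, 0.4, 0.3, 0.1]
print(classify_push(tap), classify_push(push))  # fast slow
```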