The Mogees definitely goes that route, in that the sound playing back is pretty basic. Actual concatenative resynthesis would produce much richer output, as it wouldn’t be so 1:1 (or rather, it would be more 1:1, but that “1” would be significantly smaller, i.e. grain-sized).

But I do get what you mean. Maybe just building on what you’re doing with the Aleph (I’m really unfamiliar with the specifics of what it does), and adding some envelope following with other parameters/synthesis in the mix.

I had made a few patches for the Shnth (a fourses recreation with some people on the c74 forum, as well as Samplazzi, which is a sample playback engine). Having a quick google about, it looks like that stuff might have died with the Shbobo forum, so I’ve attached them here just in case.

shnth_apps.zip (32.6 KB)

That’s exactly what I find most exciting about them as interface things. Works especially well on a drum head too! If I do add some ‘manual’ control to the Euro stuff, stuff like this would definitely be in the mix.

2 Likes

totally, it is a natural kind of excitation signal. but, it’s just a piezo. those are cheap! kind of crazy to use a shnth through USB if you are ultimately going into the analog domain…

aleph dsyn module is an exercise in generating tones and noise from raw impulses and SVF chains. it could be taken in other directions…

forgive the following tangent, but my dad has been on my mind a lot lately. he was very interested in percussion gesture detection and ways to reduce latency back in the 90’s (when you often had to deal with many milliseconds of latency added to the receiving end.) there are two instruments that do very clever things: ‘lightning’ and ‘marimba lumina.’

the former is an IR detection system that uses some sweet analog tricks to make a very elegant x/y/z sensor. it is very much like the wii, but backwards (wand is transmitter, not camera) and much more clever. (the nintendo guys actually came to a buchla demo at AES, way back when… i wouldn’t be surprised if there was a direct influence.) there is also some nice DSP on the receiver, which helps to eliminate latency by calculating velocities on the fly, before the completion of a “strike” gesture.
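just to make the idea concrete, here’s a rough python sketch of that kind of early velocity estimation (purely illustrative; not lightning’s actual dsp):

```python
# rough sketch: estimate strike velocity from the slope of the rising
# edge, so you can fire before the envelope ever reaches its peak.
import numpy as np

def early_velocity(samples, sr, fire_fraction=0.5):
    """samples: the attack portion of a strike envelope, sr: sample rate.
    fire_fraction: how much of the rise to observe before committing."""
    peak_idx = int(np.argmax(samples))
    cutoff = max(2, int(peak_idx * fire_fraction))  # only the early rise
    times = np.arange(cutoff) / sr
    # the slope of the early rise (units/second) stands in for velocity
    return float(np.polyfit(times, samples[:cutoff], 1)[0])
```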

the marimba lumina exploits a very cool nearfield radio effect. the “bars” are radio transceivers and the “mallets” are passive radio resonators. this is even better because you can sense the mallet 6 inches or so away from the bar, and if all you want is proximity and velocity, you can actually have negative latency.
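and the “negative latency” trick is easy to sketch too: if you can read the mallet’s proximity continuously, you can extrapolate the moment of contact before it happens (again just an illustration, not the actual marimba lumina firmware):

```python
# hypothetical sketch of "negative latency": extrapolate time-to-contact
# from two successive proximity readings of an approaching mallet.
def predict_impact(d_now, d_prev, dt):
    """d_now, d_prev: distances to the bar; dt: seconds between readings.
    returns (seconds until contact, approach speed), or None."""
    v = (d_prev - d_now) / dt      # approach speed (positive = closing in)
    if v <= 0:
        return None                # stationary or moving away
    return d_now / v, v            # time to contact at constant speed
```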

i think both of these systems are worth revisiting, as sources of future design ideas. i guess they came to mind because the marimba lumina especially was developed as a better / more interesting alternative to contemporary FSR-based systems like the drumkat and malletkat… (rodrigo, i know that your specific performance practice has little to do with MIDI controllers! as far as i can tell.)

don was actually so obsessed with latency that he thought the speaker should be situated as close as possible to the gesture sensor… because on a big stage the acoustic delay would mess up your rhythm. funny guy! but i appreciate the attitude. now a lot of controller designs assume that 1khz is acceptable. (thanks, Max! :slight_smile: ) maybe it is, maybe it isn’t…

9 Likes

I wouldn’t use a Shnth via USB, I’d go with piezos as direct voltage (perhaps with something to clamp them to a sensible range).

Not tangential at all! I’ve been playing with my onset detection stuff again after coming across the Sensory Percussion stuff and seeing how responsive and accurate that was. What I had before worked really well, but didn’t handle rolls and super quiet attacks, and was sometimes susceptible to false triggers from quiet/ambient noise. So reading this stuff is suuper fascinating, and super relevant, as this is tied into improving my machine listening stuff with a massive emphasis on latency. (more on this below)

The wand is super badass, especially for its time (hell, even for now!). A friend (Richard Scott) used one, along with LiSa, as the centerpiece of his setup for years.

Interesting! I guess it calculated the velocity/acceleration at the start and presumed an appropriate ‘end’? I’m currently waiting a fixed period of time (based on the volume differential enveloping) to take a maximum value, which serves as my velocity. For my current setup it’s 15ms of latency after the initial onset detection for the velocity to be calculated.
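In sketch form it’s basically this (a toy version, the real thing is a Max patch):

```python
# toy sketch of my current approach: after an onset fires, wait a fixed
# window (15ms here) and take the maximum as the velocity.
import numpy as np

def velocity_after_onset(signal, onset_idx, sr, window_ms=15.0):
    n = int(sr * window_ms / 1000.0)
    window = signal[onset_idx:onset_idx + n]
    return float(np.max(np.abs(window)))   # peak value = velocity
```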

Also super fascinating, and incredibly creative solutions to that ‘problem’ too. Especially with the added perk of ‘negative latency’ here, allowing for more latency-heavy processes down the line that could then be compensated out (more complex/detailed FFT stuff). God, negative latency, what a luxury!

Heh, yeah not MIDI-based at all, and in fact that’s the big downside to the Sensory Percussion stuff. I sent them an email with a bunch of questions (which they were super helpful and quick in responding to), but the bottom line is that there’s no lower-level access to the machine learning/training stuff, and it only spits out MIDI (no OSC or 14-bit MIDI), so it wouldn’t be possible to do any descriptor-based stuff, or second-order feature extraction on the data from the system (unless I do it on the ‘raw’ sensor output, which I’d happily do, I would just lose out on the dsp side of their stuff).

Back to the dsp stuff, I’ve been reworking my onset detection based on a bunch of info in this thesis (section 2.4 specifically), essentially trying to recreate the system they describe, which works like this:

Audio -> (DC-removal) -> (lowpass filter) -> (moving median) -> (moving mean) -> (thresholding) -> (peak picking)

I’m limited in that I’m working in Max and my ‘real’ dsp chops are pretty low, but incorporating the highpass/lowpass/median stuff in the mix has helped quite a bit. I still have to implement more (dynamic thresholding especially, as what I have is super glitchy/nervous in quieter bits). Here is what I have if anyone is curious.
onset.maxpat (18.1 KB)
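For anyone not in Max, here’s a rough offline sketch of that chain in Python (scipy-based, and with a fixed threshold, so the dynamic thresholding bit is still missing):

```python
# rough offline sketch of the chain from the thesis:
# audio -> dc removal -> lowpass -> moving median -> moving mean
#       -> thresholding -> peak picking
import numpy as np
from scipy.signal import butter, lfilter, medfilt, find_peaks

def detect_onsets(x, sr, lp_hz=100.0, median_len=11, mean_len=11, thresh=0.05):
    x = x - np.mean(x)                            # crude dc removal
    b, a = butter(2, lp_hz / (sr / 2.0))          # lowpass on the rectified signal
    env = lfilter(b, a, np.abs(x))
    env = medfilt(env, kernel_size=median_len)    # moving median (kills spikes)
    env = np.convolve(env, np.ones(mean_len) / mean_len, mode="same")  # moving mean
    peaks, _ = find_peaks(env, height=thresh)     # fixed threshold + peak picking
    return peaks                                  # onset positions, in samples
```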

[quote=“rick_monster, post:9, topic:4719”]
Would also be kind of cool to do an aleph karplus-strong module designed to be triggered by this kind of inherently expressive analogue source…
[/quote]

you familiar with the ios app impaktor?

sounds like a similar idea

[quote=“rick_monster, post:11, topic:4719”]
From the point of view expanding my horizons, exploring sequenced & live-looped percussion it’s simpler to stick in the same ‘effect pedal’ paradigm and would like to know what less well-known electronic analogue percussion inventions would be most similar in spirit to solid-body electric guitar. Maybe @glia knows something about this?
[/quote]

please elaborate on the italicized part

i might know but don’t fully understand what you’re comparing

it’s funny, i was going to comment on @Rodrigo’s quote above - i agree computers are more suitable for such analysis, but i like doing it in modular as well, using filters / envelope followers / comparators (or dedicated modules like ADDAC Filterbank or 4ms SMR). it just feels more intuitive experimenting with translating some sound characteristic into a completely unrelated property, say, higher frequency content into a faster clock. just plug envelope followers into whatever and see what happens. of course there are other ways to analyze audio, and to extract additional information from envelope followers (something that detects rising or falling voltage, for instance). doing it in modular as opposed to on a computer could also be beneficial if you’re planning for more live control.
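in code terms the basic blocks are trivial, something like this sketch (a follower is just rectification plus smoothing, and a comparator is just a threshold):

```python
# the modular building blocks, sketched in plain python
def envelope_follower(x, coeff=0.999):
    """one-pole follower: coeff closer to 1.0 = slower release."""
    env, out = 0.0, []
    for s in x:
        env = max(abs(s), env * coeff)   # instant attack, exponential release
        out.append(env)
    return out

def comparator(env, threshold=0.1):
    """gate out: high whenever the envelope exceeds the threshold."""
    return [1 if e > threshold else 0 for e in env]
```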

with shnth what i had in mind was a specific application with something like the trilogy modules, as then you can use their microprocessors to do something more interesting than just generating a voltage from a piezo. something like a firmware that could be trained for specific gestures. it might be easier to write something that would analyze HID, especially considering you have other controls on shnth, so you could have more complex gestures. also it’d be interesting to use shnth both as a sound generator and a controller; i imagine a shnth+aleph combo could provide some really interesting results, processing shnth with aleph while controlling aleph with shnth as well.

i guess both cases are really just ways to extract meta information from a data stream - and now i wonder what would happen if HID output was converted directly to voltage and you then ran audio analysis on that…

it’d be pretty funny, just as a concept, to use shnth to control the white whale module, which would then be used to trigger a gieskes vu-perc, so you’d have piezo -> microprocessor -> usb -> microprocessor -> cv -> piezo! :slight_smile:

That’s an interesting thought, and although I know very little about machine learning algorithms, I would imagine that there would have to be a time-sensitive component to the learning, as the piezo would always produce the same “s curve”, just over different periods of time.

…which of course drives the next microprocessor!
A conceptual and hardware feedback loop!

yeah, you would need to accumulate some data over time. like say, you would train it to respond differently to faster/slower pushes, or how often they happen. with shnth you can also combine it with other controls, so, say, generate a trigger when 2 piezos are pushed at the same time etc etc

Yeah, could do all sorts of comparisons as well. Probably easiest to do without learning algorithms and just go with regular temporal measures (like baseline to max voltage over x amount of time gives you an ‘up’ value, and the same for the ‘down’ side of the s curve).
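Something like this, roughly (a hypothetical sketch, all names invented):

```python
# sketch of the temporal measure idea: time from baseline to max gives
# an 'up' value, max back down to baseline gives a 'down' value.
import numpy as np

def s_curve_times(v, sr, baseline=0.02):
    """v: one gesture's worth of piezo samples, sr: sample rate.
    returns (rise_seconds, fall_seconds)."""
    v = np.asarray(v)
    peak = int(np.argmax(v))
    above = np.nonzero(v > baseline)[0]    # samples above the baseline
    start, end = above[0], above[-1]       # first/last baseline crossing
    return (peak - start) / sr, (end - peak) / sr
```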

yep! i just like the concept of “training” a firmware to your specific way of playing.

Dear god that Ian Chang video. This is terribly exciting stuff. Regretting never getting that drum kit… Guess it isn’t too late though :slight_smile:

I know with these things you need to get a bit crazy about the preamp to get reasonable low-frequency performance, shoot for 20Meg input impedance or something stupid. Wonder if it’s possible to ‘tune’ the inherent high-pass filter of this circuit down to 0.1Hz or so through judicious choice of transducer & preamp. Maybe simply measure the exponential decay associated with the spikes & apply a fixed Wiener deconvolution FIR filter in the digital domain.
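Back-of-envelope: the piezo capacitance into the preamp’s input impedance is what forms that filter, so the corner is just f = 1/(2πRC). Assuming a ballpark disc capacitance of 80nF (a guess; this varies a lot with the transducer):

```python
# back-of-envelope corner frequency of the piezo/preamp highpass
import math

R = 20e6    # 20Meg input impedance
C = 80e-9   # ~80nF assumed piezo capacitance (varies a lot by disc)

f_c = 1.0 / (2.0 * math.pi * R * C)
print(f"corner: {f_c:.3f} Hz")   # ~0.1 Hz with these values
```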

This makes me think gate extraction & adsr envelopes could be fun on aleph - just start by shooting for a reasonable-sounding autowah then follow the white rabbit…

one obvious but nevertheless interesting thing about solid body electric guitars is they convert vibration -> analogue signal without making much sound. What small hand percussion instruments have been designed which share this strange quality?

wasn’t familiar, but yes it seems to be essentially the same idea - what if that table they’re banging on was not a table but something like the springboard or other random stuff, using different beaters & strikes to generate audible transients, then using those to trigger karplus, dsyn or other kinds of digital resonators.

2 Likes

I tried out Impaktor last night after @glia recommended it [thank you!], and it does take into account the source material fed in – I think it’s a combination of mixing it in and having it influence the character of the synthesized sounds as well, so I was using different strikers and hitting glasses and whiskey and scraping combs against the mic all to a pleasing mix of sounds and textures.

It might not be quite as advanced as what you had in mind, but I know that I’m certainly going to get well more than the $5 I spent on this out of it. The synthesis engine is rather interesting/deep as well; I was pleasantly surprised. Pretty impressive physical modeling stuff, but also very nice for harsh noise/weird tones as well. Throw in on-board looping and recording (which I didn’t realize came as part of the deal) and you’ve got a pretty capable little device.

I think ultimately I’ll get the most use out of it just treating it like any other instrument hooked up to the mixer and run through some send fx et al, but even on its own it’s a joy and could be potentially quite useful.

1 Like

After doing some more research (this thread was quite useful), and generally wrapping my head around stuff, I think I have a clearer idea of the modules needed/wanted.

As far as funky function generators go, Just Friends actually looks really badass. I hadn’t looked at it in detail until today, but that coupled with the swoop could cover a lot of function (and oscillator) ground.

So a couple dedicated sound sources to start with (fourses for analog gristle, and ERD for digital gristle), a filter (sprott), some function generators (swoop, just friends), a VCA of some kind (don’t know which yet, veils looks good, but kinda big), and then the ES stuff to tie it all together. (probably some utility spackle in the mix as well I’m sure)

3 Likes

it’s been said many times on this forum, but JF + Cold Mac is a great combination, and would give you a weird version of many of the utilities/function generating abilities you’re talking about. JF alone is wonderful for this as well!

I know this is an old thread, but I’ve been considering incorporating the Sensory Percussion drum triggers into my live setup. I’m new to modular/electronics but have been playing drums for about 15 years now. Seems as if there are a lot of possibilities available using MIDI routing from their software to a MIDI/CV converter.

Anyone else using modular/Sunhouse together? So far it seems like Masonself is the only one I can find.

Check out this video from the amazing Masonself:

1 Like

He’s definitely really good at the combination. Much better than others I’ve seen trying to do the same thing.

I’ve got a single SP trigger which I’m looking to incorporate in a similar way (technologically, not aesthetically), but baby steps.

My main gripe with the SP stuff is that I wish it would speak OSC, or at least 14-bit MIDI, since if you’re doing something like in Mason’s videos, you’re only working off a 7-bit resolution going into (infinitely resolute) CV, which is a shame. I’m certain their internal computations on the machine learning side of things are really detailed and fine, but that data sadly doesn’t leave their app/ecosystem.

1 Like

Interesting, so you’re saying the MIDI information sent to Ableton is not high resolution? Sorry, I’m not totally up on all of the intricacies of MIDI.

In general, values in MIDI - both the velocity of a note event and the value of a control event - are only 7-bit (0~127). MIDI also has a high-resolution control event (NRPN) which gives you 14 bits (or ~16k levels).
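For the curious, a 14-bit MIDI value is just two 7-bit data bytes combined:

```python
# a 14-bit midi value is two 7-bit data bytes: (MSB << 7) | LSB
def combine_14bit(msb, lsb):
    """msb, lsb: 0-127 each -> a value in 0-16383"""
    return (msb << 7) | lsb

combine_14bit(127, 127)  # 16383, the top of the 14-bit range
```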

However, as far as I can see, none of the Sensory Percussion controls are continuous - they are only measured and sent upon a drum hit. (I could be wrong about this - but nothing in the manual leads me to believe otherwise.) As such, the major concern is only whether 7 bits is too coarse for the range being measured.

There are only three things that are measured in the SP: Timbre (really the position of the strike on the head), Velocity (how hard you hit), and Speed (they say how fast - but I’m not sure what this means, nor the relationship between it and Velocity). In all cases, a 7-bit range is probably well matched to the degree of repeatable control you can get over the parameter.

The only concern would come if a value were continuous and mapped to something with a wide range, like a filter frequency spanning more than two octaves…

In short - I wouldn’t worry about the 7-bit mapping of controls on the SP.

The Jambé, for comparison, has 10 bits out into its synth engine - but importantly, its pad pressure signals are continuous and can continuously affect the sound after the strike. There, when mapped to pitch bend, for example, the resolution matters.
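To make the resolution point concrete, here’s an illustrative sketch (names invented) of mapping a continuous 10-bit pressure stream onto 14-bit pitch bend; a 7-bit source would move in steps of about 129 bend units, a 10-bit one in steps of about 16:

```python
# illustrative only: continuous 10-bit pressure (0-1023) -> 14-bit
# pitch bend (0-16383); finer source resolution = smaller audible steps.
def pressure_to_bend(p10):
    return (p10 * 16383) // 1023
```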

3 Likes

Thank you for that explanation @mzero. A little bit of this is still going over my head, but it makes sense, I think. I’m not 100% sure of exactly how I’d like to implement the SP stuff, but it would be nice to leave drum duties to myself instead of the op-z, while having some control over certain modules at the same time with some creative MIDI routing.

@Rodrigo: what have you attempted so far with the SP? Curious to hear about your experiments.

Nothing terribly interesting or significant. Mainly a couple of proofs of concept to test things out in Max.

Both of these are using the SP software to only send MIDI, everything else is in Max.

This sampler one plays a bit from a loaded buffer, with the center/edge position determining where in the buffer it plays from, and the dynamics controlling the volume of playback and how long the bit of audio is:
http://rodrigoconstanzo.com/bucket/SP%20Sampler%20Test.m4v
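(Conceptually the mapping is just this; a toy sketch with invented names, the real thing is in Max:)

```python
# toy sketch of the sampler mapping from the video above
def map_hit(position, velocity, buffer_len_ms):
    """position, velocity: 0-127, as sent by the SP software over MIDI."""
    start_ms = (position / 127.0) * buffer_len_ms  # center/edge -> buffer position
    gain = velocity / 127.0                        # dynamics -> playback volume
    dur_ms = 50.0 + gain * 450.0                   # harder hits play longer chunks
    return start_ms, gain, dur_ms
```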

And this one is controlling some synthesis stuff. I don’t remember what is controlling what, but it’s mainly things like center->edge, dynamics, with the rim click making a new random preset:
http://rodrigoconstanzo.com/bucket/SP%20Synth%20Test.m4v

1 Like