I’m interested in trying to create a more “instrument-y” interface with my Novation LaunchControl XL and VCV Rack. It has the typical buttons, knobs, and faders, but the VCV Rack patching utilities allow me to use them in more-than-straightforward ways. For example:
I’m inspired by @_js.m_’s “strum” mechanic on faders. (Within a Rack patch, I could calculate a fader’s acceleration and use that as a parameter distinct from fader level.)
If a button is pressed twice quickly enough, it could send a single distinct trigger instead of two quick ones (analogous to how a “double-click” message is distinct from two separate clicks, or how a drum flam is considered a single distinct “note” rather than multiple quick hits).
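Roughly what I mean, sketched in SuperCollider rather than a Rack patch just because it’s easier to paste here (the CC/note numbers, the 300 ms threshold, and the little resonator voice are all placeholders, and it assumes the server is booted):
(
// make sure MIDI is up
MIDIClient.init;
MIDIIn.connectAll;

~faderBus = Bus.control(s, 1);

// write a fader's CC value (77 is a placeholder) onto a control bus
MIDIdef.cc(\fader, { |val| ~faderBus.set(val / 127) }, ccNum: 77);

// "strum": react to how hard the fader is being accelerated, not where it sits
Ndef(\strum, {
    var pos = In.kr(~faderBus.index).lag(0.05);
    var accel = Slope.kr(Slope.kr(pos)).abs; // second derivative of position
    // the acceleration pings a resonator, so only fader *gestures* make sound
    Ringz.ar(K2A.ar(accel.clip(0, 50) * 0.02), 440, 0.3) ! 2
}).play;

// "double-press": two presses within 300 ms count as one distinct event
~lastPress = 0;
MIDIdef.noteOn(\button, { |vel, note|
    var now = Main.elapsedTime;
    if(now - ~lastPress < 0.3) { "double press!".postln };
    ~lastPress = now;
}, noteNum: 41); // placeholder note number
)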
Does anyone have examples or ideas for expressive, innovative MIDI controller macros/mappings like this?
(Side note: the demo video for the LaunchControl seems to pigeonhole it as a mixer, while the Launchpad has clearly crossed over into shreddable instrument territory! Feels like a missed opportunity…)
I’m thankful for anything this might make you think of!
That’s quite a broad and interesting topic.
One can start here («The Importance of Parameter Mapping in Electronic Instrument Design», Hunt et al. 2002): https://dl.acm.org/doi/10.5555/1085171.1085207
This article might be stating “obvious” things (although…), but it frames the problem in a useful way. Then there are like 20 years of NIME articles (Proceedings Archive | NIME) in case you haven’t stumbled upon that already.
To answer from a more personal angle, I did focus a bit on “single-fader instruments” at one point. I was not overwhelmed by my findings and would hardly call any of this «innovative» but some ideas in there might click with you.
I still use faders (insert praise for 16n faderbank here) as my primary “expressive” interface with a modular system, but it’s more like “two-fader” instruments now¹, one for the profile of the sound and the other for “secondary” expressive qualities.
The simple “mapping tricks” that i use regularly are:
using the same fader to control an instrument’s “main process” and its main VCA, i.e. the fader has to be moved from 0 for something to start happening (with possibly some slight inertia): this fader is not just mixing in an ongoing continuous voice, but putting the voice in motion as well (see the sketch after this list).
map the travel of the fader to an arbitrary function that happens to be better suited (to avoid narrow sweet spots).
dividing the travel of a knob/fader into various zones (that is a bit vague, but for example such a knob could simply scrub a buffer/sample containing pre-recorded CV)
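not a patch i actually use, just a loose SuperCollider sketch of how those tricks could sit on one fader (MouseX stands in for the fader; the pinged-resonator “voice”, the curve, and the zones-as-pitches are arbitrary choices, and trick 3 would really be scrubbing a buffer of pre-recorded CV instead):
(
Ndef(\oneFader, {
    // MouseX stands in for a MIDI fader
    var fader = MouseX.kr(0, 1);
    // trick 2: reshape the travel with a curve so the useful range isn't one narrow sweet spot
    var shaped = fader.lincurve(0, 1, 0, 1, 4);
    // the "main process": the fader sets the rate of a little pinged resonator
    var rate = shaped.linexp(0, 1, 0.5, 40);
    // trick 3 (loosely): the travel is also split into zones, each picking a pitch
    var zone = (fader * 4).floor;
    var freq = 200 + (zone * 150); // 4 discrete pitch zones
    var voice = Ringz.ar(Impulse.ar(rate), freq.lag(0.05), 0.2);
    // trick 1: the same fader also opens the VCA, with a little inertia,
    // so nothing happens until the fader leaves 0 and puts the voice in motion
    var vca = shaped.lag(0.3, 1.5);
    voice * vca ! 2
}).play;
)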
I tend to tune control depending on the sound i want to get and my expectations of expressiveness, but other approaches (which i’m sure will get shared here) would be interesting.
¹: in most cases, 3 faders controlling a single sound gesture is too much for me to keep track of in an instrumental way
Really interesting write-up, thanks for sharing. I’d really like to explore these concepts - I felt quite disappointed when I realized buying my 16-knob controller didn’t result in utmost control, but rather frustration that I couldn’t remember what knob affected what parameter (which to be honest still plagues me with my Grid too!) This more intuitive approach seems great to dive into.
Update: Here’s a real quick first try at making “knobs” (mouse X/Y) react to multiple parameters with varying curves, an empty center point, and differing behaviors (well, directions) +/- the center. Keep your mouse at the center of the screen for silence, nudge it around for noises. Certainly could get more interesting than this, but it’s a start… zones in particular are interesting to me - possibly easier to tackle in Lua for Norns than SC at least with my level of knowledge.
(
// filter pinger
// mouse x controls volume, rate, freq, res, reverb send
// center point is empty
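// (DFM1 is from the sc3-plugins extensions)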
Ndef(\pinger, {
    var mid = 0.5;
    var mx = MouseX.kr(0, 1);
    var mxDir = (mx - mid) * 2; // -1 to 1
    var mxNorm = (mx - mid).abs * 2; // 0 to 1
    var clickRate = mxNorm * 32;
    var clicker = Impulse.ar(clickRate);
    var pingerFreq = 1000 + (mxDir * 400);
    var pingerRes = 0.8 + (mxDir * 0.18);
    var pinger = DFM1.ar(clicker, pingerFreq, pingerRes, 1);
    var amp = mxNorm.lag(3, 3);
    var reverbIn = amp * amp * amp * pinger; // cubed
    var reverb = FreeVerb.ar(reverbIn, 1, 1, 0.05);
    ((pinger * amp) + reverb) ! 2
}).play;
// fm drone
// mouse y controls volume, fm, pitch, reverb send
// center point is empty
Ndef(\fmDrone, {
    var mid = 0.5;
    var my = MouseY.kr(0, 1);
    var myDir = (my - mid) * 2; // -1 to 1
    var myNorm = (my - mid).abs * 2; // 0 to 1
    var modAmount = (myNorm * 20);
    var xmod = LocalIn.ar(1);
    var pitchShift = (myDir * 8);
    var osc1 = Pulse.ar(1440.2 + pitchShift + (modAmount * xmod));
    var osc2 = Pulse.ar(64.7 + pitchShift + (modAmount * osc1));
    var mix = (osc1 + osc2) * 0.5;
    var filterFreq = 50 + (myNorm * 400);
    var voice = DFM1.ar(mix, filterFreq, 0.8, 0.5);
    var amp = myNorm.lag(3, 3);
    var reverbIn = amp * amp * voice; // squared
    var reverb = FreeVerb.ar(reverbIn, 1, 1, 0.03);
    LocalOut.ar(osc2);
    ((voice * amp) + reverb) ! 2
}).play;
)
Love this, thank you! Some great general theory, but also a few specific examples I liked:
a wiring mistake by the student meant that the ‘volume’ antenna only worked when your hand was moving. In other words the sound was only heard when there was a rate-of-change of position, rather than the traditional position-only control. It was unexpectedly exciting to play. The volume hand needed to keep moving back and forth, rather like bowing an invisible violin. … it felt as if your own energy was directly responsible for the sound.
“I can’t get my mind to split down the sound into these 4 finger controls.”
I like the point this makes about complex mapping being both harder and easier to play. Makes me think about the trope of filter sweeps in electronic music, a very linear modulation that makes sense as a performable gesture when your instrument has EQ params. The “filter this” feature on acoustic instruments is a complex result of technique, if it’s flexible at all.
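That “bowing” accident from the first quote seems easy to fake in the same mouse-as-controller spirit as the sketches above; here’s a minimal attempt (the sine voice and the numbers are arbitrary) where loudness comes only from how fast the position is changing, not from where it sits:
(
Ndef(\bowed, {
    var pos = MouseX.kr(0, 1).lag(0.02);
    // amplitude follows the rate of change of position, not the position itself,
    // so the tone dies away unless you keep "bowing" back and forth
    var speed = Slope.kr(pos).abs.clip(0, 10) * 0.1;
    var amp = speed.lag(0.05, 0.4);
    SinOsc.ar(220, 0, amp) ! 2
}).play;
)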