Mapping VSTs to Controllers--does anyone else have this problem?

I don’t know if there’s a good thread into which to merge this, but I have a problem.

How do people include patch modification in a laptop + software + controllers performance without mouse clicks or live coding?

Every time I open a VST synth and think about using it to make music, I get utterly confused about which parameters I should be mapping to controllers in order to make music in real time. The logical thing is to “make the patch and decide on what I want to control in performance” but that’s not actually how I make music with hardware synthesizers. I get a sound I like, then I play some and morph the sound at the same time. Most of the synthesists I listen to also do something like this–they don’t just pick a patch and play.

So if I’m using my Linnstrument and some CV modules to control my Majella Implexus, it’s easy. I get a sound I like, I play it, and I can adjust the sound in all sorts of ways in real time.

But I open up Aalto on my computer, which I love, and I can mouse around to make a sound and then decide how I want to play it, but that feels like a much more sequential way to do it.

Compared to a relatively WYSIWYG hardware instrument, like the Buchla it’s based on, Aalto’s very well designed software interface presents a bewildering number of parameters.

Let’s say I have 16 or 32 knobs plus aftertouch, velocity, pitch and Y axis. Do I map parameters like pitch and time? Modulation amounts? Sequencer steps? How am I supposed to decide ahead of time? How have others solved or negotiated this issue?

Edit to add: I know where this paradigm comes from, which is the “performing keyboardist” and there it makes sense. I play some synth in a post rock band and there I do use patch memory and a limited number of parameters per song. You could say the same thing about touch guitar and pedal settings. But that’s not how I want to do it when I’m making open form synth music.


That’s why I believe this whole paradigm is off.
Instead of mapping controls to parameters, we should have snapshots of states and use controls to morph between them. It’s much more agile and modular.
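A minimal sketch of that idea in Python: morph linearly between two saved snapshots with a single control. The parameter names and 0..1 ranges here are hypothetical, just to illustrate the concept.

```python
def morph(snapshot_a, snapshot_b, t):
    """Linearly interpolate between two parameter snapshots.
    t = 0.0 returns snapshot_a, t = 1.0 returns snapshot_b."""
    return {name: (1 - t) * a + t * snapshot_b[name]
            for name, a in snapshot_a.items()}

# Two saved states of a (hypothetical) synth patch:
bright = {"cutoff": 0.9, "resonance": 0.2, "attack": 0.05}
dark   = {"cutoff": 0.2, "resonance": 0.6, "attack": 0.40}

# One knob (0..1) sweeps the whole patch between the two states:
halfway = morph(bright, dark, 0.5)
```

One knob now moves every parameter at once, so the "which parameters do I expose?" question becomes "which states do I save?" instead.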


I have begun using Kore 2 (believe it or not) to address this issue. I have the Kore 2 controller, which is very fiddly (I suspect a driver update made it so; it seems to be an issue for many people). The software, however, is still very usable and useful. Basically, you can use any MIDI controller; let’s say one with 8 knobs for purposes of this discussion. The 8 hardware knobs can be mapped globally to the 8 knobs in the software, so that whatever VST parameters are assigned to those 8 software knobs on a given page are controlled by the hardware. When you change the page in the software, 8 different VST parameters automatically become available to the knobs, without having to re-map the controller.

Kore 2 assigns most VST parameters to different knobs and pages automatically, so when you open a VST, and you have assigned the midi controller knobs to the 8 knobs in the software, you have access to all the VST parameters without having to do anything else - just flip through pages to see the various parameters that you can adjust with your controller knobs. There are also many Kore 2 templates for many of the most popular VSTs that have been developed by users over the years.

I’m sure I am not explaining this clearly, but the bottom line is that by mapping 8 knobs from your hardware controller to Kore 2 once, you can access almost all of a given VST’s parameters as soon as you load the VST, without doing anything else.
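The paging idea described above can be sketched as a tiny data structure. This is just an illustration of the concept, not Kore’s actual implementation, and all the parameter names are made up:

```python
class PagedKnobs:
    """Eight physical knobs routed through pages of plugin parameters.
    Changing the page re-targets the knobs; the hardware MIDI mapping
    itself never has to change."""

    def __init__(self, pages):
        self.pages = pages   # each page: a list of up to 8 parameter names
        self.page = 0
        self.values = {}     # parameter name -> last value sent

    def next_page(self):
        self.page = (self.page + 1) % len(self.pages)

    def turn(self, knob, value):
        """Physical knob `knob` (0-7) moved to `value` (0.0-1.0)."""
        param = self.pages[self.page][knob]
        self.values[param] = value
        return param

ctrl = PagedKnobs([["cutoff", "resonance", "attack", "decay",
                    "sustain", "release", "lfo_rate", "lfo_depth"],
                   ["osc_mix", "detune", "glide", "fm_amount",
                    "noise", "drive", "pan", "volume"]])
ctrl.turn(0, 0.7)   # knob 1 sets "cutoff" on page 1
ctrl.next_page()
ctrl.turn(0, 0.3)   # the same physical knob now sets "osc_mix"
```

The point is that the map from hardware to software is fixed once, and only the page pointer moves.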


Many years ago I tried Novation’s Automap, which was a great idea that didn’t work. At least not for me. I also tried Ableton Push which does something like that with some devices, though not all.


Push works pretty well for Ableton devices. Some people seem to enjoy the Native Instruments approach to hardware/software hybrids.

An option I haven’t tried but that I wonder about is using banks in a FaderFox controller. Maybe that way you wouldn’t have to make complicated decisions in advance about which parameters to expose. This is the only approach I can think of that would appear to work with any hardware/software combo. I just don’t know if the ergonomics of the FF controllers are good enough to accommodate as many parameters as something like Aalto provides.

Another option that I have tried is to use Arturia Analog Lab’s 8 macro knobs. Basically, every patch designer has already picked out 8 macros they feel are most important for you to have fast access to. They are often great starting points for dissecting a patch too.

Another way I’ve dealt with this is using touch or pen displays, both iOS and Mac.


You don’t really want parameters to be automatically assigned. There are too many of them, and they won’t group logically in a way that makes sense.

I use Bitwig and the built-in remote/macro system is beyond-belief great. Bitwig plus a script called DrivenByMoss and a controller like a Midi Fighter, or something with motorized faders, is pure bliss.

You build collections of macros and then set them up to modulate single or mixed parameters to the scale that you want. Then you assign them to remotes that live on pages of 8 that you can page through with buttons on your controller.

It is easy to map parameters and you can do it as you go, and then save different parameter setups as presets… that way you can have different and flexible setups that you can switch between.

Anyway, if this is something that really interests you, you could do a lot worse than checking out a demo of Bitwig.


Although we are throwing product names around, I don’t really feel this is a solved problem. Bitwig does a great job of providing a macro framework, but you still have the fundamental problem of choosing which parameters to map, complicated by the fact that the parameters aren’t likely to be the same from patch to patch.

Saving macro mapping with the patch is helpful, and you can do that in both Ableton and Bitwig. I have found that Bitwig is less likely to do surprising things than Ableton with regard to keeping macros and vst parameters in sync.

I think it’s a little premature to say “automap is an invalid concept”. I just don’t think anybody has nailed it yet. The vst parameter controller mapping problem will be a very valuable problem to solve for the company that eventually does nail it.


My approach so far has been: map random parameters until it sounds good. Of course this technique is only valid for exploring different mapping combinations. I’m currently trying to build a Max for Live device that can automatically map random parameters found in a device to a fixed number of macros. The amount and sign of the modulation applied to each parameter is also randomised. I think this approach makes working with software a lot more fun, but I can understand that it’s not a general solution and not ideal for everybody.
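As a rough sketch of that randomised mapping, in plain Python rather than Max for Live (the parameter names and depth range are invented for illustration):

```python
import random

def random_macro_map(param_names, n_macros=8, seed=None):
    """Pick random device parameters for a fixed set of macros, each
    with a random modulation depth and a random sign."""
    rng = random.Random(seed)
    chosen = rng.sample(param_names, k=min(n_macros, len(param_names)))
    return {f"macro_{i + 1}": {"param": name,
                               "depth": rng.choice([-1, 1]) * rng.uniform(0.1, 1.0)}
            for i, name in enumerate(chosen)}

params = ["cutoff", "resonance", "attack", "release",
          "lfo_rate", "detune", "drive", "wet"]
mapping = random_macro_map(params, n_macros=4, seed=42)
```

Re-rolling the seed gives a fresh set of mappings to audition, which is the fun part.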


To me, the time consuming part is mapping the parameters. This is where Kore 2 does a great job, mapping essentially every parameter to 8 different knobs across multiple pages. At that point, I can simply delete any mappings that aren’t useful to me, once I have tried them out, and rearrange those that remain in a way that is useful to me.


i like ableton live push ( i still use v1 happily ) as it allows for full navigation of available parameters of each device / channel / fx etc via button press and knob turns. it doesn’t remove all mouse work for me but it does in my case usually solve mapping questions in practice.

realistically there’s no way for software to read our minds (at least in current future-present) to auto control only what we want out of so many options so i’d just like to be able to bop around and control what i want easily. and of course there are macros and savestates to keep something if you want to streamline or reuse a nice patch.
and nothing can replace simple time spent in sound design, setup and seeking prep for stuff that’s personally/musically useful.


My biggest gripe with this problem is that it’s often too tedious to do a lot of mapping manually. Basically all VSTs (and even stock DAW plugins) require mapping things with a mouse. Navigating the various interfaces of different VSTs is always a pain for me. I searched for a long time for a way to script all this, but it looks like VST is just a graphical environment.


You can definitely access vst parameters programmatically.

I wonder how Kore 2 compares to the NI Komplete Kontrol stuff in terms of usability. Quite a bit more compact, so that’s a plus.


Sure, but it’s still something you need to build yourself. There’s no off-the-shelf tool that does this, especially in DAW-related ecosystems, which is why I’m building my own M4L stuff.


This is precisely the issue for me. I’m not as good an instrument designer as the professionals. I’ve done the macro mapping thing before, but that means making all patch decisions ahead of time. Maybe I’ve just been ruined by modular synths and hardware interfaces.


I rarely decide ahead of time, but rather once the music making process has begun.

When I have decided ahead of time, it’s usually for instruments I use the most often. Operator, for example, I have mapped to a MIDI Fighter Twister in such a way that each row of 4 encoders represents one operator. Pushing the encoder engages a secondary parameter. So each row is:

Level|Velocity - Coarse|Fine - Attack|Decay - Sustain|Release

…and this premapped instance of Operator always opens in my default set.

Another page of the MIDI Fighter Twister is kind of a generic set, with each row representing 8 params, which I then try to group logically for whatever instrument I’m using. It keeps things somewhat predictable for me:

Row 1 - 8 oscillator-ish params
Row 2 - 8 filter-ish params
Row 3 - 8 envelope-y params
Row 4 - 8 whatever params

It’s far from perfect, but honestly I made peace with this whole issue a while back when I just decided that it’s part of the cost of the (often extraordinary) convenience of using software.


yes, this.

My main instruments have preset remote pages that I can swiftly switch between on my Midi Fighter. Anything else I want to grab control of I can set to a knob on a remote page in Bitwig in two clicks. My favorite instruments have dozens of remote pages I’ve made that serve various purposes: one will have all the oscillator frequency fine-tune controls mapped, one will be a mixer between all the oscillators, etc. I can have the same control mapped to many pages that way.

The impulse to control is the right one though. Once I get going I often half close my laptop screen and just play without any visual aspect. That feels right.

Here’s a control setup for my looper/feedback matrix:

There are 11 pages of remotes for this that cover all the controls plus some special cases. I made these pretty much on the fly while I was playing, and it’s easy to switch between them from the controller without touching my computer. To the right of the remotes you see all the macros I have set up. These you click on, but they represent all the settings I want access to for the larger preset, which has 22 VST plugins in it and a custom Grid patch I made that controls the whole thing. When I use this I never open any plugins or other views.

In Bitwig, macros can be mapped to multiple controls with separate curves and ranges for each mapping. In that setup you’ll see a lot of trim macros; these are set so that their whole sweep is a small fraction of a control’s range. This way you can set the coarse control with one knob on the Midi Fighter and then fine-tune with another. And since macros can be one-to-many, I can make a trim knob that increases or decreases the feedback setting across multiple plugins, relative to the different settings those controls may have. Remotes are always 1:1, so I make macros for the flexibility and then assign those to remotes.

Also, anywhere in Bitwig you can map a control, you can easily insert a modulator: LFO, step sequencer, envelope, audio sidechain, CV in, and a bunch more. All these features are very well implemented; it’s not just “you can do this or that”, it is a joy to use in practice.
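The coarse-plus-trim idea boils down to mapping one macro value through a per-target range. A hypothetical sketch (parameter names and ranges invented, not Bitwig’s API):

```python
def apply_macro(value, targets):
    """Drive many targets from one macro (value in 0.0-1.0).
    Each target has its own (lo, hi) range; a 'trim' target uses a
    narrow range so the full macro sweep only nudges the parameter."""
    return {name: lo + value * (hi - lo) for name, (lo, hi) in targets.items()}

targets = {"delay.feedback":  (0.0, 1.0),     # coarse: full sweep
           "reverb.feedback": (0.45, 0.55)}   # trim: +/-5% around 0.5
settings = apply_macro(0.5, targets)
```

Turning the trim knob end to end only moves its parameter by a tenth of the coarse knob’s travel, which is exactly the coarse/fine behaviour described above.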


Out of curiosity, what controllers are you using with Bitwig? I have found Live easier to map to controllers (no script needed) but I do like Bitwig’s modulators.

if you use Bitwig, you gotta use the DrivenByMoss Generic Flexi script, unless you have a controller that DrivenByMoss has a specific script for, like the NI Komplete Kontrols and several others.

It is a wonder of a script. Takes a bit to get it going and then that’s all you need.

I used a Behringer BCF motorized fader controller with this script for several years and I really liked it. As you flip between pages, plugins, and tracks with the buttons, the faders SNAP into position to properly reflect the state of whatever controls are in focus. But then I broke the BCF and decided to try something else. I got the Midi Fighter because of the size and because it has internal “page flipping” of the controls, which made me think I could have my normal controls, then a page or two for the mixer, a page for my master buss EQ, things like that.

I haven’t used it long but I really like it so far.

With drivenbymoss and a little time I doubt you’d ever find yourself missing Live’s controller mapping scheme again. Just my opinion.


I don’t try to maintain any sort of consistent control schemes, it’s all ad-hoc, part of my patch-from-scratch approach.

A lot of my software synth patching is just drones, so they probably get a fader or two on the Sweet Sixteen, or expression pedal. Maybe a footswitch or two for transposing.

If I’m going to play a softsynth beyond that, I’ll use the Launchpad Pro mk3 to control it. Almost always I’ll use pressure for dynamics, and maybe something like filter, folding, timbre etc along with that, maybe also a vibrato/tremolo sort of modulation. That’s usually all, though sometimes I’ll also add a fader or expression pedal to some parameter as well – on the synth or an effect.

I tried the DrivenByMoss script and found I’d rather just use it as a generic MIDI controller. I understand there’s now a script specifically for the LPP mk3 that exactly replicates its functions as labeled on the device, but I haven’t tried it yet. I’m not really doing any sequencing in Bitwig outside the grid. I really just like the LPP to play notes or to sequence with. I don’t have anything with a piano-style keyboard anymore, unless the buttons on West Pest count (and they don’t) :slight_smile:


I agree. I’ve been very into this approach when I’ve had easy access to it, e.g., AudioMulch, Tim Exile’s SLOR and SLOO instruments, apeSoft apps on iOS, though with less interesting results on those last ones.

I’m curious if anyone else has suggestions of state-morphing systems, whether it be simply inside a single instrument or across multiple plugins or a whole DAW. Particularly interested in Ableton, Bitwig, or iOS solutions.

And to the main point of the thread, I’m drawn to this approach too because, at least for me, it lends itself to more interesting and performative results. I have controllers that can take advantage of multiple pages, but I’m not keen on paging through all that business when it comes to playing live, or even in the “studio” for that matter. Making the most drastic changes with the fewest knobs/faders is my ideal environment. If state-morphing is the game, then the need to assign a bajillion controls goes away—I think.