Hi all! I am new here so hopefully this question is in the spirit of the process forum.
My question regards workflow in computer modular programs. There are quite a few these days; I use VCV Rack and Reaktor Blocks but this could apply to any of them. I was struck recently by Andrew Belt’s comment that he considers VCV Rack to be a DAW in its own right, which is why the bridge from VCV to other DAWs is deprecated. This raises a philosophical question: what are the positives or negatives of running a modular program as a standalone app versus inside a traditional DAW (e.g. Live)? I’m used to running Reaktor Blocks inside a DAW and using the DAW to sequence MIDI as well as recording audio, but obviously VCV Rack users have a different point of view, one that I’m less familiar with. I’m interested in seeing people’s opinions on this from a compositional viewpoint (one could also view this from a technical viewpoint as well).
I haven’t really followed the progress of VCV Rack since its introduction a couple of years ago. Certainly what it was then was not what I would consider a DAW. At the time, I was unsure whether imitating the Eurorack look and feel was really a good idea – it’s kind of a neat thing to do and I like the aesthetics of it, but it also implied giving up the strengths that software instruments can have in terms of interface. But I suspect third-party modules since then have erased the borders a bit.
I suppose if VCV includes a sufficient feature set, either natively or through plugins, it might be considered a DAW of sorts now. I assume there are tools for audio recording, and maybe editing, and real-time sequence recording? Are there VST host modules where parameters can be assigned to CV inputs? If not, I couldn’t see myself using it; if so I guess it would have to come down to how it feels in actual use.
I’ve recently fallen for Bitwig Studio. I find its Grid mostly very nice to work with, though some of its modules are a little too minimal IMHO, and I hope for even more integration with the rest of the software in the future (e.g. plugins and modulators within the grid itself).
There is an audio recording tool in VCV Rack; I don’t think there’s audio editing functionality or a robust real-time sequence recording tool, although I might be mistaken since there are a ton of modules in VCV. But you’ve hit upon the reason I asked the question – without these capabilities it seems that a modular VST that runs in a DAW would be preferable to VCV, but a lot of people use & prefer VCV Rack so I think I’m missing something.
Also, Bitwig looks really cool – the grid seems like an excellent tool!
I use VCV Rack pretty heavily these days, and while I did try using it with the Bridge while that was still supported, I quickly ditched that and went back to using it standalone.
My preference for working this way? Freedom from the tyranny of the grid/timeline/master clock.
When I’m working with Ableton Live (or any DAW really), I tend to think in terms of time-based boundaries. This isn’t always a bad thing of course, but sometimes it’s an obstacle. As far as Ableton Live is concerned, there are definitely some simple and fun remedies for this with things like Max For Live.
Using VCV Rack standalone is the opposite of what I’ve just described…rather than giving consideration to how I might circumvent the master clock, or the rigidity of bars.beats.ticks, getting everything synchronized is something I can think about after the fact. Or not at all. I find this perspective refreshing to say the least.
Last night I had a lot of fun generating polyphonic MIDI from VCV Rack, sending that out to a few different MIDI channels on an Ensoniq VFX, and then recording the audio from the VFX back into Ultomaton. The results were a world away from what I likely would have done if I started generating MIDI with Ableton Live. I probably spend just as much time sending MIDI out from VCV Rack as I do using it to generate audio.
Freedom from the tyranny of the grid/timeline/master clock.
Yeah, this is a big one. I recently came across a Second Woman interview that made it clear their entire project’s guiding principle was in escaping from the rigidity of DAW grids and rules. They built / commissioned a software suite in Max that lets them manipulate all of the timing, meter, etc. in Ableton in real time.
But those of us who use VCV Rack are lucky because we don’t need to customize Ableton to make it do something it wasn’t designed to do. Like, just use the Marbles / Random Sampler module and you’re already well on your way to nonlinearity, synced time-bending, and generative sequencing.
Ah well last night I was using the NoteSeq module from JW as my main sequencer (always my goto when thinking polyphonically), and then I also had two ADDR-SEQ modules from Bog Audio for adding some additional notes every now and then. I love the ADDR-SEQ because they’re tiny, simple, and useful.
I also use the Bene sequencer from dBiz a lot because it maps rather nicely to the Midi Fighter Twister (Bene is sort of a MN Rene clone).
Polyphony and MIDI sequencing with VCV is really fun. You can get pretty weird with it. One of the things I like doing is taking the polyphonic gate output from NoteSeq, splitting it into 4, delaying each of those 4 split triggers by different amounts (or maybe alternating amounts care of an ADDR-SEQ), and then merging those 4 delayed triggers back with everything else before sending out to a MIDI port. It basically splits apart chords coming out of NoteSeq into little strums and/or subsequences.
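For the curious, the "split, delay, merge" trick can be sketched outside Rack in a few lines. A minimal illustration in plain Python (the function name and the numbers are hypothetical, not anything from the VCV API): offset each voice of a simultaneous chord by its own delay, then merge the events back into one time-sorted stream.

```python
# Sketch of the "split, delay, merge" gate trick described above.
# Names here are hypothetical; this is plain Python, not a VCV Rack API.

def strum(chord_events, delays):
    """chord_events: list of (time, note) pairs firing simultaneously.
    delays: per-voice delay in seconds (cycled if there are more voices
    than delays). Returns the events offset per voice, re-sorted in time."""
    delayed = [
        (t + delays[i % len(delays)], note)
        for i, (t, note) in enumerate(chord_events)
    ]
    return sorted(delayed)

# A four-note chord landing at t=0 becomes a little upward strum:
chord = [(0.0, 60), (0.0, 64), (0.0, 67), (0.0, 71)]
print(strum(chord, [0.0, 0.03, 0.06, 0.09]))
```

Cycling the delay list is what gives you the "alternating amounts" behavior from an ADDR-SEQ: with two delays and four voices, voices 1/3 and 2/4 pair up.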
I use VCV (@ht73 I use an ES-8 to patch out to an analog filter), Ableton Live (mostly for max for live), and Bitwig (excellent MPE support) at different times for different reasons. VCV gets an increasing share of my time. Lacks an arrangement view though.
I would be curious to hear what kind of stuff you ended up coming up with and maybe hearing about your process a bit more. I’ve barely dipped my toe into VCV Rack and have found it intriguing, but complex. I was probably going to move over to the iPad fork, miRack, in the next week or two and begin experimenting.
I use Reaktor quite a lot. If I’m using Blocks, I prefer to use it outside a DAW as being closer to a modular paradigm (YMMV on that, I guess) - try to rely more on modulation & sequencing from the Blocks themselves etc.
There are various downsides (lack of straightforward multi-track recording, lower sample rate recordings internally, etc.). The main benefit (other than the different way of thinking noted by people previously) is freeing up CPU by not having to run a DAW at the same time (although that’s obviously offset by Reaktor not being multi-core).
As for the complexity, I find it much more comfortable when I limit how many modules I have in any given patch. I enforce this by setting my zoom level to 164%…on my 27" display this limits me to two rows/6u and what looks like 104-ish HP. It’s also easier on my aging eyes.
It’s so very tempting to right-click and go searching through the enormous module library to try and find fun new shiny things, but I definitely get more stuff done when I restrict how many modules I have on screen and then focus on really exploring what fits in that limited space.
I feel horizontal arrangement of audio clips on a timeline is pretty orthogonal to the modular mindset.
I think it’s OK to have different mindsets (and toolsets) at different times for different reasons.
My workflow (when using modular systems) is: record a bunch of audio and either call it “done”, or, occasionally, bring the audio into a timeline view for a little cut/paste/edit. I typically don’t enjoy timeline editing (feels like using excel to make music) but there are times when it’s easier than the alternatives (if I keep it to the bare minimum).
Pitches from the NoteSeq16 are sent directly to Operator in Ableton Live. The gates are being split apart and delayed independently before being merged back together. The timing for each of the delays applied to the gates is being modulated by an LFO. The first and fourth gates are being subjected to probability by the Bernoulli gate. The ADDR-SEQ is sending velocities to Operator (mapped to the volume of the modulator in the FM algo).
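As an aside, for anyone wondering what the Bernoulli gate is actually doing to those first and fourth gates: it's a coin flip per trigger. A minimal sketch in plain Python (hypothetical function name, not anything from Rack itself):

```python
# Sketch of a Bernoulli gate: each incoming trigger passes with
# probability p and is dropped otherwise. Plain Python illustration;
# the name is hypothetical, not a VCV Rack API.
import random

def bernoulli_gate(triggers, p, seed=None):
    """Keep each trigger with probability p, drop it otherwise."""
    rng = random.Random(seed)
    return [t for t in triggers if rng.random() < p]

# p=1.0 passes everything, p=0.0 drops everything, and anything in
# between gives the characteristic "thinned out" gate stream.
sparse = bernoulli_gate(list(range(16)), 0.5, seed=7)
```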
Sounds like this so far:
If I continue with this tonight, the first thing I’d do would be to tighten up the timing difference of the gate delays. But hey, I gotta get back to work.
Oh also, this all runs smoothly on my 2010 (!) Macbook Pro. Core 2 Duo haha. I’m pretty sure if I tried to put some DSP stuff in VCV Rack everything would blow up.
Wow… This is wonderful. This is real inspiration for me to get into this. Modulars are just tough to get into and learn. VCV, having very few limitations, gets even MORE difficult. But I want to experiment with recreating what you’ve got here and just pay attention to the process. I really enjoy the result.
I have also downloaded miRack on my iPad. Basically just a portable version of VCV Rack. It would be worth routing the MIDI through to some of the great softsynths on my iPad, or even to GarageBand… I’ve had a thing for portable electronic music making that allows me to get away from my desk, hence the OP-Z, iPad and Norns being my main forms of music making.
Might, when I actually get a chance to sit at my computer/iPad and try this stuff, send you a question or two, if that’s cool. You’ve had beautiful results and I’d love to continue picking your brain.