Development log for the monome-rack plugin for VCV Rack, with ports of the open-source firmware from the monome eurorack modules (teletype, ansible, and the ‘trilogy’ of white whale/meadowphysics/earthsea), support for hardware grids and arcs of all editions, and a software grid emulation. [GitHub link]
Original prototype post
Hi all! I had some quiet time in the Vermont mountains over the holiday and in between snowboarding and eating 1.5 entire apple pies I got a chance to tackle something I'd been curious about, as I know others have: porting monome modules to run inside VCVRack.
Aside from the usual reasons why this would be desirable—practicing & experimenting while away from your rig, exposing the tools to new people—I was even more motivated by the idea that this would make it easier to develop and test alternate firmwares for the hardware modules, and make it easier to build in compatibility for grid sizes/models that one doesn’t own.
To make the Rack version a good simulation environment for the hardware, I wanted to keep the virtual module code as close as possible to the firmware code. By hacking together a thin shim layer for (some of) the AVR I/O systems, I’m able to run a build of the WW firmware with only a couple of small cosmetic changes to main.c. (Large sections of libavr32 are mocked or omitted entirely, but important chunks like the event system, the clocks, and monome.c are used unaltered.)
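For a sense of how thin such a shim can be, here’s a minimal sketch (not the actual monome-rack code) of reimplementing a few AVR32 ASF GPIO calls on the desktop side, so unmodified firmware code links against plugin state instead of real pins:

```cpp
#include <array>
#include <cstdint>

// Stand-in for the MCU's GPIO bank; the Rack module reads this each sample.
static std::array<bool, 64> virtualPins{};

extern "C" {
// Same signatures the firmware already calls (from the AVR32 ASF GPIO
// driver); the bodies just write to plugin-side state instead of hardware.
void gpio_set_gpio_pin(uint32_t pin)  { virtualPins.at(pin) = true;  }
void gpio_clr_gpio_pin(uint32_t pin)  { virtualPins.at(pin) = false; }
bool gpio_get_pin_value(uint32_t pin) { return virtualPins.at(pin);  }
}
```

The Rack module can then poll `virtualPins` in its audio callback and drive its output jacks from whatever the firmware last wrote.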
I’m happy to report that this effort has met some initial success! I have a Rack plugin that provides a feature-complete white whale emulation, plus a virtual 128 grid module that simulates (some of) the serial messages that a real one would send. The WW module can connect to one of these virtual grids, or to a real serialosc-connected grid through the right-click menu.
Here’s a quick photo, I’ll upload a video as soon as it’s downloaded from the camera.
(The 256 and the virtual grid are showing different data because the 256 is showing an older state from when it was disconnected. Right now it’s one or the other, you can’t have the virtual grid mirror the real one, but that’s a possibility for the future.)
This is a lesser experience than the hardware, even if you have a physical grid; being able to simultaneously touch the grid and a physical param knob is a big part of the usability of WW. Having said that, I’ve done what I can to make the virtual experience tolerable: on the virtual grid, you can command-click keys to hold them down (necessary for alt/meta key combos), and you can click-drag across multiple keys to tap out runs.
This is a proof-of-concept, not ready for release yet. There are a number of issues to fix and a bunch of questions I wanted to get feedback on before I go further. To get a couple of likely questions out of the way: yes, I suspect it will be easy to extend this to cover the other trilogy modules (and their alt firmwares). Teletype should also be possible with additional work, and that’s something I’m super interested in. Ansible is trickier due to the increased complexity of the USB features.
Is anyone else working on this or related problems? Am I trampling in anyone’s flowerbeds?
Does monome have a position on use of brand/product names in derivative works, like Mutable Instruments’ (completely reasonable) desire to only have their name on the genuine article?
I started this as part of my fork of the whitewhale repo, but it probably ought to move to a separate plugin umbrella repo that links in the firmware repos as submodules. Anyone have opinions on where that repo should live?
Does this model of connecting to real & virtual grids with the right-click menu make sense? I think it would be pretty neat if VCVRack had a whole simulated serial connection ecosystem to take advantage of, but that’s a bigger conversation.
What other features do people expect from their virtual grids? Tilt? Multitouch display compatibility? Custom LED colors?
I started playing with VCV Rack about two weeks ago and fell in love with it. My physical rack is purely for studio play though I often wish I had something small and portable to noodle on when I’m sitting in a coffee shop. VCV Rack is the perfect extension for me and is currently installed on my Surface Book running Windows 10.
The new version v0.5.0 has been seriously stable for me. There are about 35 packs of modules available, with 97% of them free.
I was just thinking over this long weekend about how nice it would be to have a Teletype, a bunch of Telex, and a virtual Grid and Arc to play with when I’m flying light. I’ve already run a connection between my main system and VCV Rack; it could use some polish, but by January I think the developer Andrew Belt will have many of the nits worked out, making it yet another way to extend the signal flow.
I would like to add my voice to those who’d like to see Monome available on VCV Rack.
Is it possible that a hard fork of the source code might be easier to maintain?
AFAIK VCVRack is C++ too; a hard fork would mean you could take advantage of the C++ bits, get rid of all the global variables, and generally not have to worry about maintaining compatibility between a microprocessor-based C project and a desktop-CPU-based C++ project.
Speaking of VCV Rack, has anyone found a good place where discussions are happening about it? There are at least two Facebook groups, one official and one unofficial, and a fairly little-used forum at switchedonrack.com/forum.html. Thanks.
Aleph codebase already has an avr32_sim library. It falls short of full hardware emulation. That always seemed like a tantalising prospect though - mostly just because flash cycles are quite long & I like using gdb when things explode. But for an aleph, full simulation is even trickier than for a eurorack module because the device has a DSP coprocessor (although we also emulate that to some extent). It’s still a lot of work to pull everything together, and I don’t think a VCV rack emulation of a full aleph would be that great a thing anyway. The main benefit of emulation there is dev tooling, and we already found more pragmatic solutions to the same problems.
Anyway I’ll be watching your progress on this with some considerable interest!
Also, I spoofed the monome driver at a higher level to hook into serialosc in order to run bits of our firmware inside of puredata. This hack definitely comes highly recommended! (It should solve your USB hotplug issues.)
A hard fork would have been a more straightforward way to get one module into the VCVRack ecosystem. But it wouldn’t result in a workflow that allows testing/debugging module firmware in a software IDE! Like I said, that’s my secret motivation here. I also didn’t relish the idea of reimplementing every single feature in WW, then starting over again from scratch with, say, Teletype. This strategy should hopefully be more maintainable than having to keep a set of hard forks up to date.
Thanks! Completely understand re: user expectations.
The shim layer is really nothing special, it’s literally the simplest, dumbest thing I could make work as I went along. I have some ideas on how it could be generalized a bit into a C++ interface on the VCVRack side that calls a set of exported hooks from the cross-compiled firmware .dylib.
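As a rough illustration of that idea (all names here are hypothetical, and the bodies are placeholder stubs standing in for the real firmware), the exported hook surface might look something like:

```cpp
#include <array>
#include <cstdint>

// Stub state standing in for the firmware's globals.
static std::array<float, 4> cvOut{};

extern "C" {
// Hypothetical hooks a Rack-side C++ wrapper would call on the
// cross-compiled firmware .dylib.
void fw_init() { cvOut.fill(0.f); }                         // main()'s setup, minus the loop
void fw_set_cv_input(int ch, float v) { cvOut.at(ch) = v; } // stub: just passes through
float fw_get_cv_output(int ch) { return cvOut.at(ch); }
}
```

The Rack module would call these once per sample (or per block), keeping all firmware state behind the dylib boundary so multiple module instances don’t stomp on each other’s globals.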
Oh nice, I did not know about the avr32_sim library! Will look it over, thanks. From a quick skim it seems like we had the same basic idea, with a few different choices on where lines are drawn, and yours is a lot more extensive.
Spoofing the driver and hooking into serialosc was something I considered – it means all module connections could go through serialosc, which would simplify things – but I think that’s orthogonal to my current hotplug problem. That issue is because I added hardware grid support really quickly once I got home on Sunday (didn’t bring my grid to the mountains with me, poor decision) and all I did was drop in daniel-bytes’ excellent serialosc C++ example code. I’m not handling any of the exceptions it throws on writing to disconnected ports, etc., hence the crash. Should be easy to fix.
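For illustration, the fix amounts to guarding the device writes; sketched here against a made-up device class rather than the real serialosc wrapper:

```cpp
#include <stdexcept>

// Hypothetical device wrapper; writeLed throws once the USB port is gone,
// mimicking what a serial write to a yanked cable does.
struct FakeGridDevice {
    bool portOpen = true;
    void writeLed(int x, int y, int level) {
        (void)x; (void)y; (void)level;
        if (!portOpen) throw std::runtime_error("port closed");
    }
};

// Guarded write: on failure, flag the device disconnected instead of
// letting the exception take down the whole plugin.
bool safeWriteLed(FakeGridDevice& dev, bool& connected,
                  int x, int y, int level) {
    try {
        dev.writeLed(x, y, level);
        return true;
    } catch (const std::exception&) {
        connected = false;  // drop back to the "no grid" state; Rack keeps running
        return false;
    }
}
```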
thank you for doing this! this is exciting on so many levels. allowing more people to use the module firmwares is great in itself, but i think this will also provide the incentive to keep them alive and developing further. of course, as a dev i’m most excited about being able to test / sketch ideas easier, i’m glad this is also your motivation.
i’m also hoping this will serve as an additional push for future firmwares to be developed in a manner that would make porting them to different platforms easier. one of the things i’m planning for the next version of orca is a rewrite that will separate hardware-specific things for this exact reason. @sam did a lot of work separating hardware bits in teletype too. emulating libavr32 is an interesting approach, haven’t considered that! (i need to check out avr32_sim).
That would be cool… so long as you don’t end up with so many #ifdefs that the VCVRack version becomes unrecognisable compared to the module version.
But it does throw up some ethical conundrums: IIRC there are only 100 of each of the trilogy modules in existence, and there are bound to be many more users of the virtual version, which raises the issue of support burden and new features (e.g. a clock input for White Whale). At what point will the tail try to wag the dog?
I’m not suggesting you start from scratch, but make a fork of the existing code base with the intention that any changes you make to get it working in VCVRack are never ported back to the module version (that way it’s easy to add some extra inputs for clocking and pattern changing, etc, etc).
However, if you do want to keep the shim layer (which would admittedly be very cool), one potential solution to the global-variables issue that should require few changes in the module code would be for the shim to run its instance in a separate process (and thus a separate address space), using IPC to communicate between the child process and the parent.
The major downside is that it’s inherently not cross-platform, so you’d need to find some sort of higher-level wrapper. The plus side is that the same mechanism could be used to emulate the II bus…
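A minimal POSIX sketch of that suggestion (purely illustrative; a real version would need an actual message protocol and a Windows equivalent): the parent stands in for the Rack plugin, the child for the firmware instance, talking over a pair of pipes.

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <cstring>
#include <string>

// Send one message to a child "firmware" process and return its reply.
std::string firmwareRoundTrip(const std::string& msg) {
    int toChild[2], toParent[2];
    pipe(toChild);
    pipe(toParent);
    pid_t pid = fork();
    if (pid == 0) {                          // child: stands in for the firmware
        char buf[32] = {};
        read(toChild[0], buf, sizeof buf - 1);
        std::string reply = std::string(buf) + "-ack";  // e.g. ack a clock tick
        write(toParent[1], reply.c_str(), reply.size() + 1);
        _exit(0);
    }
    write(toChild[1], msg.c_str(), msg.size() + 1);  // parent: send the message
    char buf[32] = {};
    read(toParent[0], buf, sizeof buf - 1);
    waitpid(pid, nullptr, 0);
    return std::string(buf);
}
```

Each firmware instance would keep its globals in its own address space, which sidesteps the multi-instance problem without touching the module code.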
Only on the “language” side, alas; most of the UI code is very hardware-dependent.
Adding a shim layer is a cool way to deal with an existing code base or if it’s purely for testing & development. If you’re starting from scratch though I’d humbly suggest that you put the abstraction layer much higher up. It’s something that C++ templates would have been good for, but I don’t think C++ is viable on the AVR32.
my plan is to separate the actual engine (so it will be essentially a completely hardware agnostic state machine) and then have a UI layer (which would be responsible for communicating with the grid and, in the case of teletype, the screen) and a hardware layer. the main module would be responsible for initializing everything and running the timers/event queue, and should be hardware agnostic as well, except for the initialization part where it would plug in appropriate instances for the UI and hardware layers.

there will be another layer that would translate events (triggers, grid presses etc) into appropriate transitions for the state machine. for the UI i don’t think i’d want to use the event system (or maybe have a dedicated queue), but rather let the state machine talk to the UI layer directly (for grids/arcs i think the existing libavr32 provides a sufficient level of abstraction already). <- should probably start a separate thread for this!
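a rough sketch of that layering (all names hypothetical), with the engine as a pure state machine talking only to small UI/hardware interfaces that get plugged in at init time:

```cpp
#include <cstdint>

// hardware layer: CV/trigger I/O, clocks
struct IHardware {
    virtual void setCv(int ch, uint16_t value) = 0;
    virtual ~IHardware() = default;
};

// UI layer: grid/arc/screen rendering
struct IUi {
    virtual void drawLed(uint8_t x, uint8_t y, uint8_t level) = 0;
    virtual ~IUi() = default;
};

// the engine never touches hardware directly, only the interfaces,
// so the same code runs on the module and in a desktop port.
struct Engine {
    IHardware* hw;
    IUi* ui;
    uint8_t step = 0;
    Engine(IHardware* h, IUi* u) : hw(h), ui(u) {}
    void onClockTick() {  // one example transition
        step = (step + 1) % 16;
        hw->setCv(0, step * 256);
        ui->drawLed(step, 0, 15);
    }
};

// minimal desktop stand-ins, as a Rack plugin side might provide:
struct RackHw : IHardware {
    uint16_t lastCv = 0;
    void setCv(int, uint16_t v) override { lastCv = v; }
};
struct RackUi : IUi {
    uint8_t lastX = 0;
    void drawLed(uint8_t x, uint8_t, uint8_t) override { lastX = x; }
};
```

the main module (or the Rack wrapper) just constructs the right `IHardware`/`IUi` instances and hands them to the engine at init.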