Yes, I meant thinking ahead, if you’re going to do production, people usually don’t end up using a dev board (like Teensy). So in comparison a bare SAMD21 mcu is around $1 and a bare M4 is $14. A (pre-populated) custom board would be very very cheap in bulk this way
I don’t understand. @zebra asked how to map multiple keys with the same note number (as determined by the mode in which your hardware bridge is running). So you press two keys with the same note number, then what happens? Well, the simple thing would be to consider the note on as long as any one of the equivalent keys is held. But that’s not the only option: you could start a second (unison) voice. And that’s where MPE comes in to save the day. Is that wrong?
Regarding plug-and-play, I thought that was solved with capability inquiry.
I’m just trying to understand. Not trying to argue with you.
i’d say MPE is overkill, since although for sure you can press 2+ of the same note, you aren’t adding any more expression with each press on a monome grid.
as before, none of this is new, i think if you look at the launchpads you’ll find a few solutions
have 2 modes:
- a full access mode, where every button is a separate midi note, bi-directional (for leds)
- a scale mode, where you have offset etc.
the former allows the host full control,
the latter is for more ‘standalone’ use, so you can have the leds already lit, scale selection…
you could even have additional modes e.g. drum pad modes.
(in this mode, 2 buttons can be the same note… iirc, the usual convention is to take the first note on and the last note off - this allows you to ‘transfer’ the hold to a different finger)
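a minimal sketch of that convention: reference-count each note so only the first press emits a note on and only the last release emits a note off (all names here are made up for illustration):

```cpp
#include <cassert>
#include <map>

// Hypothetical sketch of the "first note on, last note off" convention:
// count how many buttons currently hold each note, and only emit MIDI
// events on the 0->1 and 1->0 transitions.
struct NoteRefCount {
    std::map<int, int> held;  // note number -> number of buttons holding it

    // Returns true if a MIDI note-on should actually be sent.
    bool press(int note) { return ++held[note] == 1; }

    // Returns true if a MIDI note-off should actually be sent.
    bool release(int note) {
        auto it = held.find(note);
        if (it == held.end() || it->second == 0) return false;
        return --it->second == 0;
    }
};
```

pressing a second equivalent button emits nothing, so you can lift the first finger and the note keeps holding.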
novation’s launchpad (the pro in particular, i think?) is pretty good at showing how you can mix a programmable interface with a standalone mode.
as for switching modes… perhaps start in ‘standalone mode’
and then have an nrpn (or something) that allows switching to host mode.
(similar to how it can be done with mpe)
that’s a great question. i dunno, i was thinking about the mobile use case b/c that’s where we started with this discussion and i figure power profile is important for that.
i’m interested in messing with a bare SAMD21 board because of a pure engineering feasibility question: can i do useful things in this and adjacent problem spaces, with a soft-realtime interpreter, on a 48MHz chip with 32K SRAM? i ordered the atmel board and will find out.
(i’m also running all over the country for a while and don’t have a soldering iron in my pocket, so i’m gonna be a jerk and just get a proto platform with 2x usb ports already wired and shipped to me really fast.)
but to me the near-term endgoal would be making a (simple) custom board. to me, this exercise dovetails with some totally unrelated questions around larger-scale product development where both unit cost and current draw are super significant. i don’t think it’s unreasonable to plan on a small manufacturing run of these things if they seem useful.
i’m also proposing to use arduino so anything i make should be code-compatible with teensy to a high degree. so yeah absolutely use a teensy, great.
and since you’ve already made the midi/monome glue on teensy (awesome) i’m gonna focus on the “weird” part which is adding a minimal amount of customizable controller logic and the attendant stuff to actually configure it. i wanna see if this can be done with a “hotloaded” user script on something as small as SAMD21, but maybe it can’t and it will just be a more restricted set of options written in c/c++.
that would be rad!
i was thinking exactly of these two scenarios. i am also a caveman who has never implemented MPE, but allocating new unison voices across channels for multitimbral control is an old trick that i’m rather fond of. MPE seems like basically a formalization of these ancient patterns of multitimbral MIDI usage. it might seem ridiculous to whip out MPE for a bunch of buttons but i think it would open up some neat possibilities. i absolutely find note mapping on a grid infinitely more musically useful when note ranges per row can overlap and rows can be staggered e.g. an octave apart with an arbitrary scale. grid apps have been using such layouts for years and they are fun. splitting voice allocation across channels and assigning to different timbres is also fun. double fun.
we could just have different versions (assuming there are people who have the desire to develop them, of course), each best suited to a specific set of requirements. there could be a lightweight cheap “ios passthrough” version, for instance. personally, i’m more interested in something that would have a more powerful CPU and would allow for easy development.
that’s one of the things that makes teensy appealing, easier to set up a toolchain.
i really want to get multipass working with this, as i think there is significant benefit in having a platform that allows writing apps that could run on either monome eurorack modules or a “smart” dongle without requiring too much specialized knowledge. and sharing the same apps across multiple devices (and i think there are probably more MIDI users than eurorack users) will mean more incentive for creating new apps.
I feel this is an accurate characterization of MPE. It’s really pretty simple, and the benefits are huge. I’m an enthusiastic supporter of MPE, even for much simpler scenarios than those MPE is capable of satisfying.
hm, i’m a tiny bit unclear on the specifics of “multipass” (is there a link to point me to?) but hey, if teensy is more appealing for toolchain reasons and because of the extra power, i’m down with that.
ultimately would still want a custom board for nice/suitable form factor if nothing else.
all this stuff about MPE and mappings - this is why i propose a scripting layer. the simplest possible. lua functions to:
- send MIDI event (takes: message type, channel, payload.)
- handle MIDI event (same)
- send monomeserial event (command, payload)
- handle monomeserial event (same)
and maybe a couple things that are hardcoded like program change or NRPN. those swap the handler functions out. (maybe you get 16 “mode slots” or something.)
you also need a little OS-side tool that accepts a lua function definition, verifies/compiles to bytecode, and zaps it over to a “mode slot” in the dongle. (this part sounds tricky but is actually super simple because the dongle device port speaks raw serialOSC for monome protocol, and can just take (framed?) bytecode as a blob with a specific OSC string. working with the compiled bytecode is better for speed and memory on the embedded lua side.)
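to make the “mode slot” idea concrete, here’s a rough sketch of the dispatch side in plain C++, with std::function callbacks standing in for compiled lua chunks (a real build would store lua_load()ed bytecode per slot instead; all names here are hypothetical):

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <functional>

// A raw MIDI event as it arrives from the device port.
struct MidiEvent { uint8_t type, channel, data1, data2; };

// Hypothetical mode-slot table: a program change (or NRPN) swaps which
// handler sees incoming MIDI. In the real design each slot would hold a
// compiled lua function rather than a C++ callback.
struct ModeTable {
    static constexpr int kSlots = 16;
    std::array<std::function<void(const MidiEvent&)>, kSlots> midiHandlers;
    int active = 0;

    void programChange(int slot) {            // swap the active slot
        if (slot >= 0 && slot < kSlots) active = slot;
    }

    void onMidi(const MidiEvent& e) {         // dispatch to the active slot
        if (midiHandlers[active]) midiHandlers[active](e);
    }
};
```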
everything is easily supported:
- passthrough “host mode”
- “map mode”, arbitrary static 1:1 mapping of row/col -> chan/note/vel
- overlapping “monophonic” mode
- multichannel “multitimbral” mode
- arbitrary momentary/latch/radio behavior for buttons
- whatever else, within processor limits (and here, power/speed/cost tradeoffs do come into play, but first things first.)
and these modes are all dynamic and customizable (though “host mode” should be fixed.)
is there some controversial aspect to this proposal that i’m not seeing?
haven’t used MIDI controllers in a long time but i have played the heck out of all the buchla controllers as a kid, and their best aspect is their scriptability. Thunder had a full-blown little stack-based scripting language. (and yes, you could assign voice counter to midi channel.) Lightning had a “zone editing” paradigm that is not so different from teletype grid ops. it seems crazy to make a computer that talks MIDI without making it a little smart and interesting.
i’ll start a thread about it soon. the reason i mentioned it is that i just want to emphasize: if we could create another way for folks to try developing apps without requiring a lot of specialized knowledge, that would be a great byproduct of creating something like this bridge (especially since it doesn’t feel like it’ll take away from the initial goal - even with various ways to configure the passthrough mode it’s all super trivial, even making it configurable with a grid itself).
+1 for short production runs - my soldering skills are laughable (& yes I should improve them but busy learning other things)
MPE is very easy to work with: note/channel/object plus control values - just simple routing
MPE isn’t necessary to start a unison voice on another channel. You could just start a unison voice on another channel. If it’s MPE, other rules come into play.
Again, I gave a very specific example: In MPE, channel 1 is global. If you receive a CC on channel 1, that applies to all 15 voices. And if you receive a note on channel 1, as stupid as this is, it’s a fifteen-voice unison. Obviously, you should idiot-proof against that on the synth end, but we can’t count on everyone doing so, because it’s expected behavior.
Fortunately, we are talking about this exclusively for non-default modes.
Something like a guitar layout, for example.
And in that example, I would probably want every row on a different MIDI channel to begin with. There’s no overlap on any given string, so that handles the conflict transparently and the channels have meaning.
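that per-row-channel idea can be sketched as a tiny mapping function (the standard-guitar tuning offsets and all names here are just illustrative):

```cpp
#include <array>
#include <cassert>
#include <cstdint>

// Guitar-style layout sketch: each row is a "string" on its own MIDI
// channel, so the same pitch on two different rows never collides.
struct GridNote { uint8_t channel, note; };

GridNote guitarMap(int row, int col) {
    // Open-string pitches for standard tuning E A D G B E (MIDI numbers).
    static const std::array<int, 6> openString = {40, 45, 50, 55, 59, 64};
    return { static_cast<uint8_t>(row),  // one channel per row/string
             static_cast<uint8_t>(openString[row % 6] + col) };
}
```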
Me, I’d probably divide the grid into zones of four columns. And I explicitly want each of those controlling a different instrument, so once again, each zone gets its own MIDI channel, the conflicts are handled organically, and channels have meaning.
Something generative (otomata, game of life, etc), we should also handle on a case by case basis.
Let’s consider polygomé.
I see several choices, some more useful than others…
Option 1: single channel "piano style"
- If a note is active, pressing it again does nothing.
Option 1A: single channel "crazy pianist"
- If a note is active, new attacks are not sounded, but we keep track of how many fingers are on that key, and the note doesn’t release until that number is zero.
Option 2: single channel "one string"
- ignore duplicates if they occur simultaneously. If one of those voices occurs after the other, end the first note and reattack.
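option 2 might look something like this sketch (event strings stand in for real MIDI output, and the release rule here is just one possible reading: the note ends only when the last finger lifts):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// "One string" sketch: a later press of an already-active note ends the
// first instance and reattacks, like fretting higher on a sounding string.
struct OneString {
    std::map<int, int> held;              // note -> finger count
    std::vector<std::string> out;         // emitted events, for clarity

    void press(int note) {
        if (held[note]++ > 0)             // note already sounding:
            out.push_back("off " + std::to_string(note)); // end first voice
        out.push_back("on " + std::to_string(note));      // (re)attack
    }
    void release(int note) {
        auto it = held.find(note);
        if (it != held.end() && it->second > 0 && --it->second == 0)
            out.push_back("off " + std::to_string(note)); // last finger up
    }
};
```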
A very big side note here:
It’s less obvious that this should be handled at the app level and not on some global layer. But it should be handled by the app, because the app can cleanly end that note without having to blindly suppress an unwanted note off event later. Making this global would either risk stuck notes or create unwanted gaps.
With any of those approaches, the big plus is simplicity – the user doesn’t require any multi-channel support on the synth end. They could use these on any synth.
Option 3: multi-channel "arbitrary"
- If only one note is active, that’s on channel 1. If an overlap occurs, the unison note sounds a channel higher.
Pro: a user can choose to ignore duplicates by filtering their input to receive only channel one.
Con: a user could be confused by unexpected notes popping up on another channel at seemingly random intervals, because channel selection wasn’t intent driven.
Option 4: "phrases"
- When one key is pressed, all notes originating from that press are on channel 1. Press a second key, and each note generated as a result is on channel 2. If you press a third key, you’re playing three melodies, cleanly separated onto three channels. (Don’t shift higher channels down when a note is released. Just start new melodies on the lowest unclaimed channel.)
Pro: the user can choose how they handle polyphony. Should these melodic lines play on different instruments? Same instrument, lower velocity? Multiple instances of the same synth without any changes? All of that - it’s none of our business.
Con: the user might have preferred one of the single-channel options.
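the “lowest unclaimed channel” rule from option 4 is simple to sketch (names are invented for illustration):

```cpp
#include <cassert>
#include <set>

// Phrase-channel allocator sketch: each new phrase claims the lowest
// unclaimed MIDI channel; releasing a phrase frees its channel without
// shifting the others down.
struct PhraseChannels {
    std::set<int> claimed;

    int claim() {  // lowest free channel in 1..16, or -1 if all are taken
        for (int ch = 1; ch <= 16; ++ch)
            if (!claimed.count(ch)) { claimed.insert(ch); return ch; }
        return -1;
    }
    void release(int ch) { claimed.erase(ch); }
};
```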
I like supporting many options, but I don’t like having to configure them every time we load a patch.
I like @zebra’s suggestion to have program change messages select between apps, or modes, but I think I want those to be more dynamic.
Like… a ridiculous JSON file full of configuration data. Maybe I want to use six of my 128 presets on one app in various configurations, so I never have to touch those menus again. It’s a thought.
(Using Teensy 3.6’s built in SD reader for that is probably overkill, but the option to pull the card out and edit a text file is appealing.)
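for illustration, such a preset file might look something like this (every key name here is invented, not any kind of spec):

```json
{
  "presets": [
    { "slot": 0, "app": "passthrough" },
    { "slot": 1, "app": "map", "root": 48, "scale": "minor", "rowOffset": 5 },
    { "slot": 6, "app": "map", "root": 36, "scale": "dorian", "rowOffset": 7, "channel": 2 }
  ]
}
```

a program change selecting slot 6 would then pull up that configuration without touching any menus.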
Jeeze, I shouldn’t have typed all that on my phone. How many other people expressed the same things while I was thumbing away, unable to see?
Six or seven, it looks like.
Ugh. Scrolling up to review now.
Just wanted to chime in and say I very much like this idea. Have been considering building something myself for the super simple use-case of using a grid as a hardware MIDI input device based on monomehost, initially mainly with some scales and that’s it, but I like a lot of the ideas that have been mentioned in this topic as well
I didn’t really like using a Due for it because it’s rather large, so something smaller like a teensy, maybe with a small board it plugs into would be very nice
I have no experience with the SAM platform, but I think as long as the code is Arduino compatible it should be fine/rather portable I guess? Would be nice to be compatible with the boards most people use/already have for this to keep the barrier of entry low.
does anyone have thoughts / experience with arduino MKR zero? it seems like a SAMD part that is very closely comparable to teensy (shield-compatible?)
wait a minute… I’m really only reading with interest, but I wanted to tease out a connection I’m tracing between several people. It’s mildly off-topic and I don’t want to derail, though.
- the MCU in, say, Ansible is strong enough to support the Lua virtual machine and thus
- with proper scaffolding, writing grid apps for, say, Ansible could be as simple as scripting in Lua and it’s possible that
- Crow is similar to this?
crow is a great deal more powerful. but yes, it’s possible to run a (very minimal) lua on the UC3 avr32 family - did this on aleph as a proof of concept. i didn’t know a lot about lua at the time and it didn’t actually make much sense to slot it into the existing realtime app framework used in libavr32-based devices.
y’see, a stripped-down eLua that can run on cortex-m0 or avr32 is a different beast than the lua VM on norns or crow. for example, numeric types are int instead of float. you can’t import a lot of modules. so its application has to be pretty focused; this isn’t even the “out of the box” eLua, which talks directly to registers, includes a filesystem and I/O abstraction, and basically acts like a rather slow OS.
so… i wouldn’t think of this as providing total continuity since things written for norns/crow can just do a whole lot more stuff, and here i’m literally talking about a couple of callbacks that each get maybe a couple thousand cycles to execute.
if there’s more appetite for a heavyweight usbmidi “controller interface” than for a lightweight usbmidi “scriptable adapter,” one that provides more continuity with norns/crow, then the cpu should have a native floating point unit at least, and a cortex-m4 part like teensy 3.6 makes more sense. that’s cool too, but i still want to revisit/explore the low end of the power spectrum given what we’ve learned since building libavr32 in a user vacuum in 2013.
ah I’m glad I asked! this is all very exciting!
I literally tweet-ranted about the MKR Zero over the weekend… I could find close to nothing of anyone blogging about using it in projects, or tutorials beyond the basics that arduino.cc provides.
It’s totally off topic, but the MKR Zero is an interesting look for me because of its I2S support, which could add another interesting dimension to a lightweight host.
Edit: other SAMD boards I’ve found recently, though I don’t have any experience with them:
This looks like basically a Teensy, but with a SAMD21 instead
On the other end of the power spectrum: https://www.adafruit.com/product/3382 this thing looks like it could handle some synthesis as well
@okyeron is there a git repo link somewhere for the teensy proof of concept you have? I would be interested to try it out
Hi. So now i have a bit of time to jump into the discussion about boards for development.
I am working with @lijnenspel on a USB-Host extension for my automat controller, which will of course be open-source, and i could see myself making a special dev board for all people interested.
I def. see the point of using teensy for prototyping / development. The codebase Paul keeps for his boards is probably the best in the whole Arduino space.
But i am a big fan of the latest development mostly done by Adafruit for the SAMD51 MCUs. And SAMD21 also. Having two USB Ports on one board is cool. But you can also do debugging via SWD / JTAG as it is standard for all ARM MCUs.
The EDBG IC on the Arduino Zero was proprietary and never released to the public. Only that one provided a second USB port for debugging on the Arduino / Genuino Zero (M0 Pro). Arduino is still selling those but i have no idea about long term availability: https://store.arduino.cc/genuino-zero
If you want to build on a platform where people can go from prototyping into production of even small scale DIY boards SAMD is the way to go. Also Adafruit is currently porting lots of stuff from Teensy to the M4 Express series boards.
Also nice with SAMD is that there is Micropython / Circuitpython and Rust.
And we are about to release a dev board with SAMD51 and a Lattice ICE40UP5K FPGA (with full open-source Yosys toolchain).
The current iteration doesn’t have host - but that could be added / also Ethernet for RTP-MIDI or OSC. Or WIFI / Bluetooth… so build a MCU / FPGA based “micro-micro-norns” brain without Linux for stripped down applications. I would be up for that and happily do the hardware.
so i got hold of the SAMD21 xplained-pro board today. bad news and good news.
it does indeed support both modes, but not at the same time cause there is actually only one USB interface
the 2nd USB plug on this devboard is connected to Atmel’s onboard debugger thing, which is a composite USB device on one end (CDC plus debugger), and UART on the other (connected to the SAMD.) this thing is on the bottom of the PCB and mouser’s listing misled me into thinking both USB ports were on the MCU. d’oh! apologies!
it was super quick and easy to test both host and device modes using the ASF and visual studio. i don’t have a grid, but got it talking CDC and HID in both modes under ASF.
under arduino, i got board support installed and can use the UART; haven’t tried the USB modes yet but i’m sure it’s not a big deal.
so: it’s looking to me like there needs to be a second device on the board for the 2nd USB controller. one option:
main MCU is CDC/HID host, and the 2nd device is maybe a really minimal 8-bit MCU doing the USB-MIDI device side. (in earlier days i’d use LUFA on AVR8 but maybe there are better options now.)
i am kinda liking the fact that it takes almost no extra effort to support HID protocols in addition to monome CDC. making this thing more broadly useful (use your shnth with your iphone i guess)
bad news: there’s also no off-the-shelf eLua platform support for cortex-m0. good news, this isn’t a huge deal since the thing doesn’t need the full eLua environment, just means a little more fiddling to get core interpreter on it (with low-RAM patches.)
( i can get Forth on it easily enough, but who cares )
[ed] ok, after searching around, the only MCUs i can find that definitely have 2x USB interfaces are cortex-m4’s from ST micro. the cheapest is something like ~8usd.
Advanced connectivity:
- USB 2.0 full-speed device/host/OTG controller with on-chip PHY
- USB 2.0 high-speed/full-speed device/host/OTG controller with dedicated DMA, on-chip full-speed PHY and ULPI
unfortunately the discovery board doesn’t have the 2nd otg port wired up and has instead another proprietary debugger thing
seems better to build around a standard arduino + shield combo. (?)
so ok, next question: i would rather put the “main” MCU with more power in the host-side role, supporting a multitude of device formats. but the arduino USB host shield does the reverse, right? is there a diy-friendly or pseudo-standard way of adding a “USB-MIDI device shield” or is this something to be made?
so @nevvkid i honestly did not really grok your post, but now i see having encountered this EDBG thing. would the FPGA on your board be able to perform a connectivity role like USB-MIDI interface?
also thanks for pointing out micropython