Polyphonic earthsea for trilogy/ansible and er-301/just friends/txo


Sorry, i’m not very good at explaining myself sometimes haha. But I was following the instructions for midi recording found in the github docs, specifically the line that says to press the front button until notes are recorded if you’re not sure which state you’re in. But then I noticed that the lights on the UM-ONE weren’t lit up at all, as if it wasn’t recognizing ansible as a usb-midi device. And it does work with the normal ansible firmware’s midi mode, if that helps at all.


when you use it with the original ansible firmware, are the lights lit up?
and when using it with polyearthsea, can you play any notes? is it just recording that’s not working?


The lights (on the umone) do activate when plugged in to the original firmware.
No notes get played in polyearthsea through the umone, so i do not think it’s a recording problem.

This all came up as i do not have a usb midi keyboard on hand, so i was trying to send midi from my computer, through a midi interface, into the umone and then to ansible (again, this setup works with the original firmware)

And this really isn’t a deal breaker for me as polyearthsea is amazing already, i just thought it could be fun to use norns or something like less concepts to record midi from.
I don’t want to distract from whatever else you got cookin


Finally got around to testing my Ansible I2C busboard. Everything works ok! Board files are here:

If anyone in Australia wants one let me know!

I’m not using it with PolyES actually, but my Teletype moves around a bit, so it’s easier for my I2C stuff to be connected to Ansible instead of Teletype


@rdfm - strange, the midi code is based on what’s in the original ansible firmware. i’ll take a look - polyearthsea bugs are a priority, and i do want to make sure midi is working properly. as i don’t have a umone, would you be willing to help me with some testing? it’ll probably be next week before i have something to test.

@jimi23 - that’s great, thanks for sharing the files! i’ve added a link to the i2c setup page.


good question, i haven’t considered this. so there are several options that would work:

  • i can provide a modified faderbank firmware that will use a different set of SC.CVs to avoid overlap
  • i can add an option to polyes to shift which SC.CVs it uses

the latter is probably better, since it can be switched in polyes itself without having to flash another firmware.

another question is - what if you want to use faderbank as a controller for polyes as well (i’m planning to add support for using txi and faderbank as polyes controllers at some point). in this case it would also make sense to shift polyes addresses instead of faderbank.

it’ll basically be a button on the voice configuration page that when enabled will shift SC addresses used by polyes by 16.


The second option does sound ideal. I really love the idea of it being addressed from polyes, as it seems like the more flexible option. I also hadn’t considered using the 16n to control polyes, now that’s exciting!!!


yeah, one of the next planned features is being able to map external controllers to mod buses (right now they are pre-mapped), so you could assign a fader to each mod bus, for instance.


Now that sounds like fun!!!


I am not sure i understand the midi implementation.
It sounds like we are able to record different patterns for a multitrack sequencing experience just as with Grid?
So i could play/record a sequence and then record another to layer on it?
Not sure how midi channel / voice selection would be done, and whether a grid is needed or not?


you can’t do multi pattern recording using MIDI without a grid since there is no way to change patterns with a MIDI device alone. you can only record / play the current pattern.

it will use whatever voices are assigned to the currently selected pattern. so if you have voices 1&2 assigned to a pattern MIDI will use only those 2 voices (and it’ll output to whatever outputs are assigned to those voices).


About to get my 2nd ansible soon!

Is it possible to run Poly on one and Kria on the other with both connected to Teletype? I’m guessing as long as teletype doesn’t cross into what Poly is doing it should be fine?


sorry for the late response, i’d be more than willing to help with some testing


i’ve been reading more about roland um-one, and apparently there is a switch that controls what kind of driver it will use. when set to “comp” it’s not class compliant, so it won’t work with polyearthsea. can you check with the switch set to “tab” and see if that solves the issue? more info here: https://www.roland.com/us/support/knowledge_base/214330803/


yeah it shouldn’t be a problem as long as you don’t use i2c ops on teletype at the same time. it might actually work with no issues even if you do, just be aware it might potentially cause teletype to freeze. if you do run into that and you need to use i2c ops on teletype, just temporarily turn off i2c in polyearthsea (there is a button on the voice assignment page).


Seems like I’m missing something here, or maybe it’s a bug?

I want :
Pattern 1 / MONO voice / mapped to MODULE Note 1 -> Mangrove
Pattern 2 / MONO voice / mapped to MODULE Note 2. -> MI Rings

Voice Assignment :

Shouldn’t Pattern 2’s Module note be on 2? It only works when both are on note 1.

When I put it on 2, the patterns start only playing one voice, and when i switch patterns, the note switches too.


it works as expected now! I thought i had tried with the switch on “tab”, but i imagine it needs to be replugged to initialize


looks like pattern voices are properly set up. can you show how your 2nd voice is mapped? there is a small chance it’s somehow mapped to output 1 as well.

to make sure: when you play pattern 1 you see it output on CV/trigger pair #1, but when you play pattern 2 (which has only voice 2 enabled) it still gets output on CV/trigger pair #1 instead of 2?


Here is a video. Probably easier.


thank you, yeah this helps!

so it’s actually working as expected. i will clarify this in the documentation, but basically mapping voices to outputs is shared by all patterns. so if you map voice 1 to output 2, then any pattern that uses voice 1 will output to 2.

in your case you have:

pattern 1 -> uses voice 1 -> mapped to output 1
pattern 2 -> uses voice 2 -> mapped to output 2

then you change it to:

pattern 1 -> uses voice 1 -> mapped to output 2 now
pattern 2 -> uses voice 2 -> still mapped to output 2

to change what voice 2 is mapped to you need to select it in the 7th column from the left:

by default voices 1-4 are pre-mapped to module outputs 1-4 but you can change it to whatever configuration you need (and you can map a voice to multiple outputs - and then transpose them to get chords!)