IIRC, the extra 3 bytes are part of the FTDI USB header. (just a guess, but it’s something like that.) i agree that this isn’t super clean and those should really be skipped before the RX buffer is processed by monome.c.

(no, i was wrong and the code is correct, as discussed below: device is expected to send 2x 3B packets in response to query.)

because in the serial format, the grid does not in fact take 64B per map, but 32B per map. each LED value is a 4b nybble. (in the OSC format, and in fact in the input to the driver in monome.c, it is one byte per LED.)
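as a sketch of that packing (hypothetical helper name; nybble order assumed to follow the arc ring/map listing in serial.txt, with the even LED index in the low nybble — worth verifying against libmonome before relying on it):

```c
#include <stdint.h>

/* Pack 64 per-LED intensities (0-15 each) into 32 payload bytes,
 * two LEDs per byte. Assumption: even index in the low nybble, odd
 * index in the high nybble, matching the arc ring/map spec listing. */
void pack_level_map(const uint8_t levels[64], uint8_t out[32])
{
    for (int i = 0; i < 32; i++)
        out[i] = (levels[2 * i] & 0x0F) | ((levels[2 * i + 1] & 0x0F) << 4);
}
```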

the cleanest and most complete reference implementation is the libmonome source:

Yes, I understand the historical reasoning behind why Teletype sends what it does, but since we are talking about the mext protocol in both cases, wouldn’t the Grid then need some logic for determining whether the sender uses the spec format (64 bytes) or the Teletype format (32 bytes, using nibbles)? A Monome Grid would need to do this too, wouldn’t it?

I captured the communication sent from the Teletype to the Teensy, to help improvement of the DIY Grid firmware. You can find it here: https://docs.google.com/document/d/1pURjY2Gyx8Nlc5izyGxA_HuSd2IaNFY7ZjSaRJWul_s/edit?usp=sharing

The Teletype is running the simplest script described on the Monome tutorial pages, a blinking square (also seen in the videos I posted above and below) https://github.com/scanner-darkly/teletype/wiki/BASIC-VISUALIZATIONS

Teletype seems to use only three messages with mext grids: 0x00 and 0x01 after plugging it in, and 0x1A for setting the LEDs. As mentioned, communication is at 57600 baud; this is indeed set in the libavr32 code (libavr32-main\src\usb\ftdi\uhi_ftdi.c line 186).

0x00 is a one-byte message, expecting 6 bytes back.
0x01 is a one-byte message, expecting n bytes back (32 according to the spec, but Teletype seems to wait until an FTDI busy signal is false).
0x1A is followed by two bytes for the start coordinates and 32 intensity bytes, making 35 bytes in total.
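The 35-byte arithmetic above can be made concrete with a small builder sketch (hypothetical helper name; layout as observed in the capture: command byte, x/y start coordinates, then 32 packed intensity bytes):

```c
#include <stdint.h>
#include <string.h>

/* Assemble a 0x1A level-map message: 1 command byte + 2 coordinate
 * bytes + 32 packed intensity bytes = 35 bytes total.
 * (Hypothetical helper; returns the number of bytes written.) */
int build_level_map_msg(uint8_t x, uint8_t y,
                        const uint8_t payload[32], uint8_t out[35])
{
    out[0] = 0x1A;
    out[1] = x;
    out[2] = y;
    memcpy(&out[3], payload, 32);
    return 35;
}
```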

Teletype always seems to send all four quadrants (64 LEDs × 4), even if they are not available, so the extra ones need to be ignored (while still reading the data).

I changed the code on the Teensy to read 0x1A messages as 35 bytes, but it still goes out of sync quickly and starts interpreting the 0x00 bytes inside a 0x1A message as 0x00 commands. Having no “end of message” markers etc. makes it a bit harder, at least for somebody who is not working with communication protocols daily. I added while(!Serial1.available()){} in front of every read, and that seems to help a lot. The display is less noisy, but according to the logs it still goes out of sync (interpreting 0x00 data bytes as messages), so my Grid seems to be missing updates. For now, I blocked the setup message after data has been successfully shown, but that is not a good solution. I would appreciate it if communication experts could give a few hints or make the existing code fail-safe for communicating with devices like the Teletype.
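One way to make a parser like this fail-safe is to never act on a command byte until the whole message has arrived, so a slow or bursty sender can't leave the parser stranded mid-message. A minimal framing sketch (hypothetical, not the actual Teensy code; message lengths as discussed above):

```c
#include <stdint.h>

/* Hypothetical byte-at-a-time receive framing. Bytes are buffered
 * and a command is only considered complete once all of its bytes
 * are present, so data bytes are never misread as commands. */
typedef struct {
    uint8_t buf[40];
    int     len;
} rx_state;

/* Expected total message length for a command byte, -1 if unknown. */
static int expected_len(uint8_t cmd)
{
    switch (cmd) {
    case 0x00: return 1;   /* query */
    case 0x01: return 1;   /* id request */
    case 0x1A: return 35;  /* cmd + x + y + 32 packed intensity bytes */
    default:   return -1;
    }
}

/* Feed one byte. Returns the message length when a complete message
 * sits in rx->buf, 0 while incomplete, -1 if an unknown command byte
 * was dropped to resynchronize. */
int rx_feed(rx_state *rx, uint8_t b)
{
    rx->buf[rx->len++] = b;
    int need = expected_len(rx->buf[0]);
    if (need < 0) {        /* unknown command: drop it and resync */
        rx->len = 0;
        return -1;
    }
    if (rx->len < need)
        return 0;
    rx->len = 0;           /* complete: caller processes rx->buf */
    return need;
}
```

The key property is that an incoming 0x00 data byte inside a 0x1A message can never be dispatched as a 0x00 command, because the parser knows it still owes the current message 34 more bytes.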

New video:
https://drive.google.com/file/d/12a0WdWygiWHW0jXTMz6nA6HskZt_V_v3/view?usp=sharing

New log:


(out of sync portions marked red)

The Teensy grid code should already be doing this and it works as expected with serialosc (macos/max) and libmonome (linux/norns) with any size grid (8x8, 16x8, 16x16)

Try changing line 236 in the Teensy grid code to Serial.write((uint8_t)0x02); (for a 16x8 grid)

This section of monome.c makes me think it’s not identifying the grid correctly (and it does not seem to do a proper request grid size with 0x05)

I tried that, but it does not change anything. No idea why the Teletype seems to overwrite the value, or where it does so. According to the debug info (the first link I posted a few posts back), there were a lot of LEDs set in the last quadrants, with LED indices >128, and I thought this would be the reason for the wrong patterns. Now I think it wasn’t (I did not check what the code does with the LED intensity array once it is changed; and it did get changed).

I am pretty confident now that the layout definition isn’t the reason for the display issues. The major problem is the Teensy going out of sync with the Teletype time and again, treating data bytes as messages - even after fixing the message length issues (6-byte response + 35-byte map message). It works for three or four 0x1A messages, and then it gets messed up until it gets the beginning right again. Checking whether there is data before doing any read helped somewhat; the wrong patterns almost disappeared, but now it is skipping some updates. Maybe I should upload the code with which it is more or less working for me, so that someone with knowledge of communication protocols can point out what to improve.

Or maybe the FTDI needs additional wiring for flow control. Unsure.

Personally I think this is part of the problem.

Consider - the DIY grid code works as-is with libmonome and serialosc. Thus the grid side should already be acting correctly (communicating the same as a stock monome grid). This should not be any different for libavr32 devices - if it were, then a stock grid wouldn’t work on TT either.

Perhaps the FTDI adapter (or something that you’ve added to the mix) is adding some extra bytes/data?

Or, also… if there’s an errant Serial.print somewhere, that’s going to throw things off.

Confirming against multiple machines (using libavr32/serialosc/libmonome) and multiple devices (like a stock grid) is probably necessary. I had to run a logic analyzer on a stock grid (with the help of a much smarter friend) to get a reference for making the rotation code function properly, for example.

You can check the libavr32 code yourself, I named all the functions. It DOES require 6 bytes to get started, and it DOES send exactly 35 bytes. And if you think about it, it is more logical than the code in the Teensy, where every “odd” byte has its intensity value in the upper nibble, and the “even” byte in the lower. The idea was to halve the bytes sent, and according to the libavr32 code on GitHub, that is what it does - that was pure intention, not a side effect of the FTDI adapter. And it is implemented in functions of the “mext” protocol. It is not the first time that protocols change over time; as long as one recognizes who is talking, it can be taken care of. Is the firmware of the stock grid public, so we can see how it is done there?

So it might be part of the problem, that the device is not acting to spec, and not as other devices do (but it is definitely easier to fix the Teensy side than to start reworking the Teletype firmware). Unless someone finds some other magic way, that’s what I am doing, because, as you can see from the last video, it is almost working now. But of course you are correct in that it would be “cleaner” to have one approach for all devices. As soon as there is an updated firmware for the Teletype one could make it work with an unmodified DIY Trellis grid firmware.


I guess the interesting question is: if the current Teensy protocol implementation works flawlessly with Norns but not Teletype, and it isn’t clearly just an FTDI quirk but a protocol mismatch… why? Seeing that Teletype gets regular updates and so does Norns, I’m assuming one could (at least at this point) talk to both devices in the same way, whether it’s libavr32 or libmonome doing the work.

Or is it possible that practically none of the Norns scripts use the intensity map command via serial, instead setting individual LEDs, so the behaviour of the Teensy implementation would actually be incorrect there as well? (I’m assuming this has been tested but just to make sure…)

I believe the grid_map_mext function as shown in that image is for /led/map (0x14 command) rather than led/level/map (0x1A command)

pattern:	/prefix/led/map x y d[8]
desc:		set 8x8 block of leds, with offset
args:		x = x offset, will be floored to multiple of 8 by firmware
			y = y offset, will be floored to multiple of 8 by firmware
			d[8] = bitwise display data
serial:		[0x14, x, y, d[8]]

vs.

pattern:	/prefix/led/level/map x y d[64]
desc:		set 8x8 block of leds levels, with offset
args:		x = x offset, will be floored to multiple of 8 by firmware
			y = y offset, will be floored to multiple of 8 by firmware
			d[64] = intensities
serial:		[0x1A, x, y, d[64]]
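To make the difference concrete, unpacking the 0x14 payload is purely bitwise - 8 bytes, one bit per LED. A sketch (hypothetical helper; the bit order within each row byte should be verified against a stock grid):

```c
#include <stdint.h>

/* Unpack the 0x14 /led/map payload: 8 bytes of bitwise display data,
 * one row per byte. Assumption: row y is byte d[y] and column x is
 * bit x (LSB first) - verify against real hardware. */
void unpack_bitwise_map(const uint8_t d[8], uint8_t leds[8][8])
{
    for (int y = 0; y < 8; y++)
        for (int x = 0; x < 8; x++)
            leds[y][x] = (d[y] >> x) & 1;
}
```

Compare that 11-byte total ([0x14, x, y, d[8]]) with the 35 bytes of the 0x1A message under discussion.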

For reference here’s the libmonome mext_led_level_map function - which is what the Teensy code was written to and it does account for the upper/lower nibbles.

Note this bit is commented out in libavr32 for grid_map_level_mext. No idea why that’s not implemented.

As @zebra said above:

That’s what I’ve been working from.

norns (libmonome) uses nothing but the intensity map (/prefix/led/level/map above) for most things, but libmonome supports any of the monome serial functions.

I believe the grid_map_mext function as shown in that image is for /led/map (0x14 command) rather than led/level/map (0x1A command)

Maybe according to the “new” spec, but the Teletype clearly sends 0x1A

So, I think it is safe to say that we have various versions of one protocol. The question is how to handle that. If you want to fix it on the side of the Teletype, that is ok, and definitely the “cleaner” solution. I simply don’t want to wait for it, as we currently have neither a working CDC driver nor alternative implementations for the libavr32 serialization functions above. So I personally would go for a branch compatible with the current 3.2 (and 3.1) firmware.

I believe the grid_map_mext function as shown in that image is for /led/map (0x14 command) rather than led/level/map (0x1A command)

Note this bit is commented out in libavr32 for grid_map_level_mext. No idea why that’s not implemented.

so, the libavr32 code (formerly aleph code) is messy. the function y’all are looking at sends 0x1A - it corresponds to the serialosc path called led_level_map, it uses one nybble per LED, and it sends intensity values for a quad. (to be clear: i authored this code, never intending it to be used outside the aleph firmware.)

i agree that it’s confusing that the function appears to be named for led_map but actually implements led_level_map, and a function called led_level_map is dead. like i said, messy.

there is no new / old spec. mext is the latest protocol and has not changed for a long time now.

libmonome of course does also pack nybbles into a 32B payload for led_level_map. (CMD_LED_LEVEL_MAP command nybble = 0xA).

i’m just chiming in here to clarify that (1) to my knowledge there is no monome firmware that expects a 64B payload in the serial stream (i agree that the linked document is confusing on this point), and (2) the extra bytes in the RX buffer on teletype are probably low-level USB packet stuff, specific to FTDI and the avr32 host stack.

(again, i was wrong on this 2nd point. apologies. the libavr32 code is looking only at protocol bytes; but it also makes some assumptions about device config.)


there is no monome firmware that expects 64B payload in the serial stream

That is an important point. The Teensy DIY firmware currently expects 64 bytes; there is no way the current implementation (two nested for loops over 8 rows/columns, each iteration reading a byte from the serial buffer) would handle the 32-byte encoding correctly. If the standard behaviour is to expect 32 bytes (with one LED per nibble, i.e. 2 LEDs per byte), I think this should be changed in the current Teensy DIY firmware, at least as an option.

the extra bytes in the RX buffer on teletype is probably low-level USB packet stuff and is specific to FTDI and the avr32 host stack

To put it another way: it does not follow the spec, but it is there in the USB FTDI part of libavr32, which is currently used by Teletype. If the aim is to support the current Teletype firmware 3.2 and 3.1, the option should be there in the code to send 6 bytes (like #ifdef TELETYPE)

yeah, i’m not sure how the device firmware would work at all if it’s not decoding nybbles when receiving 0x1a messages.

i would want to check the other assumptions: maybe the 3 “extra” (non-protocol) bytes are always present for USB-serial devices and not just for FTDI devices. if they are FTDI-specific, and the goal is to emulate an FTDI device, then it’s probably best just to add them when TXing from the device.

maybe the 3 “extra” (non-protocol) bytes are always present for USB-serial devices and not just for FTDI devices.

I am not sure which devices you are talking about, but the requirement for the extra 3 bytes is implemented in the grid setup function in monome.c in libavr32 - there is no alternative setup function, and it is not located in the ftdi subfolder either. Thus, this is the format demanded of all grids, FTDI connection or not (and as far as I can see, FTDI devices are currently the only option Teletype supports for grids anyway; the other USB device classes are there for saving scenes to flash disks and connecting keyboards). So, I would state it more simply:

If the aim is to support unmodified Teletypes as they are currently in use, the format needs to be 6 bytes on connect. If the aim is to extend the Teletype firmware for direct connection via a future Grid CDC adapter, one can leave it at 3, but needs to re-implement parts of monome.c interface in libavr32.

yes, looking more closely now at the setup function, i’m not remembering how the rx byte count logic works. it’s weird.

the devices i’m talking about are monome grids. they all use FTDI parts. yes, libavr32 does not know about other devices which want to use the monome grid protocol but are not monome grids. it attempts to recognize and work with FTDI, HID, and MIDI, and that’s it. i’ve already PM’d my suggestions to okyeron regarding how to add CDC support and glue that to the monome protocol modules.

i’ll repeat this, to clarify: the project to make mext-compatible grids which are not FTDI-driven, is creating a new class of devices, which did not exist when libavr32 was written. most things should work, but some things as you say may need tweaking. it’s important to check further down the stack in the USB host code to see if what you’re actually getting in the USB rx packet is what you expect to get. the protocol specification does not attempt to describe framing bytes or other stuff that the transport layer might be adding.

i’ve PM’d with okyeron a bit on this already.

as far as i know, it has not changed since the mk kit:
[https://github.com/monome/mk/blob/master/firmware/default/mk.c]


Here we go - both functions in the (supposed) stock Grid firmware are as I had to change them in the current Teensy code - the setup code also reveals why six bytes are used: to register both grid-led and grid-key types.



ok good - then two things come to mind

  1. the libavr32 code is not in fact assuming anything about the data it’s seeing beyond the protocol
  2. this behavior could be better documented somewhere (the idea that a device could support LEDs but not keys and vice versa)

reposting this: https://monome.org/docs/serialosc/serial.txt

thanks @zebra for offering insights

@tehn just to follow on that

re: multiple responses to query, this is indeed stated clearly enough, i had just forgotten (sorry!):

0x00 system / query response
bytes: 3
structure: [0x00, a, b]
a = section (grids, encs)
	1-15 (0 is system)
b = number (how many connected)
	0-255
description: response to request for component availability. multiple responses will be sent, one for each section. for example, if two grids are connected and 8 encoders, the response would be [0,1,2] [0,2,8]
OSC: /sys/query a b
	where a is a text translation, b is number available. a = [null, "led-grid", "key-grid", "digital-out", etc]

(“multiple responses will be sent”)
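a sketch of what consuming those 3-byte packets could look like on the host side (hypothetical helper; section/count layout per the quote above):

```c
#include <stdint.h>

/* Parse a stream of 3-byte query responses [0x00, section, count].
 * A grid answers with two packets (led-grid and key-grid), i.e. the
 * 6 bytes discussed earlier in the thread.
 * (Hypothetical helper; returns entry count, or -1 on a bad byte.) */
typedef struct { uint8_t section, count; } query_entry;

int parse_query_responses(const uint8_t *buf, int len,
                          query_entry *out, int max)
{
    int n = 0;
    for (int i = 0; i + 3 <= len && n < max; i += 3) {
        if (buf[i] != 0x00)
            return -1;          /* not a query response: resync needed */
        out[n].section = buf[i + 1];
        out[n].count   = buf[i + 2];
        n++;
    }
    return n;
}
```

fed the spec's own example stream [0,1,2] [0,2,8], this yields two entries: two grids and eight encoders.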

so - again the libavr32 code is rather messy. it is making several undocumented assumptions:

  • any FTDI device is a monome grid or arc
  • any mext device is going to respond to query with two packets describing keys+leds, or rings+leds (though the mext protocol is designed to be extensible to more arbitrary controller layouts)

what is less clear from that document is the expectation that LED state should be packed as nybbles in the payload of the 0x1a command. it really looks like it expects 64 bytes, and i can’t find anything about nybbles for grids.

whereas of course it is 64 bytes only on the OSC side. (but the document is not called osc.txt :slight_smile: )

pattern:	/prefix/led/level/map x y d[64]
desc:		set 8x8 block of leds levels, with offset
args:		x = x offset, will be floored to multiple of 8 by firmware
			y = y offset, will be floored to multiple of 8 by firmware
			d[64] = intensities
serial:		[0x1A, x, y, d[64]]

incidentally, the listing for the arc equivalent is (what i would call) correct:

pattern:	/prefix/ring/map n d[32]
desc:		set leds of ring n to array d
args:		n = ring number
			d[32] = 64 states, 4 bit values, in 32 consecutive bytes
			d[0] (0:3) value 0
			d[0] (4:7) value 1
			d[1] (0:3) value 2
			....
			d[31] (0:3) value 62
			d[31] (4:7) value 63
serial:		[0x92, n d[32]]

there is also some general semantic confusion between the “old” (pre-varibright, 1 bit per led) led_map and the “new” (since 2006 or something! 4b per led) led_level_map, which is sometimes just called LED_MAP or grid_led_map or whatever in code. (apparently this even confused me when i was working on aleph!)

it would be cool to at least prune dead code and fix comments in libavr32 sources.


Would you please educate me how this is incorrect in the Teensy code?

Isn’t the if (z % 2 == 0) { portion of that (line 416) getting the alternating upper and lower nibbles (odd/even bytes)?

Maybe there’s a better way to write that? I dunno, but it works.

thanks for the link, that makes things easy. it looks correct to me. (yes, you could write it with only 32 iterations, without if, and without %, and that would save you probably a significant number of cycles since this is a pretty hot function.)
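that rewrite could look something like this (a sketch only; nybble order assumed per the arc listing earlier in the thread, even index in the low nybble - verify against the actual Teensy code before swapping it in):

```c
#include <stdint.h>

/* Decode one quad's 32-byte 0x1A payload into 64 intensities:
 * 32 iterations, no branch, no modulo. Each payload byte yields
 * two LED levels. (Assumption: even LED in the low nybble.) */
void decode_quad(const uint8_t d[32], uint8_t levels[64])
{
    for (int i = 0; i < 32; i++) {
        levels[2 * i]     = d[i] & 0x0F;  /* even LED: low nybble */
        levels[2 * i + 1] = d[i] >> 4;    /* odd LED: high nybble */
    }
}
```

each iteration writes both outputs directly, so there is no per-byte parity check and no modulo in the hot loop.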
