Preset save to USB disk

bump, one year later: ansible preset saving

i’m considering not serializing at all, just doing a binary dump, but of course this isn’t forward thinking. should it be a key-value map? JSON, etc.? we’ve hashed out this conversation elsewhere, re: msgpack, serializing, etc.: (Teletype) USB Disk Mode Interface, but didn’t hit a resolution (i think?)

whereas in TT we probably won’t add more state data (though we might?), Ansible will certainly have new state data added over time, e.g. a new Kria param or something. so the presets need to be forward-compatible, and a raw binary dump probably won’t do. the whole point of saving presets is to preserve them across firmware updates (@lloydcole requested this particularly)
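to illustrate why a keyed format beats a raw binary dump for forward compatibility, here is a minimal sketch (not Ansible code; the struct and field names are invented): fields missing from an older preset file keep their defaults, and unknown keys from a newer firmware are simply ignored.

```c
#include <assert.h>
#include <string.h>

typedef struct {
    int clock_div;   /* present in the old firmware */
    int new_param;   /* added in a later firmware */
} preset_t;

typedef struct { const char *key; int value; } kv_t;

void preset_load(preset_t *p, const kv_t *kvs, int n) {
    /* start from defaults so fields absent from an old file stay sane */
    p->clock_div = 1;
    p->new_param = 64;
    for (int i = 0; i < n; i++) {
        if (strcmp(kvs[i].key, "clock_div") == 0) p->clock_div = kvs[i].value;
        else if (strcmp(kvs[i].key, "new_param") == 0) p->new_param = kvs[i].value;
        /* unknown keys from newer firmwares fall through harmlessly */
    }
}
```

a raw struct dump, by contrast, breaks as soon as a field is added or reordered.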

regarding the interface, we have three buttons and one LED that goes orange/white/off.

i propose:

  • key 1: read (hit once, LED goes white, hit again, LED blinks white during read, LED off when done)
  • key 2: write (hit once, LED goes orange, hit again, LED blinks orange during write, LED off when done)
  • MODE key: cancel (turn off LED)
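the proposed flow above can be sketched as a small state machine; the names here (ui_state_t, ui_on_key, etc.) are invented for illustration, not actual Ansible firmware code:

```c
#include <assert.h>

typedef enum {
    UI_IDLE,        /* LED off */
    UI_ARMED_READ,  /* LED solid white, waiting for confirmation */
    UI_ARMED_WRITE, /* LED solid orange, waiting for confirmation */
    UI_READING,     /* LED blinking white during disk read */
    UI_WRITING      /* LED blinking orange during disk write */
} ui_state_t;

typedef enum { KEY_1, KEY_2, KEY_MODE } ui_key_t;

ui_state_t ui_on_key(ui_state_t s, ui_key_t key) {
    switch (s) {
    case UI_IDLE:
        if (key == KEY_1) return UI_ARMED_READ;
        if (key == KEY_2) return UI_ARMED_WRITE;
        return UI_IDLE;
    case UI_ARMED_READ:
        if (key == KEY_1) return UI_READING;  /* second press starts read */
        if (key == KEY_MODE) return UI_IDLE;  /* cancel: LED off */
        return s;
    case UI_ARMED_WRITE:
        if (key == KEY_2) return UI_WRITING;
        if (key == KEY_MODE) return UI_IDLE;
        return s;
    default:
        return s; /* keys ignored while the disk operation runs */
    }
}
```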

Serialization got spun out to another thread, although much of the discussion starts here.


would this save/load all presets across all Ansible modes? might be useful to be able to save/load for the active mode only…

this is a good suggestion, thanks

hi tehn, was this implemented? Can I save patches directly via the Ansible USB port?


not implemented yet.

Here is a preliminary Ansible firmware which can save presets to a file on a USB disk called ansible-preset.json when key 2 is pressed and restore nvram_data_t from such a file when key 1 is pressed. This is a pretty rough draft but I’d love to get some testing and feedback!

(much improved version below)
binary: ansible.hex (304.0 KB) 2019/01/23 915ad25
ansible fork: source
libavr32 fork with JSON support: source - tests
ansible.hex -> ansible-preset.json extract tool: python 3 source


  • this is experimental firmware that fiddles with flash, and it would love nothing more than to lock up your Ansible. I am sure there are bugs that can wind up with some real weird stuff in flash, resulting in incorrectly loaded presets or broken app state. But that’s okay, because you’re going to:

  • make a firmware backup. Crucially, the backup should contain the contents of preset nvram. The procedure that works for me is:

    1. power off Ansible
    2. insert the USB A-A cable to your PC
    3. hold the button next to the USB port as you power the module on
    4. run:
      dfu-programmer at32uc3b0512 read > ansible-backup.hex
      dfu-programmer start
      to create a backup and restart Ansible. Note that this uses at32uc3b0512 rather than at32uc3b0256 as the device code. The at32uc3b0256 has smaller flash, and if your Ansible actually has the at32uc3b0512 (not sure if this is true of all of them?), the 0256 command will run fine but will save a firmware image that does not contain any preset data! I expect an ansible.hex file containing presets to be >800 KB.

    You can use the python extract tool for turning an ansible-backup.hex firmware image into an ansible-preset.json file that this firmware can read. Currently this tool only knows how to read images from the 1.6.1 firmware, but I can easily add support for specific older versions if you want to import presets from an older firmware.

    As long as you have such a firmware image, your presets are recoverable. Ansible will write a binary dump of its presets to ansible-backup.bin before loading ansible-preset.json. It will attempt to restore this backup if it detects that it can’t load the JSON file, but there are lots of ways that might crash and burn. It should also be possible to recover presets from ansible-backup.bin if necessary but this will need a slightly different procedure than the hex recovery.

  • it is pretty slow. With the random USB disk I grabbed from my desk drawer (granted, a vintage, slow one) I measured something like 45 seconds to save and up to half an hour to load! Kria state alone takes about 20 minutes to load; I imagine Ansible Earthsea presets would take even longer. The good news is that Ansible will do its best to load only the apps included in the JSON file, so a file edited to contain only one app’s state may load a lot faster. It will also (try to) skip sections of the document it doesn’t know anything about, so backups from newer versions or alt firmwares may also work. Currently, however, Ansible will always save the entire flash contents when creating a backup.

  • the ergonomics could use some work. In particular the following is not currently the case:

    Currently if you press key 1, Ansible starts reading. If you press key 2, Ansible starts writing. There is no cancel button. If there is an error, it will try to restore presets from a binary backup file, but there is no indication that JSON reading failed. I did try to mutex the disk access code to keep from registering multiple button presses, and turn the LEDs on during each operation, but I didn’t write anything to make them blink. I’m also not sure if I correctly wired things up so that it will boot into USB disk mode if there’s a disk plugged in? I think I could use a hand making the feature feel more at home UI-wise.

  • all files Ansible writes to the USB disk will have a modification date of 1969-12-31. Ansible, being capable of faster-than-light communication, has little use for earth timekeeping.

  • your bug here??
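the backup-then-restore flow described above (binary dump before the JSON load, rollback on failure) can be sketched with the disk and flash operations stubbed out; all names here are hypothetical, not actual firmware functions:

```c
#include <assert.h>

typedef int (*loader_fn)(void); /* returns 1 on success, 0 on failure */

/* trivial stubs standing in for real disk/flash operations */
int op_ok(void)   { return 1; }
int op_fail(void) { return 0; }

int load_with_fallback(loader_fn write_bin_backup,
                       loader_fn load_json,
                       loader_fn restore_bin_backup) {
    if (!write_bin_backup()) return 0;   /* dump current presets first */
    if (load_json()) return 1;           /* happy path: JSON loaded */
    return restore_bin_backup();         /* JSON failed: roll flash back */
}
```

the key property is that the binary backup is written before any flash modification, so a malformed JSON file can never leave you with less than you started with (modulo the "lots of ways that might crash and burn" caveat above).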


The libavr32 branch implements some routines which can serialize/deserialize a JSON document into nested fields of a struct. The mapping between struct fields and JSON trees the parser knows to expect is defined by a “document definition” data structure, json_docdef_t (Ansible’s is here). I have the start of another Python tool for helping to generate this data structure based on the struct definitions, but I did have to do some manual cleanup on the output from the current version. The docdef is expected to include valid pointers for all the parsing state that will be needed - most types of JSON objects require some kind of state tracking for reads, but writes are all stateless.
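as a minimal illustration of the docdef idea, a table can map JSON key names to struct field offsets and sizes so one generic routine can deserialize any field. This mirrors the spirit of json_docdef_t, but the types and names below are invented for the example, not the real libavr32 API:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    const char *name;  /* JSON key */
    size_t offset;     /* field offset within the target struct */
    size_t size;       /* field size in bytes */
} field_def_t;

typedef struct { int tempo; int scale; } app_state_t;

field_def_t app_docdef[] = {
    { "tempo", offsetof(app_state_t, tempo), sizeof(int) },
    { "scale", offsetof(app_state_t, scale), sizeof(int) },
};

/* copy a decoded value into the matching field; returns 0 if key unknown */
int docdef_set(void *dst, const field_def_t *defs, int ndefs,
               const char *key, const void *val) {
    for (int i = 0; i < ndefs; i++) {
        if (strcmp(defs[i].name, key) == 0) {
            memcpy((char *)dst + defs[i].offset, val, defs[i].size);
            return 1;
        }
    }
    return 0; /* unknown key: the caller can skip the token subtree */
}
```

the payoff is that serialization and deserialization become data-driven: adding a field to the preset format means adding one table entry rather than new parsing code.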

The parser uses a fixed length text buffer which is passed in with the call to json_read, as well as a buffer for storing JSON tokens the tokenizer finds. The major constraint is that this text buffer should be large enough to contain the largest string you want to read. This is a limitation I’d like to get rid of, I think the tokenizer might require additional modifications/state to remember that it’s in the middle of a string. Tokenization is done with the jsmn library (big thanks to @zebra for the library recommendation and the advice on writing this feature). jsmn has been slightly modified to allow for operation on a JSON stream without requiring any dynamic allocations - the modified version does not attempt to find the matching open brace when it encounters a closing brace, and keeps an additional depth count on each token that allows the parser to skip over entire chunks of the JSON document it doesn’t recognize. With the json_docdef_t in hand, the serializer is pretty trivial.
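the depth-count modification described above can be shown in miniature: with a per-token depth, skipping an unrecognized subtree is a forward scan, with no need to match braces backwards. The token shape here is simplified from jsmn’s real jsmntok_t:

```c
#include <assert.h>

typedef struct { int depth; } tok_t; /* toy token: depth only */

/* given the index of an unknown key's value token, return the index of
 * the first token after its entire subtree */
int skip_subtree(const tok_t *toks, int ntoks, int i) {
    int d = toks[i].depth;
    i++;
    while (i < ntoks && toks[i].depth > d) i++;
    return i;
}
```

for a leaf token this returns the next index; for an object or array it jumps past all nested children in one pass, which is what lets the parser ignore sections of the document it doesn’t recognize.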

I’m a little unsure what can be done about reads being so slow. I think it’s kind of a consequence of doing basically random-access flashc_memcpy calls because we don’t know that the addresses corresponding to the JSON we’re deserializing will be contiguous. If I comment out the actual flashc_memcpy invocation the code for loading the whole file takes about 5 seconds to run as opposed to 20 minutes, as indicated by the time the LED is on. I’d like to do some more detailed investigation but I definitely need to populate the UART header to get a better idea what’s going on, forgot to grab one in my last electronics order.

With a little more work I think this will be a decently usable implementation, and hopefully the JSON code can be reused by Teletype or other modules. I also think a tool for graphically editing preset files would go a long way towards improving the workflow for this sort of thing. I’ll probably start writing a simple browser app for editing at least Kria settings… eventually.



just thinking about the speed— have you disabled all of the timers and interrupts so that “normal” operation is effectively disabled while doing these processes? i apologize i don’t have a minute to actually look at the code

also i think the link above should be ?

this is massive!!

what if you load each preset one by one into RAM and generate JSON from that? this would necessitate saving the current preset first but i think it’s not a big deal.

Yep, sorry about the broken link, fixed now.

I saw that Teletype did an irqs_pause before doing USB disk access, but I figured I’d need to handle button presses and use some timer for blinking the LEDs, so I hadn’t done that. I tried adding irqs_pause/irqs_resume (assuming that’s all I need to do?) in the functions I use to lock/unlock the mutex, and it unfortunately doesn’t seem to have an impact on the read time.

I think it basically will have to operate like this to allow doing block copies. At some level of granularity – like in Kria, maybe a track – we can stack-allocate the kria_track_t, build it in RAM as we walk the JSON tree, and then flashc_memcpy that whole data structure into NVRAM. I haven’t figured out quite how to manage this extra complexity yet, probably worth prototyping to confirm that it actually solves the speed problem before doing much rework.
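the track-level staging idea can be sketched like this: build the whole structure in RAM while walking the JSON, then issue a single block copy to flash. The mock flash write below just counts invocations; in the firmware this role is played by flashc_memcpy, and the track struct here is a stand-in, not the real kria_track_t:

```c
#include <assert.h>
#include <string.h>

int flash_writes = 0; /* counts simulated flash operations */

void mock_flash_memcpy(void *dst, const void *src, size_t n) {
    memcpy(dst, src, n);
    flash_writes++;
}

enum { TRACK_STEPS = 16 };
typedef struct { unsigned char note[TRACK_STEPS]; } track_t;

/* field-at-a-time: one flash write per step (slow on real hardware) */
void load_track_direct(track_t *nvram, const unsigned char *notes) {
    for (int i = 0; i < TRACK_STEPS; i++)
        mock_flash_memcpy(&nvram->note[i], &notes[i], 1);
}

/* staged: build in RAM on the stack, then one write for the whole track */
void load_track_staged(track_t *nvram, const unsigned char *notes) {
    track_t tmp; /* stack-allocated staging copy */
    memcpy(tmp.note, notes, TRACK_STEPS);
    mock_flash_memcpy(nvram, &tmp, sizeof(track_t));
}
```

since each flash operation carries fixed overhead (page erase/program cycles), collapsing N small writes into one block write is where the speedup would come from.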

I poked at this again and, with an FTDI cable in hand, quickly found the problem: I had changed the buffer-to-flash copy function (e.g. a whole track’s worth of note values) to copy the whole buffer at once, but had accidentally left a call to it inside the loop that copied the hex-decoded buffer to flash one byte at a time. After taking this out we are at about 40 seconds to save presets and 1 minute 45 seconds to load, down from 30 minutes!

(much improved version below)
ansible.hex (304.8 KB - d3c3527)
libavr32 @ 54c1713

This could probably be further improved by batching fields together to get a larger flash write. This I think would be achievable with an alternative implementation of json_read_object that stores a struct for the whole object in its state (would have to be dynamically allocated somehow) or params (would have to be statically allocated and referenced in the json_docdef_t) and passes a different json_copy_cb down to its children for them to copy into RAM. Then it would have to call the original copy function to do the flash write right before returning JSON_READ_OK. Similar struct-level caching might improve json_write_object by writing a larger block to the USB disk at a time. Hopefully pretty doable, I’ll look into it cough “soon”.

Edit 2019/05/13:

ansible.hex (313.4 KB - 37cbeec)
libavr32 @ 094fb5d

  • Implemented preset-level RAM caching during reads, using the “live” app state structs to store the cache. Load and save now each take under 10 seconds.
  • Fixed a number of bugs during parsing long buffers, saving decimal values, and generally made things a lot more stable. The tokenizer has been modified so that tokens can span multiple reads. There is still a limitation on maximum buffer size imposed by the size of the working buffer used for hex encoding/decoding. I would like to eliminate this but it works fine for now and there is a possible performance tradeoff.
  • The UI described in the original post in this thread has been implemented. “Cancel” is really more like “disarm” - if you have pressed either Key 1 or Key 2 once to arm reading or writing, the Mode key will return you to the disarmed state. Cancelling during disk access would complicate things considerably to ensure that we bail out safely and so the key is ignored. If an operation fails (e.g. malformed JSON, file not found/could not create file) then both the orange and white LEDs will turn on. Diagnostic information about the failure is written to the UART.
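the multi-read tokenizer change mentioned above can be illustrated with a toy scanner that carries partial-token state across buffer refills (this is not the actual tokenizer code; it just shows the technique on a stream of integers fed in arbitrary chunks):

```c
#include <assert.h>
#include <ctype.h>

typedef struct {
    int partial;    /* accumulated value of a number split across chunks */
    int in_number;  /* nonzero if a number is in progress */
} scan_state_t;

/* feed one chunk; store completed numbers in out[], return how many */
int scan_chunk(scan_state_t *st, const char *chunk, int len,
               int *out, int max_out) {
    int n = 0;
    for (int i = 0; i < len; i++) {
        if (isdigit((unsigned char)chunk[i])) {
            st->partial = st->partial * 10 + (chunk[i] - '0');
            st->in_number = 1;
        } else if (st->in_number) {
            if (n < max_out) out[n++] = st->partial;
            st->partial = 0;
            st->in_number = 0;
        }
    }
    return n; /* a number still in progress stays in st->partial */
}
```

a token that straddles a chunk boundary is simply resumed on the next call, so no single buffer ever needs to hold the whole token; the real tokenizer needs the same idea generalized to strings and nested structures.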

You’d be right to be curious about the memory footprint of all this extra parsing code, document definition structures, buffers (including a 4k disk buffer), and state. Here’s the size summary from master:

   text    data     bss     dec     hex filename
0x1579a   0x654 0x39334  323874   4f122 ansible.elf

and from this branch:

   text    data     bss     dec     hex filename
0x1a54e  0x1844 0x38140  343762   53ed2 ansible.elf

Still room for improvement, but should be reasonably usable in this state.

Edit 2019/05/16: Eliminated requirement for a fixed size hex buffer and reduced the size of the disk buffer without an apparent impact on speed. Made a libavr32 PR.

Edit 2019/05/20: updated with new track direction & TT clock enable state ansible.hex (313.1 KB - 4420970 - PR)