Kria -- Freezing When Saving Presets

Running an i2c configuration with TT, TXb (16n), Ansible (Kria), and JF.

When saving new patterns on Kria, occasionally everything on the i2c bus freezes. The patterns are successfully saved (my changes are loaded upon a power cycle), but immediately after the press-and-hold-to-save is received, Ansible and TT freeze and JF stops emitting sound (it’s only being triggered by TT). Upon power cycling, TT boots up but is frozen (similar to its behavior when 16n is plugged into the TXb but not powered), and Ansible is also frozen (no buttons lit on the grid). At least, that’s what happens half the time. The other half of the time everything comes back and all is groovy (the I script runs on TT, Kria starts cycling, JF makes pretty sounds).

This freeze issue occurs ~75 percent of the time, and is fixed half the time with a power cycle. Unfortunately my workflow involves a LOT of Kria editing, and I like to save often as I go, so the power-cycle diversions add up.

Running a Make Noise PSU well below maximum, latest firmware on all modules involved (the issue started after updating the firmware on both Kria and TT to the April editions, which I was late to…).

So upon my last power cycle all of my Kria sequences were deleted. This was after an attempt to use the FB.S command, switch one of the Kria patterns to Teletype clock, and save the Kria sequence.

I’ve confirmed that Ansible is NOT running in Leader mode.

Well, that’s a bit puzzling. What I2C stuff, if any, are you doing with Teletype? And is this reproducible if nothing is connected to the TXb?

I2C scripts used:

FB X
J QT KR.CV…
JF.NOTE…
KR.PAT…

(Here X is a stand-in; the actual scripts use a variety of values to perform these functions.)

Going to try to reproduce without 16n on the TXb.

Thanks for replying :blush:

Total crash, all data lost while attempting to save progress on the morning tweaks before testing without 16n. Gonna take a sec to cool down (physically, since the system is on a screened porch, and emotionally, since 50+ hrs of writing just evaporated), then go back to testing.

Upon further reflection, I have not updated my 16n firmware (pretty sure it shipped with 1.3), and will do that as Task 1. Apologies for the initial misreporting; I had not realized there was a 16n update. The full crash was after testing out FB.S commands.

Does an outdated 16n firmware seem a likely culprit?

WOW, I’m so sorry to hear that! And I cannot account for how this would happen unless something is going very wrong and corrupting flash; in particular it would have to overwrite the very first byte in flash memory, which shouldn’t happen for any reason. Please take a hex backup using this procedure and PM me the ansible.hex file and I will try to do some forensics.

I am not familiar with the faderbank at all and will have to go read some code to make any guesses about what is happening here. However, I believe that the FB.S-type ops do not actually send any I2C messages and instead just store these values locally on Teletype.

My initial thoughts on this, I guess, are: it’s possible the module’s flash is failing entirely (I think this would take lots of write cycles, many thousands), or else this is something similar to the Teletype bug from a while back where scenes were getting wiped; I don’t think we ever really understood why. The freezing behavior you describe is especially odd, particularly that it only popped up after updating to Ansible v3. What version were you on before updating? There haven’t really been any changes to how preset storage works, or to any of Ansible’s i2c follower behavior, in quite a while. Would like to get @scanner_darkly’s input.

might be 2 separate issues.

so ansible is not running as a leader but it freezes too? when it freezes, what does it look like, no response to grid presses? are outputs still being updated? is the behaviour the same when it’s frozen after power cycling? did you ever experience a freeze upon powering up when it didn’t previously freeze during a save?

i doubt faderbank firmware version matters. “before testing without 16n” - so you have a scene that uses FB ops but you test it without FB connected?

totally wild guess: perhaps with the latest update there is more being written to flash, and flash operations block i2c causing the bus corruption. weird that ansible would freeze while not being a leader but possible i guess (and sounds like you also control kria from tt).

another wild guess: is it possible that ansible switches to leader mode while in preset mode? also might be a good idea to disable i2c temporarily before saving to flash and enabling it after.
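the “disable i2c temporarily before saving to flash” idea might look roughly like this. this is a hedged, host-testable sketch with simulated i2c/flash state — the function names and flags here are made up for illustration, not the actual libavr32/ASF calls the real firmware would use:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* hypothetical i2c/flash layer, simulated for illustration;
   real firmware would mask the TWI peripheral/interrupt instead */
static bool i2c_active = true;
static unsigned char flash_page[256];      /* stand-in for a flash page */
static bool corrupted_during_write = false;

static void i2c_disable(void) { i2c_active = false; }
static void i2c_enable(void)  { i2c_active = true; }

/* simulated blocking flash write: if an i2c transaction could still
   fire while this runs, the bus (or the write) could be corrupted */
static void flash_write(const unsigned char *src, unsigned len) {
    if (i2c_active) corrupted_during_write = true;
    memcpy(flash_page, src, len);
}

/* preset save wrapped in an i2c guard, per the suggestion above */
static void save_preset_guarded(const unsigned char *preset, unsigned len) {
    i2c_disable();              /* stop follower traffic first */
    flash_write(preset, len);   /* blocking write is now safe */
    i2c_enable();               /* resume i2c afterwards */
}
```

the point of the guard is simply that the blocking flash write can never overlap an i2c transaction, so a mid-save follower request can’t corrupt either the bus or the page being written.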

i would say this sounds to me like the bus is not stable in general. try using shortest cable possible to connect fb and use the correct sequence of powering (iirc you have to connect fb first and then power it? not sure, it’s in the fb threads somewhere). also could you try disconnecting txb (so, just leave tt / ansible / jf connected) and see if you still get the freezing issue.

yeah, we never found out what caused the issue on teletype, and without a screen i don’t see any elegant way to add a safeguard similar to what we did on teletype without confusing users. one thing that seems common to both scenarios is that there is something being written to flash outside of saving presets - on teletype it was the last screen used (which we since removed), on ansible it looks like it updates the follower config? perhaps there is some really edge case scenario where flash becomes corrupted or the fresh byte gets accidentally overwritten. perhaps we could add an extra guard, use a longer marker or have more than one marker… not sure what else we can do with this issue to be honest.
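the “longer marker / extra guard” idea could be sketched like this — purely illustrative C, where the struct layout, magic string, and checksum are invented for the example and are not ansible’s actual nvram format:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* hypothetical preset header: a multi-byte magic instead of a single
   "fresh" byte, plus a checksum over the payload */
#define PRESET_MAGIC "ANSB\x01"   /* 5-byte marker, illustrative only */
#define MAGIC_LEN 5

typedef struct {
    char magic[MAGIC_LEN];
    uint32_t checksum;
    uint8_t payload[64];
} preset_t;

/* simple rolling checksum (placeholder; a real CRC would be better) */
static uint32_t preset_checksum(const uint8_t *d, size_t n) {
    uint32_t c = 0;
    for (size_t i = 0; i < n; i++) c = c * 31u + d[i];
    return c;
}

/* stamp the marker and checksum just before writing to flash */
static void preset_seal(preset_t *p) {
    memcpy(p->magic, PRESET_MAGIC, MAGIC_LEN);
    p->checksum = preset_checksum(p->payload, sizeof p->payload);
}

/* at boot: only load the stored preset if both guards pass,
   otherwise fall back to defaults instead of loading garbage */
static bool preset_valid(const preset_t *p) {
    return memcmp(p->magic, PRESET_MAGIC, MAGIC_LEN) == 0
        && p->checksum == preset_checksum(p->payload, sizeof p->payload);
}
```

with a multi-byte marker plus a checksum, a single accidentally overwritten byte no longer looks like a valid preset, which is exactly the failure mode a lone fresh byte can’t distinguish.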

final note - sorry to hear this, it must be super frustrating. i assume you’ve seen you can also backup to USB / computer? not a proper solution but at least not as painful as losing your complete work…

on backup: Definitely! I just manically disregarded that bit cuz I’d gone through weeks of re-configuring after a months-long moving process. Def not best practices but the sheer joy of really digging into Kria outweighs this data-grief.

I’m going to circle back around to this, this evening (it’s hot out!), since it’s looking like there’s some more involved configuring to be done to try and hash it all out.

Thank you both so much for responding so promptly / thoroughly to these issues. Best support one could hope for :slight_smile:
