from a quick test the behaviour seems pretty consistent. with this script:

L 5 8 : CV I SUB 16000 CV I
L 5 8 : TR.PULSE I

running it by itself seems fine with M set to 10ms. running it while sending triggers to the inputs locks both ansible and tt after a min or two. adding rx buffer protection and increasing the number of buffers does not seem to help.
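to be clear about what i mean by rx buffer protection, it's along these lines (just a sketch, not the actual libavr32/ansible code; the names are made up):

// sketch only: bounds-checked ring buffer write that drops incoming
// bytes when the ring is full instead of overwriting unread data.
#include <stdint.h>

#define RX_BUF_SIZE 32

static volatile uint8_t rx_buf[RX_BUF_SIZE];
static volatile uint8_t rx_write;
static volatile uint8_t rx_read;

// returns false if the byte had to be dropped
static bool rx_push(uint8_t b) {
    uint8_t next = (rx_write + 1) % RX_BUF_SIZE;
    if (next == rx_read)
        return false;            // buffer full: drop rather than overrun
    rx_buf[rx_write] = b;
    rx_write = next;
    return true;
}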

going to grab a bite (almost said a byte!) and will do more testing once i’m back.


@scanner_darkly - How many devices are on your bus?

My script just ran for about 2 hrs at 10ms with just the Ansible and TT on the bus. Still running after getting back from lunch. I just updated my previous script to the following:

A WRAP ADD A 1 0 10
CV 5 V A
L 6 8 : CV I CV SUB I 1
L 1 4 : CV I CV ADD I 4
L 5 8 : TR.PULSE I

I’m at about ten minutes now. Seems stable. Going to give it a few more and then start adding devices.


i’d like to point out that huge bundles of i2c packets take time to send out, so it’s actually possible to schedule more packets than the transfer rate can reasonably handle while still having enough time in the main loop event processor to get actual tt-things done.

i’ll do some scoping to get some actual numbers, but with i2c at 400khz it’s not the fastest-- roughly 10x faster than midi, and that’s great, but let’s remember that this is a control-rate environment we’re aiming at.
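rough back-of-the-envelope before i scope it (assuming roughly 4 bytes on the wire per ii write, which is a guess):

// back-of-the-envelope i2c timing at 400khz; real transactions add
// start/stop conditions, clock stretching and driver overhead on top.
#include <stdio.h>

int main(void) {
    const double bit_us   = 1e6 / 400000.0;  // 2.5 us per clock
    const double byte_us  = 9 * bit_us;      // 8 data bits + ack ~= 22.5 us
    const int    tx_bytes = 4;               // address + command + 2 data bytes (assumed)
    printf("one write ~= %.0f us on the wire\n", tx_bytes * byte_us);  // ~90 us before overhead
    return 0;
}

so even a single small write is on the order of 100us once overhead is included, and a bundle of them adds up quickly.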

that said, if things are crashing for reasons unrelated to overwhelming the time constraints of actual transmission, let’s fix them!

ansible, just friends and ww.

i’m still able to get it to lock when running the script i posted at high rates while simultaneously hitting the inputs, which is strange since i’m not reading the inputs, just CVs, so in theory it shouldn’t really affect anything. but the same script with nothing connected to the inputs ran for over an hour with 10ms M with no issues.

agreed that the extremes we’re testing are not likely to come up in real use, but it would be good to get to the point where it just drops some packets instead of locking. i added some boundary checks in process_ii but it still locks, so i don’t think the lock is caused by corrupted i2c data breaking something in the actual ansible code; more likely the i2c transmissions themselves are getting corrupted, tt trying to send while ansible is sending, etc.
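the boundary checks are roughly this shape (illustrative only, not the actual process_ii; the handler name and packet layout are made up):

// sketch only: validate length and channel index before touching any
// state, and reject the packet instead of indexing out of bounds.
#include <stdint.h>
#include <stddef.h>

#define CV_COUNT 4

static uint16_t cv_values[CV_COUNT];

// returns false if the packet is rejected
static bool ii_handle_cv_set(const uint8_t *data, size_t len) {
    if (data == NULL || len < 3)
        return false;                     // need channel + 2 data bytes
    uint8_t channel = data[0];
    if (channel >= CV_COUNT)
        return false;                     // out-of-range channel
    cv_values[channel] = ((uint16_t)data[1] << 8) | data[2];
    return true;
}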

@tehn - a couple of questions:

  • you mentioned upthread you were going to change things on tt side to write to i2c immediately instead of queuing but the code still uses both tele_ii_tx_now and tele_ii_rx - was there a reason you decided to not use it?
  • i’m looking at twi_slave_tx - why does it return 27 when tx_pos_write == tx_pos_read? this should mean there is nothing to write, but i’m confused why 27 specifically.

THE GOOD

I’m able to successfully run the test with my Teletype, Ansible and one TXi on the bus. I can make it blisteringly fast for a long time without locking up. :slight_smile:

I can also add in four reads from the TXi to set the voltages:

L 5 8 : CV I TI.PARAM SUB I 4
CV 5 V A
L 6 8 : CV I CV SUB I 1
L 1 4 : CV I CV ADD I 4
L 5 8 : TR.PULSE I

With that much i2c reading, I have to reduce the clock speed of the metro, as I see the TR.PULSEs falling out of sync with each other if we attempt to sample too fast. But it is stable. It does not lock up. It is awesome to finally be using the TXi on a Metro event reliably. :slight_smile: :slight_smile: :slight_smile:

THE NOT SO GOOD

In the previous configuration, I have noticed that the Ansible will stop displaying changes to the CV values in its LEDs and will stop updating the voltages coming out of the jacks. Oddly, the Teletype’s CVs (which are reading the values from Ansible) will still update and output the proper values.

THE WORSE

If I put a second TELEX device on the i2c bus (TT, Ansible, 2xTXi), the basic script (without the calls to the TXi) fails within a few seconds.

Based on these tests, it seems that the success of reads is tied to the number of devices on the bus. I’ll keep knocking around on it.


i simply changed tele_ii_tx to be the same as tele_ii_tx_now in implementation, a sloppy fast fix. rx hasn’t changed.
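so in effect it’s just this (signatures approximated, not the exact code):

// sketch: the queued variant just forwards to the immediate-send one,
// so nothing sits in the tx queue anymore.
#include <stdint.h>

void tele_ii_tx_now(uint8_t addr, uint8_t *data, uint8_t len);  // sends on the bus right away (defined elsewhere)

void tele_ii_tx(uint8_t addr, uint8_t *data, uint8_t len) {
    tele_ii_tx_now(addr, data, len);  // bypass the queue entirely
}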

27 is ESC; no particular significance, except that it’s possibly easier to debug, since 0 typically reads the same as “uninitialized”, so getting 27 back is an immediate sign of underflow
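roughly this shape, in other words (a sketch, not the actual twi_slave_tx):

// sketch: slave tx byte handler. when the read pointer catches the write
// pointer there's nothing queued, so return a recognizable sentinel
// (27 == ascii ESC) instead of 0, which would look like uninitialized data.
#include <stdint.h>

#define TX_BUF_SIZE 64

static uint8_t tx_buf[TX_BUF_SIZE];
static volatile uint8_t tx_pos_write;
static volatile uint8_t tx_pos_read;

static uint8_t slave_tx_next_byte(void) {
    if (tx_pos_read == tx_pos_write)
        return 27;                                // underflow: flag it visibly
    uint8_t b = tx_buf[tx_pos_read];
    tx_pos_read = (tx_pos_read + 1) % TX_BUF_SIZE;
    return b;
}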


in general, agreed that it needs to fail gracefully and not crash. have you been able to identify whether it’s tt or ansible that’s crashing? (likely not both, honestly. resetting them individually is very telling-- not sure if your reset buttons are populated, next to the uart header?)


so for this condition specifically (looping CV reads/writes and then looping TR.PULSE in a 10ms metro while sending triggers to the inputs) TT definitely locks, since it doesn’t respond to the keyboard anymore. will be rearranging modules in a bit (newly arrived isms!) and will see if resetting TT fixes it (i do have a reset button on the TT but not on ansible i think).


@scanner_darkly - Can you try reducing the number of devices on your II bus?

yep, will give this a try!


well, this is interesting. so i have:

  • only ansible is connected to tt
  • M is running at 10ms
  • L 5 8 : CV I SUB 16000 CV I
  • L 5 8 : TR.PULSE I
  • clock out from Orca into IN1 with Orca running at max speed
  • clock out from MP into IN2 with MP running at max speed

it still locks. if i reset TT and then try doing a single read or a single write to ansible it doesn’t respond and TT locks immediately.


@scanner_darkly - with your script and my TT + Ansible + TXo configuration, the Ansible stopped updating after a few minutes but the TT remained responsive. As before, I could poll the Ansible for CV values and it still dutifully sent them back. It would also accept and return updates over II, but the LEDs and voltage outputs remained locked.


I validated that the i2c initialization for the two expanders sets I2C_PULLUP_EXT. So, that shouldn’t be a problem.

Based on a point up above about the TT’s i2c clock rate, I’m wondering if I should force-set the expanders to 400kHz. The max rate the Teensy i2c Library supports at the expander’s clock speed is 3.0M; they are currently set to 2.4M which, ostensibly, gets negotiated down by the reality of the bus.
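Something like this is what I have in mind for the init call (sketch only; the slave address and pin choice here are placeholders, not the TELEX source):

// sketch: pin the Teensy i2c slave to 400kHz instead of I2C_RATE_2400,
// keeping the external pullup configuration the expanders already use.
#include <i2c_t3.h>

const uint8_t II_ADDRESS = 0x66;  // placeholder slave address

void setup() {
    // mode, address, pins, pullup, rate
    Wire.begin(I2C_SLAVE, II_ADDRESS, I2C_PINS_18_19, I2C_PULLUP_EXT, I2C_RATE_400);
}

void loop() {}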


I should reiterate the point that @tehn made above for those who are following along - we are doing absolute torture tests to more quickly identify break points and potential issues. This isn’t real-life stuff.

What the Teletype has been capable of in these tests is seriously impressive - a testament to the well-designed hardware and software. It has only taken relatively small optimizations to unlock a massive amount of potential in the module and its growing ecosystem: the Trilogy modules, Ansible, JustFriends, Orca, and the TELEX expanders. Given the talent in the community - this feels like only the beginning for one of the most powerful modules in Eurorack!


i noticed too that in some cases the LEDs on ansible stop updating but TT is not locked and i can execute CV reads and writes without problems. didn’t check the actual CV output.

my plan was to check with the picoscope next to see what happens when it locks, managed to lock it almost right away and verified the scope was capturing everything properly, got ready for the actual run and, of course, now i can’t get it to lock :joy:


managed to lock it a couple more times, but not much new info from decoding the i2c bus. everything looks totally fine and then it just stops. this is likely due to how i have the picoscope configured: it captures a buffer after the i2c start (falling edge on the SDA line), so it’s possible that the issue is that SDA is getting pulled down and then nothing happens after that. edit: reproduced again and this time checked SDA after the lock and indeed it was low.
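for reference, the textbook recovery for SDA stuck low is to bit-bang up to nine clock pulses on SCL until the stuck slave shifts out its remaining bits and releases the line; a generic sketch (the pin helpers are hypothetical, not tt/ansible code):

// generic i2c bus-clear sketch, not from the tt or ansible firmware.
// after SDA is released, a STOP condition can be generated and the
// twi peripheral re-initialized.

// hypothetical gpio helpers for whatever platform this runs on
extern bool sda_read(void);
extern void scl_set(bool level);
extern void delay_us(unsigned us);

bool i2c_bus_clear(void) {
    for (int i = 0; i < 9 && !sda_read(); i++) {
        scl_set(false);
        delay_us(5);         // ~100khz recovery clock
        scl_set(true);
        delay_us(5);
    }
    return sda_read();       // true if SDA was released
}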

i’ll try to find a way to capture what happens but the problem is that it’s getting very difficult to get it to lock in the first place - which is good news really!


I know I don’t have any clue what you all are doing there, but I appreciate the activity on this very much. The above-mentioned part about the Ansible code reminds me of something that irritated me when trying the new firmware (and the one before from last week):

After I installed the upgrade, Ansible alone worked okay (apart from the input voltage bug), but after a few crashes the voltages Cycles emits are no longer continuous but stepped. My very limited knowledge leads me to suspect that Ansible’s behaviour can change after it locks up and needs a power cycle, and I wonder whether this should be considered while troubleshooting.

i feel that you’re experiencing issues i’m completely unable to replicate-- e-mail info@monome.org and i’ll get you a replacement

I disconnected Ansible from i2c and gave Earthsea another try with the new firmware. It no longer seems to skip commands as it did before when clocking from teletype, but every few minutes it slows down for a few seconds, as I experienced before.

And when switching the grid between Earthsea and Meadowphysics via Switch or by hot-plugging, there is a short pause when switching back to Earthsea, and after some back-and-forth switching Meadowphysics no longer recognizes the grid or reacts to the speed knob but keeps playing. Still, switching the grid to Meadowphysics in this state speeds its tempo up a little bit…:neutral_face:

Looks like this (steady Meadowphysics trigger to teletype with II ES.CLOCK 1):

EDIT: The lock-ups on grid switching happen on Earthsea too - at the moment (that is, since I no longer clock it from teletype) more often than on Meadowphysics.

my first test with TT & ansible i2c communication started well but unfortunately it’s totally locked up now.

script (from memory):

1:
CY.RES 1

2:
CY.RES 2

3:
CY.RES 3

4:
CY.RES 4

5:
CY.REV RAND 4

(6-8 unused)

I:
M 10

M:
P.N 0
X RSH CY.POS 1 5
P.I X

the above script seemed to be running very smoothly and without any noticeable problems for about 10 minutes. then i got excited and added 3 more lines to M:

P.N 1
Y RSH CY.POS 2 5
P.I Y

basically adding a read of cycles position 2 and using that value to update pattern 2’s index.

TT locked up real quick and became totally unresponsive. i rebooted a few times with no luck. in fact, the screen remained completely black (eek!). i pulled the i2c cable and rebooted. nothing. then i pulled all the incoming triggers (from meadowphysics) & rebooted. it came back to life. now i can use the keyboard to open other TT scenes, but if i try to open the scene shown above it will crash as soon as it receives any input - even hitting TAB on the keyboard.

/// FOLLOW-UP ///

I just tried to replicate this scene from scratch and i was able to freeze up TT with the same exact script without adding the last 3 lines to M. it seemed to be working fine at first, but i cranked up the speed of cycle 1 and it froze.

Tried again with a slower metro - 100ms - and it froze up after about 60 seconds.

/// ROUND 3 ///

another attempt, this time with an M of 250ms. all ran well for about 10 mins and then TT locked.

i’m running your script at 10ms metro, all cables patched, with all 6 lines in the metro. no crashes, running 5 mins so far.

did you update ansible as well as tt?

also-- pretty crazy scrubbing the tracker with arc knobs--

update

tt crashed after 15 minutes. wasn’t a stack overflow. ansible continued running fine. ok then.

yeah! is this asking too much of TT?

another discovery:

i was sending random position changes to random cycles on a script:

CY.POS RRAND 1 4 RAND 255

and although it worked wonderfully, i observed random leds lighting for a split-second on the arc rings. ansible cv out wasn’t affected by these glitches so it seems to be purely visual.

ok-- back to reality check on the i2c transmission times.

8 TR.TIME
8 TR.PULSE
8 CV reads
8 CV writes

= 5.8ms transmission time. (that is a long time).
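rough sanity check on that number (packet sizes and per-transaction overhead are guesses):

// sanity check of the 5.8ms figure; byte counts and overhead are estimates.
#include <stdio.h>

int main(void) {
    const double byte_us = 9 * (1e6 / 400000.0);         // ~22.5 us per byte at 400khz
    const double per_txn = 6 * byte_us + 50.0;           // ~6 bytes avg + ~50us start/stop/driver
    const int    txns    = 8 + 8 + 8 + 8;                // TR.TIME, TR.PULSE, CV reads, CV writes
    printf("~%.1f ms total\n", txns * per_txn / 1000.0); // lands around 5.9 ms
    return 0;
}

so ~6ms of bus time for 32 transactions is about right.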

this runs reliably at 20ms metro time. but, the 20ms interval of the metro timer gets totally hosed. off by almost 5ms.

10ms is not reliable.

so, time for some hard realities, i think. with a 400khz bottleneck for the avr32, high-ish speed polling is not practical. there simply isn’t enough processor headroom to get things done.

i’m going to think about this, and i hope you guys can think along these lines also-- let’s try for a logical assessment of what’s possible.
