Ok; I did a simpler experiment when I got home, based on the suggestion from @Galapagoose (more accurately, after a couple of hours of alternating fetch with the dog, then the cat, and then the dog again).

5 Devices Connected to the Teletype (added the Ansible to my previous configuration)

  • TT + 2xTXi + 2xTXo + 1xAnsible
  • I enabled the internal pull-ups on the two TXi (I2C_PULLUP_INT)
  • All TX devices set to 400kHz i2c (rough config sketch below)
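
For reference, here’s roughly what that configuration looks like on the Teensy side with the i2c_t3 library - a minimal sketch only; the slave address and pin pair are placeholders and not necessarily what the TELEX firmware actually uses:

```cpp
// Minimal i2c_t3 slave setup: internal pull-ups enabled, 400kHz bus.
// Address and pin constants below are illustrative placeholders.
#include <i2c_t3.h>

const uint8_t I2C_ADDRESS = 0x68;   // placeholder slave address

void receiveEvent(size_t count) { /* handle incoming command bytes */ }
void requestEvent(void)         { /* reply to a read request */ }

void setup() {
  // I2C_PULLUP_INT enables the Teensy's internal pull-ups (~190R on 3.x);
  // I2C_RATE_400 selects 400kHz fast-mode i2c.
  Wire.begin(I2C_SLAVE, I2C_ADDRESS, I2C_PINS_18_19, I2C_PULLUP_INT, I2C_RATE_400);
  Wire.onReceive(receiveEvent);
  Wire.onRequest(requestEvent);
}

void loop() { }
```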

M Script at 10ms:

L 1 8 : TO.CV I TI.PARAM I
TR.PULSE 4

RESULTS:

  • No hanging on bad ports!
  • Been running for over 10 min without issue!!!

This is fantastic!

I’ll report back when I’ve played with it some more.

(Yes - the cat plays fetch. She just brought me a Q-Tip and wants me to throw it as I type this.)

Thx again, @Galapagoose!!

5 Likes

First - my previous test is super stable. It runs and runs and runs and runs. Energizer bunny kind of stuff. I let it go 30 minutes. No problems.

Ok; more testing.

I’ve added the rest of my BETA devices to the bus (2xTXi + 4xTXo + Ansible + TT). If I turn on the internal i2c pull-ups on just one of the Teensys (the one closest to the TT), the whole thing is rock solid.
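
In i2c_t3 terms, that boils down to something like the sketch below - NEAREST_TO_TT is just a hypothetical compile-time switch to show the idea, not an actual TELEX define:

```cpp
// One device (the one nearest the TT) provides the internal pull-up;
// everything else leaves the bus to the external resistors.
#include <i2c_t3.h>

void setup() {
#ifdef NEAREST_TO_TT
  // strong internal pull-up on exactly one Teensy
  Wire.begin(I2C_SLAVE, 0x68, I2C_PINS_18_19, I2C_PULLUP_INT, I2C_RATE_400);
#else
  // all other devices rely on the external/bus pull-ups
  Wire.begin(I2C_SLAVE, 0x68, I2C_PINS_18_19, I2C_PULLUP_EXT, I2C_RATE_400);
#endif
}

void loop() { }
```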

My new MacBook is having more problems than the Teletype, Ansible and expanders at this point.

I have 7 devices hanging off the end of my Teletype and am pounding the ever living snot out of them. Sending commands to devices not on the bus is not a problem. All devices respond to commands. Reads are fault-free. The whole thing is super-responsive at crazy-high polling speeds.

I’ve made my script much more evil - reading all eight TI param knobs and updating all 16 TR and CV values on the output expanders every 10ms. It is NUTS!!! And…

…it is completely solid.

Hell yeah!

9 Likes

So excited to hear this. Great job everyone making tools to blow minds.

Fantastic. Great job by all involved in getting to the bottom of these issues

excellent news!

could you also try the following - when running the scripts try sending high rate (something like 200bpm) triggers to both of the ansible inputs. i think there might still be a separate issue there as i was getting locks with just the ansible attached. so, basically, a 10ms Metro script with both reading and writing, and fast triggers into both inputs.

good news!

what is the internal pull-up value on the teensy?

it’d be good to have some numbers for figuring out how to move forward in expanding the i2c bus.

@tehn:

According to the documentation for the Teensy-optimized Wire library (i2c_t3):

The internal pullup resistances of the Teensy devices are as follows:

  • Teensy 3.0/3.1/3.2 - ~190 Ohms (this is believed to be a HW bug)

    The Teensy 3.0/3.1/3.2 value of ~190 Ohms is very strong (it is believed to be a HW bug), however in most cases it can work fine on a short bus with a few devices. It will work at most any speed, including the max library speeds (eg. breadboard with 3.0/3.1/3.2 device and a few Slave devices usually works fine with internal pullups). That said, multiple devices configured for internal pullups on the same bus will not work well, as the line impedance will be too low. If using internal pullups make sure at most one device is internal and the rest are external.

@scanner_darkly:

I’ve been torturing the Ansible this morning. At high rates, I’ve seen the Ansible CV outputs lock again; this is where, after a while, the LEDs and output values stop responding - but the i2c reads/writes still seem to work as before. All other devices are happily pulsing and CV-ing away.

1 Like

thanks for testing, that’s encouraging! and sounds like it’s not necessarily related to i2c, perhaps heavy writes/reads just create a condition for it to happen.

i’ll try this scenario again but with some additional boundary checking on the ansible side.

@tehn - what are your thoughts on this? could it have something to do with changing the interrupt levels? (i’m not familiar with that area at all)

i don’t have an immediate guess.

previously the I2C irq was the same as the TC irq, and the TC was getting enabled/disabled constantly due to how the event system works, hence the I2C was also getting turned on/off and likely messing things up.

now the I2C irq is on the “UI” level which is for trig inputs.
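
purely to illustrate what that means - this is the generic ASF call for registering an interrupt at a given level; the handler name and the chosen level here are placeholders, not the actual libavr32/teletype code:

```cpp
// Illustrative only: registering the i2c ISR at a specific priority level
// via the generic ASF INTC driver. Names/levels are placeholders.
#include <avr32/io.h>
#include "intc.h"

extern void twi_irq_handler(void);  // existing i2c ISR (name illustrative)

void register_i2c_irq(void) {
  // previously the i2c irq shared a level with the TC (timer/counter) irq;
  // the change moves it to the level used for the trig inputs, e.g. INT1.
  INTC_register_interrupt(&twi_irq_handler, AVR32_TWI_IRQ, AVR32_INTC_INT1);
}
```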

i’ll do some deeper investigation. but i had some sense that we were at “real-world” usability. definitely need to check out the watchdog timer for graceful restart on lock.

a watchdog timer should be trivial, but it seems the non-trivial part is getting the bus back into a healthy state, judging from the links i posted above. but maybe that’s just for more complex scenarios - i wonder if it’s as easy as just putting the twi back into its initial state, since we don’t really care about recovering anything in this case, just clearing it for future transactions. just need to make sure that in this case the rest of the transaction is not sent.
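
roughly what i have in mind - just a sketch, with hypothetical helper names standing in for whatever the firmware actually exposes:

```cpp
// Sketch of the watchdog idea: if a transfer has been in flight too long,
// assume the bus is stuck, drop the rest of the transaction and put the TWI
// peripheral back into its initial state. All externs here are hypothetical
// stand-ins, not the real libavr32/teletype API.
#include <stdint.h>

#define I2C_TIMEOUT_MS 50

extern volatile uint32_t tick_ms;           // 1ms system tick (assumed)
extern volatile uint32_t last_i2c_done_ms;  // updated when a transfer completes
extern volatile uint8_t  i2c_busy;          // set while a transfer is in flight

extern void i2c_abort_current_tx(void);     // hypothetical: discard rest of tx
extern void i2c_reinit(void);               // hypothetical: twi back to initial state

// called periodically from the main loop
void i2c_watchdog_poll(void) {
  if (i2c_busy && (tick_ms - last_i2c_done_ms) > I2C_TIMEOUT_MS) {
    i2c_abort_current_tx();   // make sure the rest of the transaction is not sent
    i2c_reinit();             // clear the peripheral for future transactions
    i2c_busy = 0;
    last_i2c_done_ms = tick_ms;
  }
}
```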

For the Ansible thing I’m seeing:

  • I2C stays healthy with the Ansible; it still responds to read requests.
  • Set values are persisted and returned when reading.
  • More often it is the CV LEDs and output values that get locked; rarely it will be the trigger outputs (same symptom - LEDs frozen and no trigger voltages on the jacks).

It feels to me like the Ansible’s i2c is healthy and that some other process on the unit is loused up, which is causing it to stop updating the LEDs and CV voltages. I haven’t dug around in the code - so I really don’t know what I’m talking about here. So, sorry if I’m wasting ASCII characters. :)


On a side note, I want to thank EVERYONE who has helped with this troubleshooting. I’ve spent the last year building out these two expanders (TXo and TXi) and was starting to sweat heavily about the integration. With the work that you have done and supported with your tests and thoughts in the last week, the expanders are now totally solid with the Teletype. I’m a month or two from being able to ship out a bunch of them to folks and, when that happens, each and every user will be indebted to you for helping ready the platform for their arrival.

THANKS!

Here is a video I posted over in the Teletype Expanders thread with 24 Triggers, 24 CV Values, and 16 reads (8 knobs and 8 CV values) running at 100ms:

6 Likes

I’d just raise the point that the internal pullups are an order of magnitude smaller than typical values in this context. I don’t know the real-world impact this has, but I wonder if it will stress the uC pins, having to pull that much more current to ground when sending data. Would be interesting to see scope traces of the bus in this context.

I wonder if the AT32 pullups are larger values? If so, might it make sense to turn on internal pullups on the TT rather than the expanders? That would avoid having to put a specific firmware on one of the expander modules. I’d encourage some further hardware testing now that the software seems to be working fine.

3 Likes

Yeah; totally hear you. I was a little surprised by the value on the Teensy. My PicoScope just arrived; I’m going to get some views of what is actually going on with the bus “in context” as soon as I plug it in and figure out how to use it.

I would much rather have the pull-up somewhere other than on a single expander. As the proper value would vary based (primarily) on the number of devices you have on the bus, I keep imagining some way to dial it in on the back of the Teletype - like a modified version of @tehn’s bus adapter that lets you use jumpers, DIP switches or a trimmer to adjust the pull-up to the value needed for the number of devices on the bus.
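
For a rough sense of what “the value needed” might be: the i2c fast-mode spec caps the rise time at 300ns, and the 30%-70% rise through an RC is about 0.8473 x R x C, so the maximum usable pull-up is roughly R_max = t_r / (0.8473 x C_bus). A quick sketch - the per-device capacitance is a guess, just to show the shape of it:

```cpp
// Ballpark pull-up sizing from the fast-mode rise-time limit.
// Per-device bus capacitance is a rough guess (module + cabling).
#include <cstdio>

int main() {
  const double t_rise = 300e-9;        // 400kHz fast-mode max rise time (s)
  const double c_per_device = 10e-12;  // guessed capacitance per device (F)

  for (int n = 2; n <= 8; ++n) {
    double c_bus = n * c_per_device;
    double r_max = t_rise / (0.8473 * c_bus);
    std::printf("%d devices: ~%.0f pF bus -> max pull-up ~%.1f k\n",
                n, c_bus * 1e12, r_max / 1000.0);
  }
  return 0;
}
```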

https://cdn.shopify.com/s/files/1/0265/4183/products/IMG_2598_1024x1024.JPG?v=1476999943

Is pin #3 on the unpopulated FTDI connector 3.3V? Perhaps one could bootstrap off of that?

I’ll shoot some pics when I’ve got them.

2 Likes

Just to follow-up:

Dropping 3V3 through a 190R resistor pulls ~17mA(!) through the pins on the microcontrollers. I believe that’s just in spec, though a little uncomfortably close for my liking. Even if that resistor were external it’d be much better, as the heat wouldn’t be inside the uC!

//
AT32UC3 datasheet suggests internal pull-ups are 15k (typical). That’d give 6k in parallel with the 10k hardware resistors. I’d hazard a guess that might be enough to solve the issues you were seeing with many devices connected, though there’s one easy way to find out!
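
Quick sanity check on those numbers (values taken from the posts above):

```cpp
// Back-of-envelope check: sink current through a 190R internal pull-up at
// 3V3, and the 15k AT32 pull-up in parallel with the existing 10k resistor.
#include <cstdio>

int main() {
  const double vdd = 3.3;          // bus voltage (V)
  const double r_teensy = 190.0;   // Teensy 3.x internal pull-up (ohms)
  const double r_at32 = 15000.0;   // AT32UC3 internal pull-up, typical (ohms)
  const double r_ext = 10000.0;    // existing hardware pull-up (ohms)

  // prints ~17.4 mA
  std::printf("Teensy internal: %.1f mA to pull the line low\n",
              vdd / r_teensy * 1000.0);

  // prints ~6.0 k (~0.55 mA)
  double r_par = (r_at32 * r_ext) / (r_at32 + r_ext);
  std::printf("AT32 internal || 10k external: %.1f k (%.2f mA)\n",
              r_par / 1000.0, vdd / r_par * 1000.0);
  return 0;
}
```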

1 Like

keep having this exact problem with my TT and ansible. Also, if the ansible was used in another mode previously, it starts up in that mode again, and if my TT starts up with a script using CV 5-8 it instantly locks up without even showing anything on the screen (scary moment). The only way out seems to be to reboot with usb storage, select another script, change the mode on the ansible, and switch back to your script.

1 Like

newest firmwares on each?

i’ve seen the startup mode bug for tt, i’ll get that fixed. it always starts in either a grid or arc mode, which isn’t right.

yes, 1.3.2 tt and 1.3.4 ansible. took me a while to find out ansible is not in the right mode on startup and this is the cause of the freeze

1 Like

Oh, I thought the i2c issue was fixed after the glorious collaboration of the last two weeks, but I feel a little at a loss now - should it work, or do we have to wait for a final firmware upgrade?

My Ansible seems to be malfunctioning anyway and I am waiting for an exchange, though there are still the remote issues with Earthsea, which seem not to have been addressed yet.

be patient-- we’ll work it out. much progress has been made but there are still some bugs.

2 Likes

Okay - I’ll try to be patient! :slight_smile:

(just wanted to inquire if I have to…)

1 Like