Ok; did a simpler experiment when I got home based on the suggestion from @Galapagoose (more accurately, after a couple of hours of alternating fetch with a dog, then cat and then the dog again).
5 Devices Connected to the Teletype (added the Ansible to my previous configuration)
- TT + 2xTXi + 2xTXo + 1xAnsible
- I enabled the internal pull-ups on the two TXi (I2C_PULLUP_INT)
- All TX devices set to 400kHz i2c
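For anyone wiring this up themselves: with the Teensy-optimized i2c_t3 library, the internal pull-up is just a flag in the `Wire.begin()` call. A minimal sketch of the slave-side setup (the 0x60 address and pin pair here are placeholders for illustration, not the actual TXi values):

```cpp
// i2c_t3 slave setup on a Teensy 3.x expander. I2C_PULLUP_INT enables the
// chip's internal pull-ups; I2C_PULLUP_EXT relies on external resistors.
// Address and pin pair are placeholders, not the real TXi configuration.
#include <i2c_t3.h>

void setup() {
    Wire.begin(I2C_SLAVE, 0x60, I2C_PINS_18_19, I2C_PULLUP_INT, 400000);
}

void loop() {}
```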
M Script at 10ms:
L 1 8 : TO.CV I TI.PARAM I
TR.PULSE 4
RESULTS:
- No hanging on bad ports!
- Been running for over 10 min without issue!!!
This is fantastic!
I'll report more when I've played with it some more.
(Yes - the cat plays fetch. She just brought me a Q-Tip and wants me to throw it as I type this.)
Thx again, @Galapagoose!!
5 Likes
First - my previous test is super stable. It runs and runs and runs and runs. Energizer bunny kind of stuff. I let it go 30 minutes. No problems.
Ok; more testing.
I've added the rest of my BETA devices to the bus (2xTXi + 4xTXo + Ansible + TT). If I enable the internal pull-up resistors on just one of the Teensys on the i2c bus (the one closest to the TT), the whole thing is rock solid.
My new Macbook is having more problems than the Teletype, Ansible and Expanders at this point.
I have 7 devices hanging off the end of my Teletype and am pounding the ever living snot out of them. Sending commands to devices not on the bus is not a problem. All devices respond to commands. Reads are fault-free. The whole thing is super-responsive at crazy-high polling speeds.
I've made my script much more evil - reading all eight TI param knobs and updating all 16 TR and CV values on the output expanders every 10ms. It is NUTS!!! And…
ā¦it is completely solid.
Hell yeah!
9 Likes
So excited to hear this. Great job everyone making tools to blow minds.
Fantastic. Great job by all involved in getting to the bottom of these issues
excellent news!
could you also try the following - when running the scripts try sending high rate (something like 200bpm) triggers to both of the ansible inputs. i think there might still be a separate issue there as i was getting locks with just the ansible attached. so, basically, a 10ms Metro script with both reading and writing, and fast triggers into both inputs.
tehn
107
good news!
what is the internal pull-up value on the teensy?
it'd be good to have some numbers for figuring out how to move forward in expanding the i2c bus.
@tehn:
According to the Teensy-Optimized Wire Library's documentation:
The internal pullup resistances of the Teensy devices are as follows:
…
- Teensy 3.0/3.1/3.2 - ~190 Ohms (this is believed to be a HW bug)
…
The Teensy 3.0/3.1/3.2 value of ~190 Ohms is very strong (it is believed to be a HW bug), however in most cases it can work fine on a short bus with a few devices. It will work at most any speed, including the max library speeds (eg. breadboard with 3.0/3.1/3.2 device and a few Slave devices usually works fine with internal pullups). That said, multiple devices configured for internal pullups on the same bus will not work well, as the line impedance will be too low. If using internal pullups make sure at most one device is internal and the rest are external.
@scanner_darkly:
I've been torturing the Ansible this morning. At high rates, I've seen the Ansible CV outputs lock again; this is where, after a while, the LEDs and output values stop responding - but the i2c reads/writes still seem to work as before. All other devices are happily pulsing and CV-ing away.
1 Like
thanks for testing, that's encouraging! and it sounds like it's not necessarily related to i2c; perhaps heavy writes/reads just create a condition for it to happen.
i'll try this scenario again but with some additional boundary checking on the ansible side.
@tehn - what are your thoughts on this? could it have something to do with changing the interrupt levels? (i'm not familiar with that area at all)
tehn
110
i don't have an immediate guess.
previously the I2C irq was the same as the TC irq, and the TC was getting enabled/disabled constantly due to how the event system works, hence the I2C was also getting on/off and likely messing things up.
now the I2C irq is on the "UI" level, which is for trig inputs.
i'll do some deeper investigation. but i had some sense that we were at "real-world" usability. definitely need to check out the watchdog timer for graceful restart on lock
a watchdog timer should be trivial, but it seems the non-trivial part is getting the bus back into a healthy state, judging from the links i posted above. but maybe that's just for more complex scenarios; i wonder if it's as easy as just putting twi back into the initial state, since we don't really care about recovering anything in this case, just clearing it for future transactions. just need to make sure that in this case the rest of the transaction is not sent.
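for reference, the usual recovery recipe (it appears in the I2C-bus spec and various vendor app notes) is: clock SCL up to nine times until whichever slave is holding SDA low releases it, then generate a STOP and re-init the peripheral. a hedged sketch with the pin operations injected - the names here are hypothetical, not anything in the tt codebase:

```cpp
#include <functional>

// try to free a stuck i2c bus: pulse SCL until the slave holding SDA low
// releases it (at most 9 clocks per the usual recipe). pin accessors are
// injected so this is gpio-layer agnostic; all names are hypothetical.
// returns true if SDA ended up released.
bool i2c_bus_clear(const std::function<bool()>& sda_is_high,
                   const std::function<void(bool)>& drive_scl,
                   const std::function<void()>& half_bit_delay) {
    for (int pulse = 0; pulse < 9; ++pulse) {
        if (sda_is_high()) return true;  // bus already free
        drive_scl(false);
        half_bit_delay();
        drive_scl(true);                 // slave may release SDA on this edge
        half_bit_delay();
    }
    return sda_is_high();
    // on success the caller should then generate a STOP condition and put
    // the twi peripheral back into its initial state
}
```

the nine-clock bound comes from a slave being, at worst, mid-byte plus an ACK bit when things went sideways.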
For the Ansible thing Iām seeing:
- I2C stays healthy with the Ansible; it still responds to read requests.
- Set values are persisted and returned when reading.
- More often it is the CV LEDs and output values that get locked; rarely it will be the trigger outputs (same symptom - LEDs frozen and no trigger voltages on the jacks).
It feels to me like the Ansible's i2c is healthy and that some other process on the unit is loused up, which is causing it to stop updating the LEDs and CV voltages. I haven't dug around in the code - so I really don't know what I'm talking about here. So, sorry if I'm wasting ASCII characters.
On a side note, I want to thank EVERYONE who has helped with this troubleshooting. I've spent the last year building out these two expanders (TXo and TXi) and was starting to sweat heavily about the integration. With the work that you have done and supported with your tests and thoughts in the last week, the expanders are now totally solid with the Teletype. I'm a month or two from being able to ship out a bunch of them to folks and, when that happens, each and every user will be indebted to you for helping ready the platform for their arrival.
THANKS!
Here is a video I posted over in the Teletype Expanders thread with 24 Triggers, 24 CV Values, and 16 reads (8 knobs and 8 CV values) running at 100ms:
6 Likes
I'd just raise the point that the internal pullups are an order of magnitude smaller than typical values in this context. I don't know the real-world impact this has, but I wonder if it will stress the uC pins, having to pull that much more current to ground when sending data. Would be interesting to see scope traces of the bus in this context.
I wonder if the AT32 pullups are larger values? If so, might it make sense to turn on internal pullups on the TT rather than the expanders? That would avoid having to put a specific firmware on one of the expander modules. Iād encourage some further hardware testing now that the software seems to be working fine.
3 Likes
Yeah; totally hear you. I was a little surprised by the value on the Teensy. My Picoscope just arrived; I'm going to get some views of what is actually going on with the bus "in context" as soon as I plug it in and figure out how to use it.
I would much rather have the pull-up somewhere other than on a single expander. As the proper value would vary based (primarily) on the number of items you have on the bus, I keep imagining some way to dial it in on the back of the Teletype. Like a modified version of @tehn's bus adapter that allows you to use jumpers, DIP switches or a trimmer to adjust the pull-up to the value needed by the number of devices on the bus.
https://cdn.shopify.com/s/files/1/0265/4183/products/IMG_2598_1024x1024.JPG?v=1476999943
Is pin #3 on the unpopulated FTDI connector 3.3V? Perhaps one could bootstrap off of that?
I'll shoot some pics when I've got them.
2 Likes
Just to follow-up:
Dropping 3v3 across a 190R resistor pulls ~17mA(!) through the pins on the microcontrollers. I believe that's just in spec, though a little uncomfortably close for my liking. Even if that resistor were external it'd be much better, as the heat wouldn't be dissipated inside the uC!
//
AT32UC3 datasheet suggests internal pull-ups are 15k (typical). That'd be 6k in parallel with the 10k hardware resistors. I'd hazard a guess that might be enough to solve the issues you were seeing with many devices connected, though there's one easy way to find out!
1 Like
derz
116
I keep having this exact problem with my TT and Ansible. Also, if the Ansible was used in another mode previously, it starts up in that mode again; and if my TT starts up with a script using CV 5-8, it instantly locks up without even showing anything on the screen (scary moment). The only way out seems to be a reboot with USB storage: select another script, change the mode on the Ansible, and switch back to your script.
1 Like
tehn
117
newest firmwares on each?
i've seen the startup mode bug for tt, i'll get that fixed. it always starts in either a grid or arc mode, which isn't right.
derz
118
yes, 1.3.2 tt and 1.3.4 ansible. it took me a while to find out that the ansible is not in the right mode on startup and that this is the cause of the freeze
1 Like
Oh, I thought the i2c issue was fixed after the glorious collaboration of the last two weeks, but I feel a little at a loss now - should it work, or do we have to wait for a final firmware upgrade?
My Ansible seems to be malfunctioning anyway and I am waiting for an exchange, though there are still the remote issues with Earthsea, which seem not to have been addressed yet.
tehn
120
be patient-- we'll work it out. much progress has been made but there are still some bugs.
2 Likes
Okay - I'll try to be patient!
(just wanted to inquire if I have to…)
1 Like