re: i2c - i saw the sequence for getting the bus back into a healthy state somewhere, i’ll post it if i find it. i think addressing a non-existent device is a separate issue and should be easy to fix (since in that case no ACK comes from a slave at all, vs. a slave holding SDA low).

pretty sure i was able to get TT to lock when addressing a non-existent CV too. also, when trying CY.RES i managed to get TT to lock immediately - i think this was because ansible was in TT mode and not Cycles, but i’m not 100% sure since i only had time for a very quick test. i’ll check again when i get a chance.

so what about the idea of interrupting a script if it gets triggered again before it finishes executing, and indicating it with a flashing icon or something?

1 Like

fyi, ansible changes i2c addresses based on what mode it’s in. so these are potentially the same issue.

that’s probably a rational thing to check out, good call

1 Like

Interrupting the script seems wrong to me – who knows what you’re interrupting and what effect it will have on the scene? I get that it’s in the territory of bad-user-script, but it seems wrong to not treat a script as atomic, at least to itself. Triggers from a random source aren’t always predictably sparse.

I’d rather see the second trigger dropped, or maybe the script queued up to run a second time, with a maximum queue depth of 1.

Just my 2 cents.

(Of course, you can effectively give a script self-atomicity by having it mute itself at the top and unmute itself at the bottom.)
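For the drop-or-queue-once idea, here’s a minimal sketch in C of what I have in mind; the names (script_busy, script_pending, run_script) are hypothetical and this is not how the Teletype firmware is actually structured:

    /* Sketch of "drop or queue at most once" script scheduling.
       All names are hypothetical, not the real Teletype scheduler. */

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_SCRIPTS 8

    static bool script_busy[NUM_SCRIPTS];     /* script currently executing */
    static bool script_pending[NUM_SCRIPTS];  /* at most one queued re-run per script */

    /* stand-in for actually running script n to completion */
    static void run_script(uint8_t n) {
        printf("running script %u\n", n + 1);
    }

    /* called whenever a trigger fires script n */
    void script_event(uint8_t n) {
        if (script_busy[n]) {
            /* already running: remember at most one extra run (queue depth 1);
               any further triggers arriving while busy are simply dropped,
               which is also where a flashing-icon indication could hook in */
            script_pending[n] = true;
            return;
        }
        script_busy[n] = true;
        do {
            script_pending[n] = false;
            run_script(n);   /* the script itself is never interrupted */
        } while (script_pending[n]);
        script_busy[n] = false;
    }

    int main(void) {
        script_event(0);   /* simulate a couple of triggers on script 1 */
        script_event(0);
        return 0;
    }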

that’s why i suggested a visual indication as well. you will know your scripts are not executing fully and have a chance to fix it by lowering the rate or editing the script.

1 Like

found a description of a method to clear the i2c bus:
http://www.forward.com.au/pfod/ArduinoProgramming/I2C_ClearBus/index.html

potentially related to the issue we’re seeing:
http://processors.wiki.ti.com/index.php/I2C_Tips#External_Slave_Device_Hanging_the_Bus_by_Holding_SDA_Low
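
for reference, the method described at both links boils down to: while SDA is stuck low, pulse SCL up to nine times so the hung slave can finish clocking out whatever byte it thinks it is sending, then generate a STOP. a rough, untested sketch in C - the pin helpers (sda_read, sda_set, scl_set, delay_us) are hypothetical stand-ins for the real GPIO layer:

    /* Rough sketch of the generic i2c bus-clear procedure (see links above):
       clock SCL up to 9 times while SDA is stuck low so a hung slave can
       finish shifting out its byte, then generate a STOP condition.
       The pin helpers below are hypothetical; wire them to the real
       open-drain GPIO layer of whatever platform this runs on. */

    #include <stdbool.h>

    bool sda_read(void);         /* sample SDA level */
    void sda_set(bool high);     /* drive low or release (open-drain) */
    void scl_set(bool high);
    void delay_us(unsigned us);

    bool i2c_clear_bus(void) {
        sda_set(true);           /* release both lines first */
        scl_set(true);

        /* up to 9 clock pulses while the slave still holds SDA low */
        for (int i = 0; i < 9 && !sda_read(); i++) {
            scl_set(false);  delay_us(5);   /* ~100 kHz half-period */
            scl_set(true);   delay_us(5);
        }

        if (!sda_read())
            return false;        /* slave still holding SDA low: give up */

        /* generate a STOP (SDA low -> high while SCL is high) */
        sda_set(false);  delay_us(5);
        sda_set(true);   delay_us(5);
        return true;
    }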

2 Likes

Since updating to the latest Teletype firmware, the Teletype occasionally gets into a state where the trigger pulses no longer go back down to zero. It is still running scripts (relatively simple ones without i2c commands), and it is possible to turn the pulse off with TR.TOG, but the output goes high again with the next input trigger that runs the script and then stays high. It’s a bit strange, since it runs for quite some time before this suddenly happens…

I think that you guys nailed the code for i2c; it is solid for both reads and writes with a small number of devices. In this configuration, if you type the wrong address for an output or input - no foul. Things just keep on running.

As the device count grows, performance starts to degrade. I’m assuming this has to do with the aggregate bus loading relative to the pull-up resistor value in the Teletype (a rough sketch of the numbers follows the list). Here is what I am seeing:

  • Writes are solid at high rates up to a large number of devices. I was able to PUMMEL 6 TXo devices while 8 total TX devices were on the bus (+2 TXi). Writes were fast with hardly any perceivable timing discrepancies. We’re talking 24 Triggers and 24 CV values updating every 10ms. Runs and runs and runs. Wow.

  • The stability of read operations decreases with the addition of devices to the bus.

  • The odds that the TT will freeze when given an address that is not available (for read or write) go up precipitously with the number of devices on the bus.
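
If the bus-loading hunch above is right, the I2C spec gives a way to put rough numbers on it: every added device and cable run adds capacitance, and the pull-up has to charge that capacitance within the 300 ns rise time allowed at 400 kHz (R_p(max) = t_r / (0.8473 × C_bus)). Here is a back-of-the-envelope sketch; the per-device and per-cable capacitances are purely illustrative guesses, not measurements:

    /* Back-of-the-envelope check of the bus-loading hypothesis.
       R_p(max) = t_r / (0.8473 * C_bus), with t_r = 300 ns for
       400 kHz fast mode (from the I2C spec).  The per-device and
       per-cable capacitances are illustrative guesses only. */

    #include <stdio.h>

    int main(void) {
        const double t_r = 300e-9;            /* max rise time, 400 kHz fast mode */
        const double c_per_device = 10e-12;   /* assumed pF per device (illustrative) */
        const double c_per_cable  = 50e-12;   /* assumed pF per cable run (illustrative) */

        for (int devices = 2; devices <= 8; devices++) {
            double c_bus = devices * (c_per_device + c_per_cable);
            double r_max = t_r / (0.8473 * c_bus);
            printf("%d devices: ~%.0f pF -> max pull-up ~%.1f kOhm\n",
                   devices, c_bus * 1e12, r_max / 1e3);
        }
        return 0;
    }

The takeaway: as capacitance grows, the maximum allowable pull-up shrinks, so a fixed pull-up that was fine for two devices can end up too weak for eight.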

TESTS (all TX units hard-set to a 400 kHz i2c bus)

2xTXi + 1xTT [AWESOME FOR READS]

  • no hanging on bad ports
  • 10ms reads from 4 pots to CV 1-4

2xTXi + 1xTXo + 1xTT [GOOD FOR READS]

  • no hanging on bad ports
  • 10ms pulses to TO.TR 1-4
  • 10ms reads from TI.PARAM 1-4 to TO.CV 1-4
  • 10ms reads from TI.PARAM 5-8 to CV 1-4
  • froze only after ~5 min

2xTXi + 2xTXo + 1xTT [FAIL FOR READS AND MISTAKES]

  • hangs on bad ports
  • only got to test 1000ms reads from TI.PARAM 1-4 to CV 1-4
  • froze after 2s

I am running the shortest cables I have between units.

This doesn’t seem to be a software thing anymore - it is rock solid for 2 external devices plus the TT. It is only when you add units on the bus that things go downhill. I saw the same behavior with the Ansible the other day; I don’t think it matters what is connected - just how many devices.

6 Likes

how is the situation at lower rates? I’m asking because TT + three monome modules already isn’t a rare scenario… adding txi+txo… hmm

nevertheless, great progress everyone, thank you!

Frequency doesn’t seem to matter - just # of units. Frequency is just a good way to get it to lock up faster in that “almost stable” state. It’s also a good way to identify if a certain configuration is stable.

I should be super clear, by the way, that I’ve only been testing with configurations that have my expanders, the TT and Ansible in the mix. The expanders are a completely different platform than the trilogy modules. It could be that they are the wildcard causing the problem.

We should have some more definitive answers soon as more tests are performed by more people. I will probably pull out my other trilogy modules and bring them into the mix.

2 Likes

do you see any significant drop in voltage level when adding more devices?

1 Like

I’ll scope it tonight with my junkyScope. Picoscope hasn’t arrived yet. :wink:

BTW: playing around before leaving for work - added Ansible back into the mix. TT still borked out on nonexistent ports (e.g. CV 9 V 10). With Ansible + TT + expanders on the bus, the TT seemed to be more resilient when reading from Ansible - but I was still able to get it to lock without making any calls to the expanders (just Ansible).

2 Likes

You could try hanging an extra set of 10k pull-up resistors onto the bus lines. Of course it would be best to have a definitive test case for “this particular set of commands takes x minutes to crash when n devices are connected” and see if that changes.
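
For intuition: parallel pull-ups lower the effective resistance, which shortens the RC rise time on a capacitance-heavy bus, at the cost of more sink current when a line is pulled low. A quick sketch; the 2.2k “existing pull-up” and 400 pF bus capacitance are assumptions for illustration, not the Teletype’s actual fitted values:

    /* Quick parallel-resistance check for adding extra pull-ups.
       The "existing pull-up" and bus capacitance values are assumptions
       for illustration; substitute whatever is actually on the bus. */

    #include <stdio.h>

    static double parallel(double a, double b) { return (a * b) / (a + b); }

    int main(void) {
        const double r_existing = 2200.0;    /* assumed existing pull-up (ohms) */
        const double r_added    = 10000.0;   /* proposed extra 10k pull-up */
        const double c_bus      = 400e-12;   /* assumed worst-case bus capacitance */

        double r_eff = parallel(r_existing, r_added);   /* ~1803 ohms */

        /* rise time scales with R*C, so the stronger (lower) effective pull-up
           charges the same bus capacitance noticeably faster */
        printf("effective pull-up: %.0f ohms\n", r_eff);
        printf("RC before: %.0f ns, after: %.0f ns\n",
               r_existing * c_bus * 1e9, r_eff * c_bus * 1e9);
        return 0;
    }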

1 Like

Good idea; I’ll give it a shot. :slight_smile:

1 Like

Ok; did a simpler experiment when I got home, based on the suggestion from @Galapagoose (more accurately, after a couple of hours of alternating fetch with the dog, then the cat, and then the dog again).

5 Devices Connected to the Teletype (added the Ansible to my previous configuration)

  • TT + 2xTXi + 2xTXo + 1xAnsible
  • I enabled the internal PullUps on the two TXi (I2C_PULLUP_INT)
  • All TX devices set to 400 kHz i2c

M Script at 10ms:

L 1 8 : TO.CV I TI.PARAM I
TR.PULSE 4

RESULTS:

  • No hanging on bad ports!
  • Been running for over 10 min without issue!!!

This is fantastic!

I’ll report more when I’ve played with it some more.

(Yes - the cat plays fetch. She just brought me a Q-Tip and wants me to throw it as I type this.)

Thx again, @Galapagoose!!

5 Likes

First - my previous test is super stable. It runs and runs and runs and runs. Energizer bunny kind of stuff. I let it go 30 minutes. No problems.

Ok; more testing.

I’ve added the rest of my BETA devices to the bus (2xTXi + 4xTXo + Ansible + TT). If I turn on the internal i2c pull-up resistors on just one of the Teensys (the one closest to the TT), the whole thing is rock solid.

My new Macbook is having more problems than the Teletype, Ansible and Expanders at this point.

I have 7 devices hanging off the end of my Teletype and am pounding the ever living snot out of them. Sending commands to devices not on the bus is not a problem. All devices respond to commands. Reads are fault-free. The whole thing is super-responsive at crazy-high polling speeds.

I’ve made my script much more evil - reading all eight TI param knobs and updating all 16 TR and CV values on the output expanders every 10ms. It is NUTS!!! And…

…it is completely solid.

Hell yeah!

9 Likes

So excited to hear this. Great job everyone making tools to blow minds.

Fantastic. Great job by all involved in getting to the bottom of these issues.

excellent news!

could you also try the following - when running the scripts, try sending high-rate triggers (something like 200 bpm) to both of the ansible inputs. i think there might still be a separate issue there, as i was getting locks with just the ansible attached. so, basically, a 10ms Metro script with both reading and writing, and fast triggers into both inputs.

good news!

what is the internal pull-up value on the teensy?

it’d be good to have some numbers for figuring out how to move forward in expanding the i2c bus.

@tehn:

According to the Teensy-Optimized Wire Library’s Documentation:

The internal pullup resistances of the Teensy devices are as follows:

  • Teensy 3.0/3.1/3.2 - ~190 Ohms (this is believed to be a HW bug)

    The Teensy 3.0/3.1/3.2 value of ~190 Ohms is very strong (it is believed to be a HW bug), however in most cases it can work fine on a short bus with a few devices. It will work at most any speed, including the max library speeds (eg. breadboard with 3.0/3.1/3.2 device and a few Slave devices usually works fine with internal pullups). That said, multiple devices configured for internal pullups on the same bus will not work well, as the line impedance will be too low. If using internal pullups make sure at most one device is internal and the rest are external.
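
As a rough spec check on that ~190 Ohm figure (using the standard fast-mode assumptions of a 3 mA sink at V_OL = 0.4 V, nothing Teensy-specific): the minimum total pull-up at 3.3 V works out to roughly (3.3 − 0.4) / 3 mA ≈ 970 Ohms, so even a single 190 Ohm internal pull-up is already below spec, and two in parallel are far too strong. A small sketch of the arithmetic:

    /* Sanity check on the ~190 ohm internal pull-up quoted above.
       Assumptions are the generic fast-mode spec figures: devices must
       sink 3 mA at V_OL(max) = 0.4 V, so R_pullup(min) = (Vdd - 0.4) / 3 mA. */

    #include <stdio.h>

    static double parallel(double a, double b) { return (a * b) / (a + b); }

    int main(void) {
        const double vdd = 3.3, vol = 0.4, i_ol = 3e-3;
        double r_min = (vdd - vol) / i_ol;                 /* ~967 ohms */

        double one_internal = 190.0;                       /* already below r_min */
        double two_internal = parallel(190.0, 190.0);      /* ~95 ohms: far too strong */

        printf("spec minimum pull-up: ~%.0f ohms\n", r_min);
        printf("one internal pull-up: %.0f ohms (sinks %.1f mA when low)\n",
               one_internal, (vdd - vol) / one_internal * 1e3);
        printf("two in parallel: %.0f ohms (sinks %.1f mA when low)\n",
               two_internal, (vdd - vol) / two_internal * 1e3);
        return 0;
    }

Which is consistent with the library’s advice above: at most one device on the bus should have its internal pull-up enabled, with the rest relying on external pull-ups.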

@scanner_darkly:

I’ve been torturing the Ansible this morning. At high rates, I’ve seen the Ansible CV outputs lock again; this is where, after a while, the LEDs and output values stop responding - but the i2c reads/writes still seem to work as before. All other devices are happily pulsing and CV-ing away.

1 Like