ok i’m seeing the glitch, but it seems like an LED glitch, not actually messing up the CV output.

can you confirm this?

I tried - on the first trial Ansible and Teletype froze instantly. On the second trial I could not replicate the behaviour. This fits with my observation that the state is not always the same on power-on. For a short time I had it in a state where Cycles output continuous instead of stepped voltage, but then it froze…

how many modules are on your II ribbon chain?

which commands are freezing?

i can confirm that parameter reads inside M scripts will bring it down. are you seeing others?
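
for reference, by a parameter read inside M i mean something like this (channel and destination here are just placeholder values):

CV 1 CY.POS 1

a read of an ansible value like that inside the M script is what brings it down here.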

Earthsea, Ansible and Meadowphysics are on my i2c bus. I think all are on the latest firmwares.

I have not found any particular commands that freeze the modules so far; I mostly tried reset and position for all Ansible apps. Teletype/Ansible is reacting absolutely erratically.

for the time being during testing, can you switch your ii cable to ansible and limit the setup to just tt and ansible?

@tehn Yes, I have disconnected MP and ES now. When I powered on the case today I found Cycles stepped again, and the CY.POS range is definitely 0 - 64, with 16 at east, 32 at south and 48 at west…

…And Teletype just froze again while running CY.POS 1 63 in the M script with M set to 250.

:neutral_face:

EDIT: Okay, it freezes when you omit the Cycles channel in the command - CY.POS 63 outputs 27 a few times, then freezes Teletype or both modules.
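
To make that explicit, these are the two forms I tried in the M script (position value 63 just as in my test):

CY.POS 63
CY.POS 1 63

The first line is missing the Cycles channel and is the one that freezes things; the second is the full channel + position form.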

it sounds like you’re repeatedly running into a known bug which is param reads inside the metro.

is this on a fresh flash of ansible?

I am not sure what you mean: fresh flash of ansible?

apologies-- newest firmware update, with default settings

Ah, yes, fresh flash - I am just scaling the output range down. Teletype is the same: nothing saved, just single- or two-line scripts or live mode commands.

@tehn Now that i2c is running better, I did a short video exploring the range behaviour of Cycles.

This script pushes cycle 1 forward from 0 - 64, which is one full cycle, and pushes cycle 2 forward by the read value of cycle 1, which is four cycles, since cycle 1's read values have a range of 0 - 255 (to clarify, the movement comes just from the metro script; Cycles itself is set up to move very slowly/stand still, as you see on three and four):

X ADD 1 MOD X 64
CY.POS 1 X
CY.POS 2 CY.POS 1

Also mind the pattern of the LED glitch - it’s always one cycle doing it after the other. On the first two cycles it is masked by the programmed positions.

I am happy that i2c remote control has taken a step forward and hope this helps with debugging Ansible.

:+1:

@tehn I am still struggling with Cycles. See the strange position read/write behaviour above, and here is how CV gets stepped after powering the case down:

It is possible to fix it by reflashing the firmware, but only until the next power-down. This does not occur with the Ansible 1 firmware from Dec. 2.

This is already the replacement module I received.

@Leverkusen does the stepping happen if you don’t downscale?

thanks for posting an exact replication video, this helps.

@tehn Yes, it happens whether I downscale the range or not - I just did that to better fit the oscillator range for the video.

@tehn another thing that frightened me a bit:

When I plug in the update cable while the case is powered and then power it down, the whole case gets powered via the USB cable and Ansible - all LEDs of all modules in the case stay lit.

This does not happen when I plug in the same update cable while the case is already powered down.


definitely power down the case first, and then plug the cable in.

can you confirm that you’re getting stepped voltages for all 4 channels?

Yes, on all four channels.

Could something have gotten damaged by plugging the USB in while still powered?

I’ve noticed this as well, it’s frightening indeed. Is any damage possible?

no, your modules are fine.

@leverkusen can you try flashing new, power cycling, and testing cycles with an attenuator instead of using the scaling? then power cycling and seeing if the stepping happens?

@tehn I'm not sure I understood you right - you mean to test whether using the scaling function in between does any harm?

If so, it doesn't - reflashing fixes everything, and a short power cycle breaks it again even if nothing was touched (apart from nudging the Arc encoder…).

Since a similar issue occurs with Earthsea - I am seeing the old, previously fixed clamping freeze again with the new i2c-proven firmware, while you can't reproduce it - I wonder if there is something wrong with the update routine.

I just downloaded the Homebrew thing again, but it did not make any difference (on a MacBook Pro Retina late 2013, OS X 10.11.6).