While here, do you see any issue in using polyES with grid and ansible or trilogy with a doepfer A-192-2 for MIDI conversion? http://www.doepfer.de/a1922.htm
Mostly asking because of this:
“Important notes: The CV voltage has to be present at the rising edge of the gate signal. It seems that for some sequencers on the market that’s not true. If a sequencer generates the CV after the rising edge of the gate the A-192-2 will output apparently wrong Midi data because an “old” CV is taken to generate the Midi data. But that’s a problem of the sequencer, not the A-192-2. To fix this problem the A-192-2 would have to “wait” a couple of milliseconds (depending upon the delay between Gate and CV of the sequencer in question) before the CV is measured. But this would degrade the timing because then the Midi data are sent later than the rising edge of the gate signal. Please clarify if you want to use the A-192-2 with a sequencer if the CV/gate timing is correct (for the A-155 that applies of course), i.e. please ask the manufacturer of the sequencer in question if the CV is generated before the Gate and not the other way round.”
polyES sends gates after it sets pitch CVs, but it will still take some time for CVs to “settle” when their values change (from my limited hardware understanding), so it is possible that you will need to use an additional module to delay the gates. the only way to say for sure is to try it, i’m afraid.
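to illustrate the ordering, here's a minimal sketch in C of what a note-on could look like - the function names are hypothetical stand-ins, not the actual polyES code:

```c
// minimal sketch of the CV-before-gate ordering described above;
// dac_set / gate_set are hypothetical stand-ins, not the actual polyES API
#include <stdint.h>
#include <stdio.h>

static void dac_set(uint8_t channel, uint16_t value) {
    printf("CV out %u <- %u\n", channel, value); // stand-in for a real DAC write
}

static void gate_set(uint8_t channel, int high) {
    printf("gate %u <- %s\n", channel, high ? "high" : "low");
}

static void note_on(uint8_t voice, uint16_t pitch_cv) {
    // pitch CV is written first...
    dac_set(voice, pitch_cv);
    // ...but the analog output still needs time to slew/settle, which is
    // why a converter sampling the CV exactly at the gate's rising edge
    // can read a stale value - hence the possible need for a gate delay
    gate_set(voice, 1);
}

int main(void) {
    note_on(0, 4096); // hypothetical note-on for voice 1
    return 0;
}
```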
Need to ask one of my annoying cheap ass questions again, sorry in advance : )
I used ansible polyES just fine for my applications with a greyscale 64 in the past.
I want to make sure I'm correct in assuming the 40h will have the same degree of compatibility - or is there really something that will get in the way between the 40h and ansible?
i don’t have a 40h so can’t say for sure, but according to this thread it should work. in terms of how the app itself will behave on a non-varibright grid - it should be the same experience as using a greyscale.
what is sent to module outputs is determined by how they are mapped on the voice assignment page.
by default, module outputs are mapped to notes (pitch and gate): notes from voice #1 will be sent to CV/gate output #1, notes from voice #2 will be sent to CV/gate output #2 etc etc.
it will use whatever outputs are available. on modules that don’t have an equal number of CV and gate outputs you might only get partial note information (pitch only or gate only):
on meadowphysics you get 8 voices - but only gates
on white whale you get 2 full voices (pitch and gate) and 2 voices with pitch only
on earthsea you get 1 full voice (pitch and gate) and 3 voices with pitch only
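as a rough illustration (the names and numbers below just encode the description above - they're not taken from the actual source), the per-module availability could be modelled like this:

```c
// hedged sketch of the per-module output availability described above;
// the counts follow the description in this thread, names are illustrative
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *name;
    uint8_t cv_outputs;   // pitch CV outputs usable by polyES
    uint8_t gate_outputs; // gate outputs usable by polyES
} module_caps_t;

static const module_caps_t modules[] = {
    { "ansible",       4, 4 }, // 4 full voices (pitch and gate)
    { "meadowphysics", 0, 8 }, // 8 voices, gates only
    { "white whale",   4, 2 }, // 2 full voices + 2 pitch-only
    { "earthsea",      4, 1 }, // 1 full voice + 3 pitch-only
};

int main(void) {
    for (size_t i = 0; i < sizeof(modules) / sizeof(modules[0]); i++) {
        const module_caps_t *m = &modules[i];
        // a voice gets full note info only if both a CV and a gate exist for it
        uint8_t full = m->cv_outputs < m->gate_outputs ? m->cv_outputs
                                                       : m->gate_outputs;
        printf("%s: %u full voice(s)\n", m->name, full);
    }
    return 0;
}
```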
instead of outputting notes you can also configure module outputs to output CV voltage based on the key (note number) or x/y coordinates of the note pressed. this is a convenient way to use polyES to also provide modulation for something like filter cutoff. as an example, you could set up filter key tracking this way:
map voice 1 notes (pitch/gate) to output 1
map voice 1 key to output 2
connect CV output 2 to filter cutoff
you could then map voice 2's x coordinate to output 3 and its y coordinate to output 4 - now if you switch to voice 2, your keyboard becomes an x/y pad controller with CVs output on 3 & 4.
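here's a hedged sketch of that mapping expressed as data - the enum and struct are purely illustrative, not the real polyES structures:

```c
// hedged sketch of the output mapping example above; the enum and struct
// are illustrative, not the actual polyES data structures
#include <stdint.h>

typedef enum { SRC_NOTE, SRC_KEY, SRC_X, SRC_Y } source_t;

typedef struct {
    uint8_t  voice;  // which polyES voice feeds this output (0-based)
    source_t source; // what aspect of the voice is sent
} output_map_t;

static const output_map_t mapping[4] = {
    { 0, SRC_NOTE }, // output 1: voice 1 pitch/gate -> oscillator
    { 0, SRC_KEY  }, // output 2: voice 1 key number -> filter cutoff
    { 1, SRC_X    }, // output 3: voice 2 x coordinate (x/y pad)
    { 1, SRC_Y    }, // output 4: voice 2 y coordinate (x/y pad)
};
```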
The modulations via grid/ES wouldn't be so important for me, but even with your information I can't really tell if it's safe to assume I could play a 3-voice (3 osc) paraphonic synth that has 3x CV in (1 for each osc) but only one gate, since it's paraphonic, not poly?
I forgot ES wasn't designed as a poly module, unlike ansible, so I overlooked this.
Would my application work out fine?
With MIDI I meant MIDI-to-CV, which I'm sure it does; I'm just not sure if it's only monophonic or works the way I assume above.
Actually I am starting to see why you mentioned the other parts, since this alternatively also makes for a pretty complex mono voice. Thanks as always!
imagine voice 1 is your main voice - this is your sequence. now you can use voices 2-4 for modulating various voice parameters - and because you can record these voices, this means you can record your modulations into patterns! and you could apply all the usual pattern transformations to them - speed them up / slow them down / sync them to your main sequence etc.
for completeness' sake - since you can also send these modulation sources to mod buses, you can do the same for any of the available parameters on just friends / er-301 / telexo. voice assignment and output parameters are complex pages (necessarily so), but once you spend the time learning them they will open up some pretty incredible possibilities.
yeah, i’m afraid doing this properly would require reworking the voice assignment page, which is already pretty complex, so it's hard to justify for what is a very limited use case.
So the just type poly functionality would be available with trilogy ES just as with ansible, right?
What will happen with the physical CV outs when using just type - would those be able to trigger something else in parallel?
If I decide on a 128 grid but monobright for a start (let's see where the update goes), what limitations would I face by not having varibright?
I realise that the pattern playback field of polyES is already missing on a 64, so it kind of makes sense to chain my 64 with a second one to get at least a 128 - afaik that can be done (40h).
any supported i2c devices (just friends, txo, er-301) will work regardless of which module you’re running polyES on.
what happens with the physical outputs is up to you - it depends on how you map them. it’s completely flexible. you can map different voices to i2c and physical, you can map the same voices, you can send a voice to physical output 1, er-301 port 4, just friends output 3 etc etc
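as a sketch (the destination names are illustrative, not the real implementation), that kind of fan-out routing could look like:

```c
// hedged sketch of the flexible routing described above; destination
// names are illustrative, not the actual polyES implementation
#include <stdint.h>

typedef enum { DEST_PHYSICAL, DEST_ER301, DEST_JF, DEST_TXO } dest_t;

typedef struct {
    dest_t  dest; // where the voice's pitch/gate is sent
    uint8_t port; // output / port index on that destination
} route_t;

// one voice can fan out to several destinations at once, e.g.
// physical output 1, er-301 port 4 and just friends output 3
static const route_t voice1_routes[] = {
    { DEST_PHYSICAL, 1 },
    { DEST_ER301,    4 },
    { DEST_JF,       3 },
};
```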
re: monobright - really hard to tell. some functionality might be difficult to use with non-vb.
i don’t think that’s supported, unless there is some hardware mod i’m not aware of.
Sorry to bother, but my setup is right now far from finished enough to try… it needs a lot of soldering.
I am not sure I completely understood voice allocation on trilogy ES running polyES.
Let's say I play voice 1, which has a gate, then I play voice 2 while the gate is still high. If I remove my finger from voice 2, it won't stop unless voice 1 sends gate off - same for a 3rd and 4th voice?
Besides a cumbersome and unintuitive gate delay, there isn't much trickery I can do to overcome this?
it depends on how many voices you have enabled for that pattern and how they are assigned.
voice allocation will use whatever voices are available for that pattern. when a new note is played, it will use the first available voice. if all voices are already in use, it will steal from the earliest note.
let’s say you have voice 1 and voice 2 enabled for the current pattern, and both are assigned to output 1. you play the first note - the gate goes high. you play the second note - since you have voice 2 enabled, it will go to that voice, but since voice 2 is also using output 1 it will send pitch to CV output 1 and continue holding the gate high. if you then release note 2, it will set the gate low even though the voice 1 note is still on - it doesn’t do a special check in this case to see if any other voices assigned to the same output are still active. that could be done, but it’s not how it works right now - and it wouldn’t make much sense to implement, since assigning multiple voices to a single output isn’t really useful: with just one output you would simply assign one voice to the pattern and send it to output 1.
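in rough C, the note-off behaviour described above boils down to something like this (hypothetical names, not the actual firmware):

```c
// hedged sketch of the note-off behaviour described above (names are
// hypothetical): releasing a note lowers the mapped gate directly,
// without checking whether another voice shares the same output
#include <stdint.h>

#define NUM_VOICES 2

static uint8_t voice_to_output[NUM_VOICES] = { 0, 0 }; // both voices -> output 1
static uint8_t voice_active[NUM_VOICES];

static void gate_set(uint8_t output, int high) {
    (void)output; (void)high; // real firmware would write the gate pin here
}

static void note_off(uint8_t voice) {
    voice_active[voice] = 0;
    // the gate goes low even if the other voice mapped to this
    // output is still holding a note
    gate_set(voice_to_output[voice], 0);
}
```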
I was assuming the 2nd and a potential 3rd voice would or should be assigned and patched through the separate CV outs. In your example I am not sure how note 1 would still play when note 2 is triggered, since one CV output cannot be duophonic?
it wouldn’t - once you play note 2 the CV output will output pitch for note 2.
for a normal polyphonic application you would have multiple voices, each assigned to its own output. then you play note 1 - it goes to output 1; you play note 2 (while note 1 is still active) - it goes to output 2, and so on. and if you play a note when all of the voices are already in use, it will steal from the earliest note.
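a minimal sketch of that allocation logic, assuming hypothetical names rather than the actual polyES source:

```c
// minimal sketch of the allocation behaviour described in this thread:
// first free voice wins, otherwise steal the voice holding the earliest
// (oldest) note. illustrative only, not the actual polyES source.
#include <stdint.h>

#define NUM_VOICES 4

static uint8_t  voice_active[NUM_VOICES];
static uint32_t voice_started[NUM_VOICES]; // note-on timestamps

static uint8_t allocate_voice(void) {
    // prefer the first voice that is currently free
    for (uint8_t v = 0; v < NUM_VOICES; v++)
        if (!voice_active[v])
            return v;

    // all voices busy: steal from the earliest note
    uint8_t oldest = 0;
    for (uint8_t v = 1; v < NUM_VOICES; v++)
        if (voice_started[v] < voice_started[oldest])
            oldest = v;
    return oldest;
}
```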