yes, very much so. sometimes you gotta just kill the quantization
I personally really love the current implementation. You either get completely free, or clocked.
Sometimes I wish there was a pulse that occurred at the end of the recorded phrase, but then you lose a gate out for one channel. At some point, too many little features = a mess.
I forgot I came to the thread this morning specifically to share a very fun patch/setup:
Start with 1 voice allocation for live play and record. Patch the 1v/Oct and gate to at least one voice (I mult’ed cv 1 to 4 voices and mixed them at different levels manually).
Patch Gates 2-4 to envelopes and/or random voltage generators. Patch CV 2-4 to VCAs or other things that go along with those envelopes/generators.
Now, record a phrase on Voice 1. Once looping, turn off Live Play for voice 1, and enable live play on 2-4.
At this point, playing additional notes will trigger envelopes/modulation which modulate the main voice. Depending on where you touch on the grid, you increase/decrease that modulation.
Before you ask - nope, I forgot to turn the recorder on. I will make myself a todo note to eventually do that.
this actually seems very useful and beneficial for earthsea free timing. and could be fitted in easily by adding a way to switch the clock output to trigger on each note or at the start (end?) of a loop (could probably put it on voice allocation page)
So long as you could choose which output you end up using for the “start/end of loop” pulse, e.g. use output 4 when the loop was recorded on output 1.
Also, grain of salt: this feature seems really useful in my mind, but I haven’t tried simulating it. My initial idea was feeding some kind of clock multiplier, but maybe that doesn’t end up with great results, since there’s no guarantee the freely-played notes will be aligned with a tempo derived from a multiple of the loop length.
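To make that concern concrete, here’s a toy sketch (all names and numbers are mine, nothing from the actual firmware): with a clock multiplied up from the loop length, a freely-played note only lands near a tick by luck.

```c
#include <stdbool.h>
#include <stdint.h>

// does a note (note_ms into the loop) fall within tol_ms of a tick of
// a clock running at `mult` pulses per loop? all values hypothetical.
bool note_on_grid(uint32_t note_ms, uint32_t loop_ms,
                  uint32_t mult, uint32_t tol_ms) {
    uint32_t step = loop_ms / mult;  // tick spacing, e.g. 3600/4 = 900 ms
    uint32_t off = note_ms % step;   // distance past the previous tick
    return off <= tol_ms || (step - off) <= tol_ms;
}
```

E.g. with a 3600 ms loop and a x4 multiplier, a note at 1800 ms sits on the grid, but one at 1234 ms is ~334 ms off, and free playing gives no guarantee either way.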
I’m happy to try something out though if you end up with an implementation idea.
i clearly need to refresh my memory on how ansible earthsea actually works - i was thinking there is a separate clock output
so yeah, you’d have to choose which output is assigned for that. i do think it would be useful - you could use it as a clock for another earthsea, or use it to clock a sequencer that could provide transposition, or trigger S&H that would change filter cutoff on each loop etc, or reset some slow modulation source. even with clock mul could be interesting to have an interplay between strict and loose timing.
Since this idea would essentially also free up the CV output for that selected voice (or, another way to think of it, grab an extra channel/voice to tag along with the one in the recording), it might be interesting to provide that whole voice as a double: keep the gate out firing on sequence end/begin, and copy the CV. That way people could use it for, say, filter cutoff or other expression. Pass it through a slew like Maths and bingo, extra modulation!
Not to sound like a broken record, but a column on the voice allocation page could work well for that “utility channel” as well.
I love the idea of an end/begin trigger, as this would also allow syncing to an external DAW in a more precise way. The CV of the channel providing the trigger could also perhaps be used as a simple AD envelope for a gate from one of the other channels… or maybe I’m overthinking things, but this would be extremely cool for me in my small setup and would free things up.
But I want to thank @scanner_darkly for his tremendous work: the missing clocking was something I really found disappointing in the original Earthsea, and this makes Ansible even more versatile. I just need to get hold of a second Ansible, as my Arc is currently only rarely used…
Just wanted to add thanks to @scanner_darkly for all this work and second requests for a link to donate something. It’d feel good to be able to show support for all this.
+2, i would be happy to support @scanner_darkly’s amazing work!
any chance ansible earthsea will ever have a scale quantizer? perhaps sharing scales with the other apps? it would be a game changer for me personally.
It’s already quantized. You just have to learn the scales on the keyboard layout, like any other instrument.
okay, poly/mono thoughts (i’ll post about overlapping notes separately).
a couple of things first: i do plan on adding the ability to play multiple patterns at the same time. also, when we talk about poly/mono i think it helps to think of it in terms of voice allocation. poly would be where notes can be played using any of the allocated voices, mono is where notes are always assigned to a particular voice.
my initial thought was that using the ability to play multiple patterns simultaneously would provide a workable way to emulate mono. say, voice 1 is bass and voice 2 is lead. you could record a pattern and set it to voice 1 only, then record another pattern and set it to voice 2. pattern 1 is your bass line, pattern 2 is your lead. if you want to change just the lead part it’s easy to do so, just re-record that pattern.
what i like about it is simplicity and flexibility. you can freely reassign voices, so if you have 2 lead voices you could easily swap them, for instance. it’s easy to combine patterns, since each one should contain either one voice or several similar voices (if you, say, combine a pattern with one bass voice with a pattern that uses 3 other voices for chords). this makes it easy to remember what each pattern contains.
the downsides of this approach: it seems more cumbersome to switch patterns than to switch voice pages (although this could probably be addressed). and i agree that voice allocation will need to be improved (some way to select a single voice quickly). also, this can be a bit restrictive with 16 patterns.
now, let’s say we introduced a way to have patterns with fixed voice assignment. i’m going to say right away, i don’t think having it as a separate mode (so you would have to specify for each pattern whether it’s poly or mono) is a good solution, as then you have 2 different types of patterns, each one with its own way to record / overdub / playback.
instead, each pattern could allow a mix of fixed and free voices. let’s say there are 4 buttons in the bottom row that allow you to select a voice (one at a time). when a single voice is selected, and you’re recording, the notes will be assigned to that voice. during playback it could dimly light the corresponding voice button. a 5th button will select free voices. any of the voices that already have notes assigned to them as fixed will not be available as free voices.
pros: more intuitive recording / overdubbing. patterns can contain multiple voices.
cons: a more complex interface is required. it can be hard to remember which parts each pattern contains. fixed voices cannot be reassigned, unless a new workflow is introduced for copying between voices, so one more thing to have in the UI and to remember.
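just to make the mixed fixed/free idea concrete, a minimal sketch (all names and structures are mine, not actual ansible code): a note recorded with a voice button held is pinned to that voice; a free note takes the first idle voice that isn’t reserved as fixed. note-off handling and voice stealing are omitted.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_VOICES 4
#define VOICE_FREE 0xFF  // hypothetical marker: note not pinned to a voice

// per-voice state: reserved as fixed (set by the UI) / currently sounding
static bool voice_fixed[NUM_VOICES];
static bool voice_busy[NUM_VOICES];

// assign an incoming note: a fixed note always goes to its own voice;
// a free note takes the first idle voice not reserved as fixed.
// returns the voice index, or -1 if no free voice is available.
int8_t assign_voice(uint8_t fixed_voice) {
    if (fixed_voice != VOICE_FREE) {
        voice_busy[fixed_voice] = true;
        return (int8_t)fixed_voice;
    }
    for (uint8_t v = 0; v < NUM_VOICES; v++) {
        if (!voice_fixed[v] && !voice_busy[v]) {
            voice_busy[v] = true;
            return (int8_t)v;
        }
    }
    return -1;  // all free voices in use (steal policy not shown)
}
```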
so this is where i’m at at the moment. i don’t have a strong preference for either approach, maybe a bit more for the 1st one as it’s more clear and implements the desired functionality in a more elegant way. but would love to hear everybody’s thoughts on this.
thanks for all the kind words - much appreciated! no patreon plans for the near future but i’ll give it more thought.
I really like this aspect of your idea(s). I personally don’t have a need to ‘re-assign’ a pattern to a different output; I would be totally fine if “voice” 1-4 just meant “output” 1-4. It would also be great if multiple of these voice buttons could be held so the pitch would apply to all of them (say you want them all at the same note).
FWIW, I don’t think I’d ever use the “free voices” 5th button. I almost always have 3-4 voices in my patches, but I think of them as separate timbres more so than multiple voices in a chord. As such I haven’t been able to make really good use of the poly functionality as is (I usually either don’t use voices 2-4, or I map them to control non-pitch things).
re: overlapping notes. to simplify the discussion, let’s say you are playing live with one voice only, so all notes go to cv/trigger pair 1.
the problem is when you accidentally overlap 2 notes. what happens is - the gate stays high, so if there is an envelope that is controlled by the gate it won’t get retriggered.
so there are 3 options:
1. insert a small gap: if a note is triggered while another note is active, drop the gate briefly first. this means the 2nd note will be delayed by however long this gap is; or, if we want consistency, we could insert the gap for all notes. this is not ideal as it introduces latency.
2. the current behaviour, where the gate stays high. this option is best for capturing expression as it plays exactly as you do (and one thing to consider: if legato glide was ever implemented it would require this option).
3. as above, but with the option to select triggers/gates. you lose the ability to control gate length if you use triggers (you’ll have to use your envelopes for that), but it does address the issue of accidentally overlapped notes. i should mention you can select triggers/fixed width/gates right now, but it only applies to playback; the idea is to apply a similar option to live play.
(there are other options using external modules like the one @desolationjones suggested).
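for reference, the three options boil down to what the output does when a new note arrives while the previous gate is still high. a toy sketch (mode names and widths are mine, not firmware API):

```c
#include <stdbool.h>
#include <stdint.h>

// hypothetical live-play gate modes, one per option above
typedef enum { MODE_GAP, MODE_GATE, MODE_TRIG } live_mode_t;

#define GAP_MS 2    // option 1: forced low time before a retrigger
#define TRIG_MS 10  // option 3: fixed trigger width

// options 1 vs 2: how long the output goes low before the new note's
// gate rises. 0 = no retrigger, the gate simply stays high.
uint32_t retrigger_gap(live_mode_t mode, bool gate_high) {
    if (mode == MODE_GAP && gate_high)
        return GAP_MS;  // envelopes reset, but the note is delayed
    return 0;           // MODE_GATE keeps the gate high; MODE_TRIG's
                        // output already fell after TRIG_MS
}

// is the output still high t_ms after note-on (key still held)?
// in trigger mode it falls after the fixed width, so overlaps can't
// suppress the next envelope.
bool output_high(live_mode_t mode, uint32_t t_ms) {
    return mode == MODE_TRIG ? (t_ms < TRIG_MS) : true;
}
```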
#1 is a non-starter in my opinion.
#3 actually sounds useful even if not using it to deal with overlap, so I definitely vote #3
this comes down to whether the voice selection buttons act as toggles or radio buttons - i was thinking the latter as it would make switching faster. could perhaps allow multiple buttons to be selected while held?
one interesting way to use free voices even (or especially!) with different timbres is when you have live and playback share some voices. then as you play it will, say, use voices 1&2 and only occasionally 3 - a good way to inject variation that depends on how it’s played. this will be even more the case when multiple pattern playback is added.
A requirement of being held for addressing multiple voices is totally cool imo. Since they’d all get the same note, losing one hand isn’t too bad.
I think because i often have really long releases, or am modulating amplitudes by hand, what often happens for me is voice 3 or even 2 will get left at a note that eventually sounds really ugly against another note that voice 1 ends up playing. When voices are fully gated, it sounds fine. But if they’re droning on, then often a voice will get “orphaned” at some note. My use case might be particularly specific… i’m not sure. I’ve been trying to imagine how i’d prototype this with TT and GridOps… and find the time to try haha.
I vote option #2, the expression option, although I’d be fine with option #3 too.