hey, thanks for the close look and the clear/specific feedback! very helpful.
these things all need a little work and tightening up. i just need a push to get it done.
right. when rate > 0, you want some post-roll material in the buffer after loop end. when rate < 0, you want pre-roll before loop start. so it’s probably good practice to leave both when loading samples. you need enough pre/post to accommodate the fade times you want.
yes. really, this probably doesn’t need to be a parameter at all. it just needs to be long enough that the read and write interpolation windows don’t overlap. 8 samples is sufficient. offset sign is flipped automatically when rate goes negative.
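just to illustrate the “windows don’t overlap” point, here’s a rough sketch in python (names and the 4-point interpolation assumption are mine, not softcut’s actual code). with a 4-point interpolator each head touches a few samples around its position, so a fixed offset of 8 samples keeps the footprints disjoint, and the sign just follows the playback direction:

```python
# illustrative sketch, not softcut's actual code.
# assumption: 4-point (cubic) interpolation, so each head's footprint
# is roughly [pos - 1, pos + 2] in samples.

INTERP_POINTS = 4  # interpolation window size (assumption)

def write_offset(rate, base_offset=8):
    """offset of the write head relative to the read head, in samples.
    sign follows playback direction, so post-roll becomes pre-roll
    automatically when rate goes negative."""
    return base_offset if rate >= 0 else -base_offset

def windows_overlap(read_pos, write_pos, n=INTERP_POINTS):
    # conservative n-sample-wide footprint for each head
    r0, r1 = read_pos - 1, read_pos + n - 2
    w0, w1 = write_pos - 1, write_pos + n - 2
    return not (r1 < w0 or w1 < r0)
```

with an 8-sample offset the two 4-sample footprints never touch; anything smaller and they can collide mid-interpolation.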
if voices continually read from one point in a buffer and then write that data back (when pre_level = 1.0) at a slightly different point, doesn’t that introduce some small degree of, eh, inaccuracy in loop lengths?
mm… i don’t think so. pre_level isn’t implemented by feedback through the read head. it’s implemented in the write calculation itself. feedback through the read head can be done with the voice matrix mixer. if you use that to implement regenerating delay with one voice then… yea i guess the delay time might be off by 8 samples from what you’d expect.
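to spell out what i mean by “in the write calculation itself”, a minimal sketch (the function name is mine, not softcut’s API): pre_level scales whatever is already in the buffer at the write point, and rec_level scales the incoming signal. no signal path goes through the read head:

```python
# minimal sketch of the write-head mix (names are mine, not softcut's API):
# pre_level scales the existing buffer content at the write point,
# rec_level scales the incoming sample. there is no read-head feedback here.

def write_sample(buf, idx, x, pre_level, rec_level):
    buf[idx] = buf[idx] * pre_level + x * rec_level
    return buf[idx]
```

so with pre_level = 1.0 and rec_level = 0.0 the buffer passes through untouched; with pre_level = 0.0 you get a plain overwrite. any “delay-like” feedback loop only appears if you route a voice’s output back to its input via the matrix mixer.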
dead zone
hey actually, i forgot - this was removed at some point. the configuration wasn’t quite right, it wasn’t doing its job, and it was eating cycles. i should fix it and bring it back. this shouldn’t be hard.
LPF
two things: 1) over the last couple of updates, the default pre-filter settings were changed so that the path is completely dry. this was because we had complaints that the softcut output tone didn’t exactly match the input, and in particular that some phase distortion was detectable at very low frequencies. this is an expected outcome of the filtering. so now the LPF, dry, and modulation settings have to be set explicitly in the script.
and 2) you’re right that the filter modulation only partially mitigates the clicks encountered when crossing rate == 0. first, it’s not a perfect brickwall. second, although rate is updated per sample (with slew), the filter is only updated per block; the filter coefficient computations include a tangent and so are a little expensive. for the softcut~ max external i changed the filter update to per-sample. for norns i could do the same, but i’m concerned about the CPU hit. to mitigate that i’d try to find a decent approximation of tan for the coefficient calculation. i did this for the implementation of a similar filter structure on aleph; it works fine but has the drawback of limiting the useful range of cutoff frequencies - and also (importantly for this application, maybe?) low cutoff frequencies are less accurate.
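for a sense of what a cheap tan approximation looks like, here’s a sketch using a padé-style rational approximation (this is my illustration, not the aleph or softcut code; the svf_g name and the g = tan(pi * fc / sr) prewarp are the usual state-variable-filter convention, which i’m assuming applies):

```python
import math

def tan_approx(x):
    # pade-style rational approximation of tan(x).
    # matches the taylor series through the x^5 term, so it's quite
    # accurate for small x (low cutoff relative to sample rate) and
    # degrades as x approaches 1 rad and beyond -- which is exactly
    # the "limited useful cutoff range" tradeoff.
    x2 = x * x
    return x * (15.0 - x2) / (15.0 - 6.0 * x2)

def svf_g(cutoff_hz, sr=48000.0):
    # prewarped coefficient for a typical state-variable filter (assumption)
    return tan_approx(math.pi * cutoff_hz / sr)
```

at audio-rate cutoffs well below nyquist the error is tiny (no trig call per sample), and the division is far cheaper than a libm tan on most targets.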
finally, there is another de-clicking feature that i implemented in a branch but didn’t merge to master because it carries a significant performance hit. in this feature, each voice (call it V) tracks the position of the record head of each other voice (call it W). proximity of W record head and V play head causes V playback volume to duck.
to me, this is a simple and acceptable solution to clicks introduced when voices read/write overlapping buffer regions at different speeds. (better to my ears than the ‘switch-and-ramp’ integrator approach.) but it requires each voice to be computed per sample, rather than per block (as is done now). this introduces more cache misses and doesn’t let the compiler vectorize inner loops.
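the ducking idea itself is simple; here’s a rough sketch (parameter names, the zone width, and the linear fade shape are all mine, not the branch’s actual code). when another voice’s record head comes within some zone of this voice’s play head, playback gain fades toward zero, with wrap-around distance handled for the circular buffer:

```python
# rough sketch of proximity ducking (names/values are mine, not softcut's):
# voice V's playback gain ducks as any other voice W's record head
# approaches V's play head.

def duck_gain(play_pos, rec_pos, buf_len, duck_zone=480):
    # shortest distance between the two heads in a circular buffer
    d = abs(play_pos - rec_pos)
    d = min(d, buf_len - d)
    if d >= duck_zone:
        return 1.0
    return d / duck_zone  # linear fade; a smoother curve would also work
```

the catch, as above, is that this distance has to be checked per sample per voice pair, which is what breaks the per-block structure.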
so in short, i’d like to button up all these things better. but to make it really super clean, i think i’d want to drop the voice count to 4. (or 5, but that seems weird.) (alternatively, could keep voice count at 6 but limit the number of simultaneous voices that can be writing - write is substantially more expensive.)
maybe this thread is a good place to ask: would anyone object to cutting back to 4 voices as the tradeoff for dang-near perfectly clickless audio in all conditions? (of course it’s still possible to deliberately create ‘clickful’ configurations if you want that.)