DIY Eurorack - Aleph inspired delay

Hi all,

I’m trying to make my own eurorack delay inspired by some of the Aleph demos I’ve seen, particularly this one by @dspk https://vimeo.com/97864348, as I love the sounds on this one. I had an idea in my head about how to implement something similar, but as is often the case with these things, the implementation was slightly less impressive than what was in my mind. I’d really appreciate some feedback from some Aleph experts, or anyone really.

I’ve thrown together a very quick demo of what I have so far below. What you are seeing is based on the Teensy board. I’ve implemented a short delay (the Teensy only has 64k of RAM, so that’s about 0.5s, although I have an 8-bit mode which gives 1s). It has tap tempo for the delay time. On every beat (top flashing LED), a random test is made (the frequency of which is controlled by the top pot); if the test succeeds, the module decides to glitch. This is when the middle LED comes on. At the moment, this ‘glitch’ just disables writing to the delay line and loops a tiny buffer (a 16th note in size) for one whole beat. The buffer starts at the now ‘frozen’ write head position. It’s an okay effect, but it’s quite far from the Aleph demo. I’m not getting those lovely, almost granular sounds.
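In case it helps make the discussion concrete, here’s a stripped-down C++ sketch of the freeze/loop logic described above. All names and sizes are illustrative, not the actual GlitchDelay code:

```cpp
#include <cstdint>
#include <cstddef>

// Toy mono glitch delay: normally writes into a circular buffer;
// when "frozen", writing stops and the read head loops a small
// window starting at the frozen write position.
struct GlitchDelay {
    static const size_t SIZE = 1024;   // tiny, for illustration only
    int16_t buf[SIZE] = {0};
    size_t write_head = 0;
    size_t loop_start = 0;
    size_t loop_len = 64;              // the "16th note" sized window
    size_t read_off = 0;
    bool frozen = false;

    void freeze()   { frozen = true; loop_start = write_head; read_off = 0; }
    void unfreeze() { frozen = false; }

    int16_t process(int16_t in) {
        int16_t out;
        if (frozen) {
            // loop the tiny window; the write head stays put
            out = buf[(loop_start + read_off) % SIZE];
            read_off = (read_off + 1) % loop_len;
        } else {
            out = buf[write_head];               // full-length delay read
            buf[write_head] = in;
            write_head = (write_head + 1) % SIZE;
        }
        return out;
    }
};
```

The real module would drive `freeze()`/`unfreeze()` from the beat clock and the random test, of course.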

All code is open-source here https://github.com/cutlasses/GlitchDelay although I’m not sure it will be of much use to anyone yet :slight_smile:

Whilst I don’t think it’s going to be possible for me to mimic a lot of what is going on in Duncan’s video on the Teensy (the Aleph delay buffer is far larger), I’m sure I can get a bit closer. Things I’d like to establish are…

  1. What exactly is the Aleph scene in Duncan’s video doing? There seems to be an almost granular effect, is this purely the delay heads jumping around?
  2. Is there any way for me to view Aleph scenes in a human-readable form so I can get a better idea of what’s happening? I’d assumed these patches were written in C, but Duncan tells me they are created on the Aleph itself.
  3. I believe the Aleph has a built-in crossfade when delay heads move? I’m guessing this is implemented by moving the playhead over time to its new destination, rather than snapping it?
  4. Is the Aleph code open-source?

Blimey, didn’t mean to write that much, if you got this far and have any ideas or comments, let me know! Thanks in advance!

2 Likes

Do you have Max 7? I can recommend two open-source algorithms contained in there for click-free granular delays and pitch shifting. They’re the same algorithm but with one tweak.

Look at “Smooth Delay” in the BEAP package and gen~.pitchshift.maxpat in the Examples folder.

It’s built using a multi-tap delay algorithm with two taps. You use two cosine windows to alternate between the two taps, and you only change a tap’s position when its associated window is closed.

For pitch shifting, you do the same thing, but you use the window phase to scrub through that tap.

It’s confusing to type out, but seeing the patch visually really helps.
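For anyone who prefers code to patches, here’s a rough C++ interpretation of that dual-tap idea. This is a sketch under my own assumptions, not the actual BEAP implementation:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Click-free variable delay via two crossfaded taps: each tap is
// only moved while its raised-cosine window is fully closed, so
// the position jump is never audible.
struct SmoothDelay {
    std::vector<float> buf;
    size_t write_head = 0;
    float phase = 0.0f;             // 0..1, drives both windows
    float phase_inc;
    size_t tap[2];                  // delay (in samples) of each tap
    size_t target_delay;

    SmoothDelay(size_t size, float window_hz, float sample_rate)
        : buf(size, 0.0f), phase_inc(window_hz / sample_rate) {
        tap[0] = tap[1] = target_delay = size / 2;
    }

    void set_delay(size_t samples) { target_delay = samples; }

    float read_tap(size_t delay) const {
        return buf[(write_head + buf.size() - delay) % buf.size()];
    }

    float process(float in) {
        buf[write_head] = in;
        write_head = (write_head + 1) % buf.size();

        // two raised-cosine windows, 180 degrees apart
        const float TWO_PI = 6.2831853f;
        float w0 = 0.5f - 0.5f * std::cos(TWO_PI * phase);
        float w1 = 1.0f - w0;

        // retarget a tap only while its window is (near) closed
        if (w0 < 1e-4f) tap[0] = target_delay;
        if (w1 < 1e-4f) tap[1] = target_delay;

        phase += phase_inc;
        if (phase >= 1.0f) phase -= 1.0f;

        return w0 * read_tap(tap[0]) + w1 * read_tap(tap[1]);
    }
};
```

Because the windows always sum to one, a steady signal passes through at unity even while the taps are being retargeted.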

3 Likes

One other thing, you need a linearly interpolated delay line. It doesn’t sound as good with cubic, allpass, trunc, or round interpolation strategies. Allpass is especially bad, as it smears the discontinuity click beyond the window.
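A fractional delay read with linear interpolation might look like this (an illustrative sketch, not taken from any of the projects mentioned):

```cpp
#include <cstddef>

// Read 'delay' samples behind the write head, where 'delay' can be
// fractional: blend the two samples either side of the read point.
float read_linear(const float* buf, size_t size,
                  size_t write_head, float delay) {
    float pos = (float)write_head - delay;   // fractional read position
    while (pos < 0.0f) pos += (float)size;   // wrap into the buffer
    size_t i0 = (size_t)pos % size;
    float frac = pos - (float)(size_t)pos;   // fractional part
    size_t i1 = (i0 + 1) % size;
    return buf[i0] + frac * (buf[i1] - buf[i0]);
}
```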

1 Like

all the code for the Aleph is on github at http://github.com/tehn/aleph

in terms of my scene it is just jumping around in the buffers, maybe with a touch of feedback. seeing that video again makes me want to rebuild that scene in the current version of lines actually :slight_smile:

2 Likes

as a second note, one of the reasons it sounds more tonal than yours i think is just the speed of the retriggers. in that scene they are very rapid (down to 5 or 10ms sometimes), but also the random function keeps slightly changing them, which gives it a richer/shifting feel to my ears.
the input probably helps too, it’s quite a clean resonant guitar tone, so it doesn’t have the same kind of edges a voice sample has

2 Likes

[quote=“trickyflemming, post:2, topic:4894”]
Look at “Smooth Delay” in the BEAP package and gen~.pitchshift.maxpat in the Examples folder.

It’s built using a multi-tap delay algorithm with two taps. You use two cosine windows to alternate between the two taps, and you only change a tap’s position when its associated window is closed.

For pitch shifting, you do the same thing, but you use the window phase to scrub through that tap.

It’s confusing to type out, but seeing the patch visually really helps.
[/quote]

i’ll look at this later

is there anything you mentioned that isn’t in the help folder?

I have Max4Live, hopefully I can open it in that. Thanks! I’m not very familiar with Max, but that sounds reasonably simple. I tend to get easily put off when I open a patch that looks a bit like spaghetti, give me code any day!

There’s actually no interpolation going on at the moment. It just plays back the samples in the same order they were recorded. My Audiofreeze module did use linear interpolation to speed up and slow down the samples though… https://github.com/cutlasses/AudioFreeze

Definitely worth revisiting in my opinion, @dspk! It’s inspired me to spend many hours at the compiler trying to create something similar! :slight_smile:

Is it the read head or the write head that’s jumping around? Is the write head still writing new audio whilst the jumping is happening? Do the read head and write head ever overlap/crossover? Because my buffer is so small, I’m wondering if I’ll get issues if I continue writing new audio whilst the read head is jumping around, e.g. there’ll be jumps between new and old audio. Need to experiment, I think…

So I’ve been beavering away at this for a few more weeks, thanks for the helpful comments! The first thing I implemented was adding multiple read heads that can cross fade between each other (this is simply running both reads at the same time and linearly blending between them over 4ms). I spent quite some time eradicating pops and (unwanted) glitches caused by crossing from old audio to new, some caused by not fading between read heads properly, and some caused by the way I was stopping the write head during the glitch effect period.

I’ve now changed the implementation so that the write head never stops (which actually simplifies things a lot). The main limitation at the moment is that I only have around 0.5 seconds of audio buffer to play with. So the play heads are constantly having to jump over the write head (leaving enough time to blend) so you don’t hear a pop when the write head runs over the play head.

Added a very rough demo, which starts with a dry loop, then the entirely wet glitch effect. Dials from top to bottom are:

  1. Glitch window size
  2. Glitch window speed
  3. Feedback
  4. Dry/Wet mix

The play heads are looping a small window (controlled by pot 1) which moves (forwards only currently, due to lack of buffer size) through the buffer at a speed controlled by pot 2. There’s a certain element of randomness added, too, each time the loop restarts. It’s still a long way from Duncan’s video, but getting a bit closer I think.

Next stage is to extend the buffer time by adding more memory chips to my Teensy-based board. Should hopefully be able to extend the delay time from ~0.5s to around ~7s. This should give a lot more time to hear glitches repeat before the write head loops back around and the play heads have to skip over it.

@dspk in your Aleph scene, is the write head constantly writing? How much of the Aleph’s huge buffers are you using? Just wondering whether you think a 7-10s buffer would get me closer to the holy grail Aleph demo :slight_smile:

1 Like

this is what is happening on the aleph too.

big picture aleph architecture:
there are two processors. “scenes” run on the AVR32, and can be manipulated in the device. “modules” run on the DSP, and these are monolithic programs exposing a flat list of parameters that can be set from the control side.

there are two delay modules now, with kind of different philosophies.
duncan was using the one called lines. it’s very “raw”, in both implementation and interface.

there is also a module called grains that is both kinda more sophisticated in its implementation and has higher-level parameters.

both use dual crossfaded read heads.

in lines, buffer access is not interpolated. you can change speeds, but only by integer ratios, and aliasing occurs when you divide the rate. (kinda fun in a nasty way, imho.)
the parameters include raw access to the position of read head, position of write head, whether the heads are running or not, the size of the circular buffer, how much to erase or save old data on each write pass, &c.

setting the read head position initiates a linear crossfade, unless a crossfade is already underway (then it jumps and clicks.) you can change the rate of the read-head crossfade. there is no crossfading on the write head. there is a convenience setter for the read head position as an amount of delay behind the write head position. so you can make arbitrary structures that don’t click if you’re reasonably careful.

in grains, there is (i think) cubic interpolation on the buffer access, it does a lot of position management automatically, and you can therefore use it as a more traditional rotating-head pitchshifter. (i’m not real familiar with its use tbh.) the crossfades use quadrature oscillators instead of ramps.

i still have plans to do another version of lines. working a bit on a new version of the filters right now.

HTH. sorry i missed your original post. project looks awesome!

3 Likes

Thanks very much for your thorough description @zebra! Aleph Lines was certainly the inspiration for this project, I totally ripped off the cross-fading heads from that, although subsequently found it’s a reasonably common method for adjusting delay times. I have a few questions (I’m pretty new to audio DSP)…

there is no crossfading on the write head.

So if you move the write head, do you get a pop when the read head passes over the transition point (e.g. where the write head moved from)? That was the source of one of many popping bugs I was having, so I had to introduce write head crossfading, although it’s not used now that I keep the write head constantly moving.

in grains, there is (i think) cubic interpolation on the buffer access,

Do you mean, interpolation when playing the buffer at a different speed than it was recorded (e.g slower)? On my other project https://github.com/cutlasses/AudioFreeze I just used linear interpolation when slowing down, and just skipped samples when playing faster. It sounded ok to me, is cubic interpolation just a better sounding way of doing this?

the crossfades use quadrature oscillators instead of ramps.

As in, using half a sine wave to blend down, and another (out of phase sine wave) to blend the other signal up? I did try something similar, but my crossfades are so fast linear seemed fine.

It’s definitely been a learning experience. I’m used to debugging PC code, so developing complex code on the Teensy was quite challenging, when my only debugging tools were printf and zooming into the output audio in Audacity! I have even more respect for you guys developing the Aleph now I’ve experienced this!

Cubic interpolation is an audibly smoother way of ‘joining the dots’ between one sample and the next. It really seemed to sound quite a bit better (though I started with linear interpolation when writing grains). Hard to put your finger on the difference (I should record some test samples!).
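For the curious, one common “cubic” choice is the 4-point, 3rd-order Hermite (Catmull-Rom) interpolator; grains may well use a different variant. Here it is next to plain linear interpolation for comparison:

```cpp
// 4-point Hermite interpolation between samples y1 and y2, using
// neighbours y0 and y3; t is the fractional position in [0,1).
float hermite(float y0, float y1, float y2, float y3, float t) {
    float c0 = y1;
    float c1 = 0.5f * (y2 - y0);
    float c2 = y0 - 2.5f * y1 + 2.0f * y2 - 0.5f * y3;
    float c3 = 0.5f * (y3 - y0) + 1.5f * (y1 - y2);
    return ((c3 * t + c2) * t + c1) * t + c0;   // Horner form
}

// Linear interpolation between y1 and y2, for comparison.
float lerp(float y1, float y2, float t) {
    return y1 + t * (y2 - y1);
}
```

On a straight line through the samples the two agree exactly; the difference only shows up on curved material, which is where cubic sounds smoother.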

There’s an interesting topic here - how do you avoid aliasing when ‘skipping samples’? Pretty sure there’s also a way to optimise the cubic spline interpolation using lookup tables. I’ve seen methods with some kind of gaussian-looking weighting graphs, haven’t played around. May need to get your head round this to get higher-order interpolation running on a teensy - blackfin is a beast, really. You can use very inefficient methods & still have enough cycles to do interesting things…

The other interesting/tricky topic & something I’d like to play around with on aleph is varispeed write heads. Mainly because:

https://www.youtube.com/watch?v=6jumrMG7Owc

my hero! Wouldn’t it be awesome if we could keep up with 50s technology in 2016…

2 Likes

So if you move the write head, do you get a pop when the read head
passes over the transition point (e.g. where the write head moved from?

yes, if you move the write head, or stop it, or change the pre/write level, and then read over the point where you moved/stopped/changed level, you’ll get a discontinuity of course. but as you say, that’s easy to avoid in a traditional “delay” type application, where the buffer is being addressed in a circular way and the write head is generally always moving.

still, idea for lines was to allow other, arbitrary structures. disabling the write head can be fun. clearly in a sampling/1-shot or looping playback kind of situation, you want to be able to arbitrarily reset the recording position.

lines does lack a couple of features to really make these use cases convenient and glitch-free (namely fade-in/out when starting/stopping the read head, crossfading around a loop point, and preroll/postroll for accurate timing.) but there are also numerical issues with timing accuracy when using lines with BEES (control side application)… basically a tradeoff between buffer length and accuracy… and that’s a whole other can of worms.

sin/cos crossfade… linear seemed fine.

yeah, this gets more noticeable when you are constantly crossfading or “granulating” - as when using the delay system as a pitch-shifter. then you kind of really want the equal-power envelope.
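a sketch of the difference, for anyone following along (gain functions only, names illustrative):

```cpp
#include <cmath>

// Equal-power (sin/cos) crossfade gains: g_out^2 + g_in^2 = 1, so
// perceived loudness stays roughly constant through the fade.
// A linear crossfade instead keeps g_out + g_in = 1, which dips
// about 3 dB in power at the midpoint on uncorrelated material.
void equal_power_gains(float t, float* g_out, float* g_in) {
    const float HALF_PI = 1.5707963f;
    *g_out = std::cos(t * HALF_PI);   // t runs 0..1 over the fade
    *g_in  = std::sin(t * HALF_PI);
}
```

at the midpoint both gains sit at ~0.707 rather than 0.5, which is exactly the loudness dip the equal-power curve avoids.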

you mean, interpolation when playing the buffer at a different speed than it was recorded?

yeah, that’s what i mean. of course if you are speeding up by an integer factor you don’t need interpolation, but if you are speeding up by an arbitrary factor (or slowing down at all) then you do need it. and yeah, cubic interpolation is “better” and causes less distortion, although i’m actually with you in thinking that there’s rarely a noticeable improvement over linear interpolation for live processing applications of typical material.

developing complex code on the Teensy was quite challenging, when my only debugging tools were printf

yeah, i love the teensy but that’s a big downside. i kind of don’t understand why they didn’t expose SWD/JTAG pins on the board. on aleph we actually do have JTAG exposed for both processors, so it is possible to debug the DSP. i used a gnice+ when developing the low-level framework, would have been very challenging without it. now those things are hard to come by, and the blackfin jtag header isn’t populated by default, and it’s inside the case, so it’s more convenient to forgo that path when possible. we’re gradually building out more tools to improve developer workflow (or rather, @rick_monster has been).

generally though, it’s easiest to do as much work on the PC first. rick has also made fract_math primitives and an OSC test harness for emulating DSP. i do the same kinda thing with pd externals. usually it’s

  1. make the algo work in floating point
  2. convert to fixed point and deal with any resulting issues
  3. migrate to the blackfin (or teensy or whatever)
2 Likes

Thanks for the useful info @zebra @rick_monster I shall report back when I’ve got the board working with more memory. I think longer buffers are the key to getting the sound I’m after.

Seems like a great little module! Did you make PCBs for the teensy breakout? I’d love to build one!

1 Like

Yup, I made a PCB. It’s open source. The gerbers are the same as my previous AudioFreeze project here https://github.com/cutlasses/AudioFreeze

Very simple board which connects pots and LEDs to the Teensy. I built it using the audio shield, but you could probably work without that. I put eurorack power on it, but there’s no filtering or short circuit protection, and it needs 5V, so there’s definitely room for improvement.

I’d love to see photos if you do make one…

Thanks! That looks good. How about the Music Thing Radio Music? Could the code be adapted for that? It uses 12V and has circuit protection.

1 Like

It should be possible, yeah. You should be able to re-purpose one of the cv inputs as the audio input. Not tried it myself though…

[quote=“zebra, post:14, topic:4894”]
it’s easiest to do as much work on the PC first. rick has also made fract_math primitives and an OSC test harness for emulating DSP. i do the same kinda thing with pd externals.[/quote]

not anywhere near the top of your priority list, but someday I’d like to see an example of this in pd compared to the migrated code

1 Like