//// day + night = : lcrp.2019.equinox.1 ////



Got it, thank you…


I’m in the middle of a render (2.25 hours elapsed so far) that feels far too simple… but I’m excited. I’m using the shorter of the two recordings as an impulse response to convolve the longer one. We’ll see if it’s anything, but I’m dorking out a bit on how well the “balance” theme and the methodology of convolution go together.

I used to do this with SoundHack in OS 8/9 all the time – convolve a sound with an impulse that was just another sound, maybe even as long as or longer than the original. It’s nice. It takes a long time. But it’s nice.

(Never did do it with a 45-minute sound and a 52-second impulse, though)


I try not to fear results I like that were “too simple” - but I do sometimes feel like I’ve somehow cheated my way to a piece of music I like if it all comes together too quickly!

I thought the same! I tried doing this in real time with a VST and wasn’t actually very inspired by the result, but since I’m doing two tracks I might try it again with the other set of files.

I’ve also played around some with Mammut, a standalone app that I’m sure I found out about via this forum. It has that section at the bottom where you can load a second file “and multiply”. Again, not that excited by most of the results, but it’s enjoyable to play around with, and maybe others are more into FFT-based sound mangling. :smiley: It’s certainly unpredictable. :control_knobs:


Will have to see! 5hrs 45mins into it now… :-/

Just doing a standard time-domain multiply-sum convolution on each channel.


I ended up stopping the process and rewriting it to parallelize over the 40 cores in my workstation after work today. I did some back-of-the-napkin estimates and realized that even across 40 cores, it’ll need to do about 35 trillion multiply-sums on each core… if a full unit of work is, let’s say, 20 instructions (I haven’t checked the inner loop on this), that’s 700 trillion instructions per core; at ~2.8 billion instructions per second per core, that’s about 250,000 seconds, or about 70 hours. As long as there isn’t too much overhead on the threads… I guess I’ll let this go for a few days!

OT: Is multiplication in the frequency domain much more efficient? Maybe there’s another way to approach the segmentation of this? Is there a more efficient way to convolve an entire 45-minute sound with an entire ~1-minute impulse?

Edit: this is the basic approach that I’m taking – http://www.dspguide.com/ch6/3.htm
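For the curious, here’s a minimal Python sketch of that direct form (the output-side view of the algorithm from the chapter linked above). The function name and numpy usage are illustrative, not the code I’m actually running – but it shows why the cost is len(signal) × len(impulse) multiply-sums:

```python
import numpy as np

def direct_convolve(signal, impulse):
    """Direct (time-domain) convolution: each impulse sample scales a
    delayed copy of the signal, so the total work is one multiply-sum
    per (signal sample, impulse sample) pair -- O(N * M)."""
    n, m = len(signal), len(impulse)
    out = np.zeros(n + m - 1)
    for j in range(m):
        # delay the signal by j samples, scale by impulse[j], accumulate
        out[j:j + n] += impulse[j] * signal
    return out

# tiny sanity check against numpy's reference implementation
sig = np.array([1.0, 2.0, 3.0])
imp = np.array([0.5, -0.5])
print(direct_convolve(sig, imp))  # [ 0.5  0.5  0.5 -1.5]
print(np.convolve(sig, imp))      # same result
```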


My gut says doing it in the frequency domain would be much more efficient, since that’s typically what ‘spectral’ effects do, relying on the fast Fourier transform. I haven’t implemented one, though. Maybe @zebra would have a thought?


yes, it is far, far more efficient to do this in the frequency domain. (convolution theorem: conv in time domain == pointwise mult in freq domain.)

input -> FFT -> multiply by spectrum of impulse response -> IFFT.

since this is non-realtime, you can make the FFT blocksize as big as you want. (maybe the whole signal, if you have enough memory.)

for realtime applications, see gardner’s classic paper on non-uniform partitioned methods. (you may want to take a look regardless; it nicely clarifies the performance difference between direct form and freq domain.)
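a minimal sketch in python/numpy, assuming the whole signal fits in memory – zero-pad both signals out to the full output length, multiply the spectra pointwise, inverse-transform. (function name and padding choices here are just one way to do it, not anyone’s actual code.)

```python
import numpy as np

def fft_convolve(signal, impulse):
    """Convolution theorem: conv in time == pointwise mult in freq.
    O(N log N) instead of the O(N * M) direct form."""
    n = len(signal) + len(impulse) - 1     # full linear-convolution length
    size = 1 << (n - 1).bit_length()       # round up to a power of two for speed
    # rfft zero-pads each input out to `size`, avoiding circular wraparound
    spectrum = np.fft.rfft(signal, size) * np.fft.rfft(impulse, size)
    return np.fft.irfft(spectrum, size)[:n]

# agrees with the direct form on random test signals
rng = np.random.default_rng(0)
sig = rng.standard_normal(1000)
imp = rng.standard_normal(64)
assert np.allclose(fft_convolve(sig, imp), np.convolve(sig, imp))
```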



Awesome, thank you!! Maybe I’ll see if I can do a rewrite before my time domain implementation finishes processing in a few days. :wink:

Edit: also of course the dsp guide book has a chapter later on the subject: https://www.dspguide.com/ch9/3.htm

This book has been so helpful in its plain speaking for a dummy like me.


Ladies and gentlemen, I have no idea what anyone is talking about here, although it sounds fascinating. Back in my simple world of Audacity and Ableton, I’ve just sampled @jasonw22’s chicken as part of my attempt to compose a night-day song about one of Geoffrey Chaucer’s Canterbury Tales. Call me medieval!!


If you’re medieval, then we’re both on the road to Canterbury together! (I did, however, load up Mammut and mash some buttons using your sample just to see what would happen – and there were some fun results, even if it was like Chaucer trying to operate an iPhone!)


@dnealelo I’m hearing you… Maybe you could base your track on Dante or Boccaccio?!

I had a few free hours this weekend and made some progress with my tracks.

How is everyone else getting on?


I’m happy you enjoyed the train travelling gently across the rain @dnealelo! I love hearing those sounds flow together over the hills around here. :slight_smile:


I have to admit that I’ve been slow to start, due to home/work schedule. But listening through my assigned “Day” track by @alanza (a bustling mix of dish clatter and excited conversation in a cafe) - I’m a little sound-giddy about the contrast between our two tracks. The Random Generator Gods chose well - this will be fun!


I, too, have been slow to start. And I also am assigned a recording by @alanza and have been listening through it with excited ideation.


eeeeeeeep!! I’m glad you both are excited about my recordings!! I have also been slow to start, but this is very encouraging. @dnealelo’s birds and @dansimco’s storm will make interesting material against my subway station and cafe.


I’ve got one track probably done - or I am at least letting it sit while I explore the second one.

While I did quite a bit with the first track, for this second one I just have the two files slowed down to the same arbitrary length and am enjoying listening to the two of them at once. Pitched-down chickens and chatter in the day meet pitched-down bugs at night… Could I be done after maybe 5 minutes’ effort? :thinking:


I think that means that 5 minutes is, sometimes, more than enough!!


The convolution finally finished today after 214 hours, 51 minutes (oy, next time I’m following y’all’s suggestion to do it in the frequency domain :slight_smile: ) – it left me with ~46 minutes of low-frequency activity… I just sped it up 15x and a pretty wild world emerged.

I’m going to live with it for a little while but I’m really pleasantly surprised with where this ended up… it’s like the soundscape from a jungle on a distant planet…


Even if you don’t end up using it for your piece, I’d definitely like to hear the results of this!


i have never been so excited about the prospect of hearing a processed field recording!