my partner and i just bought a house (a first for me, i've rented all my life).
we had to fully renovate it; luckily we have great friends who can do everything, and that cut the costs by more than half…
i also have a room dedicated to my studio for the first time… while we were working i had a flash of inspiration: why don't i wire all the rooms to my studio (i have a solid allen & heath gs3 24-channel inline desk) so i can use the rooms as recording booths and/or echo chambers?
and that we did! i have three boxes in my studio, each sporting 4 combo xlr/jack sockets: 2 sends and 2 returns to 3 different areas of the house.
in the bedroom (a pretty big room) we have our trusty old hi-fi system, so i wired the sends into the amplifier's inputs there; no need for extra monitors in that room :slight_smile:
i've already recorded some vocals in there: pretty fantastic sound, subtle, natural reverberation with very basic small-diaphragm condenser microphones :slight_smile:
one of the 3 boxes is in the decrepit attic, where i put our old half-size bathtub after accidentally discovering how well it works as a resonator/reverberator. i'm going to use it acoustically (speaker/microphone) at first; then i plan to turn it into a kind of plate (well, bathtub) reverb, with drivers and piezos.
i’m very very happy. my sound lab is called “Ipostasi - Laboratorio di Fonologia” and it will soon have its own website.

37 Likes

It comes close but doesn’t have that magic IMO

Back in March, Sean Costello of ValhallaDSP gave a talk on reverb history and design at the Seattle Music Machines Salon. I finally made time to stitch the parts together and get that talk online. If you're into reverbs from a technical angle, this is unmissable stuff from a current master of the practice. https://www.youtube.com/watch?v=aJLhqfHrwsw

33 Likes

Have you ever considered doing an iOS version of this reverb, given how clean the UI is? I think it would sell there like hot cakes, because everyone loves hot cakes.

2 Likes

I second this. Would love some AUv3 reverb love from Madrona.

1 Like

I haven't done any iOS development yet. I agree this would be a good place to start!

8 Likes

A few weeks ago I started thinking that the Alesis Midiverb (the reverb I mentioned here a while back) is well suited for emulation via convolution, since it's basically just a box of presets. So I tried making some IRs with the Max for Live IR tool, and they translated very well. Then I made some more, and some more, and now I have the whole box (minus the 4 reverse reverb presets) captured as a pack of IRs. And I'd love to share them, so here you go!

All of these IRs were recorded in true stereo with a 60-second sweep. If you find any issues, please let me know and I'll re-record any patches that aren't quite right.
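In case anyone wants to roll their own captures, here's roughly what the sweep method boils down to. This is just the textbook exponential-sweep deconvolution idea, not what the Max for Live tool literally does internally, and the `system` callable is a stand-in for playing the sweep through the hardware and recording the return:

```python
import numpy as np
from scipy.signal import fftconvolve

def capture_ir(system, sr=48000, duration=60.0, f0=20.0, f1=20000.0):
    """Estimate an impulse response with an exponential sine sweep."""
    n = int(sr * duration)
    t = np.arange(n) / sr
    k = np.log(f1 / f0)
    # exponential (log) sweep from f0 up to f1
    sweep = np.sin(2 * np.pi * f0 * duration / k * (np.exp(t * k / duration) - 1))
    # inverse filter: the time-reversed sweep with a decaying amplitude
    # envelope that undoes the sweep's low-frequency energy tilt
    inverse = sweep[::-1] * np.exp(-t * k / duration)
    # play the sweep through the device, then deconvolve by convolving
    # the recording with the inverse filter
    recorded = system(sweep)
    ir = fftconvolve(recorded, inverse)
    # the linear response starts at the sweep length; harmonic distortion
    # products land earlier and get trimmed off here
    ir = ir[n - 1:]
    return ir / np.max(np.abs(ir))
```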

Also, if anyone has any suggestions for how to accurately capture the reverse reverb patches as IRs, I could use some help. I tried both a sweep and an impulse, but neither seemed to work right.

24 Likes

As a general rule, you can only capture a system as a convolution if it is a linear time-invariant (LTI) system. You can find a lot of descriptions of this property by searching. Briefly, it means that each sine component of the system's input appears in the output at the same frequency, scaled by a constant amplitude and shifted by a constant delay.

That "constant" is the real limiting condition here. In the case of the reverse reverb patches, the Midiverb is changing, over time, the amplitudes and delays of the different parts of the patch that make up the output, to get that overlapping backwards-repeat kind of sound. With one constant IR you can't reproduce this change, and so you can't reproduce the reverse effect.

In other patches, the Midiverb is modulating the times of the output components to get a smoother tail or a chorused quality. This also cannot be captured by convolving with an IR. So if some of the other reverbs fall short of expectations as IRs, this is probably why.
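If you want to see the time-invariance condition fail concretely, here's a little numpy sketch: delay the input, and check whether the output is delayed by the same amount. Convolution with a fixed IR passes exactly; a delay line whose tap time is modulated (a crude stand-in for the Midiverb's modulation, not its actual algorithm) doesn't:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(2048)

def delay(sig, n):
    """Shift a signal right by n samples, keeping the original length."""
    return np.concatenate([np.zeros(n), sig])[:len(sig)]

# An LTI system: convolution with one fixed IR.
ir = rng.standard_normal(64) * np.exp(-np.arange(64) / 16.0)
fixed = lambda sig: np.convolve(sig, ir)[:len(sig)]

# A time-varying system: a delay line whose tap position is modulated.
def modulated(sig):
    out = np.zeros_like(sig)
    for n in range(len(sig)):
        d = int(20 + 10 * np.sin(2 * np.pi * n / 300))  # moving tap
        if n >= d:
            out[n] = sig[n - d]
    return out

for name, system in [("fixed IR", fixed), ("modulated delay", modulated)]:
    # Time-invariance: delaying the input must delay the output identically.
    err = np.max(np.abs(system(delay(x, 100)) - delay(system(x), 100)))
    print(f"{name}: max deviation = {err:.3e}")
```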

8 Likes

Thanks Randy, that definitely clears things up for me. I knew about the limitations of convolution with regard to tail modulation (thanks to Sean Costello on other forums) but couldn't quite make sense of why the reverse algorithms weren't translating. I could get them sounding very close by applying a long pre-delay to the IR in the Convolution Reverb device, but in the end I figured it wasn't really worth the hassle.

I definitely found that some of the Midiverb IRs sound "better" with some modulation applied after the fact via the modulation section of the Max for Live Convolution Reverb device.

1 Like

Could you elaborate on this at all? I know a bit about LTI filtering, but nothing practical as it relates to convolution reverb. Naively, it SEEMS to me like a reverse effect could still be characterized by an LTI system. Wouldn't the impulse response sequence just be reflected/inverted in the time domain?

I think the distinction is that the reverse reverb patches aren't simply a reversed version of the standard reverb, but contain additional modulation.

Exactly. I’m not an authority on Midiverbs, and just remember, from a long time ago, what they sound like, so take this with a huge grain of salt. But I think it’s something like two (or more?) copies of a reversed playback that fade back and forth so there’s no obvious edit. Like granular synthesis but just two long grains, if you will. The single reversed copy would be LTI, but not the fading.
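The usual version of that trick looks something like this, a sketch of the general idea rather than anything Midiverb-specific (the grain size is an arbitrary choice):

```python
import numpy as np

def reverse_effect(x, grain=24000):
    """Reverse-playback illusion: overlapping grains, each played
    backwards, crossfaded at 50% overlap so there's no obvious edit."""
    hop = grain // 2
    win = np.hanning(grain)
    out = np.zeros(len(x))
    for i in range(0, len(x) - grain, hop):
        out[i:i + grain] += x[i:i + grain][::-1] * win
    return out
```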

1 Like

I guess my followup question belongs here more than anywhere else: are most reverb IRs just simple time domain sequences? Or are they 2d time-frequency kernels?

the former. but they are often translated to a frequency response for efficient (usually partitioned) convolution in the frequency domain. so i guess that’s what you mean. one wouldn’t typically brute-force the time-sequence convolution in a real application.

(brute force requires O(L) operations per output sample, FFT requires O(log(L)), where L is the response length. deep dive)
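for the curious, here's a bare-bones numpy sketch of uniform partitioned convolution. real implementations add non-uniform partition sizes to get low latency, but the core is one FFT per IR partition plus a frequency-domain delay line:

```python
import numpy as np

def partitioned_convolve(x, ir, block=256):
    """uniform partitioned overlap-add convolution (sketch)."""
    npart = -(-len(ir) // block)  # number of IR partitions (ceil)
    # pad the IR to whole partitions; one FFT per partition, zero-padded 2x
    parts = np.pad(ir, (0, npart * block - len(ir))).reshape(npart, block)
    H = np.fft.rfft(parts, 2 * block)
    # pad the input to whole blocks, plus empty blocks to flush the tail
    nblk = -(-len(x) // block) + npart
    xp = np.pad(x, (0, nblk * block - len(x)))
    fdl = np.zeros((npart, block + 1), dtype=complex)  # spectra of recent blocks
    out = np.zeros(nblk * block)
    tail = np.zeros(block)
    for i in range(nblk):
        fdl = np.roll(fdl, 1, axis=0)  # age the delay line by one block
        fdl[0] = np.fft.rfft(xp[i * block:(i + 1) * block], 2 * block)
        # every partition sees the input block from its own point in the past
        y = np.fft.irfft((fdl * H).sum(axis=0), 2 * block)
        out[i * block:(i + 1) * block] = y[:block] + tail
        tail = y[block:]  # overlap into the next block
    return out[:len(x) + len(ir) - 1]
```

same result as np.convolve(x, ir) up to float error, just organized so each output block costs a handful of FFTs instead of touching every IR sample.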

i have seen parametric freq-domain reverbs with like, an arbitrary decay function per freq bin. miller puckette made a really nice one (i thought) that stochastically updated different freq bins with new input.

4 Likes

Yeah, that’s what I meant.

This sounds like it would… sound… very cool. So like, randomly sampling into separate buffers per frequency band, with different decay functions per band?

I haven't tried any real-time 2d convolutions (only offline signal processing with time-frequency kernels), so I have no idea what the computational load would be, but I'm super curious about the idea of a 2d-kernel reverb.

A lot of “reverse” effects work this way (to the enjoyment and confusion of guitarists everywhere, see the Line 6 DL4 or the EHX Memory Man with Hazarai) but I think most “reverse,” aka “nonlinear” or “inverse room,” reverbs from the '80s are really more like multitap delays with a large number of carefully tuned, modulated tap times, where the later taps are heard more loudly than the earlier ones – as opposed to a more standard reverb in which earlier taps/reflections are louder than later ones. So an IR should be capable of at least sort of reproducing a reverse reverb sound, but it would sound extra extra metallic due to the lack of modulation.
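As a toy illustration of that tap structure (the numbers here are entirely made up, not measured from any unit): scatter random taps under a rising envelope and you get the swelling, gated-backwards attack, minus the modulation that kept the real boxes from ringing metallically.

```python
import numpy as np

sr = 48000
dur = 0.6                       # these boxes usually had fixed times around 0.2-0.8 s
n = int(sr * dur)
rng = np.random.default_rng(1)

taps = rng.integers(0, n, size=400)            # 400 randomly placed taps
gains = rng.choice([-1.0, 1.0], size=400)      # random polarity to decorrelate
env = np.exp(np.linspace(-4.0, 0.0, n))        # rising envelope: later taps louder

ir = np.zeros(n)
np.add.at(ir, taps, gains * env[taps])         # accumulate coincident taps
ir /= np.max(np.abs(ir))
```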

Incidentally: if anyone has a good lead on papers about reverse/nonlinear reverbs, pleeeease do share.

@zebra do you have a source for that Puckette design? Seems really intriguing. I’ve heard of a few reverb designs of his but I thought they were all FDN- rather than FFT-oriented.

2 Likes

it’s been a really long time. i thought it was in his book, but i don’t see it - maybe a different version or a different collection of online pedagogical materials.

but this patch from the Pd examples employs the same idea - using the phase vocoder to implement recirculation directly and arbitrarily:

https://github.com/pd-l2ork/pd/blob/master/pd/doc/3.audio.examples/I08.pvoc.reverb.pd
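in numpy the recirculation idea comes out roughly like this. a loose paraphrase, not a port of the patch; the per-bin decay curve and the random resynthesis phases are my own stand-ins:

```python
import numpy as np

def pvoc_reverb(x, nfft=1024, hop=256):
    """per-bin spectral recirculation: each analysis frame feeds a running
    spectral frame that decays every hop, with its own decay per bin."""
    win = np.hanning(nfft)
    decay = np.linspace(0.98, 0.85, nfft // 2 + 1)  # highs die faster
    state = np.zeros(nfft // 2 + 1)
    out = np.zeros(len(x))
    rng = np.random.default_rng(0)
    for i in range(0, len(x) - nfft, hop):
        frame = np.fft.rfft(x[i:i + nfft] * win)
        # recirculate: keep the louder of decayed state and fresh input
        state = np.maximum(state * decay, np.abs(frame))
        # resynthesize with randomized phases for diffusion
        phases = np.exp(2j * np.pi * rng.random(state.shape))
        out[i:i + nfft] += np.fft.irfft(state * phases, nfft) * win
    return out
```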

maybe i'm being really dense, but can't you just reverse an IR sequence and then convolve with it, and see what happens?

classically, of course, you just (1) play the input backwards through a normal reverb, then (2) reverse the result.

[ed] of course reversing is not TI… but you’re after a “non-causal” kind of effect

2 Likes

Do you mean contemporaneous reverse? Then yes, it would be non-causal, but with lag/time shift it should be LTI, no?

My comment above was wrong: the whole concept of "reverse" doesn't fit into the framework of a convolution, so the idea of even a single reversed copy being an LTI operation doesn't make sense. A convolution describes how the source at a single point in time is spread out over some duration of time in the result. So there's no way for it to express reversing the source at all.

I spent a little while trying to translate the math about this into clearer English, but I'm failing, pretty much. It's a bit tricky, because you do take the source signal into account over a duration equal to the kernel size when calculating the output, so you might think you could somehow reverse that clip of source. But the source's contribution to the output at any instant t always enters in the same order, with the same fixed weights.
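Maybe the formula says it more plainly than I can. With a kernel $h$ of length $T$, the output is

$$y(t) = \int_0^{T} h(\tau)\, x(t - \tau)\, d\tau$$

and the source only ever appears as $x(t - \tau)$: looked up a fixed lag $\tau$ in the past, weighted by $h(\tau)$. Nothing in $h$ can make the output read the source in reverse order.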

yeah, trying to analyze the reversed source is a red herring. the "reversal" isn't strictly LTI because a delay on the input doesn't produce the same delay on the output, but its negative.
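(in symbols, writing $R[x](t) = x(-t)$ for reversal and $D_\Delta[x](t) = x(t - \Delta)$ for a delay: $R[D_\Delta x](t) = x(-t - \Delta) = D_{-\Delta}[R x](t)$. the delay comes out negated, so it's linear but not time-invariant.)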

i just mean, reversing the impulse response should give you the “backwards decay” effect, perceptually… no? this should be easy to try.

… i’m gonna try it…
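(for anyone following along at home, the whole experiment is a few lines; filenames are placeholders, mono assumed:)

```python
import numpy as np
import soundfile as sf

ir, sr = sf.read("some_ir.wav")       # any mono IR
dry, _ = sf.read("dry.wav")
wet = np.convolve(dry, ir[::-1])      # same IR, played backwards
sf.write("wet_reversed_ir.wav", wet / np.max(np.abs(wet)), sr)
```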

1 Like