Approaches to multichannel / spatial audio

So I’m working on a proposal for a multichannel piece, and I realized I really have no idea how this stuff works at a technical level.

In my case, it’s a pre-rendered sound piece of finite length. The venue has a machine for playback with a multichannel audio interface and up to 9 speakers.

For those of you who have worked in this world, how do you manage it? Do you use tools like the Ambisonic Toolkit or Envelop for Live, for instance?

For these kinds of site-specific one-offs is it even possible to get a sense for it on headphones, or do you have to work in situ with the speaker setup?

I’ve also seen a few cool multichannel modular shows, which is obviously a whole different set of gear and methodology but also interesting.

I’m interested in getting a handle on this particular project, but more generally I’ve always been interested in and kind of mystified by spatial audio; I’ve grown very accustomed to routing everything into a hardware mixer and two speakers.

2 Likes

Big topic!

I’ve seen many people diffuse a stereo track beautifully with just a fader for each speaker (right panned to all right speakers, left panned to all left speakers, with volume control for each speaker on mixer faders). If the piece is already pretty active in the stereo field, then controlling front/back and up/down can be really fun and fascinating.
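For concreteness, here’s a rough sketch (Python/numpy) of what that fader-per-speaker routing amounts to; the 8-speaker split and the fader values are just placeholders, not any particular rig:

```python
import numpy as np

def diffuse_stereo(stereo, left_faders, right_faders):
    """stereo: (n_samples, 2) array; returns (n_samples, n_left + n_right) speaker feeds."""
    left_out = stereo[:, 0:1] * np.asarray(left_faders)    # L channel to every left-side speaker
    right_out = stereo[:, 1:2] * np.asarray(right_faders)  # R channel to every right-side speaker
    return np.concatenate([left_out, right_out], axis=1)

# Example: 8 speakers (4 per side); riding the per-speaker faders pulls the image front to back.
stereo = np.random.randn(48000, 2) * 0.1                   # stand-in for the rendered piece
front_heavy = diffuse_stereo(stereo, [1.0, 0.7, 0.4, 0.2], [1.0, 0.7, 0.4, 0.2])
```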

An additional approach could be to have EQ/crossovers on the track(s) so that you can direct bass, mids, treble, etc. to different points in space. Audium in SF is 4 channels with subs in the floor, woofers at ear level and tweeters up high suspended from the ceiling. This spatializes the spectral changes. I’ve also played 8-channel shows with highs, mids and lows moving separately and it can be really wonderful.
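A rough sketch of that band-splitting idea (Python/scipy); the crossover points and the three-way split are my own assumptions, not Audium’s actual setup:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48000
low_sos  = butter(4, 200,         btype='lowpass',  fs=FS, output='sos')
mid_sos  = butter(4, [200, 2000], btype='bandpass', fs=FS, output='sos')
high_sos = butter(4, 2000,        btype='highpass', fs=FS, output='sos')

def spectral_split(mono):
    """Return low / mid / high copies of a stem, to be routed to different speaker groups
    (e.g. floor subs, ear-level woofers, ceiling tweeters)."""
    return sosfilt(low_sos, mono), sosfilt(mid_sos, mono), sosfilt(high_sos, mono)

lows, mids, highs = spectral_split(np.random.randn(FS))   # stand-in for a stem
```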

If I’m composing a new piece I’ll do a combo of ambisonics and more direct panning: one issue with ambisonics is that (in my opinion) it does middleground and background stuff really well, but I can rarely get something to feel really close, like it’s right in your face, with it. So I use “regular” panning in Max for the foreground elements.
(Note: the ICST toolkit in max has doppler, swarming, and all kinds of stuff to make movement of sound more psychoacoustically realistic. Highly recommended if you have the time.)
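To make the hybrid idea concrete, here’s a toy Python sketch of a first-order (horizontal-only) ambisonic encode/decode next to a plain pairwise equal-power pan. This is textbook math with an assumed 8-speaker ring, not what the ATK or the ICST tools actually compute:

```python
import numpy as np

SPEAKER_AZ = np.deg2rad(np.arange(8) * 45.0)   # assumed: 8 speakers in an even ring

def encode_foa(mono, az):
    """Mono source at azimuth az -> first-order B-format W, X, Y (horizontal only)."""
    return mono / np.sqrt(2.0), mono * np.cos(az), mono * np.sin(az)

def decode_foa(w, x, y):
    """Basic sampling decoder to the speaker ring -> (n_samples, n_speakers)."""
    gains = np.stack([np.sqrt(2.0) * np.ones_like(SPEAKER_AZ),
                      np.cos(SPEAKER_AZ), np.sin(SPEAKER_AZ)])   # (3, n_speakers)
    b = np.stack([w, x, y])                                      # (3, n_samples)
    return (b.T @ gains) / len(SPEAKER_AZ)

def pan_direct(mono, az):
    """Equal-power pan between the two speakers nearest to az (the 'in your face' layer)."""
    out = np.zeros((len(mono), len(SPEAKER_AZ)))
    diffs = np.angle(np.exp(1j * (SPEAKER_AZ - az)))             # wrapped angular distance
    i, j = np.argsort(np.abs(diffs))[:2]                         # two nearest speakers
    frac = np.abs(diffs[i]) / (np.abs(diffs[i]) + np.abs(diffs[j]) + 1e-12)
    out[:, i] = mono * np.cos(frac * np.pi / 2)
    out[:, j] = mono * np.sin(frac * np.pi / 2)
    return out

src = np.random.randn(4800) * 0.1
mix = decode_foa(*encode_foa(src, np.deg2rad(120))) + pan_direct(src, np.deg2rad(30))
```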

One thing I strongly discourage is swirling. I personally find it cheesy when sounds spin around the listeners; I much prefer to think about them as events in a spatial environment that will move but not in a gimmicky way. Just my 2c.

4 Likes

I DIY’d a multi-channel setup at a show (at a record store) this past weekend with the store’s 4 speakers set up in the corners of two rooms (small hifi speakers with 2-5" woofers), and a keyboard amp. The sound sources going into these were a synth my friend was playing and a backing track (the bass guitar was also going through the keyboard amp on a different channel). I was playing guitar through two fairly sizeable tube amps.

It sounds like what you’re going for is much more sophisticated (and sounds super fun!), so it might be less of a concern for you, but I’m really glad we did a run-through on location a couple of days before the actual show. I had everything running through my RME Babyface Pro, and being able to set snapshots of the different levels needed for each individual song was very nice. Because it was an untreated space, certain frequencies through certain channels were causing major buzzing, and some bass stuff was doing the standing-wave boom-y thing. It would have been nice to have set up filters/EQs on the individual destinations (and I could see that changing the EQ or filtering certain frequencies in certain speakers could be “performed” in an interesting way).

Also agreed on @wheelersounds’ point that swirling can be a bit cheesy (similar to how, in the stereo realm, autopanning left-right like a tremolo can feel gimmicky in most cases). The most striking use of multi-channel audio I’ve seen was @marcus_fischer’s installation in the stairwell of the Whitney for the biennial. It was like a drone where the harmonics would change as you walked up/down the stairs because different speakers were playing slightly different things. I think making changes like this that are slow and discoverable by the listener, and having fun with the boundary between performance and installation (which maybe you could do by putting chairs or decorations in the space to encourage listeners to interact with it in certain ways), could be really cool.

Please share some info about your performance once you do it; I’m really interested to hear how it goes.

2 Likes

For tape music (“fixed-length, pre-rendered”) I just use a DAW with stereo sends on each track to loudspeaker pairs, then automate send levels + pan to place any sound within the global loudspeaker arrangement. This only works when said arrangement is known in advance and has relatively few loudspeakers (for example, quad).
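Roughly, those automated sends and pans boil down to something like this for a quad rig (Python sketch; the equal-power law and the parameter names are my assumptions, not any particular DAW’s behaviour):

```python
import numpy as np

def quad_gains(pan_lr, bal_fr):
    """pan_lr, bal_fr in [0, 1]: 0 = left/front, 1 = right/rear.
    Returns equal-power gains for (front-L, front-R, rear-L, rear-R)."""
    gl, gr = np.cos(pan_lr * np.pi / 2), np.sin(pan_lr * np.pi / 2)
    gf, gb = np.cos(bal_fr * np.pi / 2), np.sin(bal_fr * np.pi / 2)
    return np.array([gl * gf, gr * gf, gl * gb, gr * gb])

print(quad_gains(0.5, 0.0))   # centred pan, fully front -> [0.707, 0.707, 0, 0]
```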

I also use the ReaSurround plugin in Reaper plus automation.
Ambisonics etc. looks fine but uses a lot of CPU, and I don’t get all those plugins where you control movement characteristics rather than a direct placement.

For “previewing space” on headphones, some multichannel-to-binaural plugins exist, but it will always be far from listening in situ.
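For what it’s worth, here is a very crude sketch of the “virtual speaker” idea such plugins are built on, using a spherical-head ITD approximation and a simple per-ear level difference. It is nowhere near an HRTF-based binaural renderer, just a rough headphone check; the speaker angles and head constants are assumptions:

```python
import numpy as np

FS, HEAD_R, C = 48000, 0.0875, 343.0           # sample rate, head radius (m), speed of sound (m/s)

def binaural_preview(channels, speaker_az):
    """channels: (n_samples, n_speakers); speaker_az in radians, 0 = front, positive = right."""
    n = channels.shape[0]
    out = np.zeros((n, 2))                                  # columns: left ear, right ear
    for ch, az in zip(channels.T, speaker_az):
        lat = np.arcsin(np.clip(np.sin(az), -1.0, 1.0))     # lateral angle: 0 front/back, +-pi/2 at the sides
        itd = (HEAD_R / C) * (np.sin(lat) + lat)            # Woodworth spherical-head approximation
        delay = int(round(abs(itd) * FS))                   # far ear lags by this many samples
        far = np.concatenate([np.zeros(delay), ch])[:n] * (1.0 - 0.5 * abs(np.sin(lat)))
        if lat >= 0:                                        # speaker on the right
            out[:, 1] += ch
            out[:, 0] += far
        else:                                               # speaker on the left
            out[:, 0] += ch
            out[:, 1] += far
    return out

# Example: check an 8-channel render on headphones, speakers assumed every 45 degrees.
preview = binaural_preview(np.random.randn(FS, 8) * 0.05, np.deg2rad(np.arange(8) * 45.0))
```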

2 Likes

If you’re not going to have access to the space and speaker configuration, it will be difficult to know exactly how the space and system are going to respond. I’d also go with the previous recommendations of automating most of the spatial gestures, but in addition to that it is always great to have some general sends of the different stems that you can spatialise in realtime. You could have sends for each of the speakers or pairs of speakers, and then control the return levels of each of those when you want to add extra movement to your sounds. This will allow you to play the space a bit, and in my experience having this extra control over spatial gestures is very satisfying and effective.

As for tools such as the ATK, I think that is a compositional choice. Hybrid methods are great if you want to experiment. As mentioned above, ambisonics tends to be more blurred and good for filling up the space, but combining it with traditional panning and point-source spatialisation will give you more dimension.

Since you are working with a pre-recorded piece, you have more freedom to play with spatialisation live, so capitalize on that.

The space (venue) itself will play a big role and I personally think that is one of the reasons why multichannel music is so great. Let the music be flexible in the way that it interacts with the space.

2 Likes

There’s some interesting stuff to unpack and think about here.

For one, I had thought ambisonics and direct panning/volume control of speakers were sort of two methodologies that would lead to similar results. Knowing that the end results differ means that’s definitely something to experiment with.

Definitely seems like if the venue is interested in this proposal I should get in there and try this piece out in that environment on those speakers, rather than working entirely on headphones and trying to adjust at the last minute.

2 Likes

Doing multichannel work will always involve last minute adjustments, improvisation, etc, in my experience. Getting into the space early is a HUGE help, but even with that, be prepared and willing to change things when it actually comes time for the piece. I think planning ahead for this and building yourself a flexible toolset that you feel comfortable working with and making adjustments to is crucial to doing multichannel work!

Understand which areas you want to experiment with in the space, which ideas you are willing to change or abandon, which ideas must be fixed and accomplished no matter what, etc. Multichannel can be really stressful, but also really fun!!

2 Likes