[quote="gusp, post:13, topic:8260"]Luckily our hearing mechanism is robust enough to use any clues we get.
Not to be too pedantic, I hope, but the various clues we get are not all applicable in all cases.
Interaural time delay works for low frequencies, but as frequency increases this cue becomes less useful. Once more than one period of the waveform fits in the path difference between the ears, the delay becomes ambiguous and our brains can no longer resolve it.
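To put rough numbers on where the delay cue runs out (a back-of-the-envelope sketch; the 343 m/s speed of sound and the 0.22 m worst-case around-the-head path difference are textbook approximations, not measurements):

```python
# Rough estimate of where the interaural time delay (ITD) cue becomes
# ambiguous. Assumed values: speed of sound ~343 m/s, worst-case
# around-the-head path difference ~0.22 m (approximations).
SPEED_OF_SOUND = 343.0   # m/s
PATH_DIFFERENCE = 0.22   # m, for a source directly to one side

max_itd = PATH_DIFFERENCE / SPEED_OF_SOUND  # seconds, ~0.64 ms
ambiguity_freq = 1.0 / max_itd              # Hz: one full period fits in the ITD

print(f"max ITD = {max_itd * 1e3:.2f} ms")
print(f"delay cue ambiguous above roughly {ambiguity_freq:.0f} Hz")
```

The result, roughly 1.6 kHz, lines up with the usual statement that the delay cue dominates below about 1.5 kHz.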
Amplitude differences work for high frequencies, which are directional, but lower frequencies diffract around obstacles like our head with little, if any, SPL loss. Thus, our brain gets no cue from low-frequency amplitude changes, at least not for natural sounds.
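The diffraction point can be made concrete the same way, by comparing wavelength to head size (again with assumed round numbers; the comparison is a crude rule of thumb, not an acoustic model):

```python
# A sound diffracts around an obstacle easily when its wavelength is much
# larger than the obstacle. Assumed values: speed of sound ~343 m/s,
# head diameter ~0.18 m (approximations).
SPEED_OF_SOUND = 343.0  # m/s
HEAD_DIAMETER = 0.18    # m

for freq in (100, 1_000, 8_000):
    wavelength = SPEED_OF_SOUND / freq     # m
    shadowed = wavelength < HEAD_DIAMETER  # crude rule of thumb
    print(f"{freq:>5} Hz: wavelength {wavelength:.3f} m, head shadow: {shadowed}")
```

At 100 Hz the wavelength is several meters, so the head casts essentially no shadow; by 8 kHz it is a few centimeters and level differences become a usable cue.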
The height cues are extremely individual, which is why so much work is being done with HRTF measurements. Although the clues are there, it’s difficult to take advantage of them in listener-independent ways.
So, yeah, our brains are fantastic computers that use many cues, but those cues aren't always in play for every sound.[/quote]
It is true that binaural is challenging and doesn't work everywhere. My point was not that binaural is easy, but that our hearing works in binaural mode all the time, whether or not it is easy to fool into surround perception.
[quote="gusp, post:13, topic:8260"]Maybe an interesting approach to electronic modular music could be to apply the control voltages that you are using anyway for musical gestures / changes to move the sounds around. You could use VCAs for this, or even (to go the binaural route) use a voltage-controlled delay line to shift the sounds in space.[/quote]
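Both of the quoted ideas can be sketched in a few lines of Python: an equal-power pan law driven by a CV (the VCA route) and a per-ear delay driven by the same CV (the delay-line route). The normalized -1..+1 CV range, the 48 kHz sample rate, and the 0.64 ms worst-case ITD are all my assumptions:

```python
import math

SAMPLE_RATE = 48_000   # Hz, assumed
MAX_ITD_S = 0.00064    # s, rough worst-case interaural delay (assumed)

def pan_gains(cv: float) -> tuple[float, float]:
    """Equal-power L/R gains from a CV normalized to -1..+1 (the VCA route)."""
    angle = (cv + 1.0) * math.pi / 4.0  # map -1..+1 onto 0..pi/2
    return math.cos(angle), math.sin(angle)

def itd_samples(cv: float) -> tuple[int, int]:
    """Per-ear delay in samples from the same CV (the delay-line route):
    the far ear is delayed, the near ear is not."""
    delay = round(abs(cv) * MAX_ITD_S * SAMPLE_RATE)
    return (delay, 0) if cv > 0 else (0, delay)  # cv > 0 = source on the right

# hard left, centre, hard right
for cv in (-1.0, 0.0, 1.0):
    gl, gr = pan_gains(cv)
    dl, dr = itd_samples(cv)
    print(f"cv={cv:+.1f}: gains L={gl:.3f} R={gr:.3f}, delay L={dl} R={dr} samples")
```

The equal-power law keeps the summed power constant as the sound moves, which is why it is the usual choice over a linear crossfade.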
Apple's CoreAudio offers a 3DMixer that calculates the delay, amplitude, and maybe even some height cueing into a mix. You'd have to work in the digital domain, at least partially, but with a multichannel audio interface it would even be possible to input a number of analog sound sources and pan them around within CoreAudio before sending them to a stereo or quad output. CoreAudio lets the user edit the positions of the available speaker channels and will convert the surround stream to fit what's available. I have not fully explored these options yet.
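I can't vouch for CoreAudio's exact fold-down math, but the general idea of converting a surround stream to fewer speakers can be illustrated with a simple quad-to-stereo fold-down; the -3 dB rear coefficient is a common mixing convention, not something taken from Apple's documentation:

```python
import math

# Common -3 dB fold-down coefficient for the rear channels
# (a mixing convention, not CoreAudio's documented behavior).
REAR_GAIN = 1.0 / math.sqrt(2.0)

def quad_to_stereo(fl: float, fr: float, rl: float, rr: float) -> tuple[float, float]:
    """Fold one quad sample frame (front L/R, rear L/R) down to stereo."""
    return fl + REAR_GAIN * rl, fr + REAR_GAIN * rr

# A signal in the rear-right channel lands, attenuated, in the right output.
print(quad_to_stereo(0.0, 0.0, 0.0, 1.0))
```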
When I worked with Alan Abrahams in quad, he found some Ableton effects that could be synchronized to different phases of sine or triangle LFOs, and this produced an automated swirling effect in surround that was quite effective. Now that Ableton integrates Max/MSP (via Max for Live), it should be possible to patch control signals with precise phase relationships across the various surround channels.
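The phase-offset LFO trick can be sketched directly. Here is a minimal Python illustration; the quad channel ordering and the half-wave rectification are my assumptions, not details of the Ableton setup described above:

```python
import math

CHANNELS = 4  # quad, ordered around the room: FL, FR, RR, RL

def swirl_gains(phase: float) -> list[float]:
    """Per-channel gains for one LFO phase (radians): each channel gets the
    same sine LFO shifted by 90 degrees, half-wave rectified so the sound
    sweeps around the room instead of flipping polarity."""
    return [max(math.sin(phase - ch * math.pi / 2.0), 0.0) for ch in range(CHANNELS)]

# One LFO cycle in 8 steps: the loudest channel walks around the square.
for step in range(8):
    phase = step * 2.0 * math.pi / 8.0
    print([f"{g:.2f}" for g in swirl_gains(phase)])
```

In a Max for Live patch the same idea would be four LFOs sharing one rate with fixed 90-degree offsets, each mapped to one channel's level.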