Synthesizing Environments

Hi all,

I’m working on a project that involves synthetic environments – constructed / expanded field recordings a la Luc Ferrari’s Presque Rien series, Michael Pisaro’s July Mountain or Francis Dhomont’s Signe Dionysos.

I wrote more about my specific project here: (There’s a test render at the bottom.)

Who here is working with expanded field recordings? (Or whatever you might call it.) I guess I would point to hyperreal music constructed from actual field recordings or from very clever simulacrum-synthesis (like parts of Bhob Rainey’s latest album) creating impossible spaces, or undulating real spaces in impossible ways.

I’d love to hear about your projects, and historical stuff… especially your impressions and process etc.

The synthetic environment thread! (?)


I’m currently working on a research project that is in this area, but not with field recordings. It’s somewhere between AR, immersive environments, generative music and spatial audio. I’ve been thinking of it as eco-systemic, as there are layers of feedback between real objects, AR objects and sound.

Here’s a work in progress clip recorded on a hololens:

The 8 speaker feeds are just spread out over L-R. You have to imagine them in a circle around you!
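For anyone curious, here’s a toy sketch of how a circle of speaker feeds could be folded down to stereo with constant-power panning. To be clear: the function, the layout convention and the panning law here are my assumptions for illustration, not how the actual renderer in the clip works:

```python
import math

def fold_to_stereo(speaker_feeds, n_speakers=8):
    """Fold n circular speaker feeds down to stereo with constant-power panning.

    speaker_feeds: list of n mono sample lists, with speaker i assumed to sit
    at azimuth i * 360/n degrees (0 = front, 90 = hard right).
    """
    n_samples = len(speaker_feeds[0])
    left = [0.0] * n_samples
    right = [0.0] * n_samples
    for i, feed in enumerate(speaker_feeds):
        azimuth = 2 * math.pi * i / n_speakers
        # Map azimuth to a pan position in [0, 1]: 0 = hard left, 1 = hard right.
        pan = 0.5 * (1.0 + math.sin(azimuth))
        gain_l = math.cos(pan * math.pi / 2)  # constant-power law:
        gain_r = math.sin(pan * math.pi / 2)  # gain_l**2 + gain_r**2 == 1
        for t, x in enumerate(feed):
            left[t] += gain_l * x
            right[t] += gain_r * x
    return left, right
```

Constant-power (rather than linear) gains keep the perceived loudness of each feed steady as it moves across the stereo field.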


I have an LP of this sort of thing coming out next year, will send you some of the material now though if you’d like.

Essentially, I am working the way I always have, with binaural field recordings, usually recorded ‘in situ’ and with a specific focus; these are then layered at random to bring out interesting juxtapositions and coincidences, etc. Afterwards, I may hone the entire thing a bit more intentionally, and call it a day.
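If it helps to see the random-layering idea spelled out, here is a toy sketch. The function and its parameters are made up for this post (my actual process is done by ear in a DAW, not with code like this):

```python
import random

def layer_at_random(recordings, mix_length, seed=None):
    """Mix several recordings into one buffer at random offsets.

    recordings: list of sample lists. Each recording is dropped into the
    mix at a random start point (truncated at the buffer's end), so chance
    juxtapositions and coincidences emerge from the overlaps.
    """
    rng = random.Random(seed)
    mix = [0.0] * mix_length
    for rec in recordings:
        start = rng.randrange(mix_length)
        for i, x in enumerate(rec):
            if start + i >= mix_length:
                break
            mix[start + i] += x
    return mix
```

Passing a seed makes a particular set of “accidents” repeatable, which is handy once you find a layering you like.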

My 2008 album, ‘The Gland Canyon’, is basically this, though a good deal of the material is improvisation on objects, recorded in an attic, with a good 5 years of (minidisc and binaural mic) field recordings interspersed.

Anyway, I bought myself a Sennheiser Ambeo ‘Smart Headset’ this year, and have many more (binaural!) recordings due to the ease, readiness, and immediacy of that tool, and have constructed a suite of 3 pieces over about 35min that, as I opened with, are due to be released next year.

I was looking for some hardware that would let me integrate these recordings into a live set, with high-definition granulation, and made a post to solicit examples of hardware, but have gotten little back as yet. No real time limit, though…

Anyways, yeah, there ya go… I generally listen to this stuff over regular speakers. Once it’s layered into a finished piece, there are so many frames of focus that it still sounds vivid despite not listening on headphones, as one is ‘supposed’ to in order to appreciate binaural recordings. Heard this way, it really can seem like a new environment has been incorporated into your listening zone. That’s how I feel about it, anyways.




awesome topic! and very vivid sound in the horaflora recording :slight_smile:
i think it’s kind of low tech that a binaural mixer isn’t integrated into portable media players like ipods, smartphones, etc. it should be able to calibrate the transfer function to your own head as well.

it’s not exactly what you mentioned, but i’m still working in a similar direction in vr:

vr also requires realtime rendering, which i think is more useful for creative audio purposes as well. basically: take a vr engine and play around with the head-tracking system and hrtfs. individual field recordings could be placed in the environment and integrated via inverse impulse responses or something similar.
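a bare-bones illustration of the hrtf placement idea, assuming you already have an hrir pair for the direction you want (a real system would interpolate a measured hrtf set as the head tracker moves, and use fft convolution instead of this naive loop):

```python
def convolve(signal, ir):
    """Direct-form convolution of a dry signal with an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def binauralize(mono, hrir_left, hrir_right):
    """Place a mono field recording at the direction encoded by an HRIR pair.

    hrir_left / hrir_right are head-related impulse responses for the
    desired direction; the interaural time and level differences that
    localize the source live inside those IRs.
    """
    return convolve(mono, hrir_left), convolve(mono, hrir_right)
```

with head tracking, you would re-select the hrir pair every few milliseconds and crossfade, so the source stays put in the world while the head turns.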


holy hecc this is great! I’ve been doing some work in a similar zone with VR + generative structures and spatial sound via speaker arrays.

Out of curiosity, are you working on this within an academic institution/commercial studio/independent basis?

Always excited + curious to know where work in this field is getting done!

I think what you are looking for (aesthetically, not technically) is better defined as “soundscape composition”. You can have a look (if you haven’t already) at the work of Barry Truax and Hildegard Westerkamp.


also v interested in the possibilities of this - there’s some fantastic work by toshiya tsunoda, bryan eubanks & byron westbrook worth checking out…

i’m interested in the questions of ‘live-ness’, of situating oneself in juxtaposition with this audio, basically toying with the uncanny valley of ‘recognizability’.

(also currently working on a project altering degrees of resolution of this kind of ‘transcription’, both sonically (a la Ablinger) & conceptually…but no documents so far)

anyway, just rambling, mostly posting just to follow this! exciting stuff!



I’d be interested to hear more…

Yes, academic. I’m faculty at Ravensbourne University in London. I’m writing the work up after presenting it here: (search for: Bye Bye Privacy – Sonic interactions in Mixed Reality).

Hoping to develop it further when I can make time.


hey belated, but yes def!

I’ve been doing some work getting Unity to speak to Max over OSC, and trying to apply more ‘patch-based’ methodologies to working with VR - I’ve been having some luck doing world building and VR support with Unity, then sending compositional information over MIDI/OSC.
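In case it’s useful to anyone trying the same thing: OSC messages are simple enough to build by hand when a library isn’t available. A minimal sketch of the wire format, purely illustrative (our actual Unity side is C#; this just shows what goes over UDP to something like Max’s [udpreceive] + [oscparse]):

```python
import struct

def osc_message(address, *args):
    """Encode a minimal OSC message (float/int/string arguments) by hand.

    An OSC message is: the address pattern as a padded string, a padded
    type-tag string (',' followed by one tag per argument), then the
    arguments themselves in big-endian binary.
    """
    def pad(b):
        # OSC strings are null-terminated, then padded to a multiple of 4.
        return b + b"\x00" * (4 - len(b) % 4)

    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        elif isinstance(a, str):
            tags += "s"
            payload += pad(a.encode())
        else:
            raise TypeError(f"unsupported OSC argument: {a!r}")
    return pad(address.encode()) + pad(tags.encode()) + payload
```

Send the resulting bytes over a UDP socket to the port Max is listening on, e.g. `osc_message("/freq", 440.0)` for a single float parameter.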

Here’s a short video from an early piece awhile back, and here’s a bit of writing on the subject. Picture from the same work below, too.

Super cool re Ravensbourne and the presentation you all are doing! I adjunct at some colleges here in NYC, but am mostly independent. I’m looking into some PhD programs though, as it really feels like institutional support is a must if you want to research this seriously outside the commercial sector (for better or worse!)



This is great! Thinking of exploring virtual space in terms of patches is a wild and cool idea.

As a side note, I LOVE the idea of many implementations of Pd, but do we already kind-of have that? I mean, what is lacking as an abstraction in Max Mathews’s unit generator paradigm? You could think of Pd and Max as implementations of it.


Not quite as ambitious, but Make Noise have been posting some “VC environment” videos on their Instagram, generating various soundscapes using their Shared System. Like this:


Just got round to properly checking the links out - really nice work! Feels similar to what we’re doing, and similar implementation too. We’re running the audio side in Max and sending osc to the hololens.

I feel like there’s a lot of potential in the mix of spatial audio (using speakers) + VR/MR from an art and not gaming angle. The thing that really excites me about the mixed reality thing is the future scope for shared experience. It’s super clunky with Hololens 1 - essentially you can run the same program on 2 hololenses and film each other (as we did in the video), but as soon as you introduce some chance operations the experience diverges for each user!
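One sketch of a possible workaround for the divergence problem (hypothetical, not something we’ve implemented): if every headset seeds its random number generator with the same session seed agreed at startup, the chance operations unfold identically on each device, so the shared experience stays in sync without streaming state between them:

```python
import random

class SharedChance:
    """Chance operations that stay identical across devices.

    Each headset runs the same program; if every device constructs this
    with the same session seed (agreed once, e.g. over the network at
    startup), every 'random' choice is reproduced identically on all of
    them, so users stop diverging while still getting chance operations.
    """
    def __init__(self, session_seed):
        self.rng = random.Random(session_seed)

    def pick_event(self, events):
        return self.rng.choice(events)

    def jitter(self, lo, hi):
        return self.rng.uniform(lo, hi)
```

The catch is that all devices must consume the random stream in exactly the same order, so any per-user interaction that pulls from the RNG breaks the sync again.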

Yes totally. I have a very low budget, but it’s something. Difficult as all this tech costs so much and depreciates so fast. I could easily spend £20-30k tomorrow!

This is the most inspirational term for me at the moment. Building worlds out of sound, and using sound to drive other processes that are not sound. Making those parameters affect and feed back into other layers/parameters as part of a system: that can be immersive.


oh man, totally + cosign all above. Very excited about the possibilities with all this stuff. Not sure if too far off topic for lines, but could be cool to have a VR/AR/xR art and possibilities thread, too!

Not mine but very cool nonetheless (virtual environment around the 4:30 mark).
I recently created a web app for an AR-enhanced radio drama (AR.js + webaudio) and people were very enthusiastic about it, so AR can definitely enhance music :slight_smile:
I would especially like to combine virtual environments with physical modelling, like in the first video, because it can create sounds that sound natural but are not generated by any natural physical instrument. The only thing that worries me is that right now the whole AR/VR ecosystem is very fragmented, and I’m a believer in AR/VR in browsers, because that way it can be enjoyed by people on phones/PCs/tablets, etc.


Would love to hear more about your webapp!

Sorry for the off-topic slip here but I can’t help but be frustrated by that video. I like Fluorescent Grey’s music, what I’ve heard is usually really expressive and doesn’t stop at just great sound design like some similar IDM. The music in the video is a case in point, beautiful stuff. I just don’t understand the weird focus on this “rare” synth module in that video. Waveguide synthesis is widely implemented in computer music systems, and physical modeling synthesis techniques in general have been widely exploited by computer musicians for decades – for exactly the reasons he points to in many cases; being able to synthesize impossible instruments by composing with low-level physical parameters. You don’t need an expensive pile of 90s hardware to do this, as cool as the studio shots look… (End of rant.)
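To make that point concrete: the classic Karplus-Strong algorithm is about the simplest waveguide-style model there is, and it runs anywhere, no 90s hardware required. A toy sketch (parameters made up, and a real implementation would tune the loop filter much more carefully):

```python
import random

def karplus_strong(n_samples, period, seed=0, damping=0.996):
    """Minimal Karplus-Strong plucked string.

    A delay line (the 'waveguide') is initialized with noise (the pluck);
    each pass through the loop averages adjacent samples (a crude lowpass)
    and scales by a damping factor, so the tone darkens and decays like a
    vibrating string. Pushing period, damping, or the filter past physical
    values is where the 'impossible instrument' territory starts.
    """
    rng = random.Random(seed)
    delay = [rng.uniform(-1.0, 1.0) for _ in range(period)]  # the pluck
    out = []
    for n in range(n_samples):
        x = delay[n % period]
        nxt = delay[(n + 1) % period]
        delay[n % period] = damping * 0.5 * (x + nxt)  # lowpassed feedback
        out.append(x)
    return out
```

The pitch is roughly `sample_rate / period`, and the same delay-plus-filter loop generalizes to tubes, plates and meshes in fuller physical-modeling systems.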

On a more positive note – I’m super excited to see so many musicians working with game engines like Unreal/Unity/etc., and the little snips of the Fluorescent Grey project look wonderful!

Thanks for sharing that, sorry for my little rant there…

To be honest this web app was nothing fancy. The idea was that people who wanted to listen to the radio drama had to explore the building, find markers and view them through the web app, which would display a 3D object on the marker and start playing the part of the drama connected to that marker. The main idea was to have a drama that could only be heard in a specific place (near the markers), and an experience similar to gathering logs in old games (like System Shock 2, Doom 3, etc.).
Technology-wise it was basically pure AR.js for the AR and webaudio to play the audio. I chose web technologies because making a dedicated app for each platform would be time-consuming, and I also don’t believe people would download an app that would only be used once.
And about the hardware: I understand your frustration, because this fascination with rare hardware can create a mystical aura around something that is fairly common. On the other hand, I understand the author’s fascination with rare and old-school synths, because they represent a point in time in the research into physical modeling and are unique in some sense (even when that uniqueness comes from the fact that the hardware was lacking in power, so the generated sound has some artifacts, etc.).
But of course there is a lot of great software available to everyone today. I have been using Reaktor physical-modeling synths and Collision/Tension from Ableton to great effect, so it definitely can be done cheaper. But I would also like to have a modern physical-modeling synth in hardware, with full MIDI CC/OSC editing capabilities in realtime, because then I could use more of the computer’s power for running visualizations/simulations in software.
