Language to code an interactive ambient sound art installation~

I wonder if SuperCollider would be a good choice as well, given that it also seems able to run on just about anything and just work.

It’s a good one, too. Gregory Taylor writes most of the documentation. Reading an earlier version of that was basically my introduction to computer music principles in 1999.

If you want to track the position or gestures of arbitrary visitors to your installation using cameras, IR, or structured-light sensors, Wekinator is worth a look:
http://www.wekinator.org/
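
If you do go that route, the glue is just OSC: by default Wekinator listens for feature vectors at /wek/inputs on port 6448 and sends its trained outputs back as /wek/outputs to whatever port you point it at. Here's a rough SuperCollider sketch of that loop; the port numbers, the mouse-as-sensor stand-in, and the 0..1 output range are all assumptions about a default Wekinator setup, not gospel:

```
// Rough sketch of bridging SuperCollider and Wekinator over OSC.
// Assumes Wekinator's defaults (features in via "/wek/inputs" on port 6448)
// and that Wekinator is configured to send "/wek/outputs" back to
// SuperCollider's language port (NetAddr.langPort, usually 57120).
(
s.waitForBoot {
    var wek = NetAddr("127.0.0.1", 6448);

    // A small ambient pad; Wekinator ends up steering its filter cutoff.
    ~pad = { |cutoff = 800|
        var sig = Mix.fill(4, { |i| VarSaw.ar(55 * (i + 1), 0, 0.5, 0.04) });
        LPF.ar(sig, cutoff.lag(0.5)).dup
    }.play;

    // Stand-in "sensor": mouse position, reported to the language 30x/sec.
    ~sensor = {
        SendReply.kr(Impulse.kr(30), "/sensor", [MouseX.kr, MouseY.kr]);
        Silent.ar
    }.play;

    // Forward each reading to Wekinator as a two-element feature vector.
    OSCdef(\toWek, { |msg|
        wek.sendMsg("/wek/inputs", msg[3].asFloat, msg[4].asFloat);
    }, "/sensor");

    // Map Wekinator's first continuous output (assumed 0..1) onto the cutoff.
    OSCdef(\fromWek, { |msg|
        ~pad.set(\cutoff, msg[1].linexp(0, 1, 200, 4000));
    }, "/wek/outputs");
};
)
```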

As someone mentioned above,
the first step is to work out your approach and what sensor hardware you’re going to need…
Then look for a platform that supports that sensor hardware (unless you want to get into lower-level coding!). After that, think about processing requirements if you’re considering an rPI (the rPI is less powerful than even a cheap laptop).

If you’re going for something like the rPI, then SuperCollider or Pure Data are good choices.
Pure Data is, I think, a bit easier for non-coders (at least initially), but as a ‘traditional’ programmer I often prefer SuperCollider as it’s text-based.
(That said, I find low-level integration via externals easier in Pure Data.)
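
To give a feel for the text-based side, here is a minimal SuperCollider sketch of a generative ambient layer (nothing installation-specific, just slow random sine swells through a reverb); the same code runs unchanged on a desktop or an rPI:

```
// Minimal generative ambient layer: slow sine swells drifting through a reverb.
(
s.waitForBoot {
    SynthDef(\swell, { |out = 0, freq = 220, amp = 0.1, dur = 8|
        var env = EnvGen.kr(Env.sine(dur, amp), doneAction: 2);
        var sig = SinOsc.ar(freq * [1, 1.002]) * env;   // slightly detuned pair
        Out.ar(out, FreeVerb.ar(sig, mix: 0.4, room: 0.8));
    }).add;

    s.sync;

    // Spawn a new swell every few seconds from a small pitch set.
    Routine {
        loop {
            Synth(\swell, [
                \freq, ([0, 3, 7, 10, 14].choose + 48).midicps,
                \amp, rrand(0.03, 0.08),
                \dur, rrand(6.0, 12.0)
            ]);
            rrand(2.0, 5.0).wait;
        }
    }.play;
};
)
```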

As for Bela… it’s great if you need analog inputs and outputs and really low latency, but if you start hooking up things like USB-based sensors, those advantages tend to diminish. Also, the BeagleBone Black is much less powerful than an rPI3/4.
(BTW: I don’t think the Bela IDE helps with Pure Data, but it is very cool for C++ and SuperCollider.)
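
(If you do end up on Bela with SuperCollider, my understanding is that its SuperCollider build exposes the analog pins as UGens, e.g. AnalogIn; treat the UGen name and wiring in this sketch as assumptions about that setup, with a light sensor on analog input 0.)

```
// Hedged Bela sketch (assumes Bela's SuperCollider extensions provide
// AnalogIn.ar(pin)): a light sensor on analog input 0 fades a drone in
// as the room gets darker.
(
{
    var light = AnalogIn.ar(0).clip(0, 1);  // sensor reading, 0..1
    var amp = (1 - light) * 0.2;            // darker room = louder drone
    var sig = Mix(SinOsc.ar([60, 60.3, 90.1], 0, 0.1));
    Pan2.ar(sig * Lag.ar(amp, 2), 0)
}.play;
)
```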

So if you’re not using that, I’d use an rPI and look at Patchbox OS by Blokas as an easy-to-set-up distro for audio work. (If you’re doing audio on the rPI you’ll need an audio interface; check out Pisound.)

The nice thing is… if you’re doing it in Pure Data or SuperCollider, you can do initial development on your desktop, get a feel for the processing load there, and then judge whether moving to an rPI is viable or not.
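
For gauging that load in SuperCollider, the server reports its own CPU meters (the same numbers as the IDE status bar), so you can log them while the patch runs and compare against what an rPI can realistically handle; a small sketch:

```
// Poll scsynth's average/peak CPU every couple of seconds while the patch runs.
(
~cpuWatch = Routine {
    loop {
        ("avg CPU: " ++ s.avgCPU.round(0.1) ++ "%   peak: " ++ s.peakCPU.round(0.1) ++ "%").postln;
        2.wait;
    }
}.play;
)

// ~cpuWatch.stop;  // when you're done
```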

Yes, although Max/MSP and PD have diverged a bit more over the years, and at first it can be disorienting, coming from Max, to reach for a buffer~ and find table and array instead, etc. The principles are basically the same, although some core differences in the behavior of the audio engines are worth knowing too. (For example, Max/MSP has an independent scheduler for control and audio-rate messaging, whereas PD just computes messages on every new DSP block. Max/MSP has right-to-left ordering semantics while PD orders by the last node connected – but in both systems you should be using trigger objects to enforce deterministic flow, or it’s going to get messy!)

On the other hand, it used to be that starting out with PD first was much more difficult because of the lack of documentation – things have changed a lot in the last 15 years though, and there is a lot of good documentation now, such as the FLOSS Manuals book: http://write.flossmanuals.net/pure-data/introduction2/

They’re both great systems. Max/MSP is always going to have a leg up on ease of use and onboarding because it is a commercial product, and PD is probably always going to have a leg up on embed-ability and portability because it is FLOSS.

Neither one IMHO really locks you into abstractions at such a high level that moving from one to the other would be completely foreign – they both give you a fairly low level toolbox for signal processing in a dataflow style interface. The concepts translate well beyond just Max and PD, too.

Edit: I was curious about how the Max runtime might get along under wine on a raspberry pi, but then I came across this thing: https://www.dfrobot.com/product-1727.html Definitely more expensive but also quite small, and apparently possible to hackintosh for Max purposes: https://cycling74.com/forums/max-on-lattepanda/ Seems like an interesting possible route!

“My favorite programming language is … solder.” -Bob Pease

Sounds like a variation on the “Hassler”, created by Bob Widlar.