AVB is purpose-built for this, since timing is baked into the entire structure of the protocol. It’s also a well-accepted standard. UDP has no built-in notion of time or bandwidth and requires many other layers on top to make it reliable and functional for such purposes.
Here’s the AVB SDK:
It’s not clear to me whether this is ready for software workflows; they haven’t put all parts of the stack into this SDK yet.
Another option is Dante, a similar stack that also offers a couple of out-of-the-box apps for connecting software on the same box and across the network via virtual soundcards. The apps aren’t free, unfortunately ($30-50), but your music apps don’t need specific Dante support; they just connect to virtual soundcards, so you could in theory use it with Orca etc. immediately.
Another option for building network support directly into apps is NDI. It’s designed for video but supports audio too; its SDK is not open source, but it is royalty-free. A lot of video apps already have built-in NDI support.
I don’t see any problems with using UDP for transporting audio.
Yes, AVB (and Dante) is a great standard for large-scale systems in concert halls and sporting venues. But depending on your application it might be total overkill to implement. Additionally, AVB requires AVB-specific routers/switches to work.
I’ve written some protocols for transporting audio data over UDP and have been pretty satisfied. Mostly they existed in the form of JACK clients for Linux.
It’s really important not to optimize before testing what may (or may not) turn out to be a naive solution.
EDIT: Also, from what I remember, OpenAvnu absolutely requires an Intel I210 Ethernet controller
Also, to note: ORCA is a MIDI app (and OSC and UDP already, of course), and MIDI can also be transported over the network using an existing standard, RTP-MIDI.
Do not use UDP for audio without very good reasons and lots of engineering available.
The obvious issues (no guaranteed delivery, out-of-order reception, and duplicate reception) are just the start. Network intermediaries make many assumptions about UDP-based protocols that can make things even worse. And implementations other than Berkeley sockets (what you get on Linux) can be pretty iffy for UDP.
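To make those first three issues concrete, here’s a minimal Python sketch of the bare minimum a UDP audio receiver has to deal with. The one-field header layout and the function names are my own invention, not from any real protocol: a sequence number lets you at least detect drops, reordering, and duplicates, but handling them (concealment, jitter buffers) is all extra engineering on top.

```python
import struct

HDR = struct.Struct(">I")  # minimal header: 32-bit big-endian sequence number

def pack_frame(seq: int, pcm: bytes) -> bytes:
    """Prefix one frame of raw PCM with its sequence number."""
    return HDR.pack(seq) + pcm

def classify(last_seq: int, packet: bytes):
    """Classify an incoming packet against the last accepted sequence number.

    Returns (new_last_seq, status). Real receivers then have to decide what
    to *do* about each case: drop, reorder-buffer, or conceal the gap.
    """
    (seq,) = HDR.unpack_from(packet)
    if seq == last_seq + 1:
        return seq, "ok"
    if seq <= last_seq:
        # Duplicate, or a late packet we've already played past.
        return last_seq, "duplicate-or-late"
    # One or more frames never arrived (or are still in flight).
    return seq, f"gap of {seq - last_seq - 1}"
```

Note how even this toy version pushes policy questions onto you (how long do you wait for a late packet? what do you play during a gap?) that TCP, JACK, or AVB would otherwise answer for you.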
I have many years of network protocol design and implementation… including several years specifically on media applications.
Is there a reason you don’t want to use TCP? What’s the context?
so cool, thank you
no network needs, same machine, linux
(elementary Juno, MBP 13" mid 2012)
it’d be great to have an app for ORCA
that has 10 chan of synth a la pilot
and 6 chan of sample/slice a la gull
ability to export .wav file (pilot has this)
and hundredrabbits themes (skins, pilot has this)
probably an electron.js idea
Ah - okay - so several things to think about:
☞ When pilot and gull say they are a “UDP synthesizer” and “UDP sound machine”, they mean that they are controlled via UDP messages, not that they send audio that way.
UDP for this kind of control is okay… if you don’t mind what happens when you lose a packet. And yes, you can lose a packet even if there is no physical network, and both applications are on the same machine (though it is rare if you keep the load down…)
In these synths the worst that will happen is you lose a note or trigger, or miss a parameter change. In practice, with ORCA on the same machine, it’s not likely an issue (well, unless it is a very underpowered machine).
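For the curious, control over UDP really is just fire-and-forget datagrams. A Python sketch of the sending side (the port number and the command string are assumptions for illustration; check pilot’s README for the actual port and syntax):

```python
import socket

PILOT_PORT = 49160  # assumption: pilot's UDP listen port; verify against your build

def send_command(cmd: str, host: str = "127.0.0.1", port: int = PILOT_PORT) -> None:
    """Send one text command as a single datagram.

    No ack, no retry: if this packet is dropped, the note simply never sounds.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(cmd.encode("ascii"), (host, port))

# Hypothetical usage: send_command("04C")
```

This is the whole protocol from the sender’s point of view, which is exactly why losing a packet only costs you a note rather than corrupting a stream.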
☞ You could easily build an app that had the sound generation features you desire… and 16 channels of control is fine for UDP - and in this situation, unlikely to be a concern.
What you wouldn’t want is to run both something pilot-esque and something else gull-esque… And try to pipe audio between them (or mix them) via UDP. Instead, you’d push audio around via Jack (which if you needed a real network, has that stuff available).
☞ But - ORCA can send OSC just as easily as raw UDP… so why not just build the audio engine you want in SuperCollider, and send it OSC? SuperCollider has exactly the right architecture here: Control routing via OSC, and audio routing either internal to the scsynth engine, or externally via Jack.
SuperCollider can trivially record the final output to WAV or AIFF files - so it’s got that going for it, too.
so cool to have your insights about this
+1 on JACK, especially since it seems like you are looking for ways to share audio between applications on the same computer. It was built for that.
After years of using JACK on linux, I learned recently that it also works quite well on OSX on top of Core Audio if you don’t mind using the jackd command-line interface. As a developer, this simplifies things quite a bit: instead of having to target each platform’s backend, all I have to do is write against the JACK API.
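The same point in command form: the only thing that changes per platform is how you start the daemon, while the client code stays identical. (Device names, sample rate, and buffer size below are illustrative examples, not recommendations - adjust for your hardware.)

```shell
# Linux: JACK on top of ALSA (hw:0 is an example device)
jackd -d alsa -d hw:0 -r 48000 -p 256

# macOS: same client code, just a different backend driver
jackd -d coreaudio -r 48000 -p 256
```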
There are a few options for JACK over a network, though I haven’t used any myself. JackTrip and netjack come to mind. It looks like JackTrip has more recent development activity than netjack.