Integrating JavaScript and SuperCollider

Hello people of lines and happy 2021,

I have spent many hours reading this forum and wish to take a moment to express my gratitude for all who have contributed to this space!

Below is information about making music tools with web technologies, specifically integrating Node.js and web browsers with SuperCollider. I’ve kept a few notes about my process in here too.

As someone practiced in building user interfaces with web technologies, I found myself using Node.js for more and more things as the ecosystem grew over the years. I also value the open platform of web technologies because the tools I make are not tied to a particular operating system or DAW.

The examples below are built on a fairly reliable software framework for sharing state between a Node.js process and a SuperCollider process, which I have used in production on a few projects.

examples!

Out of This World Advice

An interactive installation. Here, Node.js handles the LED animations (30 fps, easily!) and button presses (via an Arduino), while SuperCollider handles a semi-generative sound engine. All running on a Raspberry Pi 4! It was starting to be a bit much for the Raspberry Pi 3…

Touchscreen Euclidean Sequencer

This is integrated into my “performance environment”: I start up an Electron app which spawns SuperCollider with some constantly running instruments and serves this touchscreen interface for controlling some euclidean rhythms. I use it with an iPad (in Safari, “save to homescreen” the webpage and it opens full-screen like an app).

For SuperCollider fans, the play button actually calls play on a Pbind in SuperCollider, running on a LinkClock (the Ableton Link implementation in SuperCollider). Modifications to these parameters in the GUI change the pattern via Pdef, so the changes are quantized. These are the same tools one might use when live-coding in SuperCollider. The same approach is used in “Out of This World Advice”, mentioned above.

Conceptually, the Node.js process translates between the user interface and these SuperCollider APIs.
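To make that concrete, here is a minimal sketch of that translation layer using supercollider.js (introduced below). The pattern itself, the tempo, and the setDur handler are hypothetical simplifications, not my actual code:

```js
// Sketch: Node.js forwards a GUI change to a quantized Pdef in sclang.
// Pattern contents and the setDur handler are hypothetical.
const sc = require("supercolliderjs");

sc.lang.boot().then(async (lang) => {
  // Define a pattern and play it on a LinkClock (Ableton Link sync).
  await lang.interpret(`
    ~clock = LinkClock(2); // 120 bpm
    Pdef(\\euclid, Pbind(\\degree, Pseq([0, 2, 4, 7], inf), \\dur, 0.25));
    Pdef(\\euclid).play(~clock, quant: 4);
  `);

  // Called when the user edits a parameter in the touchscreen UI.
  // Re-binding a Pdef is quantized, so the change lands on a bar boundary.
  async function setDur(dur) {
    await lang.interpret(
      `Pdef(\\euclid, Pbind(\\degree, Pseq([0, 2, 4, 7], inf), \\dur, ${dur}));`
    );
  }
});
```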

Most recently, I have been working out some LinkClock syncing issues, and perhaps more excitingly I have introduced “presets” and integrated them with the Octatrack. Making varied music is important to me, and I am very interested in beat-synced, syncopated sounds. The OT sends Program Change messages when a pattern changes, so the settings of these “generative sequencers” can now be assigned to an OT pattern and switch automatically when the OT pattern switches. I am very interested in pushing this “presets in a generative system” territory further. I’d love to learn of any relevant process-oriented discussions.
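On the Node.js side, that auto-switching boils down to listening for Program Change messages and loading the matching preset. A sketch, using the easymidi package (the port name, preset shapes, and loadPreset are hypothetical):

```js
// Sketch: map incoming Program Change messages from the OT to stored
// sequencer presets. Port name and loadPreset() are hypothetical.
const easymidi = require("easymidi");

const presets = {
  0: { pulses: 5, steps: 16 },
  1: { pulses: 7, steps: 12 },
};

function loadPreset(preset) {
  // hypothetical: dispatch the preset into the shared state store
  console.log("loading preset", preset);
}

const input = new easymidi.Input("Elektron Octatrack");
input.on("program", (msg) => {
  if (presets[msg.number]) loadPreset(presets[msg.number]);
});
```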

software

supercollider.js

All of what I am doing is made possible by the excellent groundwork from crucialfelix on the supercollider.js project! This is the Node.js library that handles low-level communication with a SuperCollider process. It also allows a Node.js process to spawn SuperCollider.
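For anyone who hasn’t used it, the basic shape (paraphrased from my memory of the supercollider.js docs, so treat the details as a sketch) is:

```js
// Boot the sclang interpreter from Node.js and evaluate some code.
const sc = require("supercolliderjs");

sc.lang.boot().then(async (lang) => {
  const result = await lang.interpret("(1..8).scramble");
  console.log(result); // sclang's return value, converted for JS
});
```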

supercollider-redux

This is a Node.js library implementing a Redux-style state store that is shared between the Node.js and SuperCollider processes. It allows all of the application state logic to be written in one place (a design principle of Redux itself, really) in the JS code. When the SuperCollider process wants to change state, it “dispatches” to the store, which sends a message up to the JS layer; JS figures out the new state and sends it back down to SC.
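Heavily simplified, the round trip looks something like this (the glue functions here are hypothetical stand-ins, not supercollider-redux’s actual API):

```js
// Sketch of the round trip: SC "dispatches" an action up to Node.js,
// the reducer computes the new state, and the state is mirrored back.
const { createStore } = require("redux");

function reducer(state = { tempo: 120 }, action) {
  switch (action.type) {
    case "SET_TEMPO":
      return { ...state, tempo: action.payload };
    default:
      return state;
  }
}

const store = createStore(reducer);

function sendStateToSC(state) {
  // hypothetical: serialize and send down to sclang (OSC, stdin, ...)
  console.log("mirror to SC:", state);
}

// However actions arrive from sclang, they funnel into dispatch,
// and every resulting state is mirrored back down to SC.
store.subscribe(() => sendStateToSC(store.getState()));
store.dispatch({ type: "SET_TEMPO", payload: 132 }); // as if sent from SC
```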

This architecture is meant to ease application development and to reduce bugs in software where state is shared between Node.js and SuperCollider; it is certainly not aimed at efficiency! For “gestural” control signals, for example a MIDI knob that controls parameters of a sound directly, I still have SuperCollider listen to the MIDI input directly.

Thanks for reading!

I’m excited to explore this territory with anyone interested. I would especially be interested in learning more about any prior work / related projects that come to mind.


I haven’t dived into all the fantastic resources you mentioned yet, so I hope you’ll forgive a very naive question: could this be used to create what I’ll call, for lack of a better term, multi-user supercollider?

Could I use this to let people run supercollider scripts in their web browser, making use of a supercollider instance on my server?

My mind boggles a bit at what it might require to make such a thing scalable, which is why I suspect your vision for this is more likely to be for a single user, running all of the necessary stack locally?

Edit:
With the above I was thinking massively multiplayer, but I guess another multiplayer scenario is just a handful of people. Similar to how folks use Troop to collaborate on FoxDot files.

I guess anytime I hear “web” my mind immediately goes to the networked use cases.


I’ve been working on various multi-user situations where a web client interacts with a specific instance of SuperCollider running on a server. Each user’s OSC controls, generated by the web page, control aspects of the SuperCollider patch running on the system. The missing piece is reliable, fast streaming of the audio created on the server back to all the clients. I can do it using JackTrip or SonoBus, but I’m specifically looking for something that will stream audio back at low latency directly in the browser. WebRTC seems to be the best bet, but I haven’t seen a demo doing it yet.


i would say that this is a “classic” supercollider use case because of the inherently decoupled and networked architecture of sclang (music logic) and scsynth/supernova (DSP engine).

when conducting IRL supercollider workshops in the past i’ve usually had a segment where everyone collaborates on interacting with a common remote server; it’s well set up for that.

the OSC protocol is pretty simple in itself (though implementing everything required to build UGen graphs is not), so there have been quite a few “alternative” clients to sclang (there are more-or-less complete client implementations in ruby, haskell, clojure, etc.)
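for a taste of how simple, here’s a sketch of sending a raw /s_new to scsynth from node (using the osc-min package; assumes scsynth is listening on UDP 57110 and a SynthDef named “default” exists):

```js
// sketch: send a raw OSC /s_new message to scsynth over UDP.
const dgram = require("dgram");
const osc = require("osc-min");

const msg = osc.toBuffer({
  address: "/s_new",
  args: [
    { type: "string", value: "default" }, // SynthDef name
    { type: "integer", value: -1 },       // node ID (-1 = auto-assign)
    { type: "integer", value: 0 },        // add action (0 = head of group)
    { type: "integer", value: 0 },        // target group
    { type: "string", value: "freq" },
    { type: "float", value: 440 },
  ],
});

const socket = dgram.createSocket("udp4");
socket.send(msg, 57110, "127.0.0.1", () => socket.close());
```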

in particular, the project linked above is the javascript client implementation that chris sattinger (core SC dev) has been working on for many years now…

the OP’s implementation of a mirrored redux store in sclang definitely seems like a useful additional utility!


Here to see how this thread develops. Eli Fieldsteel has all his latest supercollider lectures from the University of Illinois Urbana-Champaign on YT. I’ve been relearning supercollider (starting from square 2?) through those and have only just perused the supercollider.js documentation.

I look forward to lurking in the cyber corners of this thread :wink:

Also, maybe this is as good a place as any to mention it, but is anyone familiar with Croquet.io? Its whole thing is multi-user state synchronization, and I’ve gotten it to send WebMidi signals to multiple remote computers pretty seamlessly. There are a lot of possibilities there, to my mind.

By the way, I’m very interested in what you’re working on but I need to take the time to sort through what you’ve posted.

Editing to add:
Flok enables an easy web-based multi-user SC system, but requires the installation of a node bridge.


You might also be interested in this project. It’s a proof of concept of porting supercollider to wasm so that it can run in the browser itself.


Awesome to see the interest here! Woot woot, thanks all for reading.

@jasonw22 I support asking all the questions! There are many possible intersections here.

In the work documented above I use JavaScript in the web browser to write bespoke user interfaces and use Node.js as a convenient way to animate LEDs and write “application logic”.

The tooling and developer experience around web technologies are much more well-developed (and more widely applicable) than, say, the user interface APIs and debugging facilities within SuperCollider itself. This, combined with my comfort building with web technologies, is largely my motivation for doing things this way.

From my perspective this is also a clear trend in the world, web technologies have been an emerging standard for 2D user interfaces for some time, and professional audio software companies are beginning to adopt web technologies for user interfaces.

In case it isn’t clear, I <3 SuperCollider and surely there are many cases when writing a UI directly in SuperCollider is the right tool for the job (I do have a few of those as well…).

The posts in this thread so far capture a wide breadth of ways to conceive of the intersection of JS and SC. Sounds like @carltesta has made good progress on what might be called a “centralized synthesis server”, where control data is sent from users to a single location where sound is generated, then sent back. Curious to hear what comes of this! I remember setting up a WebRTC server for a project, and the latency was not great… I can imagine a variant on the Audiomovers project, where low latency is the aim but it is somehow easy to use and integrated into a web browser (a plugin?)

In case a bit of an overview is helpful to anyone: JavaScript can be used in various contexts. The Node.js runtime is a separate process, like Python. The web browser has its own JS execution engine with deliberate limitations; for example, it cannot read arbitrary files on your computer or directly access USB devices except under very specific circumstances. Web Audio (in the web browser) is a very thorough set of tooling, though it might not approach the level of complexity of SuperCollider.
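(To make the Web Audio point concrete, the browser-side “hello world” is only a few lines:)

```js
// Browser-side Web Audio "hello world": a sine tone for one second.
// Run this from a click handler; browsers require a user gesture first.
const ctx = new AudioContext();
const osc = ctx.createOscillator();
osc.frequency.value = 440; // Hz
osc.connect(ctx.destination);
osc.start();
osc.stop(ctx.currentTime + 1);
```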

A native application like SuperCollider cannot normally run in the browser, with the exception of the cutting-edge approach linked to by @chrisl! That project compiles the SuperCollider C++ codebase to a lower-level, assembly-like code that can be executed by web browsers supporting the WebAssembly standard. Whew!

There are other ways to use a web engine and circumvent the “sandbox” limitations of the web browser; for example, the Electron framework allows desktop apps to be built by pairing an embedded web engine with a side-by-side Node.js process that can access system-level resources. This is what my “performance environment” uses…
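A minimal sketch of that pairing (the file names here are hypothetical):

```js
// Electron main process: open the touchscreen UI and spawn sclang beside it.
const { app, BrowserWindow } = require("electron");
const { spawn } = require("child_process");

app.whenReady().then(() => {
  const sclang = spawn("sclang", ["performance-setup.scd"]); // hypothetical startup script
  const win = new BrowserWindow({ width: 1024, height: 768 });
  win.loadFile("index.html"); // hypothetical UI entry point
  app.on("will-quit", () => sclang.kill());
});
```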

Also, Chris many thanks for your newsletter curation efforts all these years! :slight_smile:

In terms of collaborative / web / multiplayer music making, yet another architecture is one where each individual has their own sound engine and only control signals are sent across the wire. This could be implemented with everyone running SuperCollider locally and using a single centralized Node.js server, for example.

In many ways this approach appeals to me the most for distributed music making. Considering everyone has a powerful computer, and bandwidth / jitter are the limiting factors for audio transmission, I’m curious: when might it make sense to send reliable “control” signals and have everyone’s personal synthesis machines do all the work? :stuck_out_tongue:
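The hub for that could be tiny. A sketch using the ws package (the message format is hypothetical; each player’s local SuperCollider would act on what it receives):

```js
// Sketch: relay each player's control messages to everyone else;
// synthesis happens on each player's own machine.
const WebSocket = require("ws");

const wss = new WebSocket.Server({ port: 8080 });

wss.on("connection", (ws) => {
  ws.on("message", (data) => {
    // e.g. { "synth": "pad", "param": "cutoff", "value": 0.7 }
    for (const client of wss.clients) {
      if (client !== ws && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```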

I’m now following Eli Fieldsteel, thank you @rennerom! To me SuperCollider is an environment with many facets. For example, the Pattern system is almost a language all on its own. Or one can write SynthDefs and happily spawn them from the IDE without needing any understanding of how to structure code into folders, use Quarks, etc.

I look forward to continuing the discussion! :nerd_face:


Your work in this field is truly fascinating!

I’m currently working with SuperCollider on my master’s thesis project, and I also came across the idea of running SuperCollider in a web browser. Of course, my experience with SuperCollider is still limited and my experience with JS even more so, so from my standpoint it’s a little overwhelming… But would it be possible to run a SuperCollider server in a web browser, such that the user would only have access to a GUI?

Maybe it’s far-fetched for a thesis project, but I’m hoping to find out what is possible, especially now that it’s difficult to arrange physical ‘laboratory’ experiments :slight_smile:

running SuperCollider in a web browser

i was going to say this sounds like a daunting project, but an SC developer has posted a POC of just this, shared 3 posts up in this very thread. the project README is helpful: supercollider/README_WASM.md at wasm · Sciss/supercollider · GitHub

my interpretation is that it still has unsolved architectural puzzles, but it is an active project.

an “easily” achieved alternative would be to have the browser app control a remote SC audio server, configured to stream audio back to the client. (there are various ways to do this; norns.online is one example.) this would be problematic if your project were sensitive to latency.

there are other DSP frameworks that already work pretty seamlessly in the browser, such as faust, or of course the web audio API itself.

Hello Carltesta!
Are there any new developments on WebRTC, or could you point me to somewhere I could take a look?

What I’m trying to do: SuperCollider → low-latency audio (WebRTC) → one client that listens to the generated audio stream via the browser.

Thank you very much :slight_smile: