Awesome to see the interest here! Woot woot, thanks all for reading.
@jasonw22 I support asking all the questions! There are many possible intersections here.
In the work documented above I use JavaScript in the web browser to write bespoke user interfaces and use Node.js as a convenient way to animate LEDs and write “application logic”.
The tooling and developer experience around web technologies are much better developed (and more widely applicable) than, say, the user interface APIs and debugging facilities within SuperCollider itself. This, combined with my comfort building with web technologies, is largely my motivation for doing things this way.
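For the curious, here's a minimal sketch of what the browser half of that plumbing can look like: a bespoke UI control whose values are shipped over a WebSocket to the Node.js process. The element id, message shape, and port are invented for illustration, not lifted from my actual setup.

```js
// browser UI (sketch): send a fader's value to the Node.js process over a WebSocket
const socket = new WebSocket('ws://localhost:8080');

const fader = document.querySelector('#fader'); // e.g. <input type="range" id="fader">
fader.addEventListener('input', () => {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify({ address: '/ui/fader', args: [Number(fader.value)] }));
  }
});
```

The Node.js side that catches these messages and talks to SuperCollider shows up in a sketch a bit further down.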
From my perspective this is also a clear trend in the wider world: web technologies have been an emerging standard for 2D user interfaces for some time, and professional audio software companies are beginning to adopt them for their user interfaces.
In case it isn’t clear, I <3 SuperCollider, and surely there are many cases where a UI written directly in SuperCollider is the right tool for the job (I do have a few of those as well…).
The posts in this thread so far capture a wide breadth of ways to conceive of the intersection of JS and SC. It sounds like @carltesta has made good progress on what might be called a “centralized synthesis server”: control data is sent from each user to a single location where the sound is generated, and the resulting audio is streamed back. Curious to hear what comes of this! I remember setting up a WebRTC server for a project and the latency was not great… I can imagine a variant on the Audiomovers project, where low latency is the aim but it is somehow easy to use and integrated into a web browser (a plugin?).
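To make that architecture a bit more concrete, here is a rough sketch (assumptions throughout, and certainly not @carltesta’s actual code) of the ingest side: WebSocket messages like the ones in the browser snippet above arrive from remote players and get forwarded as OSC to the one SuperCollider instance doing the synthesis. The hard part, streaming the rendered audio back to everyone with low latency, is exactly the WebRTC / Audiomovers problem and isn’t shown.

```js
// server.js (sketch): centralized synthesis — many players in, one SuperCollider out
// npm install ws node-osc   (the package choices are just assumptions for illustration)
const { WebSocketServer } = require('ws');
const { Client } = require('node-osc');

// sclang listens for OSC on port 57120 by default
const sc = new Client('127.0.0.1', 57120);

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // expects JSON like { "address": "/player/1/fader", "args": [0.5] }
    const msg = JSON.parse(data);
    sc.send(msg.address, ...msg.args);
  });
});
```

On the SuperCollider side an OSCdef (or OSCFunc) can pick those messages up and drive the synthesis.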
In case a bit of an overview is helpful to anyone: JavaScript can be used in various contexts. Node.js runs as a separate process on your machine, much like a Python interpreter. The web browser has its own JS execution engine that runs inside a sandbox with deliberate limitations; for example, it cannot read arbitrary files on your computer or directly access USB devices except under very specific circumstances. Web Audio (in the web browser) is a very thorough set of tooling, though it may not approach the level of sophistication of SuperCollider.
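As a tiny taste of Web Audio, everything below runs entirely in the browser with no plugins. The only wrinkle is that most browsers require a user gesture (a click) before an AudioContext will actually make sound; the `#play` button id is just an assumption for the example.

```js
// browser (sketch): play a one-second sine tone with Web Audio
document.querySelector('#play').addEventListener('click', () => {
  const ctx = new AudioContext();

  const osc = ctx.createOscillator();
  osc.frequency.value = 440;      // A4

  const gain = ctx.createGain();
  gain.gain.value = 0.1;          // keep it quiet

  osc.connect(gain).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 1);  // stop after one second
});
```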
A native application like SuperCollider cannot normally run in the browser, with the exception of the cutting-edge approach linked to by @chrisl! That project compiles the SuperCollider C++ codebase down to a lower-level, assembly-like format that browsers supporting the WebAssembly standard can execute. Whew!
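For a sense of what the browser actually does with compiled WebAssembly, the general loading pattern looks like the snippet below. This is just the standard browser API with a made-up file name, not how the SuperCollider-on-WebAssembly project is necessarily wired up.

```js
// browser (sketch): the generic pattern for loading a compiled WebAssembly module
WebAssembly.instantiateStreaming(fetch('some-module.wasm'), { /* imports the module expects */ })
  .then(({ instance }) => {
    // functions compiled from C/C++ are now callable from JavaScript
    console.log(Object.keys(instance.exports));
  });
```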
There are other ways to use a web engine while circumventing the “sandbox” limitations of the web browser. For example, the Electron framework lets you build desktop apps by pairing an embedded web engine (for the UI) with a side-by-side Node.js process that can access system-level resources. This is what my “performance environment” uses…
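The entry point of an Electron app is tiny. A minimal main process looks roughly like this: the main process is a full Node.js environment with system access, and the window it opens is an ordinary embedded web page for the UI.

```js
// main.js (sketch): minimal Electron main process
const { app, BrowserWindow } = require('electron');

app.whenReady().then(() => {
  // the renderer: a sandboxed web page, just like in a regular browser
  const win = new BrowserWindow({ width: 1024, height: 768 });
  win.loadFile('index.html');
});
```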
Also, Chris many thanks for your newsletter curation efforts all these years! 
In terms of collaborative / web / multiplayer music making, yet another architecture is one where each individual has their own sound engine and only control signals are sent across the wire. This could be implemented with everyone running SuperCollider locally and using a single centralized Node.js server, for example.
In many ways this approach appeals to me the most for distributed music making. Given that everyone has a powerful computer and that bandwidth/jitter are the limiting factors for audio transmission, I’m curious when it might make sense to send reliable “control” signals instead and have everyone’s personal synthesis machines do all the work?
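A sketch of the coordination piece for that approach, under the same assumptions as the earlier snippets: a Node.js relay that simply broadcasts every player’s control messages to all other connected players. Each player then runs something like the WebSocket-to-OSC bridge shown earlier so the messages reach their local SuperCollider.

```js
// relay.js (sketch): broadcast each player's control messages to everyone else
const { WebSocketServer, WebSocket } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (ws) => {
  ws.on('message', (data) => {
    for (const peer of wss.clients) {
      // send to every connected player except the sender
      if (peer !== ws && peer.readyState === WebSocket.OPEN) {
        peer.send(data.toString());
      }
    }
  });
});
```

No audio crosses the wire at all; only small, reliable control messages do, and each machine renders its own copy of the sound.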
I’m now following Eli Fieldsteel, thank you @rennerom! To me SuperCollider is an environment with many facets. For example, the Pattern system is almost a language all on its own. Or one can write SynthDefs and happily spawn them from the IDE without needing to understand how to structure code into folders, use Quarks, etc.
I look forward to continuing the discussion! 