The first two seem to do more heavy lifting for you:

Draw2D

  • Docs
  • Data Binding: Watch.JS, Backbone.js, Backbone.ModelBinder, JavaScript, RivetsJS
  • Read: JSON
  • Write: JSON, SVG, PNG
  • Undo
  • Zoom
  • Touch Events

JointJS/RAPPID

  • Docs
  • Data Binding: JavaScript
  • Read: JSON, LocalStorage
  • Write: JSON, LocalStorage, PNG, JPG, SVG
  • Undo
  • Zoom
  • Touch Events

jsPlumb

  • Docs
  • Data Binding: JavaScript
  • Read: JSON, LocalStorage
  • Write: JSON, LocalStorage
  • Zoom

Node-RED

  • Docs
  • Data Binding: JavaScript
  • Read: JSON
  • Write: JSON

GoJS

  • Docs
  • Data Binding: JavaScript
  • Read: JSON
  • Write: JSON

I’d love a better understanding of the environment this will be running in.

What operating systems need to be supported? Is this environment running on a user’s computer, or from a webserver on the device itself or… ?

How about phones or tablets?

What’s the user like? Do they need to be able to understand how to install software from the command line on their chosen platform to use this? Or is this more of a tidy installer/works out of the box situation?

Not a Javascript guy myself, but we recently tackled a visual cable-based modular patching system for one of our plug-ins. JUCE has an example modular plug-in host in its repo. It might be tough to read if you’re not familiar with JUCE, but it at least provides an example design pattern for visual modular systems. In particular, take a look at PinComponent and ConnectorComponent:
https://github.com/julianstorer/JUCE/blob/master/examples/audio%20plugin%20host/Source/GraphEditorPanel.cpp

running on desktop, probably don’t care about phones, assume the user is technically sophisticated enough to use the command line.

this could certainly be a hosted app using chrome.serial and chrome.fileSystem for starters.

electron looks like a good solution for subsequent packaging to desktop.

thanks a ton for the graph library roundup.

thanks, that is very helpful. i have used JUCE a bunch, it’s a good solution; maybe a bit lower-level than i’d prefer for this. had seen that demo before but somehow pushed it to the back of my mind.

i would probably take that route myself. but i kinda thought maybe someone else might enjoy doing this, and thought that perhaps JS was a more familiar tool for people and could get the job done with minimal fuss. (maybe i’m totally wrong about that.)

If you decide to go the JavaScript route, I’m happy to help.

JUCE seems very interesting as well, but I imagine there are others around here more qualified to help with that than I am.

I wish I could help, but at the moment my dissertation is eating up any remaining free time.

I was thinking about it, and an even more relevant open-source example would be the Fish environment for Shbobo’s Shnth: http://www.shbobo.net/ The Shnth is typically programmed using Shlisp, which has Lisp-based syntax. There’s also a frontend graphical interface (Fish), which is programmed in JUCE. It doesn’t have the clean, cable-based interface that the JUCE plug-in host does, but it works in the way that Aleph does. The patching environment is used to generate text, which is then compiled for Shnth use.

Two more open-source modulars:
Axoloti: https://github.com/axoloti/axoloti
WREN: http://bluehell.electro-music.com/modules/

As far as Javascript goes, there’s a JS implementation of VVVV, which has visual patching:
http://www.vvvvjs.com/

I suppose that mashing up the VVVV patching code with the text generation of the Fish environment would work quite well.

EDIT: Here’s the relevant VVVV patcher JS code: https://github.com/zauner/vvvv.js/blob/master/editors/vvvv.editors.browser_editor.js

funny - i was the one who recommended juce to peter for the frontend, back when. ( not sure why it didn’t occur to me for beekeep frontend in the first place… I was hung up on just using pure c for some reason? wanted to try gtk anyways? who knows. sorry. )

( off topic: someday still want to port the shnth opcodes to c. shlisp is fun. don’t think the port would be as hard as it seems at first glance. )

I was hoping someone with a preferred toolset for this kinda thing would just volunteer. I’ll let this rest until next weekend or something, then if it’s still up to me I’ll just start in on a nicer frontend for beekeep. Probably with juce so it can just link bees sources and json<->scn converter directly.

@jasonw22, do you have an aleph? Would be hard to work on this without one.

No, I wish I did. If anybody knows where to find one…

They come up for sale occasionally but the timing hasn’t lined up with my finances just yet.

i’ll just chime in and say I LOVE the design and functionality of fish

i don’t need faux cables
even tho UI elements like lumen and aalto’s are pretty cool

well i got antsy and went ahead and started building a JUCE frontend.

not much functionality yet, but i did make it through the not-insignificant drudgery of linking bees source in a c++ program (oh yeah, that’s why i didn’t do this in the first place.)

so now there’s a thing where it initializes the control network and can create operators (right-click for creation popup.) each operator gets a JUCE component that can be dragged around on a canvas.
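for reference, a minimal sketch of that kind of draggable operator box using juce::ComponentDragger (names are illustrative only, not the actual beekeep_juce code):

```
#include <JuceHeader.h>

// illustrative draggable "operator" box: click to grab, drag to move.
class OpBox : public juce::Component
{
public:
    explicit OpBox (const juce::String& opName) : name (opName)
    {
        setSize (120, 40);
    }

    void paint (juce::Graphics& g) override
    {
        g.fillAll (juce::Colours::darkgrey);
        g.setColour (juce::Colours::white);
        g.drawRect (getLocalBounds());
        g.drawText (name, getLocalBounds(), juce::Justification::centred);
    }

    void mouseDown (const juce::MouseEvent& e) override
    {
        dragger.startDraggingComponent (this, e);   // remember the grab offset
    }

    void mouseDrag (const juce::MouseEvent& e) override
    {
        dragger.dragComponent (this, e, nullptr);   // follow the mouse
    }

private:
    juce::String name;
    juce::ComponentDragger dragger;
};
```

a parent canvas component would own these and handle the right-click popup for creating new ones.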

lots more work of course, still open for volunteers.

on my fork for the moment, until i add stuff that actually does things:

i think it’s helpful to actually have the bees code running in the editor. using JS would require more glue.

Nice! Since you’re on JUCE, I’ll try to help here and there. I don’t have an Aleph, so I can’t really help with the guts, but I’d be happy to help out with graphics/skinning and general JUCE arcana. We do vectorized interfaces for our plug-ins (http://unfilteredaudio.com/images/sandman/sandmanretina.png). It ends up being really useful with the current trend of high resolution and high pixel density displays.

i see, that makes sense. (plugin UI looks lovely by the way.)

so far: storing all node positions as double, relative to a big canvas, seen through a viewport. but at some point pixels come into it.

obvs it would be a good idea to be disciplined and only specify drawing dimensions as proportions of real screen size, some utilities using Desktop::getMainMonitorArea() or something. ( ed: yikes, really, Desktop::getInstance().getDisplays().getMainDisplay().totalArea now?)

would be curious if you can point at any OSS projects that already employ such a toolkit.

of course, doing all this stuff is kind of why i would have liked to just use a library made for drawing graphs… oh well…

I’ll look around to see if there’s a great OSS JUCE app that employs decent interface standards.

Madrona Labs has his common code on Github:
https://github.com/madronalabs/madronalib

Of particular interest: https://github.com/madronalabs/madronalib/tree/master/source/LookAndFeel

In general, as long as you draw your knobs/visual components via code (and not through pre-rendered images), the scaling for various pixel densities happens automatically. Retina screens, for example, are treated as 2x pixel density displays with standard resolutions, so we didn’t have to write special code for our plug-ins to appear the same size on various densities. It’s more problematic, though, when you need to scale for overall area, which it sounds like you’re doing.
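As a small illustration of what “drawn via code” means here (a sketch, not code from any shipping plug-in): a rotary pointer whose geometry is derived entirely from the component’s bounds, so it stays crisp at any pixel density.

```
#include <JuceHeader.h>

// sketch of a bitmap-free knob: all geometry derived from getLocalBounds()
class VectorKnob : public juce::Component
{
public:
    void setValue (float v)  { value = juce::jlimit (0.0f, 1.0f, v); repaint(); }

    void paint (juce::Graphics& g) override
    {
        auto area   = getLocalBounds().toFloat().reduced (4.0f);
        auto radius = juce::jmin (area.getWidth(), area.getHeight()) * 0.5f;
        auto centre = area.getCentre();

        g.setColour (juce::Colours::darkgrey);
        g.fillEllipse (centre.getX() - radius, centre.getY() - radius,
                       radius * 2.0f, radius * 2.0f);

        // pointer sweeps roughly 270 degrees, like a typical rotary control
        auto angle = juce::MathConstants<float>::pi * (1.25f + 1.5f * value);
        juce::Path pointer;
        pointer.startNewSubPath (centre);
        pointer.lineTo (centre.getPointOnCircumference (radius * 0.8f, angle));

        g.setColour (juce::Colours::white);
        g.strokePath (pointer, juce::PathStrokeType (2.0f));
    }

private:
    float value = 0.0f;   // normalised 0..1
};
```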

The Introjucer is a great tool for doing layout on more static interfaces (toolbars, frames, etc.). There, you can set positions and sizes of components as relative values instead of absolutes.

I’ll take a look at the code this weekend.

actually just went ahead and made some conversion routines between screen, canvas, and pixel coordinates, basic and certainly not optimized, but seems to work fine. i can still drag the boxes around, hooray. no pixel coordinates anywhere except in paint and mouse handling routines. so i think theoretically should be able to save a canvas and all its components and have it look the same on all screens.
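very roughly, the shape of those conversions (names are made up, not the actual beekeep code): canvas positions are doubles, pixels only appear at the boundary.

```
#include <JuceHeader.h>

// sketch: node positions live in canvas units (doubles); pixel coordinates
// exist only where paint and mouse handling touch the screen. viewportPx
// would be set to the component's current bounds before converting.
struct CanvasView
{
    juce::Rectangle<double> visible { 0.0, 0.0, 1.0, 1.0 };  // visible canvas region, canvas units
    juce::Rectangle<int>    viewportPx;                      // where it is drawn, in pixels

    juce::Point<int> canvasToPixel (juce::Point<double> c) const
    {
        auto nx = (c.getX() - visible.getX()) / visible.getWidth();
        auto ny = (c.getY() - visible.getY()) / visible.getHeight();
        return { viewportPx.getX() + juce::roundToInt (nx * viewportPx.getWidth()),
                 viewportPx.getY() + juce::roundToInt (ny * viewportPx.getHeight()) };
    }

    juce::Point<double> pixelToCanvas (juce::Point<int> p) const
    {
        auto nx = (p.getX() - viewportPx.getX()) / (double) viewportPx.getWidth();
        auto ny = (p.getY() - viewportPx.getY()) / (double) viewportPx.getHeight();
        return { visible.getX() + nx * visible.getWidth(),
                 visible.getY() + ny * visible.getHeight() };
    }
};
```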

haven’t tried with 4k displays yet. will do so (i have a 4k windows/linux laptop, believe it or not, probably just about the nastiest test case for this.)

definitely not gonna be using any bitmap assets for this thing. just want to get as much usable functionality done as i have time for. nicer to do the widgets in code.

Madrona Labs has his common code on Github:
https://github.com/madronalabs/madronalib

that is very cool, thanks.

Found this thread after a day of aleph sound experiments. I spent some hours in Inkscape drawing operator panels with inputs and outputs and realized I’m doing prep work for a GUI.

What’s the shortest path to get started with all of your existing work?

honestly i did not get very far beyond:

  • making sure bees can compile under c++
  • re-acquainting myself with some juce stuff like viewports and resolution scaling

so there is this “utils/beekeep” project which at some point was just a CLI program to convert between binary scene data and a json representation. then i added a ridiculous gtk frontend and it got into its present state which is maybe a little borked. when i have time i can just roll back the gtk stuff.

there is aleph/utils/avr32_sim which provides stubs for all the missing hardware stuff.

“utils/beekeep_juce” is probably not very useful but it is there.

rick made a PD wrapper for bees also, which is maybe a better starting point.

thing is that i don’t really care about a UI (not sure who does.) it would be just good to have some way of editing bees patches offline. my preference would actually be for a little scripting environment.

my current thinking of how to implement this would be to bind bees to a lua interpreter.

but there are many options; the point is that it should be reasonably straightforward to bind bees to whatever FFI you like, if you wanted to design a UI in python or what have you.
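for a sense of scale, binding a single call through the plain lua C API looks something like this (bees_op_create and op_new are made-up stand-ins; only the lua calls are real):

```
#include <cstdio>

extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}

// hypothetical bees-side call: create an operator of the given type
static int bees_op_create (const char* opType)
{
    std::printf ("new op: %s\n", opType);
    return 0;   // would return the new operator's index
}

// lua wrapper: op_new("ADD") -> operator index
static int l_op_new (lua_State* L)
{
    const char* type = luaL_checkstring (L, 1);
    lua_pushinteger (L, bees_op_create (type));
    return 1;   // number of lua return values
}

int main()
{
    lua_State* L = luaL_newstate();
    luaL_openlibs (L);

    lua_register (L, "op_new", l_op_new);

    // a scene could then be sketched as a tiny script
    luaL_dostring (L, "local add = op_new('ADD')");

    lua_close (L);
    return 0;
}
```

connecting inputs and outputs, setting presets, and so on would just be more wrappers in the same style.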

agree with this - once you have arbitrary op insertion/deletion, patching is quite fast! scripting could be good for sharing snippets of scenes.

I agree that a realtime GUI bound to a running scene is silly. Here’s my context.

SuperCollider was my first modular computer synthesis platform. After poking through examples and doing n00b copy pasta, I put together some live performances and I’m still proud of them. But I got stuck because I couldn’t figure out how all the UGens and Classes and methods fit together. So I began drawing literal graphs on paper, like a modular synth, “patching” inputs to outputs. Then I could translate that into code, debug, and boom!

I noticed the same kind of metaphor is used in the 7 part Aleph tutorial, pencil on graph paper. Initially, I began doing the same thing, then realized this is something a computer is good for. So now I have a bunch of reusable operator “panels”. I’m gonna do the inputs and outputs today.

I’m down with Lua bindings. That could be fun.

You can already sketch out scenes in the aleph pd external (and run them to some extent!). But there’s no tool to convert a pd patch containing BEES ops into a BEES scene.

EDIT:

The thing that makes this suck somewhat is presets! pd has no concept of a preset afaik… and it’s really a very interesting concept for musical instrument control
