Hey, no worries. I think the key to understanding where we were coming from with the language design is that we wanted it to be unsurprising rather than clever, so there’s very little hidden machinery and none of the features that make the realtime crowd anxious (memory allocation, mutexes, exceptions, etc.). This does, however, mean that some styles of programming are weakly supported, so (for example) we don’t have OO, but we do have encapsulation.

The longer-term aim is that sound designers will be plugging pre-existing components together to form their audio pipelines without really knowing or caring how the internals work. A filter is a filter: it exposes these parameters, which have these ranges, and so on. They live-tweak a sound or effect until they get what they want, then export it into their project and move on to the next sound. They’re unlikely to go deeper into the underlying SOUL than seeing a graphical representation of the audio graph and using a search tool to find pre-existing components. In such a world we’ll be able to move on from worrying whether the syntax for a vector is quite right :slight_smile:
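To give a flavour of what that looks like at the code level, here’s a very rough, untested sketch of a filter component exposing one annotated parameter, plugged into a graph (treat the exact annotation keys and syntax as illustrative rather than gospel):

```
// Untested sketch - annotation keys and details may differ from current SOUL
processor SimpleLowpass
{
    input  stream float audioIn;
    output stream float audioOut;

    // A host or editor reads this annotation to present a knob with a range
    input event float cutoff [[ name: "Cutoff", min: 20.0f, max: 20000.0f, init: 1000.0f, unit: "Hz" ]];

    event cutoff (float newCutoff)
    {
        // crude one-pole coefficient, purely for illustration
        coeff = min (1.0f, newCutoff / float (processor.frequency));
    }

    float coeff = 0.05f;
    float state = 0.0f;

    void run()
    {
        loop
        {
            state += coeff * (audioIn - state);
            audioOut << state;
            advance();
        }
    }
}

graph SignalChain  [[ main ]]
{
    input  stream float audioIn;
    output stream float audioOut;

    let lowpass = SimpleLowpass;

    connection
    {
        audioIn         -> lowpass.audioIn;
        lowpass.audioOut -> audioOut;
    }
}
```

The point being that the designer only ever deals with the graph and the annotated parameters; whatever DSP lives inside the processor is the encapsulated bit.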

I don’t think the lack of OO etc. is a bad thing (I know it was the panacea for a while, but hey, so has everything been :wink: ) and I strongly agree you can hide a lot of the nonsense. There’s way too much of older programmers having learned all this the hard way and not wanting it to be easy for those who follow, IMO (with some very notable exceptions, obviously), so it’s good to see work on making this all less esoteric.

I’m most interested right now in your ideas about leveraging hardware, which is why I’m following your journey closely…

Looks like we will hear more at this year’s ADC. (Hmm, I really must try to attend this one day!)

https://mailchi.mp/juce/adc19-spotlight-on-embedded-audio?e=8954c4b7a2

Yes indeed, ADC has a SOUL workshop this year, and we’ve got some exciting stuff to demo. I believe the workshop will be recorded, so it should be available on the ADC YouTube channel (I don’t think it’ll be live-streamed), and you’ll be able to take a look.

We’ve just released a public beta of the compiler and runtime, with a command line tool - check it out at https://github.com/soul-lang/SOUL

Also, our website (https://soul.dev) has been updated to include support for our new soulpatch format, which is a very lightweight plugin format. The language now has some support for sample playback, along with various enhancements to allow parameters to be handled more naturally in code.
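For anyone curious, a .soulpatch is basically just a small JSON manifest pointing at the SOUL source and describing the patch - roughly along these lines (written from memory, so check the examples in the repo and on soul.dev for the exact field names):

```
{
  "soulPatchV1": {
    "ID":          "com.example.demo-patch",
    "version":     "1.0",
    "description": "A simple demo patch",
    "category":    "effect",
    "source":      "DemoPatch.soul"
  }
}
```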


Just piping up here briefly, having spotted this thread… I’m doing a write-up of some of the main themes of ADC '19 for Sound On Sound mag (which from our POV includes SOUL and MIDI 2.0), and am currently swapping emails with Jules to try and understand exactly how it all hangs together. (SOUL on an Akai Force seems fun, though I didn’t get a chance to actually play with it.) Am happy to hear people’s thoughts, and to discuss.

I’ll look for the workshop recording online - I somehow managed to miss that.

Has it been demonstrated on the Force? That’s extremely interesting to me…

Yep (though I didn’t see the demo - this is all what Jules has told me): SOUL with the LLVM back-end generating native code on the Force. Put some SOUL source code on an SD card, plug it in and you’re away.

Sounds great - I’d love to have that on my MPC Live… I was really hoping for “attach keyboard and go to a new screen” but it sounds like a good start! Was it Jules himself who’s done it?

About the SOUL support on the Force: this work was done by Akai, and it uses the publicly available SOUL drivers that we’ve released. It’s still an early tech demonstrator, but they are, I believe, looking to get it to their beta testers sometime early next year.

I believe it’ll also be available for the MPC Live…

How it works: you put some SOUL patches on a USB stick and insert it into the Force, then when you view plugins/inserts you get an extra ‘SOUL Patch’ option where you can select a patch to add. Simple as that!

They have been working on exposing their in-house GUI toolkit, which would mean it’d be possible to make a nicer GUI on top of a soulpatch.

That’s awesome. Would be a really, really fantastic addition to these machines.

Here’s Stephane Letz from the Faust project having a play with the Force during ADC. He chose a Faust physical model of a clarinet, exported it to SOUL, and got it running on the Force. It was pretty good to see, and it showed the potential.

There were also some Belas running SOUL in the background, as Eurorack modules.

[Photo: DSCF3858 by Cesare Ferrari, on Flickr]

Is the source code available for the Bela Peppers?

Not the source code, but the binaries for the soul command with full Bela low-latency support are included in the Linux release - https://github.com/soul-lang/SOUL/releases/latest

When run on the Bela, the I/O looks like 10 inputs and 10 outputs, with the first two being the audio I/O. Any parameters are automatically mapped to the 8 control knobs, so an existing patch with parameters gets the first 8 of them mapped to the knobs, and they can be modulated by the remaining 8 inputs. Patches with more than 8 parameters? Well, you only get the first 8 to play with, sorry :frowning:
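So, for example, a patch declaring a couple of parameters like this would (taking ‘first’ to mean declaration order) get them onto knobs 1 and 2 - a quick untested sketch, so don’t take the exact syntax as gospel:

```
// Untested sketch: the first two parameters declared should end up on knobs 1 and 2
processor Drive
{
    input  stream float audioIn;
    output stream float audioOut;

    input event float gain [[ name: "Gain", min: 0.0f, max: 4.0f, init: 1.0f ]];  // knob 1
    input event float mix  [[ name: "Mix",  min: 0.0f, max: 1.0f, init: 0.5f ]];  // knob 2

    event gain (float g)  { currentGain = g; }
    event mix  (float m)  { currentMix = m; }

    float currentGain = 1.0f;
    float currentMix  = 0.5f;

    void run()
    {
        loop
        {
            let dry = audioIn;
            let wet = tanh (dry * currentGain);
            audioOut << dry * (1.0f - currentMix) + wet * currentMix;
            advance();
        }
    }
}
```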

We were thinking of using the buttons on the board to switch between patches, which would be very cool, but we haven’t implemented this yet. Likewise, the LEDs could indicate something like the loaded patch or act as a level meter, but again, it’s vanilla at the mo.


The binary is just the soul executable,
so we can alter the SOUL patch to change parameter order etc., I assume?

Any idea when the source code will be released?

Anyway, it seems like a fun way to play with SOUL… given I’ve a couple of Peppers, and a Salt :slight_smile:

I’m a little confused about what is actually released under which licenses. I understand there’s both a SOUL language and a runtime… or several runtimes?

A very important question for me is, is it possible to compile SOUL (to anything - an IR or C++) using only tools with sources available under free licenses, or do I need to get the binary release? And if I do, will that change?

1 Like

Answering your question about the licenses is a little involved.

The whole system comprises a front-end language (SOUL), an intermediate representation (HEART), and a runtime to JIT the code (no clever name for this; we just call it a SOUL runtime). The runtime takes in HEART, not SOUL, and JITs it, or we can convert it to C++ for integration into more conventional projects.

As of now, the open-source bit is the parser and compiler, which converts SOUL into HEART, so the language definition is basically open source as well. We haven’t opened up the runtime, and this is for a number of reasons: boring ones, like not yet being sure whether we’re going to license this code to device manufacturers or open it, through to things like there being lots of churn in that part of the code while we’re still working through what we want the interfaces to look like.

I’m guessing from your comment that you don’t want to commit to something which requires closed source components, and I totally understand that. We might see some movement in this direction soon, with either this implementation being opened up, or a reference implementation being released.

Meanwhile, the closed source binaries have been released with very permissive licenses - you can use them for fun or in commercial products.


No idea on the source code - certainly, for something like Bela, I think it might be useful, as you’ll probably want to enhance it in interesting ways to work well with the device’s UI.

Well, do have a play with the Belas - I’m a big fan. They aren’t hugely powerful, but they seem to be very robust and easy to work with.

Similar to the Bela, I’ve just received an Elk OS board, so that’s my Christmas sorted!

Yeah, it’d feel really weird to me to write in a language where someone else could take the compiler away. I used to be well into Reaktor before I moved to Linux for my main desktop, and that was a lot of work I had to walk away from; even if I got a new Mac, I’d have to upgrade. I’d rather not be in a situation like that again.

I can accept that you want to keep some parts closed for easier monetization (and as a dev myself I’m also intimately familiar with “good enough for production but not good enough for GitHub”!), but I’d be much more comfortable using SOUL if I knew I would always have the infrastructure to e.g. compile it into Faust or C++, even if I would predominantly use it on a proprietary runtime (Akai) for the moment.


Exactly the reasons I’d like to see this open-sourced… so that the community can extend it, and take it to new platforms that might not interest Roli.

As a developer, I’m with @rvense - if I’m investing time in something new like SOUL, I’d like to know I can move it to new platforms if needed, or add features that I require.
Pepper is a good example here… we now have to wait for Roli to extend it to support the LEDs, buttons, or things like I2C.

Of course, short term it’s not an issue.
I completely understand the need for things to mature before being opened up… I’d do the same. I also respect that it may take a little time for Roli to determine how they monetise this, to help pay for its development and support.

Anyway, it’s great to see SOUL progressing and getting nearer to a release.


The problem is that you are seeing this as a proprietary programming language, and while that remains the case, worrying about the implementation being closed is a valid concern.

If, say, there were an agreed ISO standard for SOUL, and the major phone/console manufacturers provided built-in support for it, and it became a de facto checklist item for things that support audio, then I imagine you’d have no such concerns, even though the implementations were still closed source. Basically, a single supplier is the issue, not the closed nature of the implementation.

Looking at, say, the graphics world, the fact that Nvidia and AMD keep their graphics hardware drivers closed source doesn’t mean people shy away from using Vulkan, as the choice of vendor for end users keeps them honest.

If we look at SOUL through the same lens, the right strategy very much depends on whether we see this being adopted and available in major OSes, or whether we see it as one company’s interesting tech. I’m shooting for the major OSes supporting it, but it’s not going to happen overnight, as you can imagine. You’ve got to think big though, haven’t you?