came across this little gadget

Got an embedded GPU - I know GPUs and DSPs are still a bit far apart - but still… I’m sure interesting things could be done with it - maybe a little convolution box!

A Nebula box, possibly? I’m not sure if they’re still doing the CUDA thing, actually.

Coincidentally, since this thread has just popped up again…

TBH I’ll need to watch the keynote again to remind myself of the objectives - it seems way more C++-level than something like Faust, for example.


Yet the syntax was also designed to welcome a broader audience - i.e. people with a JavaScript background won’t feel too much in uncharted territory. OTOH, the `let` and `this` keywords do feel weird to type if you’re used to C++ :slightly_smiling_face:

Yeah - I’m not worried about the syntax - I write in half a dozen languages regularly, and quite frankly C++ seems really dated these days. At some point I’m going to see if I can use Rust to write this stuff.

My point, though, was that the SOUL code looks a lot like the C++ DSP code I happen to be working on - it’s at a similar level. OTOH, Faust is more like Max/PD/SuperCollider in that you are chaining processing blocks together.

Personally I would have thought that that level would be better in general - with the ability to dive down to raw code when you need it. Perhaps that’s the goal - like I say, I need to go back to the keynote - I was just surprised it seemed pretty low-level, was all.

Sure, I agree - I really like the functional paradigm, and Faust does a great job at expressing DSP concepts simply IMO. It seems like there’ll even be a Faust-to-SOUL backend at some point.

For those of you wanting to have a dabble with the language, we released a web playground for SOUL yesterday, so if you head over to https://soul.dev/examples you can load these examples into a playground and run them in the browser.

As for earlier mentions of preferring to program in Faust, Stephane has put a Faust to SOUL translator together, https://github.com/grame-cncm/faust/tree/master-dev/architecture/soul, so you can use the SOUL backend with Faust programs. For example, here’s a Faust example:


cool

I don’t mean to be critical (goodness knows I get enough of that myself: release something and then all you hear is “why didn’t you do X, Y or Z instead?”) - I was more just interested in the design choices. Reading a bit more carefully I found https://github.com/soul-lang/SOUL/blob/master/docs/SOUL_Overview.md, which is helpful.

Having been using Rust of late, the safety aspect particularly interests me. C/C++ seem very dated in that respect now, even with safe pointers etc. - better to have that safety as a first-class aspect of the language. (And I “grew up” with pointers: I’m comfortable with them, and have enough practice coding with them that I rarely make pointer-type errors - yet I still think they should be hidden by the compiler, which is far better placed to reason about them than the developer.)

Hey, no worries. I think the key to understanding where we were coming from with the language design is that we wanted to make it unsurprising rather than clever, so there’s little hidden machinery, and none of the features that get the realtime crowd anxious (memory allocation, mutexes, exceptions, etc.). This does, however, mean that some styles of programming are weakly supported - so, for example, we don’t have OO, but we do have encapsulation.
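For a flavour of what “unsurprising” looks like in practice, here’s a minimal processor sketch in the style of the public SOUL overview docs - the name `Gain`, the endpoint names, and the fixed gain value are just illustrative, not from the thread:

```soul
// A tiny gain processor: no heap allocation, no locks, no exceptions -
// just a stream in, a stream out, and a run() loop processing one frame
// per advance() call.
processor Gain
{
    input  stream float audioIn;
    output stream float audioOut;

    void run()
    {
        loop
        {
            audioOut << audioIn * 0.5f;  // halve the incoming signal
            advance();                   // step to the next frame
        }
    }
}
```

Everything the processor owns is encapsulated in its own state; there’s no way for it to reach outside and surprise the realtime thread.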

The longer-term aim is that sound designers will plug pre-existing components together to form their audio pipelines without really knowing or caring how the internals work. A filter is a filter: it exposes these parameters, which have these ranges, etc., so they live-tweak a sound/effect till they get what they want, then export it into their project and move on to the next sound. They are unlikely to go deeper into the underlying SOUL than seeing a graphical representation of the audio graph and using a search tool to find pre-existing components. In such a world we’ll be able to move on from worrying about whether the syntax for a vector is quite right :slight_smile:
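That “plugging components together” idea maps onto SOUL’s graph construct. A hedged sketch, again following the style of the public overview docs - the component names (`Gain`, `Lowpass`) and their endpoint names are made up for illustration:

```soul
// A graph wires pre-existing processors together declaratively -
// no processing code of its own, just instances and connections.
graph SimpleChain
{
    input  stream float audioIn;
    output stream float audioOut;

    let
    {
        gain   = Gain;      // hypothetical pre-built components
        filter = Lowpass;
    }

    connection
    {
        audioIn         -> gain.audioIn;
        gain.audioOut   -> filter.audioIn;
        filter.audioOut -> audioOut;
    }
}
```

This is the level a sound designer would see as a visual graph; the processors behind each box are where the raw DSP lives.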

I don’t think the lack of OO etc. is a bad thing (I know it was the panacea for a while, but hey, so has everything else been :wink: ) and I strongly agree you can hide a lot of the nonsense. There is way too much of older programmers having learned all this the hard way and not wanting it to be easy for those who follow, IMO (with some very notable exceptions, obviously), so it is good to see work on making this all less esoteric.

I’m most interested right now in your ideas about leveraging hardware which is why I’m following your journey closely…

Looks like we’ll hear more at this year’s ADC. (Hmm, I really must try to attend one day!)

https://mailchi.mp/juce/adc19-spotlight-on-embedded-audio?e=8954c4b7a2

Yes indeed - ADC has a SOUL workshop this year, and we’ve got some exciting stuff to demo. I believe the workshop will be recorded (though I don’t think it’ll be live-streamed), so it should end up on the ADC YouTube channel for you to take a look.

We’ve just released a public beta of the compiler and runtime, with a command line tool - check it out at https://github.com/soul-lang/SOUL

Also, our website (https://soul.dev) has been updated to include support for our new soulpatch format, which is our very lightweight plugin format. The language has now got some support for sample playback, and various enhancements to allow parameters to be handled more naturally in code.
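For anyone curious what a soulpatch looks like on disk: it’s described by a small JSON manifest that sits alongside the SOUL source. A minimal sketch - the field values here are illustrative, and soul.dev should be treated as the authoritative reference for the format:

```json
{
    "soulPatchV1":
    {
        "ID":          "com.example.gain",
        "version":     "1.0",
        "description": "A simple gain patch",
        "source":      "Gain.soul"
    }
}
```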


Just piping up here briefly, having spotted this thread… I’m doing a write-up of some of the main themes of ADC '19 for Sound On Sound mag (which from our POV includes SOUL and MIDI 2.0), and am currently swapping emails with Jules to try and understand exactly how it all hangs together. (SOUL on an Akai Force seems fun, though I didn’t get a chance to actually play with it.) Am happy to hear people’s thoughts, and to discuss.

I’ll look for the workshop recording online - I somehow managed to miss that.

Has it been demonstrated on the Force? That’s extremely interesting to me…

Yep (though I didn’t see the demo - this is all from what Jules has told me): SOUL with the LLVM back-end generating native code on the Force. Put some SOUL source code on an SD card, plug it in, and you’re away.

Sounds great - I’d love to have that on my MPC Live… I was really hoping for “attach keyboard and go to a new screen”, but it sounds like a good start! Was it Jules himself who did it?

About the SOUL support on the Force: this work was done by Akai, using the publicly available SOUL drivers that we’ve released. It’s still an early tech demonstrator, but I believe they are looking to get it to their beta testers sometime early next year.

I believe it’ll also be available for the MPC Live…

How it works: you stick some SOUL patches on a USB stick and insert it into the Force; then, when you view plugins/inserts, you get an extra ‘SOUL Patch’ option where you can select a plugin to add. Simple as that!

They have been working on exposing their in-house GUI toolkit, which would mean it’d be possible to make a nicer GUI on top of a soulpatch.

That’s awesome. Would be a really, really fantastic addition to these machines.

Here’s Stephane Letz from the Faust project having a play with the Force during ADC. He chose a Faust physical model of a clarinet, exported it to SOUL, and got it running on the Force. It was pretty good to see, and showed the potential.

There were also some Belas running SOUL in the background, as eurorack modules.

DSCF3858 by Cesare Ferrari, on Flickr

Is the source code available for the Bela patches?