SOUL - SOUnd Language (JUCE/Roli)

Thanks @zebra for this detailed feedback!

We are always interested in getting precise comments and improvement suggestions. If we don’t get them on the Faust mailing list, we cannot hope to make progress!

  • For instance, about the virtual issue: could you be more specific and give some concrete examples?
  • About the powf thing: have you tried the --fast-math <file> option, which lets you “patch” some potentially CPU-costly functions and replace them with your own?
  • About the SIMD thing: our general strategy is to have the Faust compiler generate different “shapes” of C++ code (or LLVM IR, etc.), one big DSP loop in scalar (= default) mode, separate sub-loops in -vec mode, and so on, so that the underlying C++ compiler (or LLVM JIT) can then auto-vectorize it. This works pretty well AFAICS. There are also tools to help you discover the best compilation parameters: look at the faustbench and faustbench-llvm tools here:
    https://github.com/grame-cncm/faust/tree/master-dev/tools/benchmark
  • About the functional-versus-imperative discussion: sure, the functional approach does not always fit, and possibly does not scale yet in our current model. We certainly have to improve all this in the future, but again, having descriptions of precise use cases with their specific needs really helps!
3 Likes

@sletz thanks to you for the helpful response. i will for sure 1) explore those excellent tips and 2) attempt to boil down any remaining issues to a more precise form, and submit them through official channels. (er… as time and IP permit.) :mailbox_with_mail:

This won’t be in V1, but it’s certainly something we’d consider adding later (or that the community could help to add) if there’s demand for it.

Yeah, it’s quite fundamental to our design that a process can have many streams running at different rates, with the interpolation handled invisibly. We’re also planning to have windowed streams for frequency-domain stuff (probably a V2 feature).

Just following on from some things Jules commented on: SOUL supports oversampling and undersampling within a graph too (not just inputs and outputs at different rates). So if you have a processor algorithm that produces aliasing (say, a waveshaper or a naive oscillator), you can mark it to be oversampled, say 4x, and the system will automatically upsample/downsample the streams to the processor. So it’s very easy to experiment with oversampling within the graph.

We also support undersampling, say to 1/64th of the sample rate, which maps well to control-rate logic. The interpolation strategies have sensible defaults (bandlimited interpolation for the oversampled cases, linear interpolation for undersampled streams), but you can specify alternative strategies if you know best.
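To make that concrete, here’s roughly how it reads in source. This is my own toy sketch rather than an official example (the `* 4` / `/ 64` rate syntax is the one described in the SOUL overview docs, and `MyLFO` in the comment is hypothetical):

```
// A deliberately naive, alias-prone sawtooth oscillator.
processor NaiveSaw
{
    output stream float out;

    void run()
    {
        float phase;

        loop
        {
            // naive (alias-prone) sawtooth at 220 Hz
            out << (phase * 2.0f - 1.0f) * 0.2f;
            phase += float (220.0 / processor.frequency);

            if (phase >= 1.0f)
                phase -= 1.0f;

            advance();
        }
    }
}

graph Main
{
    output stream float audioOut;

    // Run the oscillator 4x oversampled: the runtime up/downsamples the
    // streams around it (band-limited by default). A control-rate
    // processor could likewise be declared as, say, MyLFO / 64.
    let osc = NaiveSaw * 4;

    connection
    {
        osc.out -> audioOut;
    }
}
```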

One other point I’d like to make is that SOUL could be quite handy for non-audio use cases, such as programming microcontrollers. It’s not something I’ve tried, but it’s on the list of things to think through (say, targeting an Arduino).

2 Likes

I will be interested to see how SOUL works with supporting various core-embedded DSPs, predominantly because these DSPs (in the case of the kind of chips you find in phones) are often used for heavy lifting in decoding operations and such. Quite often there is an API available if you speak to the correct engineer on a good day, but the documentation isn’t always great. The same is true of multimedia embedded targets. Take the NXP IMX8M, for example: a chip used in a range of ‘connected’ home audio products. There is a toolkit for using the internal DSP (subject to a volume of chip sales and a few NDAs with the appropriate parties), but how to utilise the processors is neither clear nor trivial, because there is already a proprietary audio core library running on it that belongs to a conglomerate separate from NXP. In fact, companies like DSP Concepts have built an industry out of solving this very problem Jules describes, in the context of automotive audio systems.

As a DSP engineer, I would advocate a language like SOUL that allows DSPs to be utilised in user-space applications, allowing for very high-quality integration and efficient processing.
Recognising that the core of working with these processors demands hand assembler-based optimisation, a fundamental understanding of the DSP’s architecture, and the intricacies of each DSP’s API (they are all different), this really looks like a massive but worthwhile challenge.

I too look forward to the demos next year. And even if the hardware solution that comes with SOUL is a Bela, an SC589 EZ-KIT, or a GTX 1080, I can’t wait to make audio apps with it.

P.S. That SHARC board is great, but a total monster, and there is quite a steep learning curve to get into the SC589 processor. Great if you can run Faust on one (I have browsed through some example code), but CCES is not a cheap option for the home-gamer. I would love it if AD made their DSP ecosystem more accessible, but it’s the same old story of support vs. finance. The AD Sigma series DSPs are a bit more accessible, but it’s drag-and-drop graph programming only until you start selling in volume.

5 Likes

good question
I can’t help loving the notion,
but the code still looks kind of like a hangover from 1990s C++

My jaw dropped when he said in his first example “it can’t be simpler than that”.
// mumbling at the video: “oh yes it can! please make it leaner, more expressive”

They are not alone, and kudos & good luck:
we are heading into interesting times

LLVM is catalysing incredible changes all over lately
2018 packed in new ideas, seeds of fresh sophistication,
redesign/rethinking for simpler, better, more modular clarity hah!
feels to me like the beginning of a new, very creative era
1920s >>> 2020s coming

for example, check out what’s cooking and what’s becoming possible with these three:
Julia, Houdini, Blackmagic

Julia Language
https://julialang.org/

5th JuliaCon London, August 2018


watch the videos (playlist):

JuliaCon 2018 | Keynote - The Future of Microprocessors | Sophie Wilson

JuliaCon 2018 | TIM: Unified Large-scale Forecasting system in Julia | Ján Dolinský

JuliaCon 2018 | LightFields.jl: Fast 3D image reconstruction for VR applications | Hector Loarca

Houdini
https://www.sidefx.com/

Houdini 17 Banshee. Launch Presentation [Autumn 2018]

BlackMagic eGPU
https://www.blackmagicdesign.com/products/blackmagicegpu/
https://www.blackmagicdesign.com/products/davinciresolve/

1 Like

I did the Faust quick-start tutorial and it was simple and informative. Runs in a web browser! I really like the syntax and the clarity of the visual representation of the code. It strikes me as a good middle ground between a language built to make music, like SuperCollider, and a box-&-patchcord canvas like PD.

came across this little gadget

got an embedded GPU - I know that GPUs and DSP are a bit far apart still - but still… I’m sure interesting things could be done with it - maybe a little convolution box!

Nebula box? possibly? I’m not sure if they’re still doing the CUDA thing, actually

Coincidentally, since this thread has just popped up again…

TBH I’ll need to watch the keynote again to remind myself of the objectives - it seems way more C++-level than something like Faust, for example

2 Likes

Yet the syntax was also designed to welcome a broader audience, i.e. people with a JavaScript background won’t feel too much in uncharted territory; OTOH the let and this keywords do feel weird to type if you’re used to C++ :slightly_smiling_face:

Yeah - I’m not worried about the syntax - I write in half a dozen languages regularly and quite frankly C++ seems really dated these days - at some point I’m going to see if I can use Rust to write this stuff

My point, though, was that the SOUL code looks a lot like the C++ DSP code I happen to be working on - it’s at a similar level. OTOH, Faust is more like Max/PD/SuperCollider in that you are chaining processing blocks together.

Personally I would have thought that that level would be better in general - with the ability to dive down to raw code when you needed it. Perhaps that’s the goal - like I say, I need to go back to the keynote - I was just surprised it seemed pretty low-level, was all

Sure, I agree - I really like the functional paradigm, and Faust does a great job of expressing DSP concepts simply IMO; seems like there’ll even be a Faust-to-SOUL backend at some point

For those of you wanting to have a dabble with the language, we released a web playground for SOUL yesterday, so if you head over to https://soul.dev/examples you can load these examples into a playground and run them in the browser.
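If you want something even smaller than the shipped examples to paste in first, a bare-bones processor along these lines should make sound in the playground. A minimal sketch of my own, assuming SOUL’s processor.frequency and twoPi built-ins:

```
// Minimal 440 Hz sine: one output sample written per loop iteration.
processor Sine
{
    output stream float audioOut;

    void run()
    {
        float phase;
        let increment = float (440.0 * twoPi / processor.frequency);

        loop
        {
            audioOut << 0.2f * sin (phase);
            phase += increment;

            if (phase >= float (twoPi))
                phase -= float (twoPi);

            advance();
        }
    }
}
```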

As for earlier mentions of preferring to program in Faust, Stephane has put a Faust-to-SOUL translator together, https://github.com/grame-cncm/faust/tree/master-dev/architecture/soul, so you can use the SOUL backend with Faust programs.

4 Likes

cool

I don’t mean to be critical (goodness knows I get enough of that myself: release something, and then all you hear is “why didn’t you do X, Y or Z instead?”) - I was more just interested in the design choices. Reading a bit more carefully, I found this, which is helpful: https://github.com/soul-lang/SOUL/blob/master/docs/SOUL_Overview.md

Having been using Rust of late, the safety aspect particularly interests me. C/C++ seem very dated in that respect now - even with safe pointers etc. - better to have that safety as a first-class aspect of the language. (And I “grew up” with pointers; I am comfortable with them, and have enough practice coding with them that I rarely make pointer-type errors - yet I still think they should be hidden by the compiler, which is far better placed to reason about them than the developer.)

Hey, no worries. I think the key to understanding where we were coming from with the language design is that we wanted to make it unsurprising rather than clever, so there’s little hidden stuff, and no features that make the realtime crowd anxious (memory allocation, mutexes, exceptions, etc.). This does, however, mean that some styles of programming are weakly supported - so, for example, we don’t have OO, but we do have encapsulation.
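To illustrate for anyone reading along (a sketch of my own, with arbitrary endpoint names and smoothing constant): the gain state below is private to the processor, and the outside world can only reach it through the declared endpoints.

```
processor SmoothedGain
{
    input  stream float audioIn;
    output stream float audioOut;
    input  event  float gainDb;   // the only way in: no public fields or methods

    float targetGain = 1.0f;      // processor state, invisible from outside

    event gainDb (float newLevelDb)
    {
        targetGain = pow (10.0f, newLevelDb * 0.05f);   // dB -> linear gain
    }

    void run()
    {
        float currentGain;

        loop
        {
            // one-pole smoothing towards the target: no allocation, no locks
            currentGain += 0.001f * (targetGain - currentGain);
            audioOut << audioIn * currentGain;
            advance();
        }
    }
}
```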

The longer-term aim is that sound designers will plug pre-existing components together to form their audio pipelines without really knowing or caring how the internals work. A filter is a filter: it exposes these parameters, which have these ranges, etc., so they live-tweak a sound/effect till they get what they want, then export it into their project and move on to the next sound. They are unlikely to go deeper into the underlying SOUL than seeing a graphical representation of the audio graph, and they’ll use a search tool to find pre-existing components. In such a world we’ll be able to move on from worrying about whether the syntax for a vector is quite right :slight_smile:
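That world is already visible in the endpoint annotations: an endpoint can carry [[ ]] metadata (name, range, default) that a host or graphical patcher can surface without looking inside the component. A sketch reusing the SmoothedGain from the previous post; the annotation keys here follow the pattern in the published examples, but treat the details as illustrative:

```
graph SimplePatch
{
    input  stream float audioIn;
    output stream float audioOut;

    // A designer's tool can read this and show a "Gain" knob from -60 to +12 dB,
    // never touching the smoothing code behind it.
    input event float gainDb [[ name: "Gain", unit: "dB", min: -60.0f, max: 12.0f, init: 0.0f ]];

    let gain = SmoothedGain;   // a pre-existing component; internals irrelevant here

    connection
    {
        audioIn       -> gain.audioIn;
        gainDb        -> gain.gainDb;
        gain.audioOut -> audioOut;
    }
}
```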

I don’t think the lack of OO etc. is a bad thing (I know it was the panacea for a while, but hey, so has everything been :wink:) and I strongly agree you can hide a lot of the nonsense. There is way too much, IMO, of older programmers having learned all this the hard way and not wanting it to be easy for those who follow (with some very notable exceptions, obviously), so it is good to see work on making all this less esoteric.

I’m most interested right now in your ideas about leveraging hardware, which is why I’m following your journey closely…