SOUL - SOUnd Language (JUCE/ROLI)


Thanks, @zebra, for this detailed feedback!

We are always interested in precise comments and improvement suggestions. If we don’t get them on the Faust mailing list, we cannot hope to make progress!

  • for instance, about the virtual issue, could you be more specific and give some concrete examples?
  • about the powf issue: have you tried the --fast-math <file> option, which lets you “patch” possibly CPU-costly functions and replace them with your own?
  • about the SIMD issue: our general strategy is to have the Faust compiler generate different “shapes” of C++ code (or LLVM IR, etc.): one big DSP loop in scalar (= default) mode, separate sub-loops in -vec mode, and so on, so that the underlying C++ compiler (or LLVM JIT) can then auto-vectorize it. This works pretty well AFAICS. There are also tools to help you discover the best compilation parameters: look at the faustbench and faustbench-llvm tools here:
  • about the functional-versus-imperative discussion: sure, the functional approach does not always fit, and possibly does not scale yet in our current model. We certainly have to improve all this in the future, but again, having descriptions of precise use cases with their specific needs really helps!
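On the powf point above: --fast-math lets you supply your own implementations of costly libm functions (the exact file format expected by the Faust compiler is described in the Faust manual). As a hedged sketch of the idea only, here is what a cheaper powf substitute might look like; the name fast_powf is illustrative, not part of Faust:

```cpp
#include <cmath>

// Sketch of a "fast math" replacement in the spirit of --fast-math.
// For x > 0, powf(x, y) == exp2f(y * log2f(x)); exp2f/log2f are often
// cheaper than powf on audio workloads, and on some targets could be
// swapped for polynomial approximations if more speed is needed.
static inline float fast_powf(float x, float y)
{
    return exp2f(y * log2f(x));
}
```

Whether this wins anything depends on the target libm, so it is worth benchmarking before adopting it.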
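On the SIMD point above, a toy illustration of the two code “shapes” described (a gain plus one-sample delay). Real Faust output differs in detail and these function names are made up; the point is only that splitting the recurrence-free work into its own sub-loop gives the C++ compiler something it can auto-vectorize:

```cpp
#include <cstddef>

// Scalar mode: one big loop, one sample at a time.
void compute_scalar(const float* in, float* out, float gain,
                    float& prev, size_t n)
{
    for (size_t i = 0; i < n; ++i) {
        out[i] = gain * in[i] + prev;
        prev   = in[i];
    }
}

// "-vec"-style mode: the recurrence-free multiply is hoisted into its
// own sub-loop (trivially vectorizable); only the stateful part of the
// computation stays serial.
void compute_vec(const float* in, float* out, float gain,
                 float& prev, size_t n)
{
    for (size_t i = 0; i < n; ++i)    // auto-vectorizable
        out[i] = gain * in[i];

    for (size_t i = 0; i < n; ++i) {  // serial recurrence
        out[i] += prev;
        prev    = in[i];
    }
}
```

Both produce identical output; only the loop structure offered to the optimizer changes.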


@sletz thanks to you for the helpful response. i will for sure 1) explore those excellent tips and 2) attempt to boil down any remaining issues to a more precise form, and submit through official channels. (er… as time and IP permits.) :mailbox_with_mail:


This won’t be in V1, but certainly something we’d consider adding later (or the community could help to add) if there’s demand for it.

Yeah, it’s quite fundamental to our design that a process can have many streams running at different rates, and the interpolation is handled invisibly. We’re also planning to have windowed streams for frequency domain stuff (probably a V2 feature).


Just following on from some things Jules commented on, SOUL supports oversampling and undersampling within a graph too (not just inputs and outputs at different rates). So, if you have a processor whose algorithm produces aliasing (for example a waveshaper or a naive oscillator), you can mark it to be oversampled, say 4x, and the system will automatically upsample/downsample the streams going into and out of that processor. So it’s very easy to experiment with oversampling within the graph.
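To make the plumbing concrete, here is a hand-written C++ sketch of what a 4x-oversampled waveshaper stage does, under simplifying assumptions: linear interpolation up and a plain average down, which are deliberately cruder than the bandlimited defaults the graph would give you. The function name is illustrative, not SOUL API:

```cpp
#include <cmath>
#include <vector>

// Naive 4x oversampled waveshaping: interpolate up, run the
// nonlinearity at the high rate, then average (a crude lowpass)
// back down to the stream rate.
std::vector<float> oversampled_shape(const std::vector<float>& in, int factor = 4)
{
    std::vector<float> out;
    out.reserve(in.size());

    float prev = 0.0f;
    for (float x : in) {
        float acc = 0.0f;
        for (int k = 1; k <= factor; ++k) {
            float t  = static_cast<float>(k) / factor; // position between samples
            float up = prev + t * (x - prev);          // linear upsample
            acc += std::tanh(up);                      // nonlinearity at 4x rate
        }
        out.push_back(acc / factor);                   // decimate by averaging
        prev = x;
    }
    return out;
}
```

The appeal of doing this in the graph instead is exactly that none of this boilerplate has to be written or tuned by hand.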

We also support undersampling, say 1/64th of the sample rate, which maps well to control rate logic. The interpolation strategies have sensible defaults (so bandlimited interpolation for the oversampled cases, and linear interpolation for undersampled streams) but you can specify alternative strategies if you know best.
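The undersampled case can be sketched the same way: assuming a control value computed once every `div` samples, the default linear-interpolation strategy amounts to ramping each audio-rate block toward the new control value. This is an illustrative hand-rolled version, not SOUL output:

```cpp
#include <vector>

// Expand a control-rate stream (one value per `div` audio samples)
// back to audio rate with linear interpolation: each block of `div`
// output samples ramps from the previous control value to the new one.
std::vector<float> upsample_control(const std::vector<float>& control, int div)
{
    std::vector<float> audio;
    audio.reserve(control.size() * div);

    float prev = control.empty() ? 0.0f : control.front();
    for (float c : control) {
        for (int k = 1; k <= div; ++k) {
            float t = static_cast<float>(k) / div;
            audio.push_back(prev + t * (c - prev)); // ramp toward new value
        }
        prev = c;
    }
    return audio;
}
```

Ramping like this is what avoids zipper noise when a parameter only updates every 64 samples.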

One other point I’d like to make is that SOUL could be quite handy for non-audio use cases, such as programming microcontrollers. It’s not something I’ve tried, but it’s on the list of things to think through (say, targeting an Arduino).


I will be interested to see how SOUL works with supporting various core-embedded DSPs, predominantly because these DSPs (in the case of the kind of chips you find in phones) are often used for heavy lifting in decoding operations and the like. Quite often there is an API available if you speak to the correct engineer on a good day, but the documentation isn’t always great.

The same is true of multimedia embedded targets. Take the NXP i.MX 8M, for example: a chip used in a range of ‘connected’ home audio products. There is a toolkit for using the internal DSP (subject to a volume of chip sales and a few NDAs with the appropriate parties), but how to utilise the processor is neither clear nor trivial, because there is already a proprietary audio core library running on it that belongs to a conglomerate separate from NXP. In fact, companies like DSP Concepts have built an industry out of solving this very problem Jules describes, in the context of automotive audio systems.

As a DSP engineer, I would advocate a language like SOUL that allows DSPs to be utilised in user-space applications, allowing for very high-quality integration and efficient processing.
Recognising that the core of working with these processors demands hand-written assembler optimisation, a fundamental understanding of the DSP’s architecture, and the intricacies of each DSP’s API (they are all different), this really looks like a massive but worthwhile challenge.

I too look forward to the demos next year. And whether the hardware solution that comes with SOUL is a Bela, an SC589 EZ-Kit, or a GTX 1080, I can’t wait to make audio apps with it.

P.S. That SHARC board is great, but a total monster, and there is quite a steep learning curve to get into the SC589 processor. Great if you can run Faust on one (I have browsed through some example code), but CCES is not a cheap option for the home-gamer. I would love it if AD made their DSP ecosystem more accessible, but it’s the same old story of support vs. finance. The AD SigmaDSP series is a bit more accessible, but it’s drag-and-drop graph programming only until you start selling volumes.