While I don’t think it’s perfect, I love having an instrument that is so expressive.

I was talking to someone at work about this. I’m not the greatest engineer in the world, but what I am very good at is convincing people that what I’m doing is interesting and worthwhile (normally because it is interesting and worthwhile :wink: ), and I realised that that is how the company I started now employs over 50 people and is growing fast, and that that aspect of my skill set is way more important than my engineering skills. I think Roli have that same thing and, like you say, it will take them a long way. (I’d still love to know what they told investors to get that $50 mil investment a while back - that’s a lot of expected outcome. You normally promise investors a 10x return, so I’m guessing they will aim for an IPO - the music tech world isn’t very acquisitive - but still, that’s a lot of growth.)

2 Likes

Yeah, we’re not very used to VC-backed startups in the musical instrument world.

They did it by calling themselves an “interaction design company”. This allowed them to ride the internet money train a bit.

But yes, it also requires them to think very big now. I have a feeling that Roli’s strategy is a bit of a glacier (mostly submerged) and we’re just starting to see a bit more of it poking up above the ocean’s surface.

I’ve wanted to run DSP code on a GPU for a while now, and would love for somebody to make it easy. And if there’s a way to use the actual DSP hardware on my devices with the same coherent and language-agnostic API, so much the better.

1 Like

agree about the glacier aspect

and yes, agree that DSP/GPU coming together will be fantastic. I wonder if the lurking monster might be game music - it’s not really had a vast amount of attention in tech terms, but there is so much that could be done. It’s one of the things I want to look at with the music tech company I’m vaguely starting, with a view to the future: generative music, shared/interactive performance spaces/installations - all kinds of possibilities.

1 Like

Games need physics-based sound if they want to achieve realism.

You can think of this as akin to global illumination lighting in the graphics world. By modeling light behavior based on physics (soup to nuts) you end up with scenes that look far more realistic than any more reductive abstraction can deliver.

Physical modeling of audio is pretty performance-sensitive stuff, so you need every gram of optimization you can find.
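
To make that concrete: even the simplest classic physical model, Karplus-Strong plucked-string synthesis, does real work on every single sample, and it’s a toy next to full waveguide or finite-difference models. A rough textbook-style sketch in C++ (mine, not anything from the talk):

```cpp
// Minimal Karplus-Strong pluck: a delay line one period long, fed back
// through a gentle low-pass. Even this toy runs per-sample, which is why
// richer physical models get expensive so quickly.
#include <algorithm>
#include <cstdlib>
#include <vector>

std::vector<float> pluckedString (float freqHz, float sampleRate, int numSamples)
{
    const int period = std::max (2, static_cast<int> (sampleRate / freqHz));
    std::vector<float> delayLine (period);

    // Excite the "string" with a burst of noise.
    for (float& s : delayLine)
        s = (std::rand() / (float) RAND_MAX) * 2.0f - 1.0f;

    std::vector<float> out (numSamples);
    int pos = 0;

    for (int i = 0; i < numSamples; ++i)
    {
        const int next = (pos + 1) % period;
        out[i] = delayLine[pos];

        // Two-point average = crude low-pass "damping"; 0.996 sets the decay.
        delayLine[pos] = 0.996f * 0.5f * (delayLine[pos] + delayLine[next]);
        pos = next;
    }

    return out;
}
```

Scale that up to hundreds of interacting objects in a game scene and the appetite for dedicated hardware becomes obvious.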

1 Like

it would be so cool if you could wander around a virtual space tapping/hitting things and hearing the sounds! Imagine the mad instruments you could build - Harry Partch eat your heart out!

Oddly enough I just bought a copy of Musimathics Volume 1, which talks a lot about the fundamental maths of sound…

3 Likes

Musimathics are fantastic books. Fun stuff!

http://www.musimathics.com for folks that haven’t encountered them yet.

A cool physics-based sound demo in this thread:

1 Like

Thanks for sharing this, I am impressed!

I guess the open question is - what kind of DSP are we going to be able to use? Personally, I wouldn’t be totally opposed to buying some kind of SOUL audio interface, with an onboard DSP that can be programmed on another machine.

When I look at the teenage engineering pocket operators, I’m amazed at the low power usage - I’ve had two double-As in mine for months and haven’t had to switch them out yet. Yet there’s no noticeable latency, and there’s never any audio glitching.

I wouldn’t be able to have that kind of experience just running a program on my CPU; power, latency, and glitching are all issues I deal with regularly when writing audio programs.

If I wanted to build something like a pocket operator myself, I would have to buy a DSP, some kind of hardware to program that DSP, and then do a bunch of soldering and low-level programming. I’d also probably have to quit my job so I’d have enough time to do this.

It seems like SOUL could alleviate many of these difficulties, depending on what kind of hardware will exist to run it on.

I think the smart thing to do is to offer and support a number of ROLI DSP developer boards, each with a different type of commonly used chip, as example implementations with royalty-free IP, providing a low-level generic API (one that only differs from chip to chip by function names, keeping the function-calling signatures consistent between them) to interact with each chip/board.
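
A hypothetical sketch of what that could look like (every name below is invented for illustration; no such SDK exists):

```cpp
// Hypothetical header illustrating the "same signatures, only the prefix
// changes per chip" idea. None of these functions or types are real.
#include <cstddef>
#include <cstdint>

struct DspProgram      { const std::uint8_t* bytecode; std::size_t size; };
struct DspStreamConfig { int sampleRate, numInputs, numOutputs, blockSize; };

// Board built around "chip A":
int  chipA_loadProgram (const DspProgram& program);
int  chipA_configure   (const DspStreamConfig& config);
void chipA_process     (const float* const* inputs, float* const* outputs, int numFrames);

// Board built around "chip B" - identical signatures, different prefix:
int  chipB_loadProgram (const DspProgram& program);
int  chipB_configure   (const DspStreamConfig& config);
void chipB_process     (const float* const* inputs, float* const* outputs, int numFrames);
```

Porting between boards would then mostly be a rename, and a thin shim could turn it into a compile-time switch.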

1 Like

New chips? Maybe there’s a need for them, I don’t know, but I felt like the larger point he was trying to make is that our devices already have DSP chips; our operating systems just don’t give us APIs for them.

1 Like

And I guess that’s my point: those devices all have different ones :slight_smile:

I’m with @murray. Whilst watching the video, the only way I could see Roli making money from this is to make their own chips (partnership/licensing), and it was almost a sales video for those, i.e. they are making the case for why we need new hardware/tech :wink:

They will no doubt do software versions, and help/allow other manufacturers to implement it, as this is required for adoption and demonstration (*). But there’s no/little money (and considerable expense) involved for Roli in going around implementing on lots of different platforms.

So I suspect their ambitious plan is to produce new SOUL DSP chips (and then license them for use in products).
Others will also be able to do this, but Roli will always have the advantage, as they have access to the specs etc. before anyone else, and the last say on changes.

Also, I think FPGAs are very much in vogue at the moment, and they were not mentioned at all, yet they are very applicable to domain-specific chips… so perhaps they weren’t mentioned because that is what Roli plan to use.


(*) Demonstrating that it’s viable is important, so I think, as per the video, they might do one implementation at each level of the stack (software, driver, chip)… and no doubt the chip is last to unveil, due to the investment, but also to not ‘let the cat out of the bag’ too soon.

From a purely ‘selfish’ viewpoint, I was excited to hear they had been using Bela. It makes sense as a demo platform (it’s open, it’s low latency, it’s close to the hardware), and I’ll be thrilled if they use it as an example platform given I have a stack of them here :wink:

3 Likes

I got the same impression. IMO it’s a very welcome development. I remember when I first started hearing about, say, UAD accelerator cards, thinking it was so ridiculous that only a single brand’s plugins would run on them. A general-purpose audio processing card makes so much more sense.

I don’t care who does it, as long as it’s built upon an open platform; if we see another MIDI-like miracle of an entire industry agreeing upon something, but w/r/t audio processing, I’d die a happy man.

This is interesting, but not convincing. I’m always on the lookout for new audio-specific languages, which is how I came to this video… actually I came here because Roli e-mailed me a link! But I don’t see any promises here that Faust (http://faust.grame.fr/) isn’t already delivering, today.

Faust already uses LLVM, already compiles to code that runs faster than hand-tuned C (as demo’d by CCRMA’s port of their Synthesis Toolkit), already ports to a vast number of platforms (including DSP chips) and cross-compiles to all the major languages, can already be live-demo’ed in a browser, et cetera. It’s also (IMHO) vastly more elegant and concise at describing DSP computation (once you wrap your head around it).

Julian seems to be very pleased with how concise, high level & clear his examples are, and I’m sure that’s true compared to other code he or others have had to write. But I’m pretty sure every one of these examples would be a great deal simpler & cleaner in Faust. You wouldn’t have to write a voice allocator for your synth, for instance, because Faust gives you that for free. Likewise you wouldn’t need to implement delay, reverb, filters, et cetera because the Faust libraries are rich with ones you can already use. But OTOH if you did want to implement it all, it’d still be much shorter, because you just describe the functions, and the compiler implements them. Delay, for instance, is a one-liner.
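
For contrast, here’s roughly the boilerplate a hand-rolled delay costs you in C++ (a generic sketch of my own, not anyone’s actual code), all of which Faust’s one-liner and its libraries absorb:

```cpp
// A plain circular-buffer delay: the kind of code a Faust one-liner
// (or a library call) saves you from writing and debugging.
#include <vector>

class Delay
{
public:
    explicit Delay (int delaySamples)
        : buffer (delaySamples > 0 ? delaySamples : 1, 0.0f) {}

    // Push one sample in, get the sample from 'delaySamples' ago back out.
    float process (float input)
    {
        const float delayed = buffer[writePos];
        buffer[writePos] = input;
        writePos = (writePos + 1) % static_cast<int> (buffer.size());
        return delayed;
    }

private:
    std::vector<float> buffer;
    int writePos = 0;
};
```

And that’s before interpolation, feedback or modulation, which the Faust libraries already cover.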

Am I missing something here? Or is Julian just not aware of Faust? (Or is Faust itself actually the secret sauce behind SOUL? That seems unlikely, but OTOH Faust is already fairly well integrated with JUCE, so when Roli bought JUCE they got Faust support for free.)

Certainly the “getting your head around it” part of Faust takes some time. I can see that SOUL is very much a procedural, OO-ish, C++-based approach to coding sound. So are several other interesting and powerful sound languages. Whereas Faust is purely functional - but if there’s one type of coding that functional programming is ideally suited to, it’s signal processing.
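
Even sketched in plain C++ (a toy of my own, not SOUL or Faust code), the fit is easy to see: a processor is a function from samples to samples, and a signal chain is just the composition of those functions.

```cpp
// Toy illustration of why DSP reads naturally as function composition.
// Illustrative only - real code would avoid std::function per sample.
#include <cmath>
#include <functional>

using Processor = std::function<float (float)>;

Processor compose (Processor a, Processor b)
{
    return [a, b] (float x) { return b (a (x)); };   // run a, then b
}

int main()
{
    Processor gain     = [] (float x) { return 0.5f * x; };
    Processor softClip = [] (float x) { return x / (1.0f + std::abs (x)); };

    Processor chain = compose (gain, softClip);       // gain -> softClip
    float y = chain (0.8f);                           // one sample through the chain
    (void) y;
    return 0;
}
```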

Clearly I’m a big fan of Faust, but if there’s something that SOUL can do better, I’m all ears. I’m just not seeing the advantage yet. Am I missing something?

-mykle-

8 Likes

As an aside, is there a resource for examples of devices/applications built on Faust? Other than the (rather) comprehensive documentation, a quick Google didn’t help me out.

EDIT: I did find this, but it’s not really a practical application (as the paper itself stands):
http://www.dafx.ca/proceedings/papers/p_287.pdf

EDIT: ah, found what I was looking for
https://faust.grame.fr/community/made-with-faust/index.html

1 Like

I didn’t know Faust could target DSP hardware, but that makes logical sense. Would be cool to see examples of that.

yeah, I wondered why not use an existing language … there are a few around.

I think at the level of detail that’s been talked about so far, it’s impossible to say whether they are going to do something that (e.g.) Faust cannot, as we have very little idea of the feature set… or whether it comes down to something far more mundane like “commercial control” (*)

As you say, functional vs procedural is interesting. I’m not sure it would drive them, but for sure their target audience was C++ developers, so they’ll want to keep them happy… though I’m not sure a high-level language will ever be very appealing to C++ developers - we like the nitty-gritty stuff :wink:


(*) In fairness, if they plan to invest a lot in this area, this could be important to them, to ensure they can monetise it.

I’m not particularly trying to step up to bat for SOUL here, but is there any kind of simple Faust web IDE? Something akin to jsfiddle or your pick of the litter of WebGL playgrounds?

From my perspective, this was one of the most compelling parts of the concept: start in a browser, and when you have something whose sound you like, you easily have a path to plugin/something reusable and portable.

this: https://faust.grame.fr/editor/ ?

1 Like

Gotcha. This is a good start, but it clearly has a community-building and education problem.

With something like ShaderToy, I can visit the homepage, and wind up on an open source demo that I can see and tweak in one click: https://www.shadertoy.com/view/4tVBDz (on my phone, at that)

Obviously SOUL as a language isn’t going to solve that problem, but the bird’s-eye observations and goals in the talk do seem aimed at addressing it.

Faust can even target LaTeX. Just in case you want your DSP algorithm printed up as a 500-page set of mathematical equations… think holiday presents for in-laws.

So, basically it targets everything :slight_smile:

4 Likes