The SOUL thread got briefly sidetracked into Faust land, which stirred up a bunch of interesting topics in my brain, none of which are really on topic for SOUL.
How about a less focussed thread at the intersection of non-existent or theoretical DSP hardware, programming language design, and the UI demands of digital musical instruments?
I have some specific questions/ideas on this topic but want to see first whether anyone else will bite. I realise that many people seriously interested in this stuff would rather let code do the talking, and/or discuss some concrete new tech like SOUL or Faust. But here goes…
What elaborate DSP tech will we be using in the future to vibrate air? Can we make a toy version in 2018?
in particular i feel there’s still a general lack of focus on the interface(s), how to actually play the “instruments”. regardless of what’s under the hood, i think the interface largely will “drive” the direction of the music, and dictate the process (and how much fun it is). everything that’s not realtime is a no go, there’s otherwise a great risk of electronic music going in the direction of conceptual modern art…
I’ll always be a VR sceptic because I love touching things so much! (strange sentence). all I want is a ton of data from a nice-feeling surface with an oled/e-ink display and a computer in it. I think that would provide most of what humans need from technology tbh, but I think the rest of everyone is more into the making-a-second-reality thing (not to diverge real deep down a rabbit hole)
yeah i agree, hence the AR bit. i view the VR more as a virtual sound processor, with infinite timbre possibilities, whether to place myself in it i’m not sure haha. to build something as powerful from the ground up would be very costly time wise, and i think it’s more realistic for the audio DSP to coexist with the GPU.
i was almost gonna triple post about the OP-1, which seems to be very popular, although very limited at the same time. from personal experience, the “success” of such a simple synth is largely based on the fact that it’s so much fun to use, and well thought out in what it actually can do. it’s hard to believe there’s such limited availability of similar “sound machines” with a good display as you say, especially ones that could be configured in realtime from something like Reaktor or Live.
Can we throw in a cost constraint? Viable DSP hardware costs quite a bit less than viable AR/VR hardware, for example.
I’m very interested in UI but I see no reason to abandon knobs, sliders, and buttons just yet.
I’m intrigued by modular systems that do a good job with modularity at multiple scales. You want super granular low level “blocks” for building up larger “modules” that are then recombined musically at improv/composition time. This two tier structure is expressed very explicitly in something like Reaktor Blocks, but @TheTechnobear’s Orac for Organelle is another example, I think.
I’d love to see Orac spread to the Pisound and Bela platforms, and it’d be cool if it could use other backends, not just Pd, but SuperCollider, or maybe SOUL or Faust.
A low cost modular DSP platform that allows for flexible deployment of MIDI and OSC controllers sounds like the alien DSP future to me.
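The two-tier idea above (granular low-level “blocks” composed into “modules” that get recombined at improv/composition time) can be sketched in a few lines of Python. All class and block names here are invented for illustration; this is not the Orac or Reaktor Blocks API, just a minimal model of the structure:

```python
# Hypothetical sketch of two-tier modularity: Blocks are the granular
# low-level tier; a Module is a serial chain of Blocks; modules are then
# recombined freely at performance time.

class Block:
    """Lowest tier: a single sample-by-sample processor."""
    def process(self, x: float) -> float:
        raise NotImplementedError

class Gain(Block):
    def __init__(self, g):
        self.g = g
    def process(self, x):
        return self.g * x

class OnePoleLP(Block):
    """Trivial one-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])."""
    def __init__(self, a):
        self.a, self.y = a, 0.0
    def process(self, x):
        self.y += self.a * (x - self.y)
        return self.y

class Module:
    """Second tier: a chain of blocks, usable as one unit in a rack."""
    def __init__(self, *blocks):
        self.blocks = list(blocks)
    def process(self, x):
        for b in self.blocks:
            x = b.process(x)
        return x

# "Improv time": recombine modules without touching block internals.
voice = Module(OnePoleLP(0.5), Gain(0.8))
print(voice.process(1.0))  # one sample through the whole chain
```

The point of the second tier is exactly the one made above: performers patch `Module`s, builders patch `Block`s, and neither has to care about the other’s level of detail.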
Not sure if this is strictly on-topic, but I’ve been thinking a bit about user interfaces for granular synthesis in the style of Xenakis/MI Clouds. I think a very cool (but expensive) solution would be to approach the freeze buffer from a sort of polyphonic angle, and have a continuous controller (Continuum, ribbon controllers, etc.) to pick out several parts of the buffer to play together as a sort of chord or arpeggio in order to build the sort of textural clouds that Xenakis and Mutable Instruments had in mind, while allowing for more control and expressiveness. Having control over grain envelopes and size would be useful, and there’s probably a good way of controlling that, but I haven’t given that a ton of thought beyond “just do it like Clouds does”. At this point I’m definitely describing a specialized instrument, rather than something like a Eurorack module, but I think granular synthesis is a powerful technique, and cracking the UI on it could really bring it forward as a strong tool for experimental music and sonic exploration.
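The “polyphonic freeze buffer” idea above can be sketched very roughly: several controller positions each pick a read point in one frozen buffer, and each read point plays a short windowed grain, summed into a chord-like texture. This is an assumed design, not the Clouds algorithm; function names and the toy sample rate are invented:

```python
# Rough sketch: N touch positions -> N simultaneous Hann-windowed grains
# read from one frozen buffer, mixed together.

import math

def hann(n, size):
    """Grain envelope: raised cosine, 0 at both ends of the grain."""
    return 0.5 - 0.5 * math.cos(2.0 * math.pi * n / size)

def render_grains(buffer, positions, grain_size):
    """One grain per 'finger' position (0..1 along the buffer), summed."""
    out = [0.0] * grain_size
    for pos in positions:
        start = int(pos * (len(buffer) - grain_size))
        for n in range(grain_size):
            out[n] += buffer[start + n] * hann(n, grain_size)
    return out

# Frozen buffer: one second of a 220 Hz sine at 8 kHz (toy numbers).
sr = 8000
buf = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr)]

# Three touch positions -> a three-grain "chord" from the buffer.
texture = render_grains(buf, positions=[0.1, 0.45, 0.8], grain_size=256)
```

Grain size and envelope shape would be the obvious next parameters to put under continuous control, per the post above.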
I already have Orac 1.1 (in dev) running on both rPI and Bela, using a remote display.
Also Orac (or similar) for supercollider is already ‘on the cards’.
Prior to Orac’s release, I’d already got SuperCollider running on the Organelle, but Orac put it on the back burner for a while; the intention was then to move that to Orac.
(PD was done first because I had the C&G PD patches as candidates for Orac modules)
Recently I’ve been back to SC explorations, so I’m keen to do something in this area.
of course this is not new: the idea that you load modules, route audio/data, and expose parameters is obviously used every day as VSTs/AUs, where the VST host is your rack
I am really interested in this hybrid approach, where we have modular software architecture, mapping onto modular hardware (as is possible with things like Percussa SSP, Bela Salt etc)
One challenge I think in this area is the UI, and how we make ‘multi-functional’ modules feel much more coherent… currently there is a real difference between a synth/module that has been designed for a particular use, which often feels very natural, and multi-functional modules/synths, which have quite high complexity and don’t often feel very natural/fluid. it’s a natural consequence of generic vs specific.
(I do think Reaktor Blocks has shown a good way forward, though perhaps the modular metaphor might be a bit limiting)
Then the playing interface, ‘where the rubber hits the tarmac’, where I think expressive controllers are offering great possibilities, and I expect this to continue, perhaps with some more emphasis on haptic feedback?
how does this relate to dsp tech?
well I think it shows that there are many concerns, and they are distributed, at different levels. so I suspect this is the challenge for dsp: how can we write code that exists at different levels of hardware on the device (some on dsp chips, others on the cpu), and over distributed boundaries onto different devices.
for sure midi and cv (!) have served well, but I suspect there are other ways to express dsp intent that may need to be explored.
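One way to picture the “code existing at different levels of hardware” problem is as a graph-partitioning question: tag each node of a DSP graph with a target device, and every edge that crosses a device boundary becomes an explicit transport (CV, an audio bus, MIDI, a network stream…). The node and target names below are invented for illustration:

```python
# Toy sketch: one DSP graph, nodes tagged with a target device.
# Crossing edges are where "dsp intent" has to become a real transport.

graph = {
    "osc":      {"target": "dsp", "inputs": []},
    "filter":   {"target": "dsp", "inputs": ["osc"]},
    "reverb":   {"target": "cpu", "inputs": ["filter"]},
    "ui_meter": {"target": "cpu", "inputs": ["reverb"]},
}

def partition(graph):
    """Group nodes per target; collect edges crossing device boundaries."""
    groups, crossings = {}, []
    for name, node in graph.items():
        groups.setdefault(node["target"], []).append(name)
        for src in node["inputs"]:
            if graph[src]["target"] != node["target"]:
                crossings.append((src, name))
    return groups, crossings

groups, crossings = partition(graph)
print(groups)     # {'dsp': ['osc', 'filter'], 'cpu': ['reverb', 'ui_meter']}
print(crossings)  # [('filter', 'reverb')]
```

A compiler or runtime for such a system would spend most of its effort on those crossing edges: latency, clocking, and data format at each boundary.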
I don’t have much to contribute here, other than that I’d love to see this explored further. Haptic feedback is such an underutilized tool, and I think there’s room for it to be applied to things like endless encoders and touch-strips in the future, and even to keyed interfaces like the Roli Seaboard. There’s lots of possible innovation in that direction, and I’m glad to see someone thinking about it here.
I’ve had super similar ideas for that kind of thing @jasonw22 (albeit laptop-focused). A modular UI type thing that would let scripts in different languages be plugged into each other and speak CV, with a focus on integration with different physical controllers.
Idk if this is DSP exactly but here are some usability thoughts…
As a modular lover my biggest pain point for the format is that the wires actually get in the way of my ability to perform the device when I get deep enough. If there is a way to eliminate the wires through some alien latency free audio connection that would be amazing.
IMHO music is never going to leave knob/button land, but visualization of the signal flow is something that is ripe for AR. Especially since you can turn it off when you are performing.
Also, I’ve been thinking lately that I would love to see a modular synthesis format that is more slimline/wireless. Akin to something like this: https://palettegear.com/
great activity here! from a controller perspective i think most generic ones are a failure when it comes to building more complex setups for live performance/process, however they’re extremely useful as add-ons for particular tasks.
i also like granular synthesis, and it’s very powerful and intuitive with a good interface. but populating a controller with the correct representation of all the different “blocks” while keeping everything together at the same time is obviously a challenge, because no one has done it yet? probably, significantly more time has to be spent developing the controller and the desired artistic “processes” than the underlying engine or DAW.
it would be interesting to know what kind of “advanced” controllers people use here. some years back i hoped push would be the standalone device, at least for ableton live. but i think it was a failure, however still better with 3rd party software…!
For a while I started a project that used the scripting functions in Max to generate semi-randomised DSP chains, from blocks I started to design. Like taking the randomise function of some synths further - by randomising the architecture. I then planned some kind of topographical control for it, a bit like that spatial thing in AudioMulch. Stumbling on interesting combinations through chance, shifting routings through organisation in space. Part of the idea came from thinking about Peter Blasser’s synths, the combination and interaction of parts, eschewing stability, mixing audio and control and making something inherently tweakable and surprising…
We could build that if you like? I’ve got a beginning somewhere…
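The “randomise the architecture” idea above can be sketched outside Max too: draw a random serial chain from a palette of simple one-sample blocks and run a signal through it. All block names here are invented, and a real version would also randomise the routing topology (including feedback, Blasser-style), not just chain order:

```python
# Sketch: stumble on processing chains by chance, reproducibly via a seed.

import random

def make_gain(rng):
    g = rng.uniform(0.2, 2.0)          # random fixed gain
    return lambda x: g * x

def make_clip(rng):
    t = rng.uniform(0.3, 1.0)          # random clip threshold
    return lambda x: max(-t, min(t, x))

def make_invert(rng):
    return lambda x: -x                # polarity flip

PALETTE = [make_gain, make_clip, make_invert]

def random_chain(length, rng):
    """Build a random serial chain of simple one-sample processors."""
    return [rng.choice(PALETTE)(rng) for _ in range(length)]

def process(chain, x):
    for block in chain:
        x = block(x)
    return x

rng = random.Random(42)
chain = random_chain(4, rng)
print(process(chain, 0.5))  # surprising, but reproducible from the seed
```

Seeding makes a happy accident recallable, which matters if “stumbling on interesting combinations” is the compositional method.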
I use a Soundplane and Eigenharps for playing surfaces, a 61-note keyboard (a synth, so I can use it standalone), and then a Push 2…
I think the Push 2 is great, and have been writing my own software for it, running on Organelle/rPI etc… (as well as of course using it with Live). it’s very close to perfect for multi-purpose control - my only gripe is its size, I’d prefer it smaller - so smaller pads, and 16x8, but they’d still have to be rgb and polypressure/velocity sensitive.
(I’ve no issue with the ‘extra buttons’ as I find these can easily be repurposed in a musical context, and having labels makes them easier to remember)
… so I definitely do not agree with the idea it was a failure, rather the opposite; it’s very flexible if you run with it
‘user level’ scripting - what are the thoughts on this?
Personally I’m pretty sceptical on this, and growing more so as we see more of it.
sure, as a programmer I quite like it, and of course it’s vital if you’re a live coder - absolutely necessary.
what I don’t like is that it gives potential buyers the impression (and fans say) that ‘anything is possible’… and deliberately trivialises the skills/investment of time needed to do this - and often divides communities into the hackers/patchers and the rest… where the rest end up trying to appease the hackers to get their wishes (and perhaps hyped expectations) carried out.
this was partly how orac came about: musicians using the organelle asking, “I want this synth patch, but with this other FX or sequencer” - and the response they were getting was, “sure, what you need to do is open PD, cut this, paste that, connect this” - it just seemed totally unrealistic (and it’s not a good way to learn patching skills frankly)
of course, there is the flip side, some who might not normally get into patching, start doing this, and learn to program, and really enjoy it (and get lots of praise from the community for doing it) - but I think they are the minority.
also, does it give the ‘manufacturer’ a get-out clause for not doing all the work? leave it to the community?
but perhaps it’s just me thinking scripting ‘leaves too many behind’,
perhaps others are just happy it opens things up, and so offers more potential?!
cheers, interesting controllers! maybe i was a bit harsh about the push, i think my expectations were quite high. like being able to sit on the sofa without a display and just listen in the dark
the push is still part of my setup, but more like a workflow enhancer than something enabling a creative edge. otherwise, i like it quite minimal for computer stuff now, basically a 128 and an OP-1 with some meta MIDI scripting.
I recently interviewed Jeff Mills and asked him about his thoughts on the impact of AI on music, arts and culture. Below is the interview; the reply to that specific question is at 13:40, but the whole 20 mins is somewhat related I think.
So my take on the topic of DSP futurism is perhaps less futuristic, ‘closer to the metal’…
In my opinion it is very natural to describe DSP in terms of a circuit diagram or graph, similar in appearance to the familiar ‘patching’ environments such as puredata or patching a physical modular.
A beautiful thing about the DSP language Faust is that it expresses (in code) the form of a graph. This combines the easy ‘maintainability’ of computer code with a natural expression of the inherently parallel nature of DSP computations.
In the future, our DSP code will closely resemble a graph or circuit diagram and this code will compile into a massively parallel fractional fixed point assembly language. There are some compiler challenges here!
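The “fractional fixed point” part of that target can be illustrated in a few lines. In Q15 format (a common convention on fixed-point DSP chips), a value in [-1, 1) is stored as a 16-bit integer scaled by 2^15; a multiply produces a full-precision product that must be shifted back down and saturated. This sketch is just an illustration of the arithmetic such a compiler would have to emit, not any particular chip’s instruction set:

```python
# Q15 fixed-point arithmetic: quantise, multiply, saturate.

Q = 15
SCALE = 1 << Q        # 32768
MAX_Q = SCALE - 1     # represents ~0.99997
MIN_Q = -SCALE        # represents -1.0

def to_q15(x: float) -> int:
    """Quantise a float in [-1, 1) to Q15, saturating at the ends."""
    return max(MIN_Q, min(MAX_Q, int(round(x * SCALE))))

def q15_mul(a: int, b: int) -> int:
    """Fixed-point multiply: full-precision product, shifted back down."""
    return max(MIN_Q, min(MAX_Q, (a * b) >> Q))

a, b = to_q15(0.5), to_q15(0.25)
print(q15_mul(a, b) / SCALE)  # 0.125, exact for this pair
```

The compiler challenges mentioned above live exactly here: tracking where quantisation error accumulates, where saturation is acceptable, and which intermediate results need extra headroom bits.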
How alien is this concept of a single thread of execution to the analogue circuit designer?