interesting… I’d actually thought about doing something similar to the Polyend presets, for a similar reason. Not too difficult to code really - but I’d need to think about the UI a bit (as Salt is obviously very different to presets)
I’ll probably wait till the Bela Pepper arrives though, as my Salt is already ‘allocated’ to other tasks :slight_smile:

1 Like

One of the themes I’ve found interesting so far in this thread is the crossover between having a desire for an instrument and having an interest in coding / building that instrument.

When I first read comments to that effect I saw a bit of frisson (can’t think of a less wanky word, sorry) between wanting immediacy, responsiveness (don’t break the flow, express myself quickly, etc.) and (what for me would be) a long and labour-intensive process of building something. These things don’t actually contradict, imo, but it got me thinking.

I think of a friend who has played clarinet and sax for decades, another who has played the violin for decades, another who plays synths and other keyboards, another who plays drums. You can probably guess where I’m going… none of them has ever talked to me about wanting to make their own instrument.

Any thoughts about this?

Is it more about costs or benefits?
Is it mostly that making software is easier than making a physical object, e.g. because the posters who are into making instruments already code?
Or is it mostly that coding an instrument gives you a far better result than you’d expect from building a physical one? Or, to flip that: is trading up to an instrument that better fits your needs a worse option with software instruments than with e.g. clarinets, drums, violins, keyboards?

Is it that my friends are a crap sample? :wink:

5 Likes

Interesting questions :slight_smile:

I think there are a few differences that make it more approachable to build your own software instruments.

There are a lot of relatively accessible tools for building software instruments, and the cost of making a decent software instrument is lower than for hardware. The cost is also mostly fixed: once you have your software toolkit, you can easily make many prototypes.

In my mind, creating/customising a physical instrument is an additive process: you put together physical components that did not exist together before to get something new. The process of making a software instrument is similar, but you could also see it as a subtractive process: what the computer can achieve is infinite, and we consciously choose to reduce that infinity to a finite set. Instead of doing anything, it will do one specific thing, with codified gestures that we consider optimal for our creative process.

In a way, building software means standing on the shoulders of giants: we can, but don’t need to, invent new effects or sound sources; piecing existing ones together in a new shape will already bring interesting results.

To end this, I have a different question: who, after building software instruments, would like to explore (or has already explored) building physical instruments?

2 Likes

The current trend of “soundboxes” like this > http://cdm.link/2019/05/10cars-electro-acoustic-instruments/ makes me think there’s also a desire to build specific physical tools for specific sounds. And just like most self-created software instruments, those are maybe less focused on tones, scales and score-reading tools, and a little more on very specific physical approaches: a way to reach certain kinds of textures, atmospheres and sensations more quickly, and with a different physical relation to the instrument than when trying to play the right note at the right time with enough dexterity.

So just like these soundboxes do not try to emulate a clarinet, or even a kalimba or violin, despite borrowing some of their tools (bows, strings, plucked metal strips), I think self-created software instruments are not trying to emulate a Moog, a Prophet 6, or whatever “synth as a classical score-reading, dexterity-inducing, piano-playing instrument” is out there. It’s about trying to find a different way to connect to sound, prototyping ideas without the overhead of thinking about the longevity of the instrument, without having to consider how other people might use it and how you should accommodate that.

Just like you absolutely can do what SPCs do on a laptop, you absolutely can do what most DIY-built software instruments do with other synths, but that still doesn’t mean you will, and it doesn’t mean you’ll enjoy the same things about doing it along the way.

Also, a final point: I know a lot more people who build their own websites with little coding knowledge and zero money than people who build their own houses with little knowledge of architecture and zero money.

8 Likes

tbh the main point I take away here is how it’s a bit weird that “build” can be used for things as different as websites and houses! But I guess it also suggests that in general working with software is a low-stakes option compared to reworking physical things.

1 Like

One thing I don’t think I’ve seen mentioned is that some of these devices are great “learning platforms.” I’ve wanted to learn more about Linux, embedded programming and audio DSP, and Norns has been a really fun vehicle for doing this (I understand Lua scripting barely scratches the surface of those subjects). The educational aspect is one reason I think it’s so important to have an active community and good documentation for a platform like this.

Granted, I don’t NEED a norns to learn about these things, but it’s a fun way to do it, and it still feels magical when I can draw to the screen and make a grid light up with a few lines of Lua.
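
For anyone curious what that looks like, here’s roughly the kind of thing a few lines of Lua gets you on norns. This is a sketch from memory of the API covered in the norns studies, assuming a grid is attached, so treat it as illustrative rather than copy-paste ready:

```lua
-- minimal norns script sketch: draw to the screen and light up a grid
g = grid.connect() -- attach to the first connected grid

function init()
  g:all(0)          -- clear every LED
  for i = 1, 8 do
    g:led(i, i, 15) -- light a diagonal: x, y, brightness (0-15)
  end
  g:refresh()
end

-- echo grid key presses back as light
g.key = function(x, y, z)
  g:led(x, y, z * 15)
  g:refresh()
end

-- norns calls redraw() when the screen needs updating
function redraw()
  screen.clear()
  screen.level(15)
  screen.move(64, 32)
  screen.text_center("hello, lines")
  screen.update()
end
```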

I guess I’m also saying “fun” (remember fun?) is reason enough for me.

14 Likes

it’s how we think about things that constructs our reality

played shows
and made vinyl records with an mpc and also a yamaha su10
both are cool, and different single purpose computers

also played shows with a laptop, sometimes ableton live sometimes dj64 and a grayscale grid

all great shows
:slight_smile:

live, it’s just not the same as having a cymbal and a kick drum to reach out through the crowd and get people moving

it’s fair for artists to create concepts

putting together a drum kit has resonance with modular
it’s very particular what one has got where
and this impacts how one plays
every kit drummer creates their own instrument

clearly it’s personal

in Paris they built the Eiffel Tower and in San Francisco they built Sutro Tower
one’s a cultural landmark and one is supposed to be ignored…
although maybe nowadays Sutro Tower is kind of NASA cool and we could sell statues of Sutro Tower to go along with the cable cars and the Golden Gate bridge sculptures reminding ourselves where we left our heart

9 Likes

This totally triggered me to pick up an X230 tablet for the sole purpose of running Reaktor and Madrona Labs. My first PC, ha! It was less expensive than almost every module in my case.

9 Likes

As someone who owns a proper norns, arc, and grid, I’ve been following your and others’ work in that mega thread for raspi norns and it’s just fascinating. Do you have a link to the DIY arc/grid work you’ve done?

Not as yet - it’s on my long, long list of things to write up/document. :slight_smile:

1 Like

For the longest time, my studio consisted of my Ensoniq SD-1, a Lexicon Vortex (my two single-purpose computers), and Garageband with a very few processing plugins, limiting myself to piano and electric piano. I’m considering returning to that or something similar for my next project. There was a lot of bandwidth saved by already knowing that I was succeeding or failing based on actually playing something.

2 Likes

There’s this new modular system that (it seems…) combines many modules running on a single computer: https://3dpdmodular.cc/. This seems like a better way of doing the digital modular thing… at least a more economical one?

Surely better not to keep going A/D → D/A → A/D → D/A over and over…

It makes me wonder if a bunch of digital modules from different manufacturers could use a single processor mounted in a case.

2 Likes

The modules could be USB peripherals. You’d still need a local MCU for A/D in the client modules. Not sure that’s actually a simpler/cheaper architecture.

1 Like

I’ve thought about something similar, but you’d need some kind of blazing-fast, low-latency digital transport layer for signals, like PCIe, which is expensive and complicated (and not ideal for ‘streaming’ behavior).

If you wanted to, I suppose each module could be its own audio interface and use USB or something, but then you run into the limits of whatever the host USB chipset is (on most systems’ USB chipsets, I still have issues getting 2x BRIO 4K cameras to capture at the same time due to bandwidth; a bunch of individual interface modules would certainly eat bandwidth quickly too, not to mention getting them all to behave under one host OS).

I really feel like A/D <-> D/A is the way to go if you want DSP processing in modules, because you maintain physical compatibility with existing sound systems; otherwise you need some specific thing to get your sound in and out, to get microphones interfaced, etc.

1 Like

In case this is interesting, I’m going down a slightly different hole right now and developing a software/hardware system that uses PCIe between a host computer and a removable, upgradable M.2 FPGA coprocessor card.

The idea is that you can bring your own audio interface and use it to “construct” a digital modular synth by assigning the inputs/outputs of different software synthesizer components to it.

The disadvantage is that everything lives within a single software/hardware environment. But if that software environment is small, open-source, and easily extendable, I don’t necessarily see that as a drawback.

Here’s the current, simplified flow (with a toy sketch of the graph-flattening step after the list):

  • A user on the host computer arranges a bunch of low-level DSP operations into a network through a visual dataflow JavaScript web interface in a browser.
  • Once ready, the host computer reduces and optimizes the user DSP graph into a machine graph.
  • The machine graph is compiled into machine code, which is sent over PCIe and “programs” a soft CPU running on the FPGA.
  • For a predetermined block period, the FPGA runs your program against the inputs and produces the outputs over the PCIe link.
  • The user decides to make changes; go back to the top.
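
To make the second step a bit more concrete, here’s a toy illustration of flattening a dataflow graph into an ordered list that a later pass could emit instructions from. It’s written in Lua purely for illustration, it is not the project’s actual code, and all the node names are invented:

```lua
-- toy example only: reduce a user DSP graph to an ordered "machine graph"
-- via a depth-first topological sort, so every node appears after the nodes
-- that feed it. node names and structure are made up for illustration.

local graph = {
  nodes = { "osc1", "osc2", "mix", "filter", "out" },
  -- edges[a] = the nodes that consume a's output
  edges = {
    osc1   = { "mix" },
    osc2   = { "mix" },
    mix    = { "filter" },
    filter = { "out" },
    out    = {},
  },
}

local function topo_sort(g)
  local visited, order = {}, {}
  local function visit(n)
    if visited[n] then return end
    visited[n] = true
    for _, consumer in ipairs(g.edges[n]) do
      visit(consumer)
    end
    table.insert(order, 1, n) -- prepend, so producers end up before consumers
  end
  for _, n in ipairs(g.nodes) do visit(n) end
  return order
end

-- prints both oscillators first, then mix, filter, out
for i, name in ipairs(topo_sort(graph)) do
  print(i, name)
end
```

In the real system each node would presumably carry block-rate parameters and be emitted as soft-CPU instructions rather than printed, but the ordering problem is the same.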
1 Like