Raspberry Pi for music

I’ve been setting up an RPi3 recently for use as a ‘musical device’

and I wondered how others are using the RPi, and for what?

To get better latency I’m going to be using a Pisound HAT (blokas.io).
I’ve connected a 7" touch screen and a mini wireless keyboard/mouse…
(got fed up connecting it to a TV when I couldn’t connect via wifi :slight_smile:)

I might add some pots for direct control via GPIO, but really I’ve got the Bela for that.

currently running Raspbian, but I might move back to Ubuntu MATE

primarily, I’m going to be using it to run ML Soundplane and Eigenharp software, initially into MIDI MPE synths (Axoloti mainly)

I’ll probably also use a bit of PD, perhaps Zynthian… and might play with Tracktion Waveform.

I thought it might be interesting to have this topic after seeing @shreeswifty & @jasonw22’s posts on the Pure Data thread.


I have two RPis, an original version B running Satellite CCRMA and an RPi3 running Raspbian Lite. My intention has been to use these as small autonomous instruments against which I can improvise on another instrument (e.g. Shnth or Tocante Bistab). They should run headless, with some kind of controls for switching modes and for safe shutdown, and run off rechargeable battery packs for street performing.

The RPi3 has a Sense HAT that provides an 8x8 LED matrix, a joystick, and acceleration & orientation sensors. My first goal has been to program it as a drum machine that’s hard to dance to. I’ve written a ChucK program that randomly generates 4-voiced MOS rhythms, with randomly determined long and short beat lengths and gradual pattern mutation. For each rhythm, the program randomly selects one sample each from four categories of metallic & mechanical sounds. It also controls flashing patterns on the LED matrix by sending an OSC message on each beat. The next step is to write a Python program that sends OSC messages to the ChucK program whenever the RPi is jostled, causing it to generate a new rhythm, and that manages a menuing system, controlled by the joystick, for changing functions and initiating shutdown.
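For the jostle-triggered OSC step, a minimal stdlib-only Python sketch might look like the one below. To be clear, the `/new-rhythm` address, port 6449, and the 0.4 g threshold are placeholder choices of mine, not details from the actual project, and the sensor read is stubbed — on the Pi the values would come from the Sense HAT’s accelerometer:

```python
import math
import socket
import struct

def osc_message(address: str, *floats: float) -> bytes:
    """Pack a minimal OSC message (address + float args) by hand.

    OSC pads each string to a 4-byte boundary with NULs and encodes
    float args as big-endian 32-bit IEEE 754.
    """
    def pad(s: bytes) -> bytes:
        return s + b"\x00" * (4 - len(s) % 4)  # always at least one NUL

    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        msg += struct.pack(">f", f)
    return msg

def is_jostled(ax: float, ay: float, az: float, threshold: float = 0.4) -> bool:
    """True when the acceleration magnitude (in g) deviates from the 1 g at rest."""
    return abs(math.sqrt(ax * ax + ay * ay + az * az) - 1.0) > threshold

# Tell the ChucK program to generate a new rhythm when the Pi is shaken.
# (Accelerometer values stubbed; replace with a sense_hat read on the Pi.)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
if is_jostled(0.1, 0.2, 1.9):
    sock.sendto(osc_message("/new-rhythm"), ("127.0.0.1", 6449))
```

A library like python-osc would do the packing for you; hand-rolling it just keeps the script dependency-free for a headless Pi.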

I also plan to use the RPi3 for uploading patches to my Shnth, as a drone machine, and perhaps as a synth controllable by the Shnth (as an HID) or my EWI via MIDI over USB. I’m also considering changing the abstract patterns on the LED matrix to scrolling generative text that changes color in time with the beat. The drones might use some PD code I’ve found and for the synths I would likely use ChucK.

I’m about to reinstall Raspbian Lite on the micro SD card because I screwed something up while setting up Jack, following instructions for a low-latency audio setup. Audio works, but there are errors that keep Jack from running in the background. I’m constantly astonished at how arcane Linux is, with so many config files in so many directories…

I’m not sure what I’m going to use the RPi1B for – perhaps as a sequencer, or an effects box for an Arduino-controlled FM radio sequencer.

Long-term, overly ambitious goals for creating RPi-based instruments are:

  1. a microtonal strumming instrument for accompanying vocals;
  2. a portable live-coding instrument with an LCD display on top for the performer & one in front for the audience, held like a concertina with some kind of split, one-handed or chording keyboard (if I’m really dreaming big, I imagine developing an iconic live-coding language derived from Ixi lang – because icons would be much more visible on a small LCD screen);
  3. an accordion analog with two independent halves, one held in each hand and each with its own amplified speaker, that uses sensors to determine relative position/orientation/acceleration between the two boxes.


I’ve already gone on too long, but forgot to mention:

  1. I’m just using cheap USB sound cards. I don’t know how much this adds to latency, but I’m not doing anything interactive at the moment. Audio quality is good enough for a small powered speaker.
  2. As I said, I’m running these headless, using SSH from my MacBook to access the command line. Ultimately though, I want to install video displays in any “finished” instrument I build with an embedded RPi, whether for audio visualization or for flashing text like a Lawrence Lessig PowerPoint slideshow. Because it’s the 21st century and there are real cheap LCD modules on ebay.

I’ve never used an RPi for music directly, but I use them all the time for audio installations. I usually run some combination of PD and Python scripts, collecting sensor data, etc. A basic USB audio card makes it perfect for this and really simple/cheap (especially when they need to stay running as part of an installation for months or longer).

Lots of the ideas in this thread sound amazing and I can’t wait to follow along with the experiments and learn a lot.


I spent a while playing with an RPi using a Behringer UCA for audio in/out. I wired my own MIDI port to the RPi’s UART. I was running Pure Data and had pretty good latency and performance. I wrote an external in PD to send and receive CV and gates via I2C from a PIC. The PIC also drove a pair of seven-segment LEDs. I should really finish it up and find a case for it.

I never bothered with a screen or keyboard; I found it much easier to connect remotely with SSH and VNC.


Hey, my long-weekend-project is to muck around with a R.pi, too!

My aim is to make it a “remote station” for a digital recording / monitoring rig. I want it to do these tasks:

  • Be the MIDI host for a number of USB controllers, and feed that MIDI to my synth (Digitakt, also via USB).
  • Take audio from the synth, and route it through Jack:
    • over network (ethernet) to central unit (an Intel NUC)
    • take the monitor mix back from the central unit, and play it out headphones
  • Maybe also mix in one or two synth voices (Pd? Supercollider?)
  • Maybe also be a looper / effects (also Pd? Supercollider?)

I don’t have a PiSound unit on the way (didn’t get in on the Kickstarter) - so for now I was going to use a spare small Focusrite USB audio box until I can get something better. I was going to use Raspbian to start with and see just how far I can get.

Looking forward to seeing what you all do, too!


I’m trying to do the same. The aim would be to do two things with it:

  • use it as a headless Pd device, with just a MIDI controller hooked up.
  • use it as a polyphonic synth, either with Pd or with some linux softsynth (or a plugin via a minimal Host). This I have to research more thoroughly though.

For this I’ve backed the Pisound Kickstarter, since it will let me have a really tiny computer with a decent built-in audio interface. I was really back and forth between this and the Axoloti (though I might get that as well), but decided to give the Pi a chance and see how that works first.


just posted this :slight_smile:


Well… my long weekend of Raspberry Pi for audio has run into sadness. Here’s my “score” so far:

  • :thumbsup: USB MIDI host: Using aconnect it all just works! Yay, I can route MIDI from some USB only controllers to USB on my Digitakt and lovely!
  • :thumbsup: Audio via a USB audio interface (Focusrite Scarlett) works, in that both alsa and jackd can get it and route it around the unit.
  • :thumbsdown: Starting jackd with -d net so that I can send the audio to another computer (to mix), and loading audioadapter to get the audio from the USB interface (so: Scarlett via USB -> pi w/jackd2 & net -> other computer w/jackd2 for mixing -> back to pi -> Scarlett) works for about 10 seconds and then the kernel panics with some mess in the eth driver.

Couldn’t find a solution to this… On Wednesday, someone suggested that since on the R.pi the Ethernet shares a bus connection with the USB, this was unlikely to work well. They lent me two other devices: a Banana Pi and an ODROID C2, both of which have supposedly better Ethernet systems.

  • :thumbsdown: jackd2 crashes on start-up with a Bus Error due to memory-alignment problems. This is true using either device’s standard OS and standard built packages (Debian for one, Ubuntu for the other).

Again, no amount of googling has yielded an answer: alignment issues in Jack for ARM seem to have been common in 2012/13, and were supposedly squashed in the code base in 2014. I suspect that the “standard” deb builds, while working for most things, don’t work for Jack on these devices for some reason. I tried building it myself, with all sorts of options - no luck.

:play_pause: Anyone figure out how to get Jack to run on these boxes? With netjack2?


sorry, I’ve not tried sending audio to the network yet, which seems to be where your problems lie.

I assume you are on the latest kernel etc.? (It’s worth doing a kernel update, as it’s continuously getting fixes.)

Did you try the jackd network side on its own (i.e. without the USB)? It would probably be useful to isolate the issue from the USB side… or at least to know for sure whether USB and Ethernet audio work fine separately but have an issue combined.

if the Ethernet is the issue, another option to maybe try (though it might be a big time sink, as I’ve not tried it yet ;)) is g_audio (the USB audio gadget), which should make the Pi appear as a class-compliant USB audio interface.

Yup yup - absolute latest stable system software (apt-get update ; apt-get dist-upgrade) on all systems.

I’ll try the net-only jackd thing this weekend. I suspect that will fail, and the problem will be the Ethernet. I have a spare compatible Ethernet USB dongle… will try that too (but it eats up a USB port, and makes the rig yet one more bit less compact…)

:tada: Success!!! Turns out you just need to shave a lot of yaks:

Here’s what worked for turning a R.pi into a “performer station” (my synth into pi, pi to central mixer (nuc), mix back to pi, to sound out headphones for monitoring):


  • download a smaller Raspbian image
  • just use the dd method of burning the SD card on my Mac
  • boot the Pi, find it & ssh into it
  • set up a static IP by editing `/etc/dhcpcd.conf`:
interface eth0
static ip_address=
static routers=
static domain_name_servers=
  • update & install:
sudo apt-get update ; sudo apt-get dist-upgrade
sudo apt-get install screen vim --no-install-recommends
sudo apt-get install alsa-utils jackd2 jackmeter ecasound ecatools --no-install-recommends
  • => aconnect works for simple MIDI routing
    => aplay -l shows sound cards

  • edit /etc/dbus-1/system-local.conf:

      <busconfig>
        <policy user="pi">
          <allow own="org.freedesktop.ReserveDevice1.Audio0"/>
          <allow own="org.freedesktop.ReserveDevice1.Audio1"/>
          <allow own="org.freedesktop.ReserveDevice1.Audio2"/>
        </policy>
      </busconfig>
  • export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/dbus/system_bus_socket
    • or put it in your .bashrc

on nuc (mixing machine)

run (adjusting `hw:1` to your sound card):
jackd -P 90 -d alsa -d hw:1 -r 44100 -p 128 -n 1 -s &
jack_load netmanager

on pi

run (adjusting `hw:1` to your sound card on the pi, and `-n` to the name of this station):
jackd -P 90 -d net -n pi -s -l 1 &
jack_load audioadapter -i '-d hw:1 -r 44100 -p 128 -n 1'

# route audio in to the network (`system`) and then the network back to audio out
for c in 1 2 ; do jack_connect audioadapter:capture_$c system:playback_$c ; done
for c in 1 2 ; do jack_connect system:capture_$c audioadapter:playback_$c ; done

back on nuc

# route audio from network to audio card, as well as back to network
for c in 1 2 ; do jack_connect pi:from_slave_$c system:playback_$c ; done
for c in 1 2 ; do jack_connect pi:from_slave_$c pi:to_slave_$c ; done

# if you had multiple performer stations you could mix them with `ecasound` before routing to sound card and network.


  • all works! But must keep `-p` at 128 or things are very sad
  • latency can be as low as 1 (`-n` for alsa, `-l` for net)
  • I haven't measured it, but feels ~1ms - probably a bit more with `ecasound` in the loop.
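For what it’s worth, the buffer time implied by those settings can be computed directly: at `-p 128` frames and 44100 Hz, each Jack period is about 2.9 ms, so the actual round trip (capture period + network hop + playback period) is likely somewhat more than the ~1 ms it feels like. A quick sanity check:

```python
# Per-period buffer latency for a given Jack period size and sample rate.
def period_ms(frames: int, rate: int) -> float:
    return frames / rate * 1000.0

print(round(period_ms(128, 44100), 1))  # one -p 128 period at 44.1 kHz: ~2.9 ms
```

Either way, a few milliseconds is well under what most players can perceive in a monitor mix.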

Reviving this thread for a lo-fi angle on this. I’m running my B+ on Satellite CCRMA. I have a very crude PD patch throwing out 4 sine waves and 5 types of noise, over which I have very limited control. The noisy built-in audio output on the Pi is a benefit here, since it adds some nice flavours to things. I have a simple feedback loop on a mixer to add some more spice. It sounds like this:


There’s an open-source DIY Eurorack module, “Terminal Tedium”, which is a nice platform for running Pure Data patches in your modular rig. I’ve built a handful of these and there’s a small group of folks hacking Pd to work with it - like getting Organelle and Automatonism patches running on it. I think someone may have Orac working there too.

I also did some work to integrate the most recent Terminal Tedium version with some Machine Listening research by Chris Latina at Georgia Tech.

I’ve hacked a Tedium PCB to use its DAC (along with adding encoders and rewiring buttons) with a new Pi to run Norns.

There’s a handful of other DAC boards out there that I found while doing research this week like these:

MIKROE-506 (commercial)
I2S AudioPhat (DIY Open Source)

For the moment, I’m not really doing anything with these projects, but hope to do more with the norns stuff as I learn more.


Hi there. This is my second post to “Lines”. Hope you’ll find the guide useful.


We built this for our Isn’tses setup to play field recordings, loops and soundfonts/samples without a laptop: http://homspace.xs4all.nl/homspace/samplerbox/

There are a few different versions, here’s the main/original http://www.samplerbox.org/

Update: I’ve written up a full set of instructions for building a “headless Pi” music machine from start to finish. While the instructions are aimed at Raspberry Pi + Pisound, most of it is applicable to any general music Raspberry Pi situation:


thanks for making this. i’ve been curious about a little minimal pisound setup. for someone with no PD / SC experience - would you recommend one over the other? will you be sharing the “performance” patches you’ve used?

edit: i have played with automatonism a bit, so maybe that’s a good starting point.

mzero’s guide is really great and easy to follow!
If you have a spare monitor/keyboard/mouse you can skip some steps but since that is often not available, it’s really great that the guide walks you through setting things up headlessly from the start.
I have only dipped my nose into SC, but maybe getting feedback from somebody who – like you – started with very little previous experience is helpful anyway.
I can say that Pd is pretty doable, while SC is much more intimidating.
My aim is always to get the technical stuff over quickly so I can start to make music and Pd seemed to be much better for me in those regards.
I have to say though that the whole Raspi/Pisound/Pd combo has been a big time hog for me, and I’m not sure I’d do it again. There are so many simple things that just don’t work for totally inexplicable reasons… and they take a lot of time, and help from forum members, until you’ve figured them out.
There’s certainly a learning value in it, but I’m not entirely sure it’s worth the hassle.
But it really depends on your goals.


in passing - my RPi/Pisound is currently a Norns, in conjunction with a Push 2, but as soon as Norns/Grid are on sale (or sooner, if someone fancies selling me one ;-)) it’ll have to find a new use. I’ve been thinking a lot about this.

I know PD quite well and SC a bit (I spent a couple of weeks at Christmas learning it, so not a total novice). From Norns I really like the crone/SC and matron/Lua interaction with the whole thing - it feels very natural and there is a lot of heavy lifting done for you.

My current thinking for a headless RPi is to leverage that somehow - I guess in some ways that could just be making headless patches that don’t need a display or encoders…

(edit: on the PD/SC thing - I was a great enthusiast for PD (& Max) for a long time, but somehow SC feels more natural - I guess I’m just a coder at heart; it’s how I’ve earned a living for most of my career)
