Software Physical Modeling


#1

I’m enjoying all the physical modeling discussion happening recently, but I’m personally more interested in general software implementations than standalone hardware units, so here is a thread! (Based on @alanza’s suggestion in the hardware thread)

Over the holiday break I tried porting Julius O. Smith’s pluck.c to Cython for inclusion in my Python computer music system.

This is an “elementary” implementation of digital waveguide synthesis (the same basic algorithm used in the VL1, so I have read), which he has written about in depth here:
https://ccrma.stanford.edu/~jos/swgt/ (Added scare quotes because it doesn’t feel very elementary to me right now.)

Here’s the original C version from 1992: https://ccrma.stanford.edu/~jos/pmudw/pluck.c

And my cython port: https://github.com/luvsound/pippi/blob/master/pippi/pluck.pyx
Plus a usage example: https://gist.github.com/hecanjog/1970f78a51779d18fac239a540c0b2e9

Which generates this:

I do not grok the algorithm! Rewriting it was a good first step for me, though… anyway, it sounds neat!
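In case it helps anyone else following along, here’s the core idea boiled down to a few lines of plain Python. This is a minimal Karplus-Strong-style sketch, not a faithful transcription of pluck.c (which adds pick position, damping filters, etc.), and all the names here are mine:

```python
from collections import deque
import random

def toy_pluck(freq, duration, samplerate=44100):
    # The delay line length sets the pitch: one period of the string.
    n = int(samplerate / freq)
    # Seed the line with a noise burst: this is the "pluck".
    delay = deque(random.uniform(-1, 1) for _ in range(n))
    out = []
    for _ in range(int(duration * samplerate)):
        s = delay.popleft()
        out.append(s)
        # Reflect the sample back into the line through a two-point
        # average (a mild lowpass) plus a small loss, so high partials
        # decay faster than low ones and the string "rings down".
        delay.append(0.996 * 0.5 * (s + delay[0]))
    return out

samples = toy_pluck(220.0, 2.0)  # ~220 Hz pluck, two seconds of floats in [-1, 1]
```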

I wonder how similar this implementation is to the version used for David Jaffe’s Silicon Valley Breakdown?

Anyway, let’s discuss software implementations of physical modeling synthesis!


#2

great!
i like what he says in the video at 1:56 about the instrument builder and “defining carefully the domain in which you want to work” regarding sound synthesis
and, in addition, being composer and performer; a fluid combination of these three and you’ll master electronic music :wink:


#3

Playing around with this some more with a friend of mine, we tried seeding the delay line buffers with different impulses, and it’s a very promising way to modulate the sound!

We were trying to work out a way to modulate the parameters within a given note, but the buffer lengths, the base frequency, and the behavior of the feedback (writing reflections back into the buffers) are so interdependent that it didn’t seem to make sense… but playing with the initial impulse really leads to cool variations in the sound.

Here’s a square wave at 2x the frequency of the delay line buffers, for comparison:
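In code terms, the only thing that changes is the seeding step. Using the toy loop from the first post (same caveats), something like:

```python
import math

def square_seed(n, cycles=2):
    # Fill a delay line of length n with a square wave; cycles=2 puts
    # the square at 2x the buffer's fundamental frequency.
    return [1.0 if math.sin(2 * math.pi * cycles * i / n) >= 0 else -1.0
            for i in range(n)]
```

Everything downstream stays the same; the initial buffer is the excitation spectrum, and the feedback loop just filters it down over time, which is (I think) why the choice of impulse matters so much.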

He also had a really cool idea: add additional pairs of strings on top of the fundamental string at integer multiples (so, harmonics), but don’t seed the extra strings with an impulse at all; instead, leak the reflections from the fundamental string back into the extra resonator string buffers, and then just sum them all… (very curious to try that out…)
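Here’s a rough, untested sketch of the shape of that idea, in the same toy style as before: one plucked string, plus silent resonator buffers at integer multiples that only ever receive a small leak of the fundamental’s reflections (the harmonic choices and leak amount are arbitrary):

```python
from collections import deque
import random

def sympathetic_pluck(freq, duration, harmonics=(2, 3, 4),
                      leak=0.05, samplerate=44100):
    n = int(samplerate / freq)
    # The fundamental string gets the noise-burst pluck...
    fundamental = deque(random.uniform(-1, 1) for _ in range(n))
    # ...while the harmonic strings start silent: they only ever ring
    # sympathetically, via the leak below.
    resonators = [deque([0.0] * (n // h)) for h in harmonics]
    out = []
    for _ in range(int(duration * samplerate)):
        s = fundamental.popleft()
        fundamental.append(0.996 * 0.5 * (s + fundamental[0]))
        mix = s
        for res in resonators:
            r = res.popleft()
            # Self-feedback plus a little of the fundamental's reflection.
            res.append(0.999 * 0.5 * (r + res[0]) + leak * s)
            mix += r
        out.append(mix / (1 + len(resonators)))
    return out
```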


#4

While this synth uses a bar (rather than string) model, it has a very nice implementation of a sympathetic vibration model similar to what you describe (and, oddly enough for a bar model, you can get really nice bowed string sounds with the sustain cranked up).

https://physicalaudio.co.uk/PA3.html

Also, just wanted to drop a link here to the Max resources I mentioned in the other thread:


#5

Sorry if this is a dumb question: does the bar model also use delay lines à la waveguide synthesis? JOS calls out two general approaches here: ‘lumped models’ and ‘distributed models’, the latter being delay-based. Later on he makes it sound like the approach is similar, but uses “meshes” of waveguides. (Whatever that might mean!)


#6

Unfortunately I don’t know any more about how it works than the tiny bit of description available on their web site.


#7

Dang, it’s really nice-sounding! Thanks for the hat tip!


#8

The Faust examples are another rich resource to mine for physical modelling synthesis. There are lots of bars, pipes, strings, etc. to look at, and there is an (optional) automatic MIDI interface giving you a certain kind of n-voice polyphony for free…

There is a really cool bowed string model in there which does some really interesting things as you modulate, e.g., ‘bow pressure’ or ‘bow position’ (or the digital simulation thereof). The more I messed with it, the more I thought MIDI sucks for controlling something like this…


#9

MPE helps quite a bit!


#10

Yuh, interesting. I was reading more about MPE and thinking about the state of things. Seems to me the needs of software models can be met by MPE (by hook or by crook).

But maybe the MIDI paradigm is a bad mental model for software instrument designers; perhaps one should think first in terms of an OSC interface representing the physical model, and only then define as good an MPE mapping as can be achieved.

@zebra’s pitch representation standard on aleph is really good; it definitely encourages/enables ‘controller software’ designers to think outside the 12-semitone box:

  • 16-bit log ‘Hz’ quantity (iirc 256 steps per log semitone)
  • 16-bit log ‘tune’ quantity (a linear scale going up 4 octaves)

For a string model, why shouldn’t ‘pluck’, ‘bow’, and ‘damp’ gestures operate on a single stimulated string? It’s very possible to imagine how this could be implemented as a monome grid controller ‘app’.
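With something like python-osc, prototyping that gesture-level interface is pretty trivial. The address scheme below is invented for the sketch, and the receiving model is left as an exercise:

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # wherever the model listens

# Gestures address one persistent string; nothing here spawns a "note".
client.send_message("/string/1/freq", 220.0)        # unquantized pitch, in Hz
client.send_message("/string/1/pluck", [0.8, 0.3])  # strength, pick position
client.send_message("/string/1/bow", [0.5, 0.1])    # pressure, velocity
client.send_message("/string/1/damp", 0.9)
```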

Sequencing these kinds of commands is another thing I want to experiment with in the future, and I have a good framework to start from (a generalised sequencer which currently juggles mostly MIDI, but can be adapted to OSC with a little more work).


#11

Pitch bend is a 14-bit value, and MPE works today. Hell, it worked decades ago if you wanted it to; it’s more of a convention than any new technology.

So, sure, you could probably one-up MPE for a singular custom instrument, but for things that need to interoperate with other things, it’s not a bad choice. It seems to work pretty well for the Haken Continuum, and I’m hard-pressed to imagine a more expressive controller.

But that being said, I wouldn’t want to stand in the way of progress! If I’m missing your point, please let me know.


#13

Just to weigh in from a similar perspective, I find the note paradigm that’s baked into MIDI not really conducive to dealing with sound-events as sound-mass, for lack of a better way to put it. It’s certainly possible to shoehorn a lot into the MIDI standard by combining keyboard-centric affordances like aftertouch & pitch bend with CC modulation, but I agree that, at least for my purposes, adding MIDI support is something I’ve done on top of a core design that looks more like OSC parameter bundles and messaging.

For me, the “note” concept in MIDI is something I’ve found I need to work around; it doesn’t make a whole lot of sense when, say, orchestrating events at the microsound level, which may have various hierarchies of abstraction that don’t map naturally to a note-centric design (e.g., the relationship between particles & clouds in a granular synthesis context).
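As a concrete illustration of the difference, here’s roughly what I mean by a parameter-bundle update, sketched with python-osc (the cloud addresses and parameters are invented for the example):

```python
from pythonosc import osc_bundle_builder, osc_message_builder
from pythonosc.udp_client import SimpleUDPClient

def msg(address, *args):
    builder = osc_message_builder.OscMessageBuilder(address=address)
    for arg in args:
        builder.add_arg(arg)
    return builder.build()

client = SimpleUDPClient("127.0.0.1", 9000)

# One atomic update to the whole cloud: no note-on/note-off anywhere,
# just parameters at whatever level of the hierarchy makes sense.
bundle = osc_bundle_builder.OscBundleBuilder(osc_bundle_builder.IMMEDIATELY)
bundle.add_content(msg("/cloud/1/density", 120.0))     # grains per second
bundle.add_content(msg("/cloud/1/grainsize", 0.04))    # seconds
bundle.add_content(msg("/cloud/1/pitch/spread", 0.7))  # per-particle jitter
client.send(bundle.build())
```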


#14

Totally acknowledge that this is the case. That being said, it’s possible to do with existing technology that is already widely deployed.


#15

Is it too late to refine my original stance of ‘MIDI sucks at this’?

Mpe mappings for those Faust physical models could be quite interesting and fun I guess!

Unfortunately I don’t have a suitable mpe device to play with…

Coming at it from the perspective of my sequencer, adding a four-voice virtual MPE device to the rest of my ‘virtual MIDI rack’ is probably better than disappearing into a black hole of infinite OSC possibilities (thinking about the possibilities there makes my head hurt).


#16

:smiley:

I’ve been through that same merry-go-round of a thought process.

I do wish there were more affordable MPE options, but there are more options now, and at lower costs, than when I started.

The ROLI Lightpad Block M is probably the most affordable at the moment. I haven’t tried one. The first-gen Lightpad Blocks were pretty universally panned as unexpressive, but the M iteration purportedly made the surface much more responsive, and I have heard some positive reviews. It also uses an isomorphic grid layout, which I feel is more amenable to microtonal/non-12-TET music than any piano-esque layout.

If you can spring for a Linnstrument eventually, I highly recommend it. I absolutely love my Linnstrument and expect to be learning more about how to play it for years to come. The only instrument that tempts me to move on is the Haken Continuum, but I may never feel that flush.

There are other affordable options I have given less thought to: the Joué, Artiphon, and K-Board Pro, for instance. I wonder if there are others? I also wonder who’s going to be the first to come up with an affordable DIY approach…


#17

I wonder if Haken continua are distinguished from other kinds of continua by containing an incompressible surface :thinking: Sorry, this is a completely contentless reply…


#18

:laughing:

Funny, but the surface is compressible, so that you can actuate the Hall effect sensors.

More detail here:


#19

MPE is useful for “expressive notes” but as @lettersonsounds points out there are a lot of possible performance systems for which discrete notes are not a good mediating layer.

In my thesis work that led to the Soundplane, I hooked the surface of my prototype controller directly to a 2D waveguide drum model, with both damping and excitation at each of 8x8 points, updated at around 1 kHz. Despite the limited spatial resolution, it was playable in ways that collections of keys just aren’t. The hand-drumming gesture of tightening the drum by pressing with one hand while striking with the other is one example.

More info: https://madronalabs.com/media/randy/RandallJones_MSc_FINAL2.pdf
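For anyone who wants to poke at the idea, here’s a much-simplified sketch of a rectilinear 2D waveguide mesh driven by a pressure matrix. The scattering junction is the standard 4-port one, but the excitation/damping coefficients and the control mapping are placeholders, not what the Soundplane work actually used:

```python
import numpy as np

class MeshDrum:
    """Toy rectilinear 2D digital waveguide mesh, driven each tick by a
    pressure matrix from a control surface."""

    def __init__(self, size=8):
        # Incoming wave variables at each junction, one plane per port.
        self.fromN = np.zeros((size, size))
        self.fromS = np.zeros((size, size))
        self.fromE = np.zeros((size, size))
        self.fromW = np.zeros((size, size))

    def tick(self, pressure):
        # Junction pressure: the standard 4-port scattering relation,
        # with the surface's pressure matrix injected as excitation...
        pj = 0.5 * (self.fromN + self.fromS + self.fromE + self.fromW)
        pj += 0.1 * pressure
        # ...and firm touches damping the junctions under them.
        pj *= 1.0 - 0.2 * pressure
        # Outgoing wave on each port = junction pressure minus incoming.
        toN, toS = pj - self.fromN, pj - self.fromS
        toE, toW = pj - self.fromE, pj - self.fromW
        # Propagate one sample: each port's input is the neighbor's
        # opposite-port output; edges reflect with inversion.
        self.fromN[1:, :] = toS[:-1, :]
        self.fromN[0, :] = -toN[0, :]
        self.fromS[:-1, :] = toN[1:, :]
        self.fromS[-1, :] = -toS[-1, :]
        self.fromW[:, 1:] = toE[:, :-1]
        self.fromW[:, 0] = -toW[:, 0]
        self.fromE[:, :-1] = toW[:, 1:]
        self.fromE[:, -1] = -toE[:, -1]
        return pj.mean()  # crude mono pickup

drum = MeshDrum(size=8)
strike = np.zeros((8, 8))
strike[3, 4] = 1.0  # one brief touch near the middle
silence = np.zeros((8, 8))
out = [drum.tick(strike if i < 8 else silence) for i in range(44100)]
```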


#20

@randy, wouldn’t you say that a continuous playing surface, such as that presented by the Madrona Soundplane, can make use of MIDI note+pitch bend in a pitch-unquantized manner that de-emphasizes the role of 12-TET for all practical purposes?

Here’s Dolores Catherino (of polychromatic music fame; it’s a system of microtonal composition using tone “colors” to further divide the octave) demonstrating on a Continuum at 1:58.

I can configure the Linnstrument such that the pitch bend is present at the onset of any note, simulating unquantized pitch.
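The arithmetic for that is simple enough. A sketch, assuming the receiver uses the common MPE per-note bend range of ±48 semitones:

```python
import math

def freq_to_note_and_bend(freq, bend_range=48.0):
    """Map an arbitrary frequency to (MIDI note, 14-bit pitch bend),
    given the receiver's per-note bend range in semitones."""
    semitones = 69 + 12 * math.log2(freq / 440.0)
    note = round(semitones)
    # 8192 is center (no bend); full scale covers +/- bend_range.
    bend = 8192 + round((semitones - note) / bend_range * 8192)
    return note, bend

print(freq_to_note_and_bend(432.0))  # a bit under A4: (69, 8138)
```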

These seem to me like simpler (and already solved) problems than trying to get multiple people to agree on a new OSC grammar, but maybe I’m confused.

I guess I’m also a little confused as to what the significance of pitch quantization (or lack thereof) is to the expressiveness of physical modeling.


#21

I wasn’t talking about pitch quantization at all. By “discrete notes” I mean notes that are bounded in time by a start time and a stop time. In the paper I present an approach for using continuous 2D signals as controls for physical models instead. They are an especially good fit for 2D waveguides in a system where they excite as well as damp the model. Other less physically motivated systems are possible too.

What happens when you fade from a firm touch to a light one to just a scratch? Or play by varying the shape of just one touch, or merging two touches into one? By keeping the entirety of the sensor signal, a 2D pressure matrix over time, and applying it to the model we can explore different possibilities for connections without having to define what threshold of pressure or velocity constitutes a note or how to represent a note’s area. The equivalent thing for a keyboard would be to record the displacement of each key over time.

I like how she compares the pitch resolution of the Continuum vs. Seaboard in this video—I never noticed that limitation of the Seaboard before.