Quoted from Jliat’s post on Special Interests forum (won’t link as I think 90% of you will prob be happier never looking on that forum of power electronics edgelords), this sounds interesting:
Recently involved with a comp using segmod, software for PC/Mac which writes sound files.
You can download the program here https://doebereiner.org/segmod/Segmod2.zip
Segmod is a non-standard sound synthesis that embraces the discrete nature of digital sound. All sounds created with Segmod result from the concatenation of simple periodic waveforms, such as sine, triangle, and square waves. The sixteen contributing composers have employed a vast array of different compositional, aesthetic, and technological strategies, ranging from inaudible sounds, to neural networks, chaotic functions, careful micro-montages, and analysis-resynthesis techniques. While the results differ widely in sound, all lead back to the idea that synthesis can be seen as a form of composition.
I have 12 freebies (digipack CDs); if anyone wants a copy (free), PM.
github too: https://github.com/lucdoebereiner/segmod
haven’t tried it yet but hope to soon
This is great! Gonna take some time to really explore this one – I love working with text files! The compilation is really interesting too…
always exciting to see a new synthesis technique out there. this one is particularly juicy.
but my question is, what qualifies as “non-standard”? i understand that if the code doesn’t attempt to approximate the properties of acoustic instruments then it’s “non-standard”…but then it follows that most synthesis techniques are non-standard, and this term isn’t as widely employed as it should be.
I’ve seen synthesis divided most usefully into camps of “compositionally motivated” vs mimetic. Meaning there is a whole school of synthesis with the goal of creating sounds we know from nature, while “compositionally motivated” synthesis is concerned with exploring synthetic sound without attempting to model or reproduce natural or instrumental sounds per se. Obviously there’s plenty of overlap in techniques, but for me that division is more useful. Luc Döbereiner is kind of famous in this world – he might have coined the term “compositionally motivated sound synthesis” in this paper: https://www.researchgate.net/publication/268264612_Compositionally_Motivated_Sound_Synthesis
At any rate it’s a good tour through the history of sound synthesis with an emphasis on folks in this camp like Xenakis and Gottfried Michael Koenig. Odd that the link to the PDF on his website is broken; I’ll see if I can dig up a copy from a backup… this is also relevant, where he uses the term “nonstandard”, which I agree is a little misleading: https://www.mitpressjournals.org/doi/pdf/10.1162/COMJ_a_00067
I wonder how the results differ from other wave chopping synthesis techniques like what’s used on the Dove WTF oscillator https://dove-audio.com/wtf-module/ or the MOK Waverazor https://mok.com/waverazor.php
This doesn’t do any waveshaping from a quick look over the code. It’s concatenating basic waveshapes at a discrete frequency based on some procedurally generated score. Could think of it like a microsound tracker, maybe.
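The concatenation idea can be sketched quickly. This is a rough Python illustration of the principle (single cycles of basic waveforms glued end to end, no mixing or envelopes), not segmod's actual code; the sample rate and "score" format here are my own assumptions.

```python
import math

SR = 44100  # sample rate (an assumption; segmod's actual rate may differ)

def cycle(freq, shape="sine"):
    """One period of a basic waveform at `freq`, as a list of floats."""
    n = max(1, round(SR / freq))  # samples per period, discretised
    if shape == "sine":
        return [math.sin(2 * math.pi * i / n) for i in range(n)]
    if shape == "square":
        return [1.0 if i < n // 2 else -1.0 for i in range(n)]
    raise ValueError(shape)

# A "score" here is just a sequence of (frequency, shape) segments;
# the output is their plain concatenation -- the microsound-tracker idea.
score = [(220, "sine"), (440, "square"), (110, "sine")]
samples = [s for freq, shape in score for s in cycle(freq, shape)]
```

Because each segment is a whole discretised period, the output frequency grid is quantised by the sample rate, which is part of what makes the approach feel so "digital".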
As an aside I’m partway through the bandcamp comp now and it’s fantastic, thank you for sharing this project! And OF COURSE Jliat’s score is all rests, haha. Makes a nice interlude.
I appreciate this very informative response. There is certainly a lot of overlap between the “compositionally motivated” and “mimetic” camps, especially from a historical perspective. It seems to me that the distinction rests largely in the hands of the user: do you use your FM synthesis to sound like a trumpet, or…to sound like a nightmare or a memory or an alien language? Following that tangent, is it more interesting to think about synthesis techniques in terms of what they do, as opposed to their originary principles? Whence the motivation to make standard/non-standard distinctions?
I can’t see the contents of Döbereiner’s paper, but the emphasis on composition sounds like it renders the technique in service of art or creative practice…whereas for me the beauty and relevance of this Segmod technique lies in the perceived intimacy with digital processes.
The term “microsound tracker” is great, by the way, and thank you @Cementimental for sharing.
judging from the clicks i hear when I skip thru it, it’s of course DC offset ‘silence’ ha
Ah ha! Good call - it’s actually a minute-long single cycle of a sinewave. Nice.
As I am very lazy I decided to write a small DSL for segmod so I don’t have to type a lot. Main idea was that I can write something like:
[<short 1> <short 2>]*2
which would output the short duration at index 1, then the short duration at index 2, and repeat that sequence twice:
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2
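The repeat semantics can be sketched with a tiny expander. This is a hypothetical illustration of the `[…]*n` idea only, not the actual wave-dsl grammar (which is built with Instaparse); the `<short n>` lookups are omitted, so tokens expand to themselves.

```python
import re

def expand(src):
    """Expand innermost [...]*n groups until none remain; return token list."""
    pattern = re.compile(r"\[([^\[\]]*)\]\*(\d+)")
    while True:
        m = pattern.search(src)
        if m is None:
            return src.split()
        body, n = m.group(1), int(m.group(2))
        # Splice n copies of the group body back into the source string.
        src = src[:m.start()] + " ".join([body] * n) + src[m.end():]

print(expand("[1 2]*2"))  # → ['1', '2', '1', '2']
```

Because only innermost groups are matched, nesting like `[[1]*2 3]*2` also expands correctly, from the inside out.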
The whole syntax is outlined in the source.
live demo is here: http://firmanty.com/wave/
Hopefully someone else will find this usable (I was also thinking of maybe generating the output audio file in the browser?); feel free to fork the project.
Makes me wonder if we can squeeze segmod into WebAssembly and turn it into a web playground
I think even if WebAssembly compilation fails, the whole segmod source (includes and all) is around 300 lines long, so it should be fairly straightforward to port. I haven’t worked with creating raw audio buffers in JS before, but from a quick skim it looks like a plain WebAudio AudioBuffer should do the trick. Another idea I had for a segmod extension was to allow users to draw waveforms, which could then be used alongside the inbuilt ones. And if I had to choose my favourite Clojure library, I think Instaparse would be the choice.
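The "raw buffer" step is language-agnostic: you fill an array of floats in [-1, 1] and hand it to the output device (in WebAudio, that would be an AudioBuffer channel). As a sketch of the same step using only Python's stdlib (my stand-in for the JS version, writing a WAV instead of playing back):

```python
import math
import struct
import wave

SR = 44100

# Fill a raw sample buffer (floats in [-1, 1]) -- the same data you would
# hand to an AudioBuffer channel in WebAudio -- here one second of 440 Hz.
samples = [math.sin(2 * math.pi * 440 * t / SR) for t in range(SR)]

# Write it out as 16-bit mono PCM.
with wave.open("out.wav", "wb") as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 16-bit
    w.setframerate(SR)
    w.writeframes(b"".join(
        struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
        for s in samples))
```

The clamp-and-scale line is the only format-specific part; the float buffer itself is what a browser port would compute in WASM and copy across.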
I’m currently learning Rust, mostly for use with WebAssembly. Not sure if I’m already familiar enough with it to port the code, but it sounds like a nice project for the upcoming days of isolation.
Almost done with the webassembly integration! Tomorrow morning I’ll do the last bit (playback) and hook it up to your DSL editor!
Oh I think this would be very fun to do in Rust!
Just took some time to listen through the compilation, really good stuff! I was actually quite surprised by the diversity of sounds, considering the restricted approach to sound design and composition.
I really like those snappy percussive sounds on FREQUENCIES III:
Does anyone have an idea how something like this can be done with segmod? The album description on BC says:
Here, all the used frequencies are octaves of one another. They are arranged in such a way that the sum of their wavelengths equals the wavelength of the lowest fundamental frequency used. This arrangement gives rise to a rhythmic structure that is indeed danceable, especially since beats repeatedly emerge whose timbre is strongly reminiscent of a bass drum.
Not sure what exactly this means though. Maybe some kind of multi-layer technique, mixing several individual tracks created with segmod? Also, at least some of the percussive hits sound like they have a frequency envelope - I guess this could be emulated by continuously lowering the frequency of the waveform…
Fascinating stuff - just the kind of rabbit hole I’ve been waiting to fall into!
Ok! I’ve managed to get the audio playback working! Now working on the missing features and adding wave-dsl in.
Quick UI idea, inspired by Orca:
Nice work! I really like the minimal UI design. Do you already have plans on how to implement it? Text editors can be quite a pain, even with sophisticated libraries like draft.js…
This WASM port of Vim looks pretty interesting; seems like they are rendering everything directly into an HTML canvas, so no broken-by-default <textarea> here. Haven’t figured out yet where the framebuffer’s data is coming from though…
twenty characters of appreciation of the minimal UI idea <3 Now I am starting to think this would look really nice as a terminal app built using ncurses or something similar.
Hah yeah that has crossed my mind as well. Would be fun with trikl, but I don’t know much about cross platform audio.