i agree. there’s some slippage (which is fine.) i jumped in the thread on a whim, b/c i saw @csboling’s fun salty beep post and the response, and thought it would be a fun counterpoint exercise to try to implement a major scale of sinewaves in C, concisely and clearly, and see how long it would take with no dependencies (answer: maybe 10-15 minutes. this would be a great interview problem.)
following is a rambly Too Long bunch of thoughts that will probably close out my involvement here for the time being.
as often happens, i think the discussion has uncovered some important points here, one of which is that “programming” or the verb “to program” can encompass a variety of tasks.
to me, “programming” has always been a sort of fundamental tool for implementing parts of all kinds of systems. in my own experience those have included audio hardware / firmware / software devices, but also quite a few scientific research projects and some other kinds of HW/SW products.
the more projects i’ve done, the more my concept of “programming” has broadened. it is a craft that encompasses many considerations beyond the algorithmic. projects have a tendency to grow in scope. the more stuff your platform can do, the more true this is.
e.g.:
ok let’s make an interactive sound sculpture…
what if we add a bluetooth chipset to that… and a camera… and a webserver…
and and… and suddenly my architecture has grown from having to process a stream of samples, to a dizzying tapestry of processes, threads, data streams, platforms, protocols, and libraries. that’s where things fall apart without serious attention to craft. (so maybe what i need to know is “how do you learn software architecture” or something.)
maybe a more relevant point: how to learn really depends on what kind of goals you want to be able to achieve. the assumption is that we’re talking about audio synthesis and processing. clearly, there are many tools that do low-level audio manipulation well. the only reason i can see to go to the sample level is to experiment with algorithms. ironically, as @PaulBatchelor has pointed out, probably the best way to experiment with audio algorithms programmatically is with a high-level and possibly domain-specific language. there are some standouts:
- Faust in particular is kind of a killer app if you are comfortable with the functional paradigm and expressing algos as signal flow. (TBH, i haven’t personally gotten as comfortable with it as i’d like for experimenting at low levels; usually i have to have a super clear idea of what i want to do first. for building strings of blocks and cross-compiling them, it’s great.)
- Octave/Matlab is the de facto standard for a lot of academic and engineering work, and comes with many tools for visualization, numerical analysis and optimization, &c. (SciPy is also becoming very popular; i still find python sort of a drag to work with personally). it’s a one-liner to export an Octave matrix to a soundfile. (audiowrite('foo.wav', data, samplerate)). pretty much everything i do algorithmically starts in Octave before becoming C code; i’ll write DSP test cases in Octave and re-use them on the C output.
- C/C++ get honorable mention from me here because they are the bedrock languages, learning them is good for you, and they can extend all those other environments. there are many, many ways to deploy C in audio programs without dealing with the boilerplate: write a JACK program, VST plug, Audio Unit, JUCE app, csound opcode, PD external, use libsndfile, use soundpipe, alt firmware for your favorite Eurorack computer, &c &c.
(i’ll emphasize again that the hard part here is understanding what operations produce what results, and not actually performing the operations.)
if you don’t want to work at the low level of the DSP (you want to string together building blocks and probably control them with musical logic), then there are the usual suspects, largely to taste:
- textual (csound, supercollider, chuck, sporth, &c)
- patching (pd, max, reaktor, bidule, &c)
you absolutely do not have to know much DSP math to effectively use these things; that is of course their basic reason for existence. the further up you go in terms of abstraction, the less this activity resembles general-purpose programming. (which is fine! i’m all for stepping away from, like, Instruction Set paradigms.)
the core concepts of “programming” for the purpose of processing audio can be absorbed in a day. digital audio is just a stream of numbers. loop over them and perform a handful of arithmetic operations and “that’s it.”
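e.g., a gain change is literally one multiply per sample. a minimal sketch (the function name is my invention):

```c
/* "that's it": a gain change is one multiply per sample */
#include <stddef.h>

void apply_gain(float *buf, size_t n, float gain) {
    for (size_t i = 0; i < n; i++)
        buf[i] *= gain;   /* gain = 0.5f is roughly -6 dB */
}
```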
nobody does everything “from scratch”; if they tried, very few people would ever get anything done. PD uses Tcl/Tk to implement the UI, Max uses JUCE. (Tcl/Tk is a scripting language with a GUI toolkit built in C; JUCE is a C++ framework wrapping native rendering with a lot of helpful abstractions. C and ASM are abstractions. it’s turtles all the way down. i haven’t even seen applications of substantial scope written in ASM since the 80s, and frankly… those people were maniacs. (ok, shlisp is done in ASM, and peter B is also kind of a maniac, but that’s a set of opcodes and much simpler.))
for all my evangelizing about learning C, it’s not because i think doing stuff at the low level is particularly smart or cool. usually it’s not the most efficient way to do some particular thing (because there is a domain language / framework / library / program / device that already does that thing.) i recommend it for pragmatic reasons, because knowing C/C++ (and now, JavaScript) allows you to see into such a large amount of what’s going on in the digital world.
if the context wasn’t specifically about audio, i would say just learn (modern) javascript. syntax is clean, there are tons of resources and tools, computers are fast and it’s not just for websites anymore (e.g. Orca).
in the context of audio, learn C. you need the performance and it will help you customize everything from Max to modular.
“real programmers set the universal constants at the start such that the universe evolves to contain the disk with the data they want.”