I believe it’s only fragment shaders, not vertex shaders. I can recommend Quil (also Clojure) for live coding 2D graphics; it can be made audio-reactive through Overtone. Here’s an interactive tutorial in the web browser.

1 Like

Sure!

Pippi is getting better, and I’ve learned a lot working on it, but ultimately I don’t know what I’m doing and have been having fun experimenting. It’s a set of Python modules that let you write programs to produce rendered sound. There is also a “console” program which lets you use Pippi to write instrument scripts that produce a bit of audio, anywhere from a few cycles to a few minutes long, which can then be streamed back in different ways (at a latency that corresponds to the amount of work involved in rendering the audio – quite workable in practice, but it might put some off). Specifically, there are sequencer constructs that let you orchestrate the rendering of many sounds into phrases, loops, and whatnot. It’s a labor of love by a musician, not a computer scientist or a digital signal processing expert.
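The render-then-stream workflow described above can be sketched in plain Python. This is just an illustration of the idea, not Pippi’s actual API: a sound is computed offline into a buffer of samples, then handed off for playback – here by writing a WAV file with only the standard library.

```python
# Sketch of the "render offline, then stream back" model (NOT Pippi's API):
# compute a whole buffer of samples first, then write it out for playback.
import math
import struct
import wave

SR = 44100  # sample rate in Hz

def render_sine(freq, seconds):
    """Render a sine tone offline into a list of float samples in [-1, 1]."""
    n = int(SR * seconds)
    return [math.sin(2 * math.pi * freq * i / SR) for i in range(n)]

def write_wav(path, samples):
    """Pack float samples into a 16-bit mono WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit
        w.setframerate(SR)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        w.writeframes(frames)

buf = render_sine(440, 0.5)   # half a second of A440
write_wav("tone.wav", buf)
```

The latency the post mentions falls out of this model naturally: playback can only start once `render_sine` has finished computing the buffer.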

ChucK is a fresh pass at a domain-specific language for music, expertly executed by Ge Wang and Perry Cook – the latter of whom is something of a legend in computer music. The CLI interface isn’t so different from Pippi’s; I got a lot of inspiration from the workflow, but there’s no concept of sessions and a shared parameter space in ChucK as far as I know. Of course, those things are implementable in ChucK too. ChucK also has a ton of cool front-end programs that let you visualize your code and manage bits of running scripts in interesting ways. Here’s a demo video of where it was at 10 years ago:

ChucK is a well-thought-out tool that has pushed computer music further; Pippi is a party and it would be fun if you could attend.

2 Likes

Oh yeah, and that’s not to mention that ChucK has analysis tools built in, plus Perry Cook’s entire Synthesis Toolkit and all the classic unit generators in there – it’s just way more sophisticated in general.

I’m working on integrating Paul Batchelor’s Soundpipe library into Pippi though, which will open things up a lot – he’s ported a ton of stuff over from Csound and elsewhere (there’s a paulstretch algo!) and there’s just so much to draw from… Here’s Soundpipe, it’s super cool: https://github.com/PaulBatchelor/Soundpipe

Paul should be on this forum too.

3 Likes

Wow, excellent replies.

I’m gonna dig through all of this… it makes me rethink a few things about my plans.

By the way, for anybody considering ChucK, it seems Kadenze runs courses on it pretty regularly.

3 Likes

Thanks hecanjog for introducing me to this place… looks cool!
I’ll quickly add a few of my own thoughts:

  • Sonic Pi and Overtone both use SuperCollider under the hood, which is a crazy sophisticated language for algorithmic composition and supports live coding.

  • ChucK: the distinctive thing about it is that it’s a language built around the notion of time. Sample accuracy is great, and it handles “concurrent” events really well (not true concurrency with threads, but sample-accurate concurrency with “shreds”). It also looks a little bit like C, which can be quite nice at times. Also, puns. So many puns.

  • While I haven’t seen it used much, Csound 6 has the ability to re-evaluate code. Here is a video I made years ago demonstrating this: in the right hands, Csound turns into a very powerful synthesizer…

  • Gibber is really neat. One thing I really like about it is how the editor itself works with the code you are writing; it makes the code “alive” in a sense. He also makes an effort to perform with his software (something I need to do more of).

  • For live coding and composition, I hack on my own quirky little language called Sporth, which is built on top of my DSP library Soundpipe.
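The “shreds” idea in the ChucK point above is worth a tiny illustration. Here’s a toy sketch in Python – an assumption of mine, nothing to do with ChucK’s real VM – of sample-accurate cooperative concurrency: each shred is a generator that yields how many samples to wait, and one scheduler resumes them in time order on a single logical sample clock, so events interleave with sample precision rather than thread-scheduler precision.

```python
# Toy sketch of ChucK-style "shreds" (assumed, not ChucK's implementation):
# cooperative tasks scheduled on one logical sample clock.
import heapq

SR = 44100  # samples per second of logical time

def metro(period_samples, label):
    """A 'shred' that fires `label` once every period_samples."""
    while True:
        yield period_samples, label

def run(shreds, max_samples):
    """Resume shreds in time order until max_samples of logical time pass."""
    log = []
    queue = [(0, i, s) for i, s in enumerate(shreds)]  # (wake_time, id, gen)
    heapq.heapify(queue)
    while queue:
        now, i, s = heapq.heappop(queue)
        if now >= max_samples:
            break
        wait, label = next(s)          # shred fires, then asks to sleep
        log.append((now, label))
        heapq.heappush(queue, (now + wait, i, s))
    return log

# Two shreds with unrelated periods stay sample-locked to the same clock.
events = run([metro(SR // 2, "kick"), metro(SR // 3, "hat")], SR)
```

Real ChucK advances `now` by durations (`1::second => now`) inside each shred; the heap-of-wake-times above is just one way to model the same sample-accurate interleaving.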

8 Likes

Hi Mike! and Hi Paul! and Hi to anyone else I know in this thread whose handles I might not recognize.

A friend told me I should join this forum to announce a new live coding environment, gibberwocky, I’ve been developing in conjunction with Graham Wakefield. It plays off of ideas found in Gibber but is used to sequence / control Ableton Live. It also takes the “alive” code features (dynamic source code annotations / visualizations) that Paul mentioned quite a bit further than I had previously done in Gibber, including using sparklines to display the values of gen~ modulations.

In terms of features, a big advantage as compared to MIDI sequencing environments is that you can dynamically create gen~ graphs and assign them to control any parameter in Live. So, full floating-point resolution, audio-rate modulation.

Nice to find this thread here! And double thumbs up for all the other environments mentioned in this thread. One of the really amazing things about the live coding community is the plurality of environments that exist; in fact, there were at least four new environments introduced just last week at the international live coding conference. - Charlie

16 Likes

Welcome here! Funny enough, I started this thread right after seeing your video posted above! Inspirational work!

This is so cool! I love the in-editor visualisations! Please tell me this is not only for Ableton Live and that it can be used with anything MIDI/OSC! :smiley:

@saintcloud awesome, glad you’re enjoying Gibber and I hope gibberwocky proves to be as much fun. At this point, the gibberwocky beta’s (lack of) documentation definitely means some Gibber knowledge (in particular the pattern manipulation / sequencing) will help you get up and running faster. And I’m curious… anyone know if there is a video that prominently features Teletype in more of a live coding performance role? I would love to see that. There’s actually a fair amount of discussion in live-coding literature about overlap with modular synthesis.

@lijnenspel We have a WebSocket API that theoretically should enable other environments to hook up to gibberwocky. Basic sequencing would require two things:

  1. The host environment can send a request, on each beat, for the next beat’s messages. Perhaps this could simply be done with MIDI clock, though? In that case we’d just need a shim to translate the MIDI clock messages into the appropriate WebSocket messages… hmm, that might actually be easier than I expected now that I think it through.

  2. On the flip side, the host would have to accept timestamped messages and be able to sequence them accurately… again, maybe MIDI would be fine for this.
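As a rough sketch of the shim idea in step 1 – the message shape below is invented for illustration, not gibberwocky’s actual protocol – MIDI clock runs at 24 pulses per quarter note, so counting ticks lets a shim request the next beat’s messages exactly once per beat:

```python
# Hypothetical MIDI-clock-to-WebSocket shim (message format is made up,
# NOT gibberwocky's real protocol). MIDI clock = 24 pulses per quarter note.
PPQN = 24

class ClockShim:
    def __init__(self, send):
        self.send = send   # callable that would push a WebSocket message
        self.ticks = 0
        self.beat = 0

    def on_midi_clock(self):
        """Call once per incoming MIDI Timing Clock byte (0xF8)."""
        if self.ticks % PPQN == 0:
            # a new beat just started: ask for the following beat's messages
            self.beat += 1
            self.send({"type": "get_beat", "beat": self.beat})
        self.ticks += 1

# Simulate two beats of clock plus one extra tick.
sent = []
shim = ClockShim(sent.append)
for _ in range(PPQN * 2 + 1):
    shim.on_midi_clock()
```

Requesting a beat ahead of time like this gives the other side a full beat of slack to deliver timestamped messages, which is what step 2 then has to schedule accurately.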

The automatic code annotations (like for Euclid etc.) should work if the above conditions are met, but getting the sparklines to work would probably be really painful in most hosts. And even before that, you’d have to figure out how to easily and dynamically generate modulation graphs in the host. Not many will support this.

Are there specific other host environments anyone would be interested in using with gibberwocky? Thanks for the feedback! - Charlie

Could it send info to an OP-1?

[quote=“charlieroberts, post:31, topic:5032”]
anyone know if there is a video that prominently features Teletype in more of a live coding performance role?
[/quote]
Not to my knowledge.

Teletype has been my gateway into exploring this world. Before using it, I thought writing code for music was way beyond my capabilities.

I’d love to try other languages, but I’d also like to learn techniques and strategies that function “cross-platform” in some way and utilize them with TT.

1 Like

@glia Tidal-MIDI is definitely worth a look… it’s cross-platform in terms of OS and of course MIDI is fairly generic: https://github.com/tidalcycles/tidal-midi

@steveoath Does the OP-1 output MIDI clock or MTC? A quick search leads me to think it doesn’t… at least in the near future I’d like to avoid gibberwocky needing its own clock / scheduler.

1 Like

To be honest, I’m not sure, since I’m a recent owner. It can sync with Teenage Engineering’s Pocket Operator series (I think this may only be a feature of the leaked beta OS, though). Perhaps someone a bit more knowledgeable than I am could shed some light?

Gibber + p5 + Ableton / Max seems to be my particular rabbit hole. It’s a very powerful combination.

4 Likes

I tried to use it with kids too, but most of them were annoyed because most of the code has to be written in English (I’m French). Too bad :frowning: I hope one day I’ll get some of them hooked.

I decided to focus on Tidal for the next few weeks. With so many options, I realize some priorities must be set for me to actually make progress, and Tidal seems the most manageable and fun for what I’d currently like to do.

Part of what has me taking code more seriously now is the prospect of using really concise strings that generate something unique. In the past I assumed that the only way to build my own software sounds was to attain complete mastery over the chosen language. I guess that would be the best option if my mind worked that way, but I know myself. If learning code is similar to developing ability in spoken or written languages… I prefer basic conversation and translation skills over years spent achieving fluency.

Seeing the SuperCollider tweets (compositions confined to 140 characters) was further evidence. A while ago, Alex McLean also started a similar challenge involving Tidal.

Reverse engineering these tiny snippets and hearing immediate results for each adjustment is how I think I’ll learn most quickly. Just thought I’d share here in case anybody else who’s new might benefit from a similar approach.

5 Likes

Atom is my editor here.

1 Like

I just had a crazy weekend with gibberwocky too. Charlie is killing it.

2 Likes

Two I like:

2 Likes

@shreeswifty Pat, glad you’re liking it! Graham is also in the process of working on a gibberwocky backend that will enable live-coding of Max/MSP patches (as opposed to only supporting M4L).

@glia Glad to hear that Tidal is working out for you! It’s pretty amazing and has a really active community around it. In case you ever want to continue down the rabbit hole and make your own mini-language, here are some notes from a workshop that Graham and I (mostly Graham) recently led on the topic: https://worldmaking.github.io/workshop_iclc_2016/

3 Likes