Generative visuals: video, graphics, art, etc

Having seen things like the Ming, the Critter & Guitari video scope, and the Teenage Engineering OP-Z concept, I was wondering if there are software equivalents - even mobile “toys”. I guess some on here have had way more experience of this in their shows/art installations. I love the idea of feeding something an audio track and having it create an image that responds to it (like a visualiser in Windows Media Player, I suppose). The more glitchy and lofi the better.


::sets up binoculars and coffee to watch the thread unfold::


also interested!!


This looks promising (disclaimer - not tried it).
Open and customisable.

Very into this topic.

I’ve always loved Skoltz Kolgen after seeing them play at a festival. (What happened to them??)


And Ryoji Ikeda

Anyone with insight into generating visuals in this style, please chime in! He makes alternative audio and video modules.
And maybe ? (I guess one can do Ikeda-style vids.)
Now this is a good one: Chris Novello’s illucia


There’s an absolute tonne of great software that requires a lot of coding and hard work in order to generate visuals (e.g. VVVV or TouchDesigner).

Alexander Zolotov creates some amazing bits of software. One of them, PixiVisor, may be exactly what you’re looking for: something to quickly generate video from an audio input. No coding or messing about!


(available for iOS, Android, OSX, Linux and Windows)

PixiVisor is a revolutionary tool for audio-visual experiments. Simple and fun, cross-platform application with unlimited potential for creativity!
It consists of two parts: Transmitter and Receiver.
Transmitter converts the low-resolution video (stream from camera, static image or GIF animation) to sound in real time, pixel by pixel (progressive scan). So any image or animation can be transferred to the other devices through the sound.
Receiver converts the sound (from microphone or Line-in input) back to video. You can set the color palette for this video, and record it to animated GIF file.

Key features:
file formats supported by Transmitter: JPEG, PNG, GIF (static and animated);
real-time video export to animated GIF;
64 predefined color palettes;
input from camera (iOS, Android, Linux);
iOS: iTunes File Sharing;
iOS: Wi-Fi Export/Import (in the File menu of the Transmitter);
more functions in the next PixiVisor updates…

Examples of usage:
wireless Lo-Fi video transmission over audio;
video signal transmission through audio cable; you can then modify that signal by some mixers or audio FX processors;
sound visualization;
save any sound to animated GIF;
hide some images and animation in your music;
searching for hidden messages in the ambient noise; EVP (Electronic Voice Phenomenon), ITC (Instrumental Transcommunication);
something else…
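The transmitter/receiver idea is easy to sketch. This is not PixiVisor’s actual encoding (its real protocol and modulation are its own), just the bare principle of a pixel-per-sample progressive scan, in Python:

```python
# Sketch of the transmitter/receiver principle: the transmitter scans
# a tiny grayscale "image" pixel by pixel and emits one audio sample
# per pixel; the receiver turns the samples back into pixels.
# Illustrative only - not PixiVisor's real protocol.

def transmit(image):
    """Flatten an image (list of rows, pixel values 0-255) into
    audio samples in the range -1.0 .. 1.0, progressive scan."""
    return [px / 127.5 - 1.0 for row in image for px in row]

def receive(samples, width):
    """Reconstruct rows of pixels from the sample stream."""
    pixels = [round((s + 1.0) * 127.5) for s in samples]
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

image = [[0, 128, 255],
         [255, 128, 0]]
audio = transmit(image)
assert receive(audio, width=3) == image  # lossless round trip here
```

A real transmission would also need sync markers and would pick up noise on the way, which is exactly where the lo-fi charm comes from.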

Lastly, he has also created a very cool bit of tracker software called SunVox. Also multi-platform, and surprisingly deep and feature-rich given the low cost.


Random googling also brought up a visualizer called KUBUS

Kubus is a minimalist audio visualizer, written in C/C++. It essentially maps the audio buffer to a grid, which can create some really interesting lo-fi patterns. It also can do FFT analysis, as well as sibilance and RMS detection. More info can be found in the README on the github page.
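The “maps the audio buffer to a grid” idea is simple enough to sketch. This is my own reading of that description, not Kubus’s actual C code:

```python
# Minimal sketch of mapping an audio buffer onto a grid (my own
# reading of the Kubus description, not its actual implementation):
# each cell of a g x g grid takes its brightness from one sample of
# the current audio buffer.
import math

def buffer_to_grid(buf, g):
    """Map the first g*g samples (-1.0..1.0) to grid brightness 0..1."""
    cells = [(s + 1.0) / 2.0 for s in buf[:g * g]]
    return [cells[r * g:(r + 1) * g] for r in range(g)]

# A 440 Hz-ish test buffer at 44100 Hz produces banded patterns.
buf = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(16)]
grid = buffer_to_grid(buf, 4)
assert len(grid) == 4 and len(grid[0]) == 4
assert all(0.0 <= v <= 1.0 for row in grid for v in row)
```

Redrawn every buffer, the grid flickers in lockstep with the waveform, which is where those lo-fi patterns come from.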

I am pretty happy with how this turned out. Kubus was an assignment for class, so it finally gave me an excuse to look at OpenGL. As it turns out, OpenGL 1.1 isn’t as difficult as I pegged it to be, especially for doing the simple 2D animations that interest me. I also figured out that you can write GL code in C without writing a line of C++. This makes me very happy.

Kubus also gave me a chance to use some Soundpipe code and explore FFT libraries for Soundpipe. I am hoping to get some FFT-based effects into Soundpipe very soon!

I am pretty excited about how well GL seems to work. I’m hoping to wander down this path a little more and explore more ways to create audio-visual compositions and toys.

Oh, I also figured out how to get Cairo and GL talking together, but that is for another blog post.

More projects are on their way…


Maybe a bit off topic here, but I love the work redFrik (Fredrik Olofsson) makes with SuperCollider, for instance:



These look great. I love SunVox, so will def be giving PixiVisor a look.

Have also been chatting on Instagram with a guy who is in the process of developing a JavaScript (canvas API) visualiser called modV, which looks like it could be really nice. His username is 2xAA, and as he says, it’s not quite at version 1 yet.


i was really into doing generative video off of the monome grid for a while using processing and OSC. i don’t have much time to code these days but some of it survived:

the only time we ever did it live was in new mexico:

personally i’d love to see a node based version of this, there is so much good javascript these days i’d love to have a web version. don’t have time to port now but would love to hear if people would be into this…


You can make Ikeda-style video with GEM and Pd on Linux if you use pdp/PiDiP (Pure Data Packet & “PiDiP Is Definitely In Pieces”). I have a ton of video patches that react to sound and are similar to that.


Hello Lines, this is my first post here, although I’ve been lurking here for the past year or so.

Let this be the thread about generative graphics, animated or not. Let’s post yours, or your favourite artist’s work; discuss favourite tools, processes and resources.
I’ve seen a couple of threads here in the past, but they are quite old now. Hopefully I’m doing it right by creating a new thread instead of reviving an old one.

I’ll start: some years ago I had an idea of creating a small web service where you could generate and buy patterns/prints online. The project went nowhere for various reasons, mainly me being lazy. Yesterday I stumbled upon several pictures that I generated back then and thought I’d share them, instead of leaving them forgotten somewhere deep in the backup folders:

Also, drew a new one last night:


Cool thread! Here is some pretty old stuff I did, long ago when I had access to a laser cutter and was really into burning generative images into big pieces of plywood


Manfred Mohr and Clark Richert come most readily to mind.

Before Richert went out on his own he was a founding member of Drop City, a long-sustaining artists’ commune influenced by the ideas of Buckminster Fuller. The history of that alone is well worth investigating, as is that of Gerd Stern’s USCO collective.


Great stuff! I love generative design, especially when it is in the physical domain.
For now, unfortunately, I only work digitally, but I hope in the future to maybe work with small plotters.
Some of my works can be viewed here:

I managed to get great effects working this way:
generate images
convert them to audio
pass the audio through a synthesizer, filter, etc.
convert the audio back to an image on the computer
Example of such work:
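The four steps above can be sketched end to end. A one-pole low-pass filter stands in for the hardware synthesizer filter here (my own stand-in, not the poster’s actual chain):

```python
# Sketch of the image -> audio -> filter -> image pipeline. A simple
# one-pole low-pass filter stands in for the synthesizer; real
# hardware adds far more character. Illustrative only.

def image_to_audio(image):
    """Rows of 0-255 pixels -> samples in -1.0 .. 1.0."""
    return [px / 127.5 - 1.0 for row in image for px in row]

def lowpass(samples, alpha=0.5):
    """One-pole low-pass: eases each sample toward the previous
    output, which visually smears pixels along the scan direction."""
    out, y = [], 0.0
    for s in samples:
        y += alpha * (s - y)
        out.append(y)
    return out

def audio_to_image(samples, width):
    px = [min(255, max(0, round((s + 1.0) * 127.5))) for s in samples]
    return [px[i:i + width] for i in range(0, len(px), width)]

image = [[0, 255, 0, 255],
         [255, 0, 255, 0]]
glitched = audio_to_image(lowpass(image_to_audio(image)), width=4)
```

Swapping the low-pass for distortion, delay, or an actual hardware filter chain is where the interesting artifacts start.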

Also not really generative in the purest sense, but a great tool nonetheless: Paracosm Lumen -
It is an analog-style video synthesizer in software, and I managed to get great results using the demo version synchronized over MIDI with an Octatrack.

I would also be happy to hear some book recommendations when it comes to more advanced topics in generative graphics, because most of the books I have found focus on artists who want to create stuff programmatically, and I would like to find a book for programmers who want to create art :wink:


Great stuff, everyone!

Ha, I just had a safety introduction to the laser cutter at work a couple of weeks ago. Also had some plans for engraving some blind panels. Can’t work with aluminium tho, as I’d originally wanted - only acrylic/plywood.
Love your works, especially the first one. I have always debated whether or not to use simplex/Perlin noise or vector fields when I’m making something. It’s so ubiquitous, but so damn pretty at the same time!

Heard about Mohr for the first time in my life a couple of months ago - beautiful stuff as well. Gonna check out the other guy now.

I’ve also had my time opening pictures in Audacity and messing with them there, but never got any results that I liked. Also, sometimes you destroy the header of the image file and it simply won’t open anymore :slight_smile:
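One classic databending workaround for the broken-header problem is to leave the first N bytes of the file alone and only corrupt bytes after that offset. A minimal sketch, with a guessed 200-byte header size and a stand-in “file” (both illustrative; the real offset depends on the format):

```python
# Databending sketch: invert a handful of bytes past a fixed header
# offset so the file still opens. The 200-byte offset and the
# stand-in data are illustrative assumptions, not format-accurate.
import random

def glitch_bytes(data, header_size=200, flips=50, seed=1):
    """Invert `flips` distinct bytes past the header; deterministic
    for a given seed so a glitch you like is reproducible."""
    rng = random.Random(seed)
    out = bytearray(data)
    for i in rng.sample(range(header_size, len(out)), flips):
        out[i] ^= 0xFF  # inverting a byte always changes it
    return bytes(out)

data = bytes(range(256)) * 4          # stand-in for an image file
glitched = glitch_bytes(data)
assert glitched[:200] == data[:200]   # "header" survives intact
assert glitched != data               # pixel data got mangled
```

For real files you would read the bytes from disk and pick the offset per format; uncompressed formats like BMP degrade the most gracefully.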

Since you posted some animations, here are a couple of old ones of mine as well:



I ditched the idea pretty quickly though, because of the .gif format’s restrictions


There’s a Drop City documentary I’d love to see if anybody can locate a way to view it.

So much more expansive than the intentional community ideas that eventually became commonplace. Art (let alone generative art that makes use of computers) has too often become an afterthought in communities that are primarily focused on survival. (Most communes I visited in the 90s were lucky to have one or two underpowered beige boxes with perhaps a single dial-up modem between them.) I could never get inspired by subsistence as the highest goal for survival. And yet, I remain in possession of the notion that a more communal way of life will become necessary for sustainability of humanity.

Occurs to me that this probably deserves its own thread, though it has been touched on by many other threads… perhaps it’s part and parcel of agrarian interdependence.


I never managed to get working results using Audacity, so I wrote a simple Processing script, but that introduced other problems. For example, a 200x200 image with 16-bit color depth gave me 80,000 bytes for just one frame, and given that the sound card has a 44100 Hz sample rate, processing one frame took about two seconds. It also introduced artifacts due to loss of synchronization, etc., but it was great for glitch-art kind of stuff.
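For what it’s worth, those numbers check out. A quick back-of-the-envelope version, assuming one byte per audio sample (my assumption, matching the figures in the post):

```python
# Back-of-the-envelope throughput for streaming a frame as audio,
# using the figures from the post and assuming one byte per sample.
width, height = 200, 200
bytes_per_pixel = 2                 # 16-bit color
sample_rate = 44100                 # samples per second

frame_bytes = width * height * bytes_per_pixel
seconds_per_frame = frame_bytes / sample_rate

assert frame_bytes == 80_000
assert 1.8 < seconds_per_frame < 1.82   # roughly the "two seconds"
```

So at audio rates a frame that size can never arrive faster than ~1.8 s; dropping to 8-bit color or a smaller grid is the only way to speed it up.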
Your animations look great, and it’s a shame you ditched them. I like the noise they have. Is that from gif compression or something else? I frequently use chromatic aberration and noise to make digital images look less “perfect”, and I think the noise looks good on your images.


Processing time (as in “process”, not the language) is a whole other story, yes. I’ve been struggling with that as well, as I was trying to generate ~8-12k-pixel-tall images for printing at 300 DPI at a normal poster size. I think the answer is to calculate different parts of the image/animation separately and then stitch them together, but I’ve never dived into that.
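The stitch-it-together approach works whenever each pixel can be computed independently of the others. A minimal sketch, with a hypothetical per-pixel pattern function standing in for the real renderer:

```python
# Tiled rendering sketch: render fixed-size tiles independently and
# stitch them into one big image. `render_pixel` is a hypothetical
# placeholder; any deterministic per-pixel function slots in.

def render_pixel(x, y):
    return (x * 31 + y * 17) % 256   # placeholder pattern

def render_tile(x0, y0, w, h):
    """Render a w x h tile whose top-left corner is (x0, y0)."""
    return [[render_pixel(x0 + x, y0 + y) for x in range(w)]
            for y in range(h)]

def render_tiled(width, height, tile=64):
    """Render the full image tile by tile, stitching rows in place."""
    rows = [[0] * width for _ in range(height)]
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            t = render_tile(tx, ty, min(tile, width - tx),
                            min(tile, height - ty))
            for dy, row in enumerate(t):
                rows[ty + dy][tx:tx + len(row)] = row
    return rows

# The stitched result matches rendering the whole image at once.
small = render_tiled(10, 7, tile=4)
direct = [[render_pixel(x, y) for x in range(10)] for y in range(7)]
assert small == direct
```

Since tiles are independent, they can also be farmed out to separate processes, which is what makes 300 DPI poster sizes tractable.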

Also, you just gave me an idea: it would be interesting to try to “stream” an image through an analog synth and then reconstruct it :slight_smile:

As for the books, there is an oldie but goodie: - but it’s a little more on the beginner’s side.

Regarding gifs: yeah, I believe some of the noise is introduced when converting the animation itself, but I remember I was also adding some dithering noise before conversion. Just for the looks; plus, without noise, some parts of the animation with gradients look weird because of color quantization.

Here are a few more:




Love your gifs too, by the way. Several years ago I had a folder on my PC where I saved all the animations from the internet with similar vibes. I love the creepiness (in a good way) of some of them. Also, I would’ve bet that some of them were produced naturally, like a wiggling broken VGA cable.