I believe that some of the image processes in this may be of interest

http://www.akirarabelais.com/o/software/al.html

2 Likes

+1 for TouchDesigner
here’s an old video I made: https://www.youtube.com/watch?v=qg1ugFJodfk

for the particle things, check this tutorial: https://www.youtube.com/watch?v=M8X_FFB-ikQ

Lumen is great!

4 Likes

Since real-time isn’t a factor, Houdini is also worth a look. It has a more robust ecosystem of tutorials around it than TouchDesigner, although most of that is very VFX-industry oriented.

Entagma offers good tutorials around procedural images and motion graphics in Houdini, both free and through their Patreon. To me those images look like they could be right off the Entagma site!

https://entagma.com/

3 Likes

I highly recommend playing around with Shadertoy for web-based procedural visual generation. Check out their gallery and play around with the code in any of the pieces.

1 Like

There are lots of approaches to programming these, but I’ve been far more impressed with Artbreeder. It’s a great image generator built on some large GANs (generative adversarial networks). Try it out at http://artbreeder.com

2 Likes

“Code” is in the title, but this is a great introduction to code that mimics natural systems.

https://natureofcode.com/

It’s tough to avoid math. If you read every post I’ve written here you can see years of me trying to avoid math. The “precalculus for dummies” book in my bag is my final acceptance that I made a mistake taking study hall in 12th grade instead of calc. Math.

:turtle:
:turtle:
:turtle:

2 Likes

This is really cool! Ever since I learnt about formal languages I’ve wondered about using them for generating musical patterns and images, but never stumbled upon software built for that. I’ll have to give it a try!

I’m playing around with Cables now, thanks a lot @lsky!
Nodebox also seems interesting, but I like the idea that the stuff I make can run in the browser. I can see some good use for that!

1 Like

Too bad you’re stuck with the preset images. I’m trying out deepart.io now, which I think is basically the old Google Deep Dream thing, but it seems to let you upload your own images.

Cables looks awesome. No idea why I glossed over this before.

2 Likes

And btw, I just found out that the shaders you make in Shadertoy can also run on the Structure eurorack module. I don’t think I’ll want to venture down that rabbit hole, but it sure is fascinating.

3 Likes

OpenGL/GLSL is where all the powerful magic is. Everything else is just about how you connect your I/O to the shaders.

The hardest thing to learn about shaders is the way they are executed: all at once, in parallel. That makes it hard to do procedural stuff like loops and conditions. I’m still a bit baffled, to be honest.
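To make that mental model concrete, here’s a minimal sketch in Python/NumPy (standing in for GLSL, so purely illustrative): you write a pure function for one pixel, and the “GPU” evaluates it over the whole grid at once, which is why there’s no outer loop for you to hook into.

```python
import numpy as np

WIDTH, HEIGHT = 320, 240

def shade(uv_x, uv_y, time):
    """Pure per-pixel function: normalized coordinates in, RGB out.
    In GLSL you would write only this body; the GPU runs it for
    every pixel simultaneously."""
    r = 0.5 + 0.5 * np.sin(time + uv_x * 10.0)
    g = 0.5 + 0.5 * np.sin(time + uv_y * 10.0)
    b = 0.5 + 0.5 * np.sin(time + (uv_x + uv_y) * 10.0)
    return np.stack([r, g, b], axis=-1)

# The "GPU": evaluate shade() over the whole pixel grid in one go.
ys, xs = np.mgrid[0:HEIGHT, 0:WIDTH]
frame = shade(xs / WIDTH, ys / HEIGHT, time=1.0)  # shape (240, 320, 3)
print(frame.shape)
```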

2 Likes

The general network (Artbreeder has five) has a tremendously large number of parameters (read: thousands, spanning e.g. 120 dog breeds) and doesn’t take input images: the system is orders of magnitude more capable than Deep Dream (which is interesting in itself).

Uploading an image to Deep Dream simply reapplies the network to the input iteratively, feeding back an internal layer to emphasise whatever characteristics it detects. This may feel like it gives you more power (read: artistic control), but the output is far more limited and harder to control than a generator’s. Since it isn’t a true generator network (just inference feedback), providing input makes sense but doesn’t afford much control over the output form: after a few iterations you won’t see the relation to the original.
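If it helps, here’s a rough, hedged sketch of that feedback loop in PyTorch (my own reconstruction of the general DeepDream recipe, not deepart.io’s actual code; the layer choice and step size are assumptions):

```python
import torch
import torchvision.models as models

# Pretrained classifier whose internal activations we feed back.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

LAYER = 20  # arbitrary internal conv layer to amplify (an assumption)
img = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for your upload

for step in range(50):
    x = img
    for i, module in enumerate(model):
        x = module(x)
        if i == LAYER:
            break
    loss = x.norm()  # "emphasise" whatever this layer responds to
    loss.backward()
    with torch.no_grad():
        # Gradient ascent on the *input image*, then repeat: this is the
        # iterative reapplication that slowly erases the original picture.
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```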

The generator networks are created by adversarial “evolution” of a detector and a generator over a huge, annotated corpus of images. In the case of four of the five networks (general, album art, landscape and anime portrait) you have no way to use the detector on the website (the GANs themselves are free/open source), but you have the freedom to set their (human-understandable) parameters. That is likely down to a combination of copyright concerns (uploading proprietary imagery from anime, album covers etc.) and the potential for confusing (or even offensive) results (e.g. the detector saying an image of you can be constructed from a rat, a garbage truck and a lobster).
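For anyone curious what that adversarial “evolution” looks like in code, here’s a toy PyTorch sketch (shapes and architectures are placeholders I made up, not Artbreeder’s real networks):

```python
import torch
import torch.nn as nn

LATENT = 64  # the latent "parameters" you later get to play with
G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())   # generator
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                  nn.Linear(256, 1))                # detector/discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    # 1. Teach the detector to separate corpus images from fakes.
    fake = G(torch.randn(n, LATENT)).detach()
    loss_d = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # 2. Teach the generator to fool the detector.
    loss_g = bce(D(G(torch.randn(n, LATENT))), torch.ones(n, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

train_step(torch.randn(32, 784))  # stand-in for a batch of annotated corpus images
```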

The portrait GAN does expose the detector, meaning you can upload a (portrait-format) image of a person (or of anything else, with novel but hard-to-predict results). If the picture is “normal” you will see the detected parameters applied to the network (i.e. what the network thinks you look like, not the image itself). You can then alter the inputs, crossbreed, mutate randomly, and so on.
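In code terms the workflow is roughly the following; encode() and generate() are hypothetical stand-ins for the detector and generator (which live on the website), and “crossbreeding” is just blending two latent vectors:

```python
import numpy as np

LATENT = 512  # assumed latent size

def encode(image):
    """Hypothetical detector: portrait in, latent parameters out."""
    raise NotImplementedError("the real detector lives on the website")

def generate(z):
    """Hypothetical generator: latent parameters in, image out."""
    raise NotImplementedError("the real generator lives on the website")

def crossbreed(z_a, z_b, mix=0.5):
    """'Breeding' two images is a plain interpolation in latent space."""
    return (1.0 - mix) * z_a + mix * z_b

# z = encode(your_portrait)                 # what the network thinks you look like
# child = generate(crossbreed(z, z_other))  # then alter, mix, mutate...
```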

Overall, if you like Deep Dream, this is an entirely different level of neural-net image generation technology. It’s worth registering and trying it out; I think you will be amazed at what it can do.

3 Likes

Gotta say, this is yet another example of why lines is so great. I have discovered multiple new rabbit holes in this thread alone.

5 Likes

wow, I’d been thinking of making a thread like this for a while! Excited to dig into a few of these suggestions this weekend. The only two things I know in this field (both of which I enjoy, but which feel limited and almost preset-like) are Lumen on Mac and Whorl on iOS.

:100:. It’s been linked before and I know for sure @jasonw22 knows it exists (so this isn’t for him, just wanted to be sure it was in the thread): The Book of Shaders is approachable, and Kodelife is a great way to avoid having to figure out where to execute your shady stuff. A nice feature of Kodelife is its easy integration with Syphon/Spout, so you can pass your colorful malarkey around between programs.

Speaking of which, interested parties may want to check out Signal Culture Apps. They’re not free, but the :moneybag: is a donation to the Signal Culture foundation (?), which does a bunch of neat stuff.

1 Like

Not questioning the superiority of GANs. I was mostly just referring to the fact that it’s not so fun having to stick with the images they give you. Artbreeder is fun, but not so useful for my purposes.
I guess I’d have to install and train my own GAN software to get where I want to go. It seems feasible, but maybe later. I’m mostly just exploring right now. There’s just too much out there :slight_smile:

The tricky part is that you can’t train a GAN from a single image: you need a huge corpus of images to train from if you want it to generate. In particular, if you wanted to use your own images you’d need tens of thousands of different examples. For the most part, making and annotating the image set is the hard part; training the networks is trivial. With Artbreeder you aren’t composing with images but with vectors from a huge latent space. The trick is that you can find images with the style you want (it is very much not picking a starting image) by iterating toward it from the “mix gene” starting point: pick the mutations you want, moving toward your preference (a rough sketch of that loop follows below). In the end, using neural networks for media creation requires a big mindset change.
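Here’s a hedged sketch of that selection loop (render() would be a real pretrained generator; here it’s hypothetical, and your eye plays the fitness function):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
LATENT = 512  # assumed latent size; the real networks may differ

def mutate(parent, n_children=6, strength=0.3):
    """Offer several randomly perturbed copies of the current latent vector."""
    return [parent + strength * rng.standard_normal(LATENT)
            for _ in range(n_children)]

z = rng.standard_normal(LATENT)  # the "mix gene" starting point
for generation in range(20):
    children = mutate(z)
    # On Artbreeder you'd look at render(c) for each child and click your
    # favorite; here we just pretend the first child won.
    z = children[0]
```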

The GAN they provide is probably trained on orders of magnitude more media (of higher quality) than you’d reasonably be able to assemble. In the end it will never give you exactly what you want, nor such fine-grained control as to make very explicit changes. You end up having to explore the space, accept it’ll be what it is, and enjoy it… or take the time to generate media you like, then take it into image-manipulation software and go from there. The reality is that the direct output isn’t necessarily usable in any case: 1024×1024 is the largest output the GAN they provide offers, and even then the acutance isn’t necessarily very high. In other words, for now it’s about finding new ideas. You can create a lot of things you’d normally get a concept artist to create.

Once you find a workflow that works for you, you’ll be able to turn these networks into a tool you can call on for many purposes. In the end, though, it’s more like a talented artist you talk to in an obscure code neither of you fully understands. You’ll get fantastic images you’d never see anywhere else, which may not be quite what you wanted, but perhaps it’s what you need. I’d love to hear the outcome of any other explorations you arrive at. I’ve been interested in neural networks for about the last 25 years, so I’m always up for new discoveries.

1 Like

Well, there are machine-learning-based tools to upscale images, so that would have you covered for more resolution :slight_smile:
Thanks for the detailed explanation! I’m really quite new to all of this; I’ve never been very interested in this kind of thing before.
But it’s definitely something one can’t ignore, and it likely isn’t going away anytime soon. So one needs to look into it!

1 Like