awesome! thanks for sharing. can’t wait to tuck in!

1 Like

this blows my mind. Thank you, it’s an inspiring project, I will absolutely check this out.
My first instinct is to feed the ETC into the video input, so I’ll probably start with that.

1 Like

thank you so much

I’ve been around on these forums since before we moved over to lines and have been so inspired over the years by tons of members

this community drives and supports this kind of sharing

here is a little wiki I’ve been putting together that may answer some of your questions

video into Scrawl is so much fun

especially with drawing on and noFill, it makes for slit-scan type effects you just don’t get elsewhere

like I said in my stream, I made that same box-scanning/video-painting patch with a Memory Palace a while back, and it has been a goal to make that patch much cheaper and more accessible for other people (and myself haha :wink: )

that patch used a Visual Cortex, Memory Palace, and Escher Sketch at least, and that is a whole different investment compared to a Raspberry Pi, MIDI controller, mouse, and webcam/USB capture device

8 Likes

To dip my toes in the waters of video synthesis I’ve built a CHA/V (Cheap Hacky A/V) unit designed by Jonas Bers (PCBs available from Thonk) - snapshot from my playing around with it below:

Already have spent an inordinate amount of time staring at colourful squiggly lines on an old computer monitor, so … success? Not sure where I’ll take this next, some of the DIY LZX stuff looks good but I’m wary of ending up with 6U of video modules - so for the time being I’ll probably stick with this little CHA/V.

Having read through this thread, it’s mind boggling how much more complicated video synthesis is compared to audio. Digital vs analogue also seems to be a lot more relevant in this space.

9 Likes

I have this: Analog Reason GR3


full analog, black&white, composite out, easy
2 Likes

Looks very nice. I also felt that I didn’t want a whole rack of video gear but still wanted to play with analog video. I have the Discret11, FFG, and the CBV001S. I have all of them in a row, and I’m hopeful I’ll have time soon to really play with them. They are very nicely made, with clean 3D-printed cases and simple power supplies. Each one is a unique thing, and you can patch them however you like, like guitar stomp boxes. I had them shipped from France on 3 separate occasions, and each time they came quickly with perfect packaging and tracking that actually worked. I highly recommend Bastien’s creations - he also makes eurorack video modules, and all his prices are pretty low compared to LZX and others.


1 Like

Hi!
A few days ago I watched Alex Pelly’s workshop on Perfect Circuit and it blew my mind. I’m pretty familiar with eurorack and modular audio synthesis, and I would like to learn more about analog video synthesis. I’m interested in basics like oscillators (and how they translate to video), sync, colors, basic modules, etc.

Does anyone have any resources they can point me to?
I’ve exhausted my googling skills without much luck. I’m going to start reading some video module manuals, but I’m not sure they would explain the basics…
Thanks!

I got a lot out of this workshop from Stromkult

https://www.stromkult.com/workshop-video-synthesis-stephane-lefrancois-lzx-industries-visual-cortex/

lots of basic things like what audio modules you can use with video and how to hook things up.

1 Like

Thank you! Will check it out.

I’ll take a stab at this, but it will be general and quick, as seeing it would be better…

Video oscillators run at much higher frequencies than almost any audio oscillator, and if you listen to one it sounds more like dialing up AOL or some weird digital garble. They also operate between 0 and 1 V, as that is a video standard.

Basically, when you sync an oscillator you can sync it vertically or horizontally, and your frequency gives you anywhere from one bar to many bars, so your waveforms are sort of snapshots of each bar. An unsynced horizontal oscillator can do nice scrolling through the frame, but an unsynced vertical oscillator is very erratic, and a very slow oscillator looks like a slow full-screen fade up and down. And color is just black to white in shades or gradients - you add color other ways.
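To make the bars idea concrete, here’s a toy numpy sketch (just my illustration with made-up sizes, not how any particular module is built): fold one continuous oscillator signal into scanlines and look at the rows.

```python
import numpy as np

# Toy raster: 64 scanlines of 64 samples, all read from one
# free-running sine "video oscillator". Frequency is given in
# cycles per scanline rather than Hz, purely for illustration.
LINES, SAMPLES = 64, 64

def render_frame(cycles_per_line):
    t = np.arange(LINES * SAMPLES)            # one long time axis
    signal = 0.5 + 0.5 * np.sin(2 * np.pi * cycles_per_line * t / SAMPLES)
    return signal.reshape(LINES, SAMPLES)     # fold time into scanlines

synced = render_frame(4.0)    # whole cycles per line -> 4 stable vertical bars
drifting = render_frame(4.1)  # fractional cycles -> bars shear line to line

print(np.allclose(synced[0], synced[1]))      # True: every row identical
print(np.allclose(drifting[0], drifting[1]))  # False: pattern drifts
```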

I would watch all the videos on the LZX YouTube channel, especially the ones where they go through 3 patches and show you how they build them.

1 Like

Thank you for the explanation.

I feel like I’m missing some basic piece - how does an oscillator get translated to an image?

1 Like

you need a core module like a Visual Cortex or Chromagnon. They have many functions, but one is to take an oscillator output and visualize it through its video output (composite/component…)

so the Visual Cortex, for instance, has two channels, each with R, G, B inputs
so you plug one oscillator output into R, for instance
that would show you the pattern in B&W, because the same image is going to all three RGB channels
if you plugged different oscillators into R, G, and B then you would get some kind of mix
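here’s a tiny numpy sketch of that idea (made-up sizes and names, nothing Visual Cortex specific): the same 0..1 signal into all three channels reads as gray, different signals per channel read as color

```python
import numpy as np

LINES, SAMPLES = 64, 64
t = np.arange(LINES * SAMPLES)

def osc(cycles_per_line, phase=0.0):
    # one 0..1 "video oscillator" signal covering a whole frame of time
    return 0.5 + 0.5 * np.sin(2 * np.pi * cycles_per_line * t / SAMPLES + phase)

def frame(r, g, b):
    # fold each channel's time axis into scanlines, stack into H x W x 3
    return np.dstack([c.reshape(LINES, SAMPLES) for c in (r, g, b)])

sig = osc(2.0)
mono = frame(sig, sig, sig)   # R == G == B everywhere -> grayscale bars

color = frame(osc(2.0), osc(3.0), osc(5.0, phase=np.pi / 2))
# channels differ -> hues appear wherever the three waveforms diverge
```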

or use something like a TBC2 to take external video and translate it into LZX RGB standard outputs, and then patch within a system
check out the LZX forum - lots of good info over there!

That helps, but I meant on an even more technical level.
A wave is an oscillating voltage, right? What are the mechanics of translating that voltage into an image? Does the voltage at a point in time translate to a pixel with a certain opacity at a certain location on screen? What is the mapping between time and location?

I feel that kind of low-level understanding helps me be more purposeful with patching.

I’ll definitely check the LZX forum, that’s a great idea!

From a “theory” perspective, here’s a diagram that may be helpful (Principles of Television Engineering by Donald G. Fink, 1947).

For a monochrome image, you have a voltage signal corresponding to brightness. With the original technology used in analog TV, this voltage signal would directly modulate the intensity of an electron beam. The TV would contain two sawtooth oscillators which drive solenoids (electrically controlled magnets), the horizontal and vertical deflection solenoids. These solenoids would generate controlled magnetic fields that deflect the electron beam to hit a specific point on the (phosphorescently treated) screen.

The “horizontal” sawtooth oscillator controlling the horizontal deflection solenoid is set to a high frequency, scanning from left to right across the screen thousands of times per second – each such pass is called a “scanline”. The vertical deflection oscillator runs more slowly, at 50 or 60 Hz depending on what television standard is used in the country the equipment was made for. Each pass of the vertical deflection oscillator is called a “field”.
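To restate that mapping as code, here is a minimal sketch (toy constants, loosely NTSC-flavored; real scanning also has blanking intervals and interlace, which I’m ignoring): at any instant, the beam’s screen position is just the current value of the two sawtooth ramps, and the brightness is the video signal’s voltage at that same instant.

```python
# Map broadcast time to a normalized screen position, assuming two
# ideal sawtooth deflection ramps and ignoring blanking/interlace.
H_FREQ = 15734.0   # horizontal sweeps per second (NTSC-ish line rate)
V_FREQ = 60.0      # vertical sweeps per second (fields)

def beam_position(t):
    """Time in seconds -> normalized (x, y), each in [0, 1)."""
    x = (t * H_FREQ) % 1.0   # fast ramp: left -> right once per scanline
    y = (t * V_FREQ) % 1.0   # slow ramp: top -> bottom once per field
    return x, y

# e.g. where is the beam 1.00003 s into the broadcast?
print(beam_position(1.00003))
```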

When you feed in some signal to modulate the intensity of the beam (the brightness), that signal needs to be synchronized with the cycles of the horizontal and vertical deflection oscillators. This is where sync comes in – horizontal and vertical sync pulses are transmitted along with the intensity modulation signal, and these sync pulses are used to reset the phase of the deflection oscillators, exactly as a sync input works on many oscillators used for audio synthesis.

When the signal you are using to modulate the beam intensity is also being generated by an electronic oscillator (LZX Prismatic Ray, Cadet IX, other “video oscillator” modules), you may wish to also reset the phase of that oscillator so that your Prismatic Ray or whatever gets its phase reset at the same time as the deflection oscillators do. If you don’t reset the phase of your “video synthesis oscillator” with the horizontal or vertical sync pulses being fed to the display, then your oscillator will not display as a still image: for instance, if you don’t sync the video synthesis oscillator to the vertical sync pulses, the oscillator will appear to scroll vertically. Without horizontal sync, an oscillator running at horizontal scan frequencies (a few kHz up to 10 MHz) will generally have an unrecognizable waveform.
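Here is a small sketch of that phase-reset behavior (again just an illustration, not any particular circuit): the same oscillator rendered free-running versus hard-synced to the start of every scanline.

```python
import numpy as np

LINES, SAMPLES = 64, 64

def render(cycles_per_line, hard_sync):
    rows, phase = [], 0.0
    j = np.arange(SAMPLES)
    for _ in range(LINES):
        if hard_sync:
            phase = 0.0                       # H-sync pulse resets the oscillator
        rows.append(0.5 + 0.5 * np.sin(
            2 * np.pi * cycles_per_line * j / SAMPLES + phase))
        phase += 2 * np.pi * cycles_per_line  # phase carried into the next line
    return np.array(rows)

free = render(4.3, hard_sync=False)   # pattern shears/scrolls
locked = render(4.3, hard_sync=True)  # identical rows: a stable image

print(np.allclose(locked[0], locked[-1]))  # True
print(np.allclose(free[0], free[-1]))      # False
```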

This is how it all basically works for monochrome images. For color images, “how color is encoded” depends some on the display format (composite vs YPbPr vs RGB) but video synthesis systems are designed to let you patch with the individual color channels as though they were independent monochrome signals. (In the “glitch” domain, there are devices which deliberately interfere with the color encoding and/or sync to produce interesting visual effects.) And of course non-analog displays like LCDs don’t work like this at all, but may include some circuitry to translate from a standard analog format like a composite input to a digital representation appropriate for driving the display.
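To give one concrete instance of the color-encoding step: converting RGB to YPbPr with the standard-definition (BT.601) luma weights. The Y component is just a weighted monochrome image, and Pb/Pr are color-difference signals, so each component remains an independent one-dimensional voltage you could patch with. (A sketch only; real encoders also scale and offset these into specific voltage ranges.)

```python
def rgb_to_ypbpr(r, g, b):
    """RGB (each 0..1) -> YPbPr using BT.601 coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: what a B&W set displays
    pb = 0.564 * (b - y)                    # blue color-difference signal
    pr = 0.713 * (r - y)                    # red color-difference signal
    return y, pb, pr

print(rgb_to_ypbpr(1.0, 0.0, 0.0))  # pure red: low luma, strong positive Pr
```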

11 Likes

@csboling’s explanation is much better than the one I just wrote :wink:

That’s a great explanation! Thank you for taking the time to write it.

I hope everyone is getting a chance to check out Vidicon, a free online video synthesis conference!

check out the agenda above - lots of stuff for the rest of the weekend if you missed yesterday

we are playing Saturday Nov 14th at 1:00 PM

here are some pictures of us getting ready for our talk/performance

and a video preview

We will be discussing combining physical objects with a video synthesis system. The flexibility of creating your own already visually interesting content and then processing it further lets much smaller systems be more than viable.

come join us!

6 Likes

Do you know if there are any recordings from yesterday? Definitely some talks I’d like to see!

1 Like

things are being archived here - I’m not sure how quickly it will happen for all the talks, but it looks like a couple are already up
1 Like

Here is a thread about the video synth patch book (wip) we just put out.

enjoy :slight_smile:

5 Likes