Last weekend, I managed to get OBS and vdo.ninja working, thanks to a great deal of very patient help from @tyleretters, @synthetivv, and my friend Lucas, who accessed the stream as a “layperson” (i.e., without any familiarity with OBS, vdo.ninja, lines, etc.). It occurs to me that this whole setup is, in a sense, only as reliable as my home wifi router. That’s a little scary. But it worked just fine last weekend, and that certainly bodes well.
I wasn’t sure what to do about visuals for this set. The most important thing is to show the screen, of course. This is Flash Crash, after all. But there’s also a eurorack aspect to this performance, and it felt important to show that too. And what about me? What about my face? People like faces. Faces are nice. So that should be in there too, I think.
What I wound up with is three live video feeds overlaid on each other with OBS. One is a screen-share from my laptop running druid. On the one hand, this fulfills the general Flash Crash dictum I’ve alluded to above. On the other, it serves a specific purpose for this performance insofar as it will convey the text of the piece. I think reading the text, which is also the code, which is also the sequencer, will provide some sense of trajectory, structure, and development for the viewer.
The other two video feeds are cameras. One is an overhead shot from my phone, of the small lunchbox eurorack system I’ll be using and the mechanical keyboard I’ll be typing on, both sitting on a white table. The other is a shot of me sitting at said table, which is situated in front of a bay window in my apartment. I don’t have any actual cameras, so I wound up using the built-in webcam on a second laptop I had lying around to get the head-on bay window shot. I kind of like the “lo-fi” quality this gives the video. It feels like a product of the zoom era. The timing should work out nicely for the video too. Sunset is a little after 8 around here these days, so the bay window should be getting some nice light around 7:30 local time.
Early on in the process of developing this piece, I decided not to use my own voice. People often compliment my reading voice, and I thought it would be interesting to do a piece in which that wasn’t in use at all. I mentioned this on the phone with a friend last weekend, and he made the very good point that this is also me completely ignoring people’s feedback on my work. After that conversation, I considered some ways in which I might re-integrate my voice into the piece. But none of them felt right. So I’m going to go with what I’ve got. But I imagine I’ll go back to using my own voice in the next piece, maybe in the w/tape “reel.”
In a similar vein, I mentioned in my last entry that I was considering adding some field-recorded material to the w/tape reel for this piece, but I decided to forgo that too. I think that’s for the next piece. I realized I was faced with a decision: do more, or do less but do it right? I chose the latter. I think it was the right choice. So instead of spending time doing more field recording, arranging those various field recordings into a single audio file, and then updating the w/tape reel (which is time-consuming because everything has to be recorded into w/'s input manually, rather than dragging an audio file in from a computer or something like that), I spent that time refining the text of the piece. I think that’s what the piece was asking for and what needed to happen. The field recording stuff is for the next piece. It’s an exciting idea, but it’s a different idea. So now I’m excited about this piece and about this nascent idea for something else to work on after Flash Crash. That feels like a good place to be.
It occurred to me today that, in a sense, the performance practice I’ve developed is an inversion of the traditional poetry reading. Normally, typing the text would be precomposition for the poetry performance, while reading into a microphone would be the performative aspect. In the practice I’ve developed, reading into a microphone (or, in the case of this Flash Crash performance, playing an audio recording into w/'s input) is precomposition, and typing the text is the performative aspect. That inversion wasn’t my original intention, but it’s an interesting discovery, I think.
One more note: I was running into some strange behavior from w/ (more on that here, if you’re curious). I haven’t been able to get to the bottom of it, but I know that pushing crow’s memory limits can sometimes elicit weird behavior, so this got me to streamline the krahenlied script a little more, and now w/ seems to be behaving a little more reliably. I’ve also come up with a way to recover from a crash pretty quickly and seamlessly, without power-cycling, so even if it does happen, it won’t be a disaster. The eagle-eyed among you might notice me doing this during the set. So there’s something fun to look forward to.
Again, this whole experience has been a great opportunity for me to up my coding chops. I think I’ve learned more in the last month than I had in the six months prior. I’m very grateful to Tyler and to the whole fc crew for that. And I’m very much looking forward to Saturday.