BOTNllllllllK (lines / botnik collab)


#1

Botnik is a community of writers, artists and developers collaborating with machines to create strange new things. Its core was established by an ex-Chicago bud, Jamie Brew, who began creating content with a predictive text emulator a few years ago. Now, Botnik has satellite teams across the country that collaborate through Slack to generate everything from Unwanted band names to Harry Potter fanfiction.

I figured there might be some crossover between lines and Botnik re: algorithmic composition, emergence, and generative art, so I reached out to Jamie and got the green light to start piecing together a lines-grown branch of Botnik. It feels like level 3+ folks should be the first envoys, but I'm happy to open this up to the larger community with feedback.

If you’re interested, we’ve received this first “assignment” from Jamie:

I think the first step is identifying what the hallmarks of each group are (in more specific terms than “algorithmic art”) and where they overlap. A first stab at that for Botnik would be:
— Generative art with an emphasis on human control, with a small feedback loop
— Transparent interfaces for manipulating output in ways that are legible to both the artist and the audience
— Sudden discovery of true patterns latent in data

Botnik uses corpuses to generate their content (e.g. their web app), so the human elements are corpus selection and the discretion/taste that guides the final output.
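
For anyone who hasn't poked at their tools yet, here's a toy sketch of the corpus-driven "suggest the next word" idea in Python. It is not Botnik's actual code (their repo is linked further down the thread), and the corpus filename is just a placeholder.

```python
# Minimal bigram "predictive keyboard" sketch: given the last word typed,
# suggest the words that most often follow it in a corpus.
from collections import Counter, defaultdict

def build_model(text):
    """Map each word to a Counter of the words that follow it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def suggest(model, last_word, n=3):
    """Return up to n candidate next words, most frequent first."""
    return [word for word, _ in model[last_word.lower()].most_common(n)]

if __name__ == "__main__":
    with open("corpus.txt") as f:      # hypothetical corpus file
        model = build_model(f.read())
    print(suggest(model, "the"))       # suggestions lean wherever the corpus leans
```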

Jamie’s given hallmarks check my boxes, but how do others feel?

Some starting points:

  • Is there anything specific to composition or audio processing that is not covered by Botnik’s tenets?
  • What is unique about your generative processes that might depart from Botnik’s approach to written content?
  • Where are the intersections?

Once we nail down some ideas and define interest, I’ll invite Jamie to sync up with us – cart after the horse.


#2

Just a quick response until I can post the longer, articulate + engaged response that this deserves, but:

WOW. So exciting all above - I’ve been hugely impressed + inspired by everything Botnik has been doing. Will respond in full ASAP!


#3

Almost every monome app and teletype script…

That actually narrows it down considerably and even stumps me for a bit. Monome is not especially “transparent” (which is, in fact, my one big gripe with the type of creative work we tend to do around here).

This is new territory for this group, I think.

Regarding the concept of “corpus”, LCRP comes to mind. Not much “bot” about that project though.

I’m really interested in this project, but I think it would require us to stretch quite a bit in new directions. That’s a good thing though.


#4

@Dan_Derks - So, a bit belated, but I want to reiterate that wow, I'm hugely excited about this. I've long been interested in machine learning, RNNs, etc. as a toolset for artistic production in the age of ‘mass customization’.

All that aside, I'm very excited that they're organizing as branches/nodes. I had no idea. To be clear, the questions from the “assignment” above are about what would set our ‘branch’ apart from already existing ones? (thus justifying our algorithmic existence lol)

If so, here are some off-the-cuff thoughts:

Generative art with an emphasis on human control, with a small feedback loop
-I think this was touched on above, but this is definitely built into the ‘monome as tool’ ethos. To unpack that a bit further in the context of Botnik projects, this seems unique in that we’re talking about “instruments” for control in performative/production environments. These also tend to deal with real-time media (sound/video/etc) in a way that a lot of text/image-based Botnik projects don’t currently (as far as I know), and just the use of machine learning in the production of audio/music seems hugely exciting + unique. The frame of a small feedback loop feels super idiomatic to the idea of a “musical performer” or “studio artist” with a fixed set of tools/methodologies, too.

Transparent interfaces for manipulating output in ways that are legible to both the artist and the audience
-Picking up another thread from above, I think this idea of “transparent interfaces for manipulation of output” can mean more than the typical scope we see in the context of performance (i.e., the “well, there’s not much ‘gesture’ in electronic music performance” angle). Transparent interfaces/manipulation in this context might also refer to a clearly defined system of input -> processing -> output, i.e., what the corpus is, what the process is, and how the two might conceptually/logistically relate. (more on this below)

Sudden discovery of true patterns latent in data

  • I think this is why this type of work is so exciting. Has anyone seen the work Sam Lavigne did with the auto-supercuts of C-SPAN? It was amazing, and felt like seeing the ‘real content’ underlying all the padding and misdirection. I think this type of thing is also interesting w/r/t cultural production in the context of sample packs, digital readymades, presets, Apple Loops, etc. I did a project a while back where I was using audio->MIDI to create ‘scores’ for various popular trap songs, then mapping that MIDI back onto trap production sample sets to see which compositional ‘gestures’ stayed intact, and where the contours the computer recognized were (a rough sketch of that remapping step is below). (whole bunch of text and recording from that project here) Would love to continue something in this vein, or even broader, with folks smarter than myself and access to more interesting tools! (maybe Google Magenta, etc)
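
For the curious, the remapping half of that workflow could look roughly like the Python/mido sketch below. This isn't my actual project code, and the kit map and filenames are hypothetical.

```python
# Rough sketch: take a MIDI "score" (already extracted from audio elsewhere,
# e.g. via Melodyne or Live) and reassign each note to a slot in a sample kit.
import mido  # pip install mido

KIT = {36: "kick.wav", 38: "snare.wav", 42: "hat_closed.wav"}  # hypothetical sample map

def score_to_triggers(path):
    """Return (time_in_seconds, sample, velocity) triggers for every note-on."""
    triggers, now = [], 0.0
    for msg in mido.MidiFile(path):        # msg.time = seconds since previous message
        now += msg.time
        if msg.type == "note_on" and msg.velocity > 0:
            sample = KIT.get(msg.note, "perc_misc.wav")  # fallback for unmapped pitches
            triggers.append((round(now, 3), sample, msg.velocity))
    return triggers

# triggers = score_to_triggers("trap_score.mid")   # hypothetical file
```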

#5

This sounds very interesting! I somehow had not heard about Botnik before, though I am certain I have seen their work and just didn’t look too deeply into the creators and their process. I’m a lurker in the twitter generative text / Bot Summit / Tracery world, and I’ve done a couple of things in the generative text space.

I have a video installation called One Hundred Billion Trillion Haiku About Spring that incorporates an illucia dtr patchbay controller for participants to manipulate generative geometry (cherry trees) and generative haiku, using the knobs to control parameters and the patch points to remix the vocabulary. Its legibility to the audience tends more to the instinctive and inscrutable than transparent, though. Patching generative text seems like it sits right at the intersection of botnik and lines!

And on the tools/interfaces side, for the poet Julie Ann Otis I made a software tool that lets her compose poems on a laptop live in front of an audience, passing along the key words and images to the audience-visible projection, while hiding the connective tissue and punctuation until she’s done editing. Not very generative on its own, but we used it in a performance where she wrote poetry on stage, I mixed the projection output with synthesized and liquid-tank visuals, and our friend Adam improvised music (known to the monome community as the author of 7up, I don’t think he’s made it over to llllllll.co yet) with each of us theoretically responding to the others.


#6

Just wanted to say this is super, and I’m pondering ideas.


#7

musical pattern generator in the vein of their predictive writer could be interesting

web audio synths

much “corpus” exists or is easily constructed for pitch/time data (midi)

http://davidtemperley.com/music-and-probability/

http://kern.humdrum.org/cgi-bin/browse?l=/

also, midi is humorous, and botnik is humorous

web midi is a thing

[ https://github.com/cotejp/webmidi ]
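
rough sketch of constructing a pitch/time corpus from midi files (python + mido; the folder path is hypothetical, and this isn't an existing botnik tool):

```python
# Turn MIDI files into a text-like stream of "pitch_duration" tokens,
# so a predictive model can treat notes the way botnik treats words.
import glob
import mido  # pip install mido

def midi_to_tokens(path):
    """Return tokens like '60_0.25' (middle C held ~a quarter of a second)."""
    note_on_at, now, tokens = {}, 0.0, []
    for msg in mido.MidiFile(path):
        now += msg.time
        if msg.type == "note_on" and msg.velocity > 0:
            note_on_at[msg.note] = now
        elif msg.type in ("note_off", "note_on") and msg.note in note_on_at:
            duration = now - note_on_at.pop(msg.note)
            tokens.append(f"{msg.note}_{round(duration, 2)}")
    return tokens

corpus = []
for path in glob.glob("midi/*.mid"):       # hypothetical folder of MIDI files
    corpus.extend(midi_to_tokens(path))
print(" ".join(corpus[:20]))               # reads like a 'sentence' of notes
```

the token stream can then go into the same kind of next-token model used for words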


#8

I know Kate Compton! She’s the bee’s knees. Just received her Generominos card set from her Kickstarter campaign. Really cool stuff.


#9

Hopefully this is not too diagonal; I'm really enjoying the conversation going here on the specifics. Botnik also seems really cool, and I’ve definitely seen some of this stuff online before, but I had no idea there was a community built around it!

Considering this in isolation from what botnik does, my try at defining the hallmarks of the lines community would be:

  • Inspirational and [compositional/“lifestyle”] process exchange through conversation - sound+process, our many (and active) music/picture/visual art/book/podcast/etc./etc./etc. threads, the Disquiet Junto, LCRPs, and threads that explore the more “entrepreneurial” aspects of artistic work and release.
  • Blurred boundaries between instrument and application - open-sourced and community-involved design (telex/16n/i2c-ification of everything/community member-led teletype releases), information sharing between various makers (both professional and novice/budding), and accessibility of contribution to ideas from people of non-coding backgrounds (this point is quite rare and unique among open source/technical communities, at least that I’ve seen. Seems like botnik may share this quality with us).
  • Ability and interest in having challenging conversations - lines has had several active threads that deal with “politics”. These conversations are usually (somewhat) diverse in viewpoint and passionate/intellectual. I don’t really think there’s been a lot of crossover between these conversations on lines and the places on here that are focused on a creative “goal” (I may be totally wrong on this one!). Looking at some of the botnik.org community projects, this could present an opportunity to merge the conversation or explore this idea in some other way? I’m not sure everyone would want to do that, just thinking out loud here.

I feel like this community is really productive when people can contribute their finished-ish thing (while maybe working together with a small group of focused collaborators) while being in conversation with the larger group on some sort of (intellectually stimulating) theme, ultimately being given a deadline for a compiled release of those things (usually with a person or two putting in a really impressive amount of effort to make that part happen).


#10

wowowowow, very very cool ideas. lots to parse.

agreed that teletype and apps all meet the ‘generative art w/ human control’ aspect.

OOOOH. this is cool. MIDI has a very clear parallel to text corpuses. audio to MIDI seems like the most direct translation of Botnik’s existing practices. is Ableton Live the only game in town for audio to MIDI?

I think this is a really great opportunity to address some of the less (immediately) accessible hallmarks of acousmatic music and reminds me a lot of The trend of keeping the controller tilted towards the audience, or the woes of performing live with electronic instruments.

dfscore. Immediate idea using this tool: coordinate multi-location simultaneous performances using the same generating score.

the cheekier side of me wonders if we could use lines as a corpus to create a community bot that predictively generates new posts?

there’s also a (later on) question of whether we’re going to use our existing tools to create content, or if at any point we’d like to create tools through this specific lens for others to use? could be a fun project.

shweee!


#11

melodyne is a far more powerful audio-to-midi program, though people usually only think of it as a pitch corrector

http://www.celemony.com/en/melodyne/what-can-melodyne-do


#12

Does anyone know Dadabots? (maybe they’re already affiliated with Botnik) - they’re doing really incredible stuff - maybe related to this convo?


#13

idea: assemble corpus of concert reviews. generate new reviews of nonexistent concerts based on the corpus. then record the concerts.


#14

oh, damn @Dewb for the killer bump.

let me spend some time this week to build out a version of the requested hallmark list (still open to input on this particular aspect) and let’s make this happen.


#15

bumping this down from lounge to involve new members and to introduce @mechanicalyammering / nicky, botnik’s managing editor who’s taking the reins on music dev. nicky and i had a really energizing convo the other day around the ideas presented here and what botnik’s plans are — nicky, floor is yours.

really excited by the possibilities of converging our communities.


#16

hi lines, (is this what I call y’all as a collective, yes?) this thread was awesome, loving all these ideas. Gonna respond to individual comments in this comment and tell you what botnik is currently doing with music-making in another. We use Slack, and I’ve never used this type of forum (Discourse?) before, so if there’s a better way to respond to stuff with replies, lmk.

I’m Nicky, in Chicago. If you want a botnik crash course, we just made a Medium account that will give you that. Esp. watch the Verge video for a tutorial on how to write with the predictive text Voicebox. Here’s our public GitHub repo if you wanna take a look.

@mdg, your framing of “transparent” as input -> processing -> output (i.e., what the corpus is, what the process is, and how the two might conceptually/logistically relate) is also very much how botnik sees it. Everybody can make their own keyboard; sometimes we give out the keyboards we make to everybody, but not all the time; other times (like when we RNN-generate words) we don’t let the audience try it themselves (b/c $$$).

Overall, you are spot on: what data go in > what we do to it > how it looks afterwards is part of the process, and often a big part of what we mean by transparency, and also tbh, what sells the joke.

checked out Dadabots; cool, but not affiliated with us (yet?). This rules though

@Dewb, would love to see some outputs or video of that haiku installation! For the generative-text patching you mention, you might be able to use the stuff on our github? idk! It’s linked above tho. If you need other stuff, we’re kinda working on a way to make botnik easily patched in and would love feedback, lmk.

the concert review idea (writing a predictive concert review and then making the music) is great. currently we have a bunch of music press writing and like 1000s of RNN-generated band names, song names, all the language-driven components of a song. I’ll detail this more in post 2.

@zebra, dan told me about this idea when we had coffee, and yes yes yes, this one is very exciting. yes. do you want to talk about this idea in our slack channel with the labs devs? this midi-to-text link is something I don’t know much about; any suggested reading? or any similar projects?

@Dan_Derks, re: “the cheekier side of me wonders if we could use lines as a corpus to create a community bot that predictively generates new posts?” yeah, we can totally do that. can you get the posts here downloaded as a .csv, or better yet a .txt? might make for a fun tutorial on how to write with botnik.
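
(if the export lands as a .csv, flattening it into the .txt we'd want is a tiny script like the sketch below; "post_text" is just a guess at whatever the real export calls the post body column.)

```python
# Flatten a CSV export of forum posts into one plain-text corpus file.
import csv

with open("lines_posts.csv", newline="", encoding="utf-8") as src, \
     open("lines_corpus.txt", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        text = (row.get("post_text") or "").strip()   # hypothetical column name
        if text:
            dst.write(text + "\n\n")                  # blank line between posts
```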

I’ll share some links in post 2 for song lyrics when I tell you what we’ve been working on with music so far, spoiler, it’s song lyrics.


#17

Hi Lines!

Here’s that long-awaited pt. 2 post.

Right now, we’re writing a lot of predictive songs. Our founder Jamie Brew started using Voicebox specifically to write songs like these. He has been playing them on acoustic guitar at comedy shows in Chicago, NY, and LA.

Our process has been scraping lyrics of bands, using predictive text to write new song lyrics, and typically juxtaposing them with a strange/complementary vocabulary. For example, I wrote a song that’s Nirvana lyrics + an instruction manual for a Sony Vizio TV. We have all sorts of music keyboards made, and we can give them to you (or make requested ones) if you want to goof around writing with them… How do you share things in this community? Anyway, here’s a big google sheet.
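
(If you want a feel for the mechanics outside our app, here's a toy Python sketch of the "two vocabularies, one keyboard" trick: train one next-word model on both sources so the suggestions drift between registers. It's not our actual Voicebox code, and the filenames are placeholders.)

```python
# Toy "mashup keyboard": one next-word model trained on two very different texts.
from collections import Counter, defaultdict

def add_text(model, text):
    """Fold a text's word-to-next-word counts into the shared model."""
    words = text.lower().split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1

model = defaultdict(Counter)
for path in ("band_lyrics.txt", "tv_manual.txt"):     # hypothetical corpora
    with open(path, encoding="utf-8") as f:
        add_text(model, f.read())

print([w for w, _ in model["press"].most_common(3)])  # suggestions pull from both sources
```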

We’re writing the music for these parodies without machine assistance; like a robot Weird Al, I guess? We will record this music if we can secure funding to do it (trying Kickstarter, any advice?). If you want to write predictive lyrics or make music for lyrics like that, let me know.

We’d like to release machine-assisted generative music too, and writers are really stoked on the idea of making a Vocaloid song, but we don’t have anyone in the community familiar with the program.

The one dev I talked to about the predictive midi idea thinks it sounds very cool and also achievable, but he cautioned me against saying much else because I’m not a dev, lol, and he wants to chat with whoever’s interested.

If you know about Vocaloid or the midi idea and want to make songs/a tool/something like that, lmk in a DM?

Also, meant to make this clear above: if you’re interested in talking to devs at botnik about doing generative music, ping me so you can join our slack and I’ll connect you. Thanks!