flash crash. end.

the reserved spot is… FOR YOU!

4 Likes

:pleading_face: i am in. thank you!!!

lately i have been working on a norns/crow thing that explores gestures that slide through the maqam bayati shuri; i am debating exploring that for FC but it is a mostly complete script so it feels less in the spirit of live coding… more likely i will do a teletype/crow setup and integrate it with a computer somehow :heart:

9 Likes

Curious how you are working with microtuning on norns/crow?

3 Likes

just hard coding scales! not the most elegant but fine for my narrow use cases

3 Likes

Are you using Scala scales? I’m trying to figure out how to get decent scales to use with Ableton’s new microtuning device…

1 Like

no scala, just lua tables. i dont use ableton so tbh no clue there but this might be of interest! z_tuning (tuning mod for norns)
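for the curious, roughly the kind of thing i mean (a minimal sketch, not my actual script: the cents values are a rough quarter-tone approximation of bayati, the names are made up, and the crow call at the end is just illustrative):

```lua
-- one maqam hard coded as cents-from-tonic, converted to 1V/oct for crow.
-- cents are a rough quarter-tone approximation of bayati, nothing fancier.
local bayati_cents = { 0, 150, 300, 500, 700, 800, 1000 }

-- scale degree (1-based, any octave) -> volts at 1V/oct above a root voltage
local function degree_to_volts(degree, root_volts)
  root_volts = root_volts or 0
  local n = #bayati_cents
  local octave = math.floor((degree - 1) / n)
  local step = ((degree - 1) % n) + 1
  return root_volts + octave + bayati_cents[step] / 1200
end

for d = 1, 8 do print(d, degree_to_volts(d)) end

-- in a norns script the result would feed crow, e.g. (assumption):
-- crow.output[1].volts = degree_to_volts(4)
```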

2 Likes

Thanks, that’s cool!

A norns is in my near future!

5 Likes

Thank you very much for all the Crashers! Learned a lot on Saturday, and I now have deep material for livecode studies. Hopefully in the mid term I will manage to do a true live coding set.

I also need to praise @MDN and @tyleretters for the amazing generative art and all the work organizing flash crash and making sure that everything ran smoothly. Big up!

It was very fun to be part of flash crash - and also very important in helping me set very clear Norns study goals. I was instantly fascinated by Arcologies - probably before even getting my first Norns - by the whole sequencing idea and the somehow old-school, Blizzard-esque manual approach to the descriptions.

However, it took me some time to actually learn how to use the script in a more structured (“musical”) way, even at a minimum level, and having the flash crash responsibility forced me to study it from a more focused and practical perspective. Indeed, I can say the same of the whole Norns + grid basic structure (i.e. scripts, MIDI sync, tape use, FX, recording, etc.).

As a side note, I played live extensively at raves between 2015 and late 2019, and I was really missing the anxiety/joy of creating a new live set, something I somehow lost in the pandemic context (and I’m still struggling a little to be comfortable at parties again). So it’s good to be back.

As a second side note, I got Covid-19 :microbe: three weeks ago, so it was very therapeutic to have the live set as a joyful, physically easy activity to develop during my convalescence.

20 Likes

that set was soooo damn good. every time it floored me, it kept going up another level.

4 Likes

what
-TT, i2c2midi, Crow, Deluge
-Test vdo.ninja until comfortable, practice live streaming
-high actions per minute
-tempo synced DEVICE.FLIP for style points

deluge / i2c2midi / TT
-map midi notes and ccs to a synth preset using i2c2midi
-test midi clock sync vs CLK IN sync
-practice quick stereo audio input routing
-read through some a773 code to get ideas for melody / scene sequencing?

rhythm sequencing on TT
-ratchets, loops, probability, conditionals, random
-macro rhythm controls tied to IN and PARAM, use Deluge CV for IN to TT?
-Obakegaku has Meshuggah drums: Teletype haiku - #121 by Obakegaku

crow
-write script to set sound parameters in crow using command, repurpose CROW.C2
-rewrite parameter indexing to make it easier to remember
-stress test
-bytebeat percussion oscillator??? Bytebeats - A Beginner’s Guide! YO FELIX SWEET (see the sketch after this list)
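sketch of the bytebeat idea, just to audition the classic formula offline before worrying about whether or how any of it could run on crow. Plain Lua 5.3+, and the filename plus playback command are my own assumptions:

```lua
-- render the classic bytebeat formula t*(t>>5|t>>8)&255 to raw unsigned
-- 8-bit mono samples at 8 kHz (bytebeat's traditional rate).
-- this is an offline audition tool, not crow or TT code.
local rate, seconds = 8000, 4

local function bytebeat(t)
  return (t * ((t >> 5) | (t >> 8))) & 255
end

local f = assert(io.open("bytebeat.raw", "wb"))
for t = 0, rate * seconds - 1 do
  f:write(string.char(bytebeat(t)))
end
f:close()
-- playback (assuming sox is around):
-- play -r 8000 -e unsigned -b 8 -c 1 bytebeat.raw
```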

13 Likes

William Hazard FC diaries: Wednesday 220608

tested out vdo ninja yesterday and figured out how to use multiple “cameras” (one for screen sharing druid, another for an actual camera + audio from modular). I wish I could overlay them like I did here, but that doesn’t seem like an option.

So, two “cameras.” Books tend to display two pages at a time (with some exceptions). Now I am thinking about heteroglossia.

Two texts — one poem to be read by viewers on the druid screen and also by the synthesizer, another to be read by me and experienced as sound — the first should be lineated; maybe the second shouldn’t be — like haibun in The Narrow Road to the Deep North. What is the relationship between these texts? How will the second be composed? Maybe by listening to the musical results of the first and free-writing in a way that is guided by the music — telling a story, drawing on memories, honesty.
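One way I can imagine the synthesizer reading the first text, purely as a sketch: a crow script, pasted in via druid, that walks the poem letter by letter and turns each character into a voltage and a trigger. The poem here is a placeholder, and the pacing and the letter-to-volts mapping are assumptions, not decisions.

```lua
-- sketch of a crow script (via druid) that "reads" a poem: each letter
-- becomes a pitch-ish voltage on output 1 and a short trigger on output 2.
-- poem text, pacing, and the letter-to-volts mapping are placeholders.
poem = "the reserved spot is for you"

function char_to_volts(c)
  local b = string.byte(string.lower(c))
  if b < 97 or b > 122 then return nil end -- non-letters become rests
  return (b - 97) / 26 * 2  -- spread a..z across roughly 0-2V
end

function init()
  output[2].action = pulse(0.01)
  clock.run(function()
    for i = 1, #poem do
      local v = char_to_volts(string.sub(poem, i, i))
      if v then
        output[1].volts = v
        output[2]()          -- fire the trigger
      end
      clock.sleep(0.25)      -- one letter per quarter second
    end
  end)
end
```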

10 Likes

i’ve threatened to do this a couple times, but i don’t think it’s been done yet in an FC — major style points would be in order for sure, especially if it happens at a crazy drop :crazy_face::slightly_smiling_face::upside_down_face::slightly_smiling_face::upside_down_face::slightly_smiling_face::upside_down_face::partying_face:

9 Likes

the license FC diaries: Thursday 220609

still very much have my net out, and it doesn’t have much in it yet but sediment. found a post on scsynth with a slick example of using events for polyphony (i.e. multichannel expansion) that I might use.

more importantly I think @WilliamHazard’s idea about honesty is something I want to have a good think on. vulnerability is really interesting to me lately. I’m not sure how to work that in with my vibe and skill set, but doing this little process log (thanks for the prompt, @tyleretters!) seems a good start.

9 Likes

Process 10/06/22:

crosspatch_to_gif_A

(noodle with new objects / zen of the silent patch)

13 Likes

William Hazard FC diaries: Friday 220610

Yesterday, I revisited an old favorite that continues to inspire. Susan Howe’s collaborations with David Grubbs are what got me interested in doing poetry with synthesizers in the first place. At the time, I thought, “This is really cool. I know a bit about music and a bit about poetry; maybe I could do something like this.”

I was in graduate school doing an MFA in poetry, and I had a MicroKorg XL+, so I started doing multimedia performances with that synthesizer, multiple speakers, and PowerPoint presentations with drum loops in them. I made the drum loops in Hydrogen. It was a far cry from what I’m doing now. But my pursuit of that interest in poetry and synthesizers was what led me first to eurorack, then to SuperCollider, then to monome, and ultimately here, to lines.

Clearly, I’m feeling a bit nostalgic today.

Tonight, some friends and I are going to hear Chuck van Zyl perform at the Chestnut Hill Skyspace. I imagine that will be pretty inspiring too.

“Every single mark that you make on paper is an acoustic mark.”

12 Likes

SQUIM FC logs 6/11/22

Starting a new job at an institution that I’ve worked at for a long time. Retreating further away from the things that I love about it and sacrificing my time for security. I’m at a large desk on the top floor and don’t see many people outside of meetings. There’s been a little bit of anxiety around things in my relationships shifting around. A lot of change and a lot of chaos.

Very satisfying to settle into my tools for this performance. Coming back to some software that I used to use in live settings all the time (back when I did that sort of thing). I gave up on it some time ago because its advancement halted in 2013. Coming back to it now is satisfying and a little bit sad. There are just certain things it does that feel like they make sense for the way I approach electronic music.

I was going to get even more elaborate but I’m glad I’ve settled where I have. There are a lot of possibilities here and it’s easy to go overboard quickly. I have time to arrange things carefully and plan out the arc of my performance. This snapshot on the other hand is complete chaos.

13 Likes

i love pieces like these that start in the middle of a thought and just evolve based on their own internal logic. i enjoy what you’re doing with the webcam. lots of really nice variety with that.

2 Likes

tyleretters fc process log 220615

guitar. i am taking lessons. it is so refreshing to play an instrument that can think on its own.

15 Likes

Week consumed, update the Thread

How:
-reviewed audio in/out on Deluge, midi learn, autoslice samples, TT ratchets, i2c chords, random, basic patterns, time, P.MAP, latest i2c2midi firmware is excellent
-Seems like Deluge can’t set a synth track’s midi channel a priori; I can only press and hold [learn] + [audition pad] and then send a midi command from TT to learn that channel. This is annoying for live coding because I have a bunch of midi messages going all the time, which means I have to stop, midi learn, then start the sequencing again. I could get around this by tastefully looping audio on Deluge while pausing/learning/restarting the midi. Alternatively, I could lean in and have 3 or 4 midi channel sequences triggering from TT and randomly learn midi channels to X number of synth tracks on Deluge, resulting in a bunch of strange sequences. Could explore this as a performance tool. Here’s a small example of what this resulted in:

What:
-nervous thoughts of inadequately authentic self-expression
-industrial void is the current vibe
-20-30min
-start out with industrial soundscape and build from there
-The Deluge discord monthly challenge for June is “found sounds recorded with Deluge’s mic”. I took Deluge into the electronics research lab I work in and recorded a bunch of metal clangs, machinery, metal scrapes, electricity drones, wind tunnel, variable frequency drive, footsteps, ambience. Zip of all the recordings attached, free to use for any purpose. Panned and looped fence clangs sound cool. Was excited to finally find and record that background electrical drone.


Found Sounds Zip:
https://drive.google.com/file/d/131_QvREPViZWx8W7xSIajF8Yo1IdEAu9/view?usp=sharing

Will:
-involve crow more this week (TODO: more crow puns)
-freeze on learning new techniques, practice existing skills
-finish chopping up recorded samples, load into Deluge
-walkthrough live streaming setup
-practice recipe of industrial soundscape then build into something, 20 min reps starting from blank state on Deluge and TT

11 Likes

These are 20 characters of great!

4 Likes