holly herndon and her partner have been cooking up a neural network that created the song herndon just released, called “Godmother.” here is a note on the process, from NPR:
“Godmother” was generated by listening to Jlin, Herndon’s friend and spiritual sister in unclassifiable electronic music, and then reinterpreting the data in Herndon’s voice. There’s a raw, newborn quality to the track as it hums and sputters like a swarm of glitching bees, just trying to find its mother.
and here’s part of herndon’s statement on the AI:
Going through this process has brought about interesting questions about the future of music. The advent of sampling raised many concerns about the ethical use of material created by others, but the era of machine legible culture accelerates and abstracts that conversation. Simply through witnessing music, Spawn is already pretty good at learning to recreate signature composition styles or vocal characters, and will only get better, sufficient that anyone collaborating with her might be able to mimic the work of, or communicate through the voice of, another.
Are we to recoil from these developments, and place limitations on the ability for non-human entities like Spawn to witness things that we want to protect? Is permission-less mimicry the logical end point of a data-driven new musical ecosystem surgically tailored to give people more of what they like, with less and less emphasis on the provenance, or identity, of an idea? Or is there a more beautiful, symbiotic path of machine/human collaboration, owing to the legacies of pioneers like George Lewis, that view these developments as an opportunity to reconsider who we are and dream up new ways of creating and organizing accordingly?
I find something hopeful about the roughness of this piece of music. Amidst a lot of misleading AI hype, it communicates something honest about the state of this technology; it is still a baby. It is important to be cautious that we are not raising a monster.
on one level, i am disturbed by this eerie interpretation of a person’s voice by a neural network and the suggestion that the program can interpret and copy human compositional styles. on another, this kinda just seems like generative music, now with the buzzy phrase “AI” included (which might be a misinterpretation on my part).
Disclaimer: I am no expert w/r/t data science, and yet I still must opine…
I think all these machine learning techniques are purposefully and deceptively named with a lexicon designed to relate to words typically used to describe human intelligence. “AI” in this context has nothing to do with human intelligence - it’s classification technology, using some arbitrary system to improve what is probably a surprisingly simple model. I think the results still sound fascinating, but the way this kind of work is often presented obscures what is actually being done behind the scenes - what data is being used to train the model and how the output sound is actually being synthesized.
I think it’s disgraceful that “AI” is still being described within the science fiction context of a machine which can think like a human (or even an animal). The danger of “AI” is not replacing people - it’s management exerting more power over labor, it’s governments prioritizing bureaucracy and cost efficiency over humans, etc.
There are so many people out there misinforming us about what “AI” is, let alone how it’s being used already. Unfortunately, I feel like Holly Herndon is one of them, even if I respect her as an artist.
And of course Dryhurst is involved, the same snake oil salesman who spent years telling everyone how crypto coins would save musicians! I think the real issue being revealed here is not that of originality in a world of machine mimicry - it’s about how to generate the PR to be a successful musician when human mimicry is already so commonplace.
I think it’s totally fair to categorize ML-generated or -assisted music under “generative music.” The interesting differences, to me, are that (a) the initial step of assembling a corpus of training data can be super interesting – it’s a critical creative input – and (b) the network of “rules” the system learns is so complex it ends up being quite mysterious or even opaque. Which is interesting!
Sound from these systems also tends to have a particular “character” right now, which I think you can hear in the Herndon track. Most (all?) of the ML methods for generating raw waveforms operate on 16 kHz data. We’ll get to higher sample rates eventually, but for now (a) 16 kHz seems to give you the right trade-off of moment-to-moment quality to long-range structure, and (b) a lot of these ML techniques were initially designed for voice applications, for which 16 kHz is fine. As a result, in addition to all the other interesting artifacts introduced by these techniques, there’s a cloak of grainy, tape-y, 16 kHz quality. I happen to really like it!
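To hear what that 16 kHz ceiling actually does to a signal: the Nyquist limit means nothing above 8 kHz survives, which is where a lot of that grainy, tape-y top end comes from. Here’s a toy sketch in plain numpy (a naive linear-interpolation resampler of my own, not anything from an actual ML pipeline, and with no anti-alias filter, which a real resampler would need):

```python
import numpy as np

def resample(x, sr_in, sr_out):
    """Naive linear-interpolation resampler (a real pipeline would
    low-pass filter first to avoid aliasing)."""
    n_out = int(len(x) * sr_out / sr_in)
    t_in = np.arange(len(x)) / sr_in
    t_out = np.arange(n_out) / sr_out
    return np.interp(t_out, t_in, x)

sr = 44100
t = np.arange(sr) / sr                    # one second of audio
x = np.sin(2 * np.pi * 440 * t)           # a 440 Hz tone survives 16 kHz fine
y = resample(x, sr, 16000)                # but Nyquist is now 8 kHz...
x2 = resample(y, 16000, sr)               # ...so any top end above that is gone for good
```

Run this on real audio instead of a test tone and the round trip through 16 kHz is exactly that cloaked, slightly dull quality you hear on the track.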
OK, I’ve never been much of a fan of Herndon (I’ve always thought of James Ferraro as a much better artist in this direction… i.e. I don’t think that Herndon’s embrace of more advanced technical means or her institutional support really adds anything) … but I think this is a bit unfair, as her remarks are actually highly critical of the entire AI project. Here’s the crux:
She’s basically comparing the atomistic perspective in which AI was originally conceived (as something that can think like a human) and suggesting we replace it with Lewis’s holistic perspective of cybernetic theory, that we look across complex networks of humans, machines, desires, concepts, compositions etc. and ask how AI’s actively transform those networks and thus become what they really are.
In other words, what AI’s do is not what they claim to do (i.e. “represent”). Insofar as AI’s are conceived reductively they end up actually reducing human activity (i.e. Facebook’s reduction of human relationships into “status updates” and “likes”; or a million other examples…) and in this way actually do tend to close the gap. They fulfill their quest to understand the human as clockwork-apparatus by actively transforming the human, not by “improving” their understanding – I think Herndon gets this.
But then Herndon suggests, can we subvert the process from within and deploy AI’s in other, more complex feedback configurations, configurations that would be expansive instead of reductive, configurations that have nothing to do with “representation”? In other words, can we consider the AI for what it really is – an abstract machine, a module in a complex feedback patch of other modules: humans, computers, concepts, desires, past musical pieces etc. etc. and utilize it in ways that make its origins no longer thinkable? That perform an immanent critique of AI or deconstruct it from within?
Some really important background on the “two perspectives” is Gilbert Simondon’s essay “Technical Mentality”, written in the 1950s but published posthumously, where he contrasts the atomistic view of the “Cartesian mechanism” (i.e. the AI paradigm) with “cybernetic theory” which incorporates network and feedback-effects … and by implication, describes what “Cartesian mechanisms” actually do.
Herndon is adopting Simondon’s “cybernetic” perspective here, and this is completely against any treatment of AI in normal academic engineering contexts, in major tech companies, and so on.
Other background here… the mention of George Lewis is hardly innocent, Herndon is bringing in the ideological battles between Lewis (representing the cybernetic view, also with “small data/small machines”) and Pierre Boulez at IRCAM (representing the top-down, atomistic, “big machine/big data”… “Cartesian machine” view; i.e. total serialism; rationalization of composition down to the level of sound ‘atoms’ by digital synthesis; the Chant/Formes project etc). It’s so hard to go into the background here in a mere paragraph, but these were real and very painful battles, and they are summarized in Georgina Born’s book Rationalizing Culture. Basically, I think Herndon is bringing in Lewis specifically to be critical of Boulez, who represents the mainstream AI perspective at its very worst, and thus specifically to be critical of this perspective.
So OK, the irony is not lost on me that Herndon has always enjoyed institutional support and particularly that of CCRMA, an outgrowth of the original Stanford Artificial Intelligence Lab… and I presume a network of top Big Tech companies as donors. And of course, Herndon’s entire field of computer music has followed either this model or the even more restricted IRCAM model… and that’s why I think her position limits her effectiveness. But I do think she’s at least raising some important issues and seems more of an ally at this point. But I’m also not fully convinced on this, and I would love to hear more about where you disagree…
I’d also love to know more about why Dryhurst is a “snake oil salesman” (again, not a rhetorical statement, I’m genuinely interested in hearing more…) My take right now is more positive – I’ve mostly been totally uninterested in his art, but I’ve found his essays on post-capitalist configurations and decentralization exciting, they get at the same issues Jaron Lanier addresses but are much more practical; they seem to avoid the silliness of Lanier’s solutions because they build solutions out of actually existing technologies. I haven’t worked through Dryhurst’s proposals in detail, so there indeed may be some “snake oil” — but at least, I hope, less than with Lanier! (maybe that’s too low a bar?) But I would love to know more about what you find problematic.
Despite its origins, I’m not really ‘against’ crypto; it seems to be an important weapon against centralized big data and the harvesting of creative human activity… also a way to transform restricted economy into general economy, although the scarcity algorithm needs to be revised, to say the least (!) But it’s also a way artists can own platforms and thus put a stop to their exploitation by Facebook/Spotify/etc.
Alexander Galloway has an interesting argument about crypto being anti-computational, and thus inherently opposed to the entire AI/big data project. I haven’t put these ideas together with Dryhurst, but there might be something more fundamental here worth following up on. I’m not saying I can directly connect Dryhurst with Galloway…
Anyway, maybe I’m wrong about Dryhurst… would love to get more of a sense where this animosity is coming from. My problem is that I’m just not interested enough in his art/music to have much of a sense beyond what I stated, so my knowledge remains limited.
Every implementation of crypto currency I’ve seen works by essentially regulating finance through pure supply and demand, mediated by a sophisticated technological infrastructure - this hardly seems “distributed” to me; it sounds like an even more unforgiving version of what exists now. The institutions we have to regulate finance are far from perfect, and they often benefit powerful individuals and organizations, but I would argue that the adoption of a highly technical and Hayekian economic system would be far worse. At the end of the day, these institutions (central banks and governments) in some way must respond to political pressure, whereas crypto is mediated purely by supply, demand and technology.

How is it that a financial scheme which can literally only exist with the technology produced by a handful of powerful corporations will empower individuals and “decentralize” anything? How is that “anti-computational”? If you’re naive enough to assume that the only villains in the world are banks and governments, I guess that’s enough. I really cannot see any realistic use case for blockchain that does anything but reinforce existing power structures.

“Smart contracts” to me sound like an extreme libertarian fantasy of a world without trust or recourse. Western society has used the myth of individual empowerment and freedom for centuries to justify its course… which in my opinion often leads to perverse and unexpected outcomes. I see crypto as an unnecessarily technocratic “solution” to the wrong problem - an ultimate realization of a world in which society has been utterly replaced by transactions.
I don’t and can’t know that either Herndon or Dryhurst’s points of view are “wrong”, but they rub me the wrong way. The argument for AI as a collaborative, “symbiotic” entity is the same kind of rhetorical nonsense you see espoused by so many in the tech and business world today. Both of them are annoying, and I wish they’d stop.
I’m sure there are many arguments to be made against me, but it’s something I feel strongly about. Dryhurst explicitly challenges leftists regarding the adoption of technology without looking to the history of technology and the internet in particular. It was techno-utopian “thinkers” like him who brought us the “decentralized” and “uncensored” internet as it exists today in all its glory. The insistence on technological solutions to political problems, at least from a leftist point of view, is foolhardy.
after a little more consideration i’m a little embarrassed about posting this in the first place. the phrase “AI” seems to be used purely as clickbait here - for me (and probably for many people who will encounter headlines about this project), talking about “AI” takes me out of reality and into a sci-fi daydream almost immediately. it seems like it could be a cool ML tool, but i can’t help but feel the project is selling a lot of… snake oil. or at least it isn’t being honest about what’s really happening and is relying on the audience not thinking about it too much. and it seems to be working! this project seems to have been picked up by virtually every publication that covers music news.
note to self: think for another minute before posting (lol)
No, I’m sorry, because I’m just spamming my opinion here and by no means intended to delegitimize your post or the work itself. I think the technology (both in the case of blockchain and ML) is actually really interesting - what I don’t buy is when it’s sold as somehow “revolutionary”, especially by people who profit from it directly.
OK, thanks for this (I still need to digest the economic theory in the first part of your post as well as re-read Dryhurst’s article.) At least I think I understand where you’re coming from and agree more or less in spirit.
No, of course one can’t transform the essence of technology simply by developing better tools – one must first probe its essence, which is basically commodification; in other words, revealing everything as a manipulable, exchangeable resource to be either stockpiled or recirculated. This revealing also claims us - as “human resources”; as infinitely replaceable and AI has played an important role in this replaceability.
Again, the essence of technology is prior to actual technologies; this is why Philip K. Dick’s 1955 story “Autofac” perfectly predicts what will likely be the future of Amazon, and so on. This is also why James Ferraro or Jon Rafman can deliver potent critiques of the essence of AI without actually having to develop AI, and why Herndon’s gesture seems a bit superfluous here.
We don’t get our fundamental understanding from our tools; if we did, we could just develop different ones and solve the problem then and there. This is the point Lanier constantly misses, and it is always annoying. I detect that this is also how you read Dryhurst; for me the jury is still out, but I can see myself coming around to your position.
But there’s an interesting corollary which remains unaddressed in your critique. If the tools themselves are irrelevant to the understanding that forms the essence of technology (they only implement or reveal this understanding, but do not transform it, at least not by design), it also makes sense that existing tools such as AI and blockchain can and will be repurposed – they will come to reveal very different things about how we understand ourselves and the world. Since the tools themselves actually don’t matter, why not speculate as to potential future understandings which make use of these tools? You have to start somewhere in other words – not just “burn it down” and leave no resources with which to build in light of a new understanding. This speculation does not posit tools themselves as revolutionary. I think Herndon actually gets that AI is not revolutionary, and is looking towards how AI can mean something entirely different under a new understanding that is revolutionary.
Sean Booth : I wouldn’t say it’s a living entity, really. It’s about as much of an entity as a shit AI in a game is. That’s how intelligent it is, which is not intelligent at all, but it might at least resemble the way a person thinks. It’s funny, I’ve been reading about Markov models and Markov chains recently, and the results from Markov chains are remarkably similar to what you get out of Watson or DeepMind, these super advanced language modelling things. And this article was about how unwieldy that kind of mega-gigantic, expensive AI is, because you can actually achieve very close results using Markov chains, and they’re really fucking simple, they’re computationally really easy to deal with, they’re what people use for Twitterbots and things like that. So in some ways these simple conditional responses can resemble very high-end AI. Even though it’s very simple, the result is close enough not to matter.
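Booth’s point is easy to demo, since a word-level Markov chain really is just a few lines. A generic sketch (nothing to do with Watson/DeepMind internals; the corpus and parameters are whatever you feed it):

```python
import random
from collections import defaultdict

def train(text, order=2):
    # map each n-gram of words to the list of words observed right after it
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return dict(model)

def generate(model, n=20, seed=0):
    # walk the chain: sample a follower, shift the window, repeat
    rng = random.Random(seed)
    state = rng.choice(list(model))
    out = list(state)
    for _ in range(n):
        followers = model.get(state)
        if not followers:                 # dead end: jump to a random state
            state = rng.choice(list(model))
            followers = model[state]
        out.append(rng.choice(followers))
        state = tuple(out[-len(state):])
    return " ".join(out)
```

Train it on a Twitter feed and you get the Twitterbot effect he mentions: locally plausible, globally meaningless, and computationally almost free.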
Otherwise, I find AI music kind of unremarkable. The Actress/Young Paint thing, for example, didn’t do much for me. Haven’t really unpacked why. I seem to prefer the Autechre style “soft” AI - ‘it’s just a bunch of “if” statements’ type thing
this “Soft AI” was of course the lesson of Joseph Weizenbaum’s Eliza (1966)… Consciousness is relational, it is something achieved through the interaction of entities… not as something in the AI’s “head”… Eliza was a dead-simple “mirror” program…basically, repeat what the person says – yet people had much more meaningful experiences with Eliza vs. almost anything else at the time… (or vs. modern “assistants” such as Alexa/Siri… although the purpose was somewhat different…)
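For reference, the Eliza “mirror” trick really is that dead simple. A toy version (my own reflection table and template, not Weizenbaum’s actual script, which used ranked keyword rules):

```python
import re

# pronoun reflections -- the whole "mirror" is this table plus a template
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are",
           "mine": "yours", "you": "i", "your": "my", "are": "am"}

def mirror(sentence):
    """Reflect first/second person and bounce the statement back as a question."""
    words = [REFLECT.get(w, w) for w in re.findall(r"[a-z']+", sentence.lower())]
    return "Why do you say " + " ".join(words) + "?"
```

So `mirror("I am sad about my work")` comes back as “Why do you say you are sad about your work?” - which is the whole relational point: the meaning the user experiences is supplied by the user, not by anything in the program’s “head”.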
I was excited to see this thread pop up here, but a lot of the commentary has really heavily bummed me out. I get that AI is a bit buzzword bingo, but it’s 2018, not 1988; we should reasonably be able to hold a coherent conversation about the field’s algorithms being applied to art without being sexist and dismissive.
And while the “snake oil salesman” comment may match your reality, it’s both aggravating and depressing that even here on lines, a woman’s work is reduced to criticisms about her partner’s.
There is so much more interesting potential to think about and discuss here than one of the artists’ boyfriend, or whether or not AI is an appropriate acronym to use in place of ML.
I agree that a discussion of the work shouldn’t devolve into sexism or unrelated comments about her partner, but I just wanted to add that I saw Herndon and Dryhurst (along with another collaborator whose name I can’t remember right now) discuss this exact work when it was still under wraps at CTM earlier this year and there was no doubt that it was a highly collaborative project. Yes, it’s been released under her name, but it was presented as a work in which the three of them had equal authorship and control.
Now threads like this are (one of many reasons) why Lines is great!
I’ll have to write a proper response to this once I have a moment to sit down, but I’m all in favor of someone like Herndon using her position as the recipient of institutional support to further the conversation re: ML/AI + cultural production in the public sphere.
I mean, I suppose I can appreciate the read of this ‘sensationalizing’ the AI element for HH’s own benefit, but I don’t buy that as the primary motivating factor in the work or its publicization. (also it’s a banger, and jlin is amazing!)
Also, ditto @analogue01 that the Yasunao Tone works are incredible (saw a performance of this at Gavin Brown Enterprise and pretty much ripped my face off).
With that being said though, my hunch is that HH’s intent with this work is a bit different, and I appreciate her using her ‘platform’ in an intentional + public way.
I think one of the reasons ‘fine art’ in the age of culture-as-content is still important is that it allows us to look into the poetics/problematics of new systems + spaces in a way that fosters critical dialogue and investigation.
This could be why folks like Ferraro or Rafman can contribute meaningfully to this dialogue without actually developing their own AI’s like @ht73 mentioned above (they’re dealing with the ‘affect’ of AI?)
I think HH + MD’s approach to this is different but equally important as members of the institution that also exist as public facing cultural producers rather than straight ahead academics or industry engineers.
Whether that overlap will bring actionable change, i donno, but fingers crossed i guess?
I see this getting slated as a bad joke by industry engineers over on twitter, since for them it’s clickbait for AI when it’s actually not, but rather the eternal catchphrase of HH/MD and the third person nobody ever knows about (yes, they always act as an entity, for gender conformity?). BUT what is that catchphrase? I still don’t really get it. I like Herndon’s music per se, but I’ve always had difficulty taking in their ‘constructive criticism’ of the internet. To me it always ends up outdated by the industry, never able to actually get a point across before the industry crushes the catchphrase technology enabled by non-industry people… the blockchain/cryptomoney/copyright triangle is a very good example where so much went wrong in so little time that it’s actually way better to call it a dystopian semi-future and abandon the idea that we humans have to fight for integrity within the machine world. And yes, this has been done by Ferraro in a very convincing, human way. As for the video, I haven’t listened to it just yet; I need a human to tell me ‘patrick, i know your taste, you’ll like it’, because up to this point I see people hate or love it in this virtual world but it hasn’t sprung to my ears yet… maybe this is the HH/MD/third person’s agenda?
The neural-network chip forms the heart of the synthesizer. It consists of 64 non-linear amplifiers (the electronic neurons on the chip) with 10240 programmable connections. Any input signal can be connected to any neuron, the output of which can be fed back to any input via on-chip or off-chip paths, each with variable connection strength. The same floating-gate devices used in EEPROMs (electrically erasable, programmable, read-only memories) are used in an analog mode of operation to store the strengths of the connections. The synthesizer adds R-C (resistance-capacitance) tank circuits on feedback paths for 16 of the 64 neurons to control the frequencies of oscillation. The R-C circuits produce relaxation oscillations. Interconnecting many relaxation oscillators rapidly produces complex sounds. Global gain and bias signals on the chip control the relative amplitudes of neuron oscillations. Near the onset of oscillation the neurons are sensitive to inherent thermal noise produced by random motions of electron groups moving through the monolithic silicon lattice. This thermal noise adds unpredictability to the synthesizer’s outputs, something David found especially appealing.
The synthesizer’s performance console controls the neural-network chip, R-C circuits, external feedback paths and output channels. The chip itself is not used to its full potential in this first synthesizer. It generates sound and routes signals, but the role of learner, pattern-recognizer and responder is played by David, himself a vastly more complex neural network than the chip.
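The relaxation-oscillator principle described above is easy to caricature numerically. This is in no way a model of the actual analog chip; just a toy sketch (all parameters are my own invention) where each “neuron” is an R-C integrator driving a Schmitt-trigger-style nonlinearity, coupled to the others through a weight matrix, with a small noise term standing in for the thermal noise David found appealing:

```python
import numpy as np

def simulate(weights, n_steps=2000, dt=1e-4, tau=1e-3, thresh=0.5, noise=1e-3):
    """Toy network of relaxation oscillators: each unit is an R-C
    integrator feeding a hysteretic (Schmitt-trigger) nonlinearity,
    coupled to the others through `weights`."""
    n = weights.shape[0]
    v = np.zeros(n)                 # "capacitor" voltages
    out = np.ones(n)                # amplifier outputs, switching between +1/-1
    rng = np.random.default_rng(0)
    trace = np.empty((n_steps, n))
    for i in range(n_steps):
        drive = weights @ out                   # programmable connections
        v += (dt / tau) * (drive - v)           # R-C charging toward the drive
        v += noise * rng.standard_normal(n)     # stand-in for thermal noise
        # hysteresis: flip only on threshold crossing -> relaxation oscillation
        out = np.where(v > thresh, -1.0, np.where(v < -thresh, 1.0, out))
        trace[i] = out
    return trace

# self-feedback alone makes each unit oscillate; off-diagonal
# coupling is what rapidly produces the complex, entangled behavior
trace = simulate(np.eye(4) + 0.3 * (np.ones((4, 4)) - np.eye(4)))
```

With only diagonal weights each unit free-runs as a square-ish relaxation oscillation; add cross-coupling and the units start pulling each other in and out of phase, which is the quick route to complexity the passage describes.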