OK, I’ve never been much of a fan of Herndon (I’ve always thought of James Ferraro as a much better artist in this direction… i.e. I don’t think that Herndon’s embrace of more advanced technical means, or her institutional support, really adds anything)… but I think this is a bit unfair, as her remarks are actually highly critical of the entire AI project. Here’s the crux:
She’s basically taking the atomistic perspective in which AI was originally conceived (as something that can think like a human) and suggesting we replace it with Lewis’s holistic, cybernetic perspective: that we look across complex networks of humans, machines, desires, concepts, compositions etc. and ask how AIs actively transform those networks and thus become what they really are.
In other words, what AIs do is not what they claim to do (i.e. “represent”). Insofar as AIs are conceived reductively, they end up actually reducing human activity (i.e. Facebook’s reduction of human relationships into “status updates” and “likes”; or a million other examples…), and in this way they do tend to close the gap. They fulfill their quest to understand the human as clockwork apparatus by actively transforming the human, not by “improving” their understanding – I think Herndon gets this.
But then Herndon asks: can we subvert the process from within and deploy AIs in other, more complex feedback configurations – configurations that would be expansive instead of reductive, that have nothing to do with “representation”? In other words, can we consider the AI for what it really is – an abstract machine, a module in a complex feedback patch of other modules: humans, computers, concepts, desires, past musical pieces etc. – and utilize it in ways that make its origins no longer thinkable? Ways that perform an immanent critique of AI, or deconstruct it from within?
Some really important background on the “two perspectives” is Gilbert Simondon’s essay “Technical Mentality”, written in the 1950s but published posthumously, where he contrasts the atomistic view of the “Cartesian mechanism” (i.e. the AI paradigm) with “cybernetic theory”, which incorporates network and feedback effects… and, by implication, describes what “Cartesian mechanisms” actually do.
Herndon is adopting Simondon’s “cybernetic” perspective here, and this runs completely against the treatment of AI in mainstream academic engineering contexts, in major tech companies, and so on.
Other background here… the mention of George Lewis is hardly innocent: Herndon is invoking the ideological battles between Lewis (representing the cybernetic view, along with “small data/small machines”) and Pierre Boulez at IRCAM (representing the top-down, atomistic, “big machine/big data”… “Cartesian machine” view; i.e. total serialism; the rationalization of composition down to the level of sound ‘atoms’ via digital synthesis; the Chant/Formes project, etc.). It’s hard to do this background justice in a mere paragraph, but these were real and very painful battles, and they are summarized in Georgina Born’s book Rationalizing Culture. Basically, I think Herndon is bringing in Lewis specifically to be critical of Boulez, who represents the mainstream AI perspective at its very worst, and thus to be critical of this perspective in general.
So OK, the irony is not lost on me that Herndon has always enjoyed institutional support – particularly that of CCRMA, an outgrowth of the original Stanford Artificial Intelligence Lab – and, I presume, a network of top Big Tech companies as donors. And of course, Herndon’s entire field of computer music has followed either this model or the even more restricted IRCAM model… which is why I think her position limits her effectiveness. But I do think she’s at least raising some important issues, and she seems more of an ally at this point. I’m not fully convinced of this, though, and I would love to hear more about where you disagree…
I’d also love to know more about why Dryhurst is a “snake oil salesman” (again, not a rhetorical statement – I’m genuinely interested in hearing more…). My take right now is more positive: I’ve mostly been totally uninterested in his art, but I’ve found his essays on post-capitalist configurations and decentralization exciting. They get at the same issues Jaron Lanier addresses but are much more practical; they seem to avoid the silliness of Lanier’s solutions because they build solutions out of actually existing technologies. I haven’t worked through Dryhurst’s proposals in detail, so there may indeed be some “snake oil” – but at least, I hope, less than with Lanier! (Maybe that’s too low a bar?) In any case, I would love to know more about what you find problematic.
Despite its origins, I’m not really ‘against’ crypto – it seems to be an important weapon against centralized big data and the harvesting of creative human activity, and also a way to transform restricted economy into general economy, although the scarcity algorithm needs to be revised, to say the least (!). It’s also a way artists can own platforms and thus put a stop to their exploitation by Facebook/Spotify/etc.
Alexander Galloway has an interesting argument that crypto is anti-computational, and thus inherently opposed to the entire AI/big-data project. I haven’t put these ideas together with Dryhurst’s, and I’m not saying I can connect the two directly, but there might be something more fundamental here worth following up:
http://cultureandcommunication.org/galloway/anti-computer
Anyway, maybe I’m wrong about Dryhurst… I would love to get more of a sense of where this animosity is coming from. My problem is that I’m just not interested enough in his art/music to have much of a sense beyond what I’ve stated, so my knowledge remains limited.