AI can’t escape its past. Not the hype-cycle that keeps helping it evade regulators. Nor the roots of its computing in colonialism and worker control.

“Dr Heinz Doofenshmirtz, trying to control the tri-state area, original cartoon style”, prompt to Midjourney

We’re noticing that many of the more trenchant critiques of AI hype are coming from women technologists, commentators and academics - perhaps unsurprising, given how male-dominated the upper echelons and ranks of AI production are. Here are two powerful examples of the femme-reaction to the weird, “we may destroy the world but we can’t help it”, thrilled nihilism of the tech bros.

First is this extract from a piece by long-standing ethical tech designer Rachel Coldicutt. She’s recoiling from her experience at AI godfather Geoffrey Hinton’s lecture at Cambridge, where her expertise was slightingly questioned. But her main critique is of the media - how certain mythologies are built up by powerful tech advocates:

The question of “what qualifies you” to understand a technology is particularly relevant at the moment, as we enter the nth week of Sam Altman’s AI Hype Roadshow, a cavalcade of open letters and AI doomspeak from World-Leading Authorities, in which the term “AI” has been a compelling vehicle for a wide range of as-yet imaginary concepts.

In this instance, the ability to understand a technology is neither here nor there, because the point has not been to discuss any of the relevant technologies. Instead, the project of Altman and his merry band of doomsayers appears to be to capture power and create obfuscation by making new myths and legends. If there has been a teachable moment, then the lesson has not been one about the potential of technologies but about the importance of media literacy.

And this is by no means a new move; it just happens – this time – to have been astonishingly effective. For several decades, tech companies have been aware that political influence is as important as technological innovation in shaping future market opportunities: from tactical advertising to political lobbying to creating well-paid public-policy jobs that have improved the bank balances of many former politicians and political advisers.

The importance of getting in first with a compelling political story has played a critical role in creating, expanding, and maintaining their incredibly lucrative markets.

The current “existential threat” framing is effective because it fits on a rolling news ticker and diverts attention from the harms being created right now by data-driven and automated technologies; it also confers huge and unknowable potential power on those involved in creating those technologies. If these technologies are otherworldly, godlike, and unknowable, then the people who created them must be more than gods, their quasi-divinity transporting them into state rooms and on to newspaper front pages without the need to offer so much as a single piece of compelling evidence for their astonishing claims.

This grandiosity makes the hubris of the first page of Stewart Brand’s Whole Earth Catalog seem rather tame, and it assumes that no one will pull back the curtain and expose it as a market-expansion strategy rather than a moment of redemption. No one will ask what the words really mean, because they don’t want to look like they don’t really understand.

And yet, really, it’s just a narrative trick: the hidden object is not a technology, but a bid for power. This is a plot twist familiar from Greek myths, cautionary tales and superhero stories, and it’s extremely compelling for journalists because most technology news is boring as hell.

Altman’s current line is roughly, “please regulate me now because I’m not responsible for how powerful I’m going to turn out to be – and, oh, let’s just skip over all the current copyright abuses and potentially lethal misinformation because that’s obvs small fry compared to when I accidentally abolish humanity”.

If it reminds me of anything, it’s the cartoon villain Dr Heinz Doofenshmirtz from Phineas and Ferb, who makes regular outlandish claims before trying, and failing, to take control of the Tri-State Area. The difference is, of course, that Phineas and Ferb always frustrate his plan.

My point is not so much that we need Phineas and Ferb to come and sort this all out, but that we need to stop normalising credulity when people with power and money and fancy titles say extraordinary things.

When I went to Hinton’s Q&A in Cambridge last week, he spoke with ease and expertise about neural nets, but admitted he knows little about politics or regulation or people beyond computer labs. These last points garnered several laughs from the audience, but they weren’t really funny; they spoke to a yawning gap in the way that technology is understood and spoken about and covered in the media.

Computer science is a complex discipline, and those who excel at it are rightly lauded; but so is the work of understanding and critiquing power and holding it to account. Understanding technologies also requires understanding power; it needs media literacy as well as technical literacy, incisive questioning as well as shock and awe.

If there is an existential threat posed by OpenAI and other technology companies, it is the threat of a few individuals shaping markets and societies for their own benefit. Elite corporate capture is the real existential risk, but it looks much less exciting in a headline.

More here.


Next is an extract from a blog by someone we featured here recently - Meredith Whittaker, whom we described as “a prominent AI researcher who was pushed out of Google in 2019 in part for organizing employees against the company’s deal with the Pentagon to build machine vision technology for military drones”. In the excellent tech-critical magazine Logic, Whittaker has written an extraordinary, Arendt-like essay on how current computation is rooted in the control ambitions of plantation and industrial owners.

We’d really recommend you settle down with it, but here’s a flavour of her argument:

The blueprint for modern digital computing was codesigned by Charles Babbage, a vocal champion for the concerns of the emerging industrial capitalist class who condemned organized workers and viewed democracy and capitalism as incompatible.

Histories of Babbage diverge sharply in their emphasis. His influential theories on how “enterprising capitalists” could best subjugate workers are well documented in conventional labor scholarship. However, these are oddly absent from many mainstream accounts of his foundational contributions to digital computing, which he made with mathematician Ada Lovelace in the nineteenth century.

Reading these histories together, we find that Babbage’s proto-Taylorist ideas on how to discipline workers are inextricably connected to the calculating engines he spent his life attempting to build.

From inception, the engines—“the principles on which all modern computing machines are based”—were envisioned as tools for automating and disciplining labor. Their architectures directly encoded economist Adam Smith’s theories of labor division and borrowed core functionality from technologies of labor control already in use. The engines were themselves tools for labor control, automating and disciplining not manual but mental labor.

Babbage didn’t invent the theories that shaped his engines, nor did Smith. They were prefigured on the plantation, developed first as technologies to control enslaved people.

Issues alive in the present—like worker surveillance, workplace automation, and the computationally mediated restructuring of traditional employment as “gig work”—echo the way that computational thinking historically emerges as a mode of control during the “age of abolition,” in the early nineteenth century.

Britain officially abolished West Indian slavery in 1833, and Babbage was very aware of the debate on abolition. He was also aware of the questions that were roiling the British elite as they sought alternatives to enslaved Black labor—particularly the question of how to control white industrial workers who persistently rebelled against industrialization, such that they could produce at the pace required to maintain the British empire.

Both Babbage’s influential labor theories and his engines can be read as attempts to answer these questions—ones that, knowingly or not, rearticulated technologies of control developed on the plantation…

…The links between computation, plantation technology, and industrial labor control raise questions that go well beyond who gets to control systems of automation and computation in the present, a framing that assumes systems controlled by those with benevolent intentions will produce positive outcomes.

They ask us to engage in more fundamental inquiries, examining the technologies of control that structure the core logics of computation and attending to the enabling conditions in which computational technologies are designed to work—the imaginative landscape that we structure our relations and practices to accommodate.

As we see with Babbage’s engines, this landscape presumes the presence of plantation technologies of labor division, surveillance, and control “from above”: Babbage’s engines “work” only within these contexts.

The specter of the plantation that hangs over computation and industrial labor regimes also speaks to the need to revisit the terms of “free” industrial labor, and to recognize the contested process through which this particular category of “freedom” was created and guaranteed.

To do so, we must directly confront the unmarked presence of Black unfreedom that haunts “free” labor and reweave links that have been strategically severed between race, labor, and computational technologies.

My hope is that such analysis can help identify leverage points for change, and shift attention from tinkering at the edges of technologies of control to articulating futures that claim the right to redefine categories of freedom.

More here.