How can AI become a "machinery of the commons"? First, we deal with its powers to rank, classify and exclude

We’re not Luddite about computation at tAg (The Alternative Global) - we believe it can be a tool for community empowerment. But we do realise that, even with the smartest machine learning, GIGO (garbage in, garbage out) still applies - particularly when the garbage in comes from data sources racked with racist, sexist and ableist assumptions.

Our focus is also now on the deep design of the software - how its very operations take in the world in a limited, culturally specific way.

Below are some extracts from an essay in the excellent online magazine Logic, titled Deep Learning and Human Disposability, by Dan McQuillan. We recommend you read the whole piece - but below are some concepts and reframings that really stood out:

AI as we know it is not only a set of algorithmic methods like deep learning, but a layered arrangement of technologies, institutions, and ideologies that claim to offer solutions to many of society’s trickiest problems.

Its offer of generalizing abstraction and action at scale appeals directly to the state, because bureaucracy and statecraft are founded on the same paradigms.

From the state’s point of view, the arguments for adopting AI’s alleged efficiencies at scale become particularly compelling under conditions of austerity. In the years following the 2008 financial crash, public administrations have been required to deal with increased demand while having their resources cut to the bone.

There are more working poor, more children living below the poverty line, more mental health problems, and more deprivation. But social services and civic authorities have had their budgets slashed, as politicians choose public service cuts over holding financial institutions to account.

The hope of those in charge is that algorithmic governance will help square the circle between rising demand and diminished resourcing. In turn, they hope it will distract attention from the fact that austerity means the diversion of wealth from the poorest to the elites.

Under austerity, AI’s capacities to rank and classify help to differentiate between “deserving” and “undeserving” welfare recipients. It also enables a data-driven triage of public services. The shift to algorithmic ordering doesn’t simply automate the system, but alters it—without any democratic debate.

As the UN’s special rapporteur on extreme poverty and human rights has warned, so-called digital transformation and the shift to algorithmic governance conceals myriad structural changes to the social contract. The digital upgrade of the state means a downgraded safety net for the rest of us…

The very concept of “artificial general intelligence”—the ability for a computer to understand anything a human can—is inseparable from historical efforts to rationalize racial superiority.

[We once lived] in an era when having the machinery to enforce colonial domination was itself proof of the superiority of those deploying it. [Is AI reviving this capacity? - ed]

What lies in wait for AI is the reuniting of racial superiority and machine learning, in a version of machinic eugenics. All it will take is a sufficiently severe social crisis.

The pandemic foreshadows the scaling up of a similar state response to climate change, where data-driven decision boundaries will be operationalised as mechanisms of global apartheid…

OCCUPY AI!

The question, then, becomes how to interrupt the sedimentation of fascistic social relations in the operation of our most advanced technologies.

The deepest opacity of deep learning doesn’t lie in the billion parameters of the latest models… [Instead, it lies] in the way it obfuscates the inseparability of the observer and the observed, and the fact that we all co-constitute each other in some important sense.

The only coherent response to social crisis is, and always has been, mutual aid. The toxic payload of unconstrained machine learning is the state of exception [meaning that AI systems can help the powers-that-be make decisions that “except” them from accountability and democracy - ed].

The inversion of the state of exception is an apparatus that enacts solidarity.

This isn’t a simple choice but a path of struggle, especially as none of the liberal mechanisms of regulation and reform will support it. Nevertheless, our ambition must stretch beyond the timid idea of AI governance, which accepts a priori what we’re already being subjected to.

Instead we must look to create a transformative technical practice that supports the common good.

We have our own histories to draw on, after all. While neural networks claim a generalizability across problem domains, we inherit a generalized refusal of domination across lines of class, race, and gender.

The question is how to assemble this in relation to AI. How do we self-constitute in workers’ councils and people’s assemblies, in ways that interrupt the iteration of oppression with the recomposition of collective subjects?

Our very survival depends on our ability to reconfigure tech as something that adapts to crisis by devolving power to those on the ground closest to the problem. We already have all the computing we need. What remains is how to transform it into a machinery of the commons.

More here.