The digital power to simulate an entire society, and make good guesses at our motivations, is nearly here. What do we do with it?

One of the things that being involved in a politics of localism and new citizenship brings you is a renewed scepticism about centralisation - and particularly one driven by digital software and networks.

We are past the sheer optimism about our connected world that characterised the 90s and 00s, and into a space where the question of “who’s in control” of these networks can be approached from a number of angles.

We are interested in the powers and efficiencies of automation, but we’re reaching for those innovators who seek to put this tech at the service of a primary democracy, or of empowered communities (like Holochain or the P2P Foundation’s cosmo-localists).

Yet the dream that a “good state” could operate its centralised algorithms and server farms with more justice and fairness than a “bad” one (let alone our passivity vis-à-vis Facebook, Amazon, Apple, Google, etc.) is still potent, for some. Take the technology known as “agent-based modelling”. As Wikipedia describes it, this is

a class of computational models for simulating the actions and interactions of autonomous agents (both individual or collective entities, such as organizations or groups) with a view to assessing their effects on the system as a whole

If you’ve played any “God games” on a computer, like Sim City or Civilisation, then you’ve been playing with “agent-based modelling”. Your (or your group’s) behaviour in these cities, or historical moments, is both enabled and constrained by how their systems – in economics, land, values – are set up by the games’ makers.

But recent advances in artificial intelligence and computing power have pushed beyond these games (whose rules can eventually be mastered and predicted). What the New Scientist calls (here’s the PDF version) “multi-agent artificial intelligence (MAAI)” can, as they write:

allow predictions to be made with extraordinary accuracy by testing them in highly detailed simulations that amount to entire artificial societies. If, for example, a campaign team wants to decide how and to whom to pitch their messages – how to fight an election – it can do so, multiple times, inside a computer simulation.

Straight to our contemporary anxieties there. Indeed, the whole New Scientist article imagines MAAIs as the follow-up technology to the “psychographic profiling” of entities like the late Cambridge Analytica, as they operated in US and UK elections (though there’s some scepticism as to their effectiveness).

Yet before we apply our decentralising critiques, we should hear the case for MAAIs and agent-based models as useful tools for progressive societies. The most avid advocate in the UK has been the radical social democrat and journalist Paul Mason.

In the context of economics’ disillusionment with its abstract mathematical formulas, so often refuted by history and social reality, Mason wants to consider these new models for their richness and openness to novelty. See this from 2016:

The agent-based model, instead of reducing reality to a few variables, tries to replicate reality – and its randomness – in detail. Such models are common in weather prediction, or city transport planning: think of them as a professional version of the computer game Sim City.

In an agent-based model, you don’t try to work out whether a million people will, on aggregate, buy more bread or less bread. You create a million digital “people” and unleash them in a world with digital bread and digital money.

Oxford professor J Doyne Farmer has long advocated the adoption of agent-based modelling in economics; the Bank of England’s chief economist, Andy Haldane, is a convert. Reality, says Haldane, is not only more complex than the maths-based economics imagines, it is also not rational.

The sum of buying and selling decisions we take each day – from the cappuccino and croissant on the way to work, to the fund we keep our pension in – are driven by something other than the rationality that mainstream economists assume.

As a result, while the old, maths-based economist expects stability and assumes a “gremlin” where it is disrupted, the heterodox economist expects big and unpredictable shocks.

…The answer lies in large, agent-based simulations, in which millions of virtual people take random decisions driven by irrational urges – such as sex and altruism – not just the pursuit of wealth.

What the left can bring to the design of these models are the insights that still draw lines of enmity through elite campuses:

  • that class, gender and race exist as economic facts;

  • that the 1% always acts with more information than the 99%;

  • that crises are unavoidable but can be mitigated by accepting they might happen;

  • that sacking or excluding people who insist “capitalism is unstable” is a bad idea if you are running, say, a treasury, a major political party or a central bank.

More here.
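Mason’s image of digital “people” unleashed on digital bread and digital money can be sketched in a few dozen lines. The toy model below is purely illustrative – the 80% purchase probability, the occasional “splurge”, and the price-feedback constant are all assumptions of this sketch, not taken from any model cited here – but it shows the basic mechanics: many simple agents following local, partly irrational rules, with aggregate demand emerging rather than being assumed.

```python
import random

random.seed(42)  # fixed seed so repeated runs give the same "society"

class Agent:
    """A toy consumer: buys bread when it can afford it, with some
    irrational randomness rather than pure utility maximisation."""

    def __init__(self, money):
        self.money = money
        self.bread = 0

    def step(self, price):
        # Irrational urges: sometimes skip a purchase the agent could
        # afford (20% of the time), sometimes splurge on two loaves.
        if self.money >= price and random.random() < 0.8:
            qty = 2 if random.random() < 0.1 and self.money >= 2 * price else 1
            self.money -= qty * price
            self.bread += qty

def run_market(n_agents=1000, days=30, price=1.0):
    """Run the market for `days` steps; return daily loaves sold."""
    agents = [Agent(money=random.uniform(5, 50)) for _ in range(n_agents)]
    daily_demand = []
    for _ in range(days):
        sold = 0
        for a in agents:
            before = a.bread
            a.step(price)
            sold += a.bread - before
        daily_demand.append(sold)
        # Crude feedback: price drifts with demand, so shocks can emerge
        # from the interactions rather than being imposed from outside.
        price *= 1.0 + 0.001 * (sold - n_agents * 0.8)
    return daily_demand

daily = run_market()
print(daily[:5])  # aggregate demand, never computed directly from any formula
```

Even at this scale, the point of the approach is visible: nowhere does the code contain an equation for “total bread demand”; the aggregate is whatever falls out of a thousand individual, partly random decisions, and it shifts as the price feedback kicks in.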

But note that Mason assumes that the “design” of these models is still open (in this case, to the left). Three years on, the New Scientist article (PDF version) shows how agent-based models, turbo-charged by new AI, are already caught in a kind of arms race. They report on F. LeRon Shults, director of the Center for Modeling Social Systems at the University of Agder in Norway, who is:

simulating a typical Norwegian city with a sudden influx of refugees. It is a relatively small model with just 50,000 agents but will run for three generations to test the long-term outcomes of various policies. Models such as this take between hours and days to complete a run, depending on the number of parameters involved.

“It allows you to do experiments that are impossible in the real world,” says Shults.

Because of this power, MAAI technology has the potential to tackle the world’s most complex problems. This month, Shults and his colleagues are sitting down with experts on climate, energy and conflict to start modelling a refugee crisis triggered by climate change.

“Most experts think that climate was a big factor in the Syrian refugee crisis,” says Shults. “A million people flowed into Europe. As sea levels rise over the next 20 to 30 years, we’re talking at least 100 million. Where are they going to go? There will be massive human suffering. Our goal is to come up with policy initiatives to change behaviours and avoid conflict.”

Other modellers are working on preventing ethnic conflict and breaking up protection rackets and sex trafficking rings. Shults also sees applications in politics: “I’d like to understand what is driving populism – under what conditions do you get Brexit, or Le Pen?”

…The power brings great responsibility. “The ethical question bothers me,” says Shults. “Could this technology be used for evil?” We already know the answer. Shults’s team modelled a society with a majority religious group in conflict with a minority one.

They found that such societies easily spiral into deadly violence. When they ran the simulation to find the most efficient way to restore peace, the answer that popped out was deeply troubling: genocide.

There is also a very real fear of the technology being exploited, as many feel happened with Cambridge Analytica. “They used AI to trick people into believing something so they would vote a certain way,” says Shults. He and his colleagues fear that something even more manipulative could be done with MAAIs.

Scenarios about the US election are hypothetical, but plausible. Using simulation technology, theoretical insights could be weaponised for electoral gain. “Yes, it can be used for doing bad,” says Diallo. “It could be used to psychologically target people or groups and work out how to influence them.”

Or worse. A group at the Center for Mind and Culture in Boston has created an MAAI to test ways to break up child sex trafficking rings. Team leader Wesley Wildman points out that the traffickers could hire someone to build a rival simulation to disrupt the disrupters in a technological arms race. “It could already be happening. As far as I know, we’re ahead of them, but they will catch up,” he says.

It’s tempting just to answer, “the more we construct our systems to be responsive to full human agency, and the more we reestablish trust and reciprocity at the local level [see the work of David Bollier on the viability of “the commons” as a vehicle for that], the less these simulations will work, as we become more ‘free, fair and alive’. We want to be less predictable in our behaviours, not more.”

And that would be a good answer. Yet we should keep an eye on the politics of technology, the power claims embedded in these technologies, in order that we can harness and design our networks and protocols appropriately, and at a human scale.