AI should herald not an inhuman politics, but a “prohuman” politics. David Wood responds to Jon Cruddas
At A/UK, we believe that localities, communities and everyday citizens should be able to shape the future use of radical and powerful technologies. Many of the ultimate dreams of progressives - more time, more security, more self-expression, more health - can be realised, if we can harness the powers of megatech to the outcomes of a good society.
But some in that community disagree - in the sense that they think tech like AI and biotech potentially subverts the individual and social rights of humans. We spotted a very eloquent example this week from the Labour MP Jon Cruddas, in an article for UnHerd titled The inhuman politics of artificial intelligence.
Jon is consistently one of the more thoughtful political representatives in the UK - and this article is part of a theme he's developing (see "The humanist left must challenge the rise of cyborg socialism" in the New Statesman earlier this year).
The person we know from our networks who is 1) a transhumanist and radical-tech advocate, 2) the author of a book about the political consequences of mega-tech like AI, and 3) a fellow traveller with A/UK, is David Wood, founder of London Futurists.
So we asked David to respond to Jon. As Jon says in his piece: "These are deep waters and should be dominating political debate. Yet discussion is virtually non-existent". So we invite Jon in his turn to reply.
The prohuman politics of artificial intelligence
Does the rise of Artificial Intelligence (AI) risk an inhuman impact on politics? That’s the issue raised earlier this week by Labour MP Jon Cruddas.
Cruddas is surely correct to question uncritical adoption of systems that are powered by ever more powerful AI. As he points out:
Estimates of UK jobs that could be replaced by AI and related technologies over the next two decades tend to range from 22% to 40%.
We have already witnessed how data analytics can be malignly used in political campaigns. This capacity will become more sophisticated, possibly at the expense of the democratic process itself.
Possibly even more potent is the recognition software being trialled in marketing to detect the efficacy of advertising by judging facial expressions. It suggests business has the potential to reach into our lives in ways Orwell imagined a totalitarian state would do.
I also sympathise with the assessment by Cruddas of the recommendations contained in the recent report by the House of Lords on AI:
The policy proposals to meet these challenges are shockingly weak: that developers undergo training in ethics as part of their computer science degrees, that companies ensure their workforces are diverse and that individuals made redundant, perhaps repeatedly, by AI are enabled to train for a new career.
Cruddas is right to ask for a deeper conversation on the challenges posed to society by the forthcoming rise of new waves of AI technology. Surprisingly, however, he seems to want to exclude from that very conversation a community that has already given that matter considerable thought: “those who approach these issues from a transhumanist position”.
This exclusion results from an unwarranted conflation of transhumanism with a couple of other bugbears Cruddas highlights:
- ‘Techno-solutionism’: the idea that all ‘problems’ which humanity faces can be ‘solved’ using technology
- ‘Libertarianism’: the view that as the role of technology expands, the role of the state should contract.
It is because of this undue conflation that Cruddas expresses surprise at a perceived “conflict of interest” experienced by the likes of Oxford professor Nick Bostrom. Bostrom’s role as director of the Future of Humanity Institute is deemed to be in tension with his position as a co-founder of Humanity+ (an organisation he helped establish in 1998 under its previous name, the World Transhumanist Association).
My own view is that the challenges facing humanity from accelerating technology – including the challenges to the integrity and competence of our political processes – are so large that we need to engage thinkers from multiple perspectives in order to develop and progress solutions.
We should avoid prematurely closing down conversation with groups of people who express (or who appear to express) views that defy mainstream orthodoxy.
As it happens, that is the same outlook which features heavily in the nearest thing which the transhumanist community has to a canonical document – the Transhumanist Declaration.
No fewer than four out of the eight clauses of that declaration emphasise in various ways the importance of taking an inclusive approach to the “serious risks” which humanity faces “from the misuse of new technologies”:
3. We recognize that humanity faces serious risks, especially from the misuse of new technologies. There are possible realistic scenarios that lead to the loss of most, or even all, of what we hold valuable. Some of these scenarios are drastic, others are subtle. Although all progress is change, not all change is progress.
4. Research effort needs to be invested into understanding these prospects. We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented.
5. Reduction of existential risks, and development of means for the preservation of life and health, the alleviation of grave suffering, and the improvement of human foresight and wisdom should be pursued as urgent priorities, and heavily funded.
6. Policy making ought to be guided by responsible and inclusive moral vision, taking seriously both opportunities and risks, respecting autonomy and individual rights, and showing solidarity with and concern for the interests and dignity of all people around the globe. We must also consider our moral responsibilities towards generations that will exist in the future.
In short, there’s absolutely no “conflict of interest” between someone being a core member of the transhumanist community, and someone promoting the future well-being of human society through the wise and strong management of new technological possibilities.
Specific policy proposals from transhumanists can be found, for example, on the website of the IEET (Institute for Ethics and Emerging Technology), and in my own book Transcending Politics: The technoprogressive roadmap to a comprehensively better future.
Let’s take one specific example: How to respond to the threat of AI displacing ever larger numbers of people from the workforce.
I share with Cruddas the view that this is a serious problem, which it is irresponsible to downplay. I also agree that significant difficulties stand in the way of implementing the oft-suggested proposal of Universal Basic Income.
To quote Cruddas again: "The state would take on a phenomenal welfare burden, alongside a shrinking tax take."
For that reason, transhumanists seek to explore a complementary set of changes, which could take place in parallel with the gradual adoption of a worldwide basic income scheme. These changes would prioritise drastically reducing the costs of all the goods and services needed for citizens to experience a good standard of life.
In the same way that digital goods have dropped in price, time and again, it is within the power of society to adopt methods such as 3D printing, synthetic biology, regenerative medicine, and nanoscale manufacturing, in order to drive down the costs of material goods (including housing and healthcare). *
That might sound like “techno-solutionism”. But the difference is to recognise clearly that technology is not enough. Alongside improved technology, we also need improved politics and, yes, improved philosophy. We also need an improved conversation! So let’s keep our minds open.
If we get the conversation right, we can look forward to AI heralding, not an inhuman politics, but a “prohuman” politics, with significantly greater human flourishing, and no-one left behind.
* The possibility of high-quality, near-free, digitally-produced social goods, where there is a regulatory harnessing of advanced tech, is a point often made by Paul Mason in his postcapitalist writings (see here)