Stephen Hawking knew the score on how to use automation for human good
An outpouring of warmth and admiration for the physicist Stephen Hawking, whose death was reported today. Not much in the physics file for the Daily Alternative yet (although a piece on Karen Barad is well overdue)...
But we note how much of the commentary highlighted Hawking's political sentiments over the years. He also issued visions and warnings about the future - which relate to our continuing interest in the way that communities can exert control over automation and radical innovation.
Hawking's last contribution to the social network site Reddit gets straight to the point:
Q: Have you thought about the possibility of technological unemployment, where we develop automated processes that ultimately cause large unemployment by performing jobs faster and/or cheaper than people can perform them? Some compare this thought to the thoughts of the Luddites, whose revolt was caused in part by perceived technological unemployment over 100 years ago. In particular, do you foresee a world where people work less because so much work is automated? Do you think people will always either find work or manufacture more work to be done? Thank you for your time and your contributions. I’ve found research to be a largely social endeavor, and you've been an inspiration to so many.
Hawking: If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.
Hawking had very sharp political instincts, and was far from a blithe advocate of technological progress. In this excellent 2016 column for the Guardian - sounding very Einstein-like in his soft socialism - Hawking reflects on the Trump and Brexit votes:
What matters now, far more than the choices made by these two electorates, is how the elites react. Should we, in turn, reject these votes as outpourings of crude populism that fail to take account of the facts, and attempt to circumvent or circumscribe the choices that they represent? I would argue that this would be a terrible mistake.
The concerns underlying these votes about the economic consequences of globalisation and accelerating technological change are absolutely understandable. The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining.
This in turn will accelerate the already widening economic inequality around the world. The internet and the platforms that it makes possible allow very small groups of individuals to make enormous profits while employing very few people. This is inevitable, it is progress, but it is also socially destructive.
We need to put this alongside the financial crash, which brought home to people that a very few individuals working in the financial sector can accrue huge rewards and that the rest of us underwrite that success and pick up the bill when their greed leads us astray. So taken together we are living in a world of widening, not diminishing, financial inequality, in which many people can see not just their standard of living, but their ability to earn a living at all, disappearing. It is no wonder then that they are searching for a new deal, which Trump and Brexit might have appeared to represent.
...With not only jobs but entire industries disappearing, we must help people to retrain for a new world and support them financially while they do so. If communities and economies cannot cope with current levels of migration, we must do more to encourage global development, as that is the only way that the migratory millions will be persuaded to seek their future at home.
More here. The article also makes the point that "we now have the technology to destroy the planet on which we live, but have not yet developed the ability to escape it". So global justice and sustainability, right here right now, might be a good idea...
Hawking also issued stern warnings about how we should be trying to build benign tendencies into our artificial intelligence. This again from the Reddit discussion:
Hawking: The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use...
...There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.
Rest In Protest, Stephen Hawking.