We were told the robots were taking over. We didn't realise it was to make us humans work harder. Yet is it also right to make them our slaves?

It is true, despite the predictions of many futurist gurus, that robots suck. The Register recently reported on a cutting-edge robot from Australia's national science agency CSIRO (nicknamed "Bingo"), undergoing tests to navigate simple workplace interiors:

Do Bingo and her ilk herald the arrival of ubiquitous 'bots that do our dirty work – be it folding laundry, walking the dog, or digging coal?

Sadly, demos of robots moving seamlessly through rough and crowded terrain remain a pleasing glimpse of a fantasy future. Autonomy technology remains well short of widespread use.

CSIRO's test run is a case in point. The challenge before Bingo sounds simple: the robots need to navigate a series of obstacles that wouldn't trip up a human child and scout out several objects. They don't need to use them – just locating them is enough.

Yet by the end of the hour test run that The Register attended, all five robots were out of service and only a third of the objects on the course had been logged. One robot, named Kitty, upended itself in a ditch. Bingo got lost, and lost communications with the team outside. A flying drone robot malfunctioned and wasn't even able to enter the course.

Yet at the post-test debrief, CSIRO staff considered this a decent result. "Not many people think of walking through a doorway without hitting the walls as sophisticated. But for robotics this is cutting-edge stuff," said Navinda Kottege, the CSIRO team's leader.

Robo-boffins are not alone in their struggles. Although most big logistics companies use robots to move inventory around, the so-called last mile – the final picking and packing of items to and from delivery vehicles – is still handled by humans.

Amazon ran a "pick and place" challenge for three years, asking teams of roboteers to retrieve random known objects from warehouse shelves. The competition ended in 2017 with human workers coming out on top.

Walmart, the world's biggest retailer, recently cancelled a five-year contract with robotics firm Bossa Nova to use its robots to check inventory levels in stores. The retailer has reverted to using human workers.

More here. For those who want robots to do the 3D jobs – the dull, dirty and dangerous – this is perhaps not good news. Yet there are other stories, where the robots are much more embedded in code, programming and specialised equipment, rather than bleeping Robbie-the-Robot human-equivalent figures. And these robots are being used to monitor and exploit human manual workers more precisely than ever.

As Vox magazine reports:

“The basic incentives of the system have always been there: employers wanting to maximize the value they get out of their workers while minimizing the cost of labor, the incentive to want to control and monitor and surveil their workers,” said Brian Chen, staff attorney at the National Employment Law Project (NELP).

“And if technology allows them to do that more cheaply or more efficiently, well then of course they’re going to use technology to do that.”

Tracking software for remote workers, which saw a bump in sales at the start of the pandemic, can follow every second of a person's workday in front of the computer. Delivery companies can use motion sensors to track their drivers' every move, measure extra seconds, and ding drivers for falling short.

Automation hasn’t replaced all the workers in warehouses, but it has made work more intense, even dangerous, and changed how tightly workers are managed.

Gig workers can find themselves at the whims of an app’s black-box algorithm that lets workers flood the app to compete with each other at a frantic pace for pay so low that how lucrative any given trip or job is can depend on the tip, leaving workers reliant on the generosity of an anonymous stranger.

Worse, gig work means they’re doing their jobs without many typical labour protections. 

In these circumstances, the robots aren’t taking jobs, they’re making jobs worse. Companies are automating away autonomy and putting profit-maximizing strategies on digital overdrive, turning work into a space with fewer carrots and more sticks.

More here. So what makes these robots any different from the Taylorist clipboard merchants who once stalked factory floors, squeezing every last jot of efficiency out of workers? And what happened to robots and automation as Roberto Unger defines their purpose (in this YouTube lecture)?

The point of a machine should be to do for us what we have already learned to repeat. So that we can preserve our supreme resource - time - for that which we have not yet learned how to repeat.

[We approach] the ideal of existence. As we grow older, a carapace of compromise, of silent surrenders and self-inflicted belittlement, begins to form around each of us. A mummy in which we die many deaths. To come into the fuller possession of life, our supreme good, we must break out of this mummy…

When no-one has to do work that can be done by a machine, we will be closer to achieving what we should all desire. Which is to die, only once.

So robots are the slave-class that enables us to live creative, unrepeatable, unique and human lives? Are we happy with that power relation? See this revealing interview with the AI and robot ethicist Kate Darling:

What is wrong with the way we think about robots?
So often we subconsciously compare robots to humans and AI to human intelligence. The comparison limits our imagination. Focused on trying to recreate ourselves, we’re not thinking creatively about how to use robots to help humans flourish.

Why is an animal analogy better?
We have domesticated animals because they are useful to us – oxen to plough our fields, pigeon delivery systems. Animals and robots aren’t the same, but the analogy moves us away from the persistent robot-human one. It opens our mind to other possibilities – that robots can be our partners – and lets us see some of the choices we have in shaping how we use the technology.

But companies are trying to develop robots to take humans out of the equation – driverless robot cars, package delivery by drone. Doesn’t an animal analogy conceal what, in fact, is a significant threat?
There is a threat to people’s jobs. But that threat is not the robots - it is company decisions that are driven by a broader economic and political system of corporate capitalism. The animal analogy helps illustrate that we have some options.

The different ways that we’ve harnessed animals’ skills in the past shows we could choose to design and use this technology as a supplement to human labour, instead of just trying to automate people away.

Who should be responsible when a robot causes harm? In the Middle Ages, animals were put on trial and punished…
We did it for hundreds of years of western history: pigs, horses, dogs and plagues of locusts – and rats too. And bizarrely the trials followed the same rules as human trials. It seems so strange today because we don’t hold animals morally accountable for their actions.

But my worry when it comes to robots is that, because of the robot-human comparison, we're going to fall into this same type of Middle Ages animal-trial fallacy, where we try to hold them accountable to human standards. And we are starting to see glimmers of that, where companies and governments say: "Oh, it wasn't our fault, it was this algorithm."

Should we give rights to robots?
This often comes up in science fiction, revolving around the question of whether robots are sufficiently like us. I don’t disagree that robots, theoretically, would deserve rights if they were to become conscious or sentient. But that is a far-future scenario.

Animal rights are a much better predictor for how this conversation around robot rights is going to play out in practice, at least in western society. And on animal rights we are hypocrites. We like to believe that we care about animal suffering but if you look at our actual behaviour, we gravitate towards protecting the animals that we relate to emotionally or culturally.

In the US you can get a burger at the drive-through, but we don’t eat dog meat. I think it’s likely we will do the same with robots: giving rights to some and not others.

More here. Anyone with a gram of ecological consciousness should hear alarm bells ringing at this point. Do we really want to harness robots in the same way we have harnessed agricultural animals (which a new generation of vegans is now rejecting)? What if those robots are sentient enough to express their suffering?

That robots might feel pleasure or pain, like or dislike, attraction or aversion, is now actively being proposed as the basis for a machine consciousness. See Mark Solms's The Hidden Spring, outlined in this review, which proposes that "a self evidencing system concerned with its own survival" could be built:

If Solms succeeds, he will immediately turn it off and try to patent the process to prevent it from falling into commercial hands. Aside from the ethical issues, he notes the danger in building self-concerned systems. I usually think fears about artificial intelligence are overblown, but in this case, I agree with him on the danger. The good news is I don’t know how useful such systems would be for most commercial purposes anyway. Do we really want self driving cars or mining robots being worried about their own survival?

We get to the core of robot ethics: AIs as self-concerned systems. Something HAL 9000 famously warned us about: