No HAL 9000, Ultron or Skynet here, say the EU’s AI proposals. So will they open up the robots’ black boxes for citizens and communities?

There’s a lot in A/UK about radical, runaway technologies being harnessed, their black boxes opened up, so that citizens and communities can shape their direction and impact. But when a huge para-state actor proposes legislation that seems to serve those ends, we’ve found ourselves wondering whether it’s entirely the right approach.

The Politico website got a leak the other week of the European Commission’s opening proposals (next to be sent to the European Parliament) on regulating the use of artificial intelligence in the economy and society. As they report:

No HAL 9000s or Ultrons on this continent, thank you very much.

The European Union wants to avoid the worst of what artificial intelligence can do — think creepy facial recognition tech and many, many Black Mirror episodes — while still trying to boost its potential for the economy in general.

According to a draft of its upcoming rules, obtained by POLITICO, the European Commission would ban certain uses of "high-risk" artificial intelligence systems altogether, and limit others from entering the bloc if they don't meet its standards. 

Companies that don't comply could be fined up to €20 million or 4 percent of their turnover. The Commission will unveil its final regulation on April 21.

The rules are the first of their kind to regulate artificial intelligence, and the EU is keen to highlight its unique approach.

It doesn't want to leave powerful tech companies to their own devices like in the U.S., nor does it want to go the way of China in harnessing the tech to fashion a surveillance state.

Instead, the bloc says it wants a "human-centric" approach that both boosts the tech and keeps it from threatening its strict privacy laws.

That means AI systems that streamline manufacturing, model climate change, or make the energy grid more efficient would be welcome.

But many technologies in use in Europe today, such as algorithms used to scan CVs, make creditworthiness assessments, hand out social security benefits or asylum and visa applications, or help judges make decisions, would be labeled as "high risk," and would be subject to extra scrutiny.

Social scoring systems, such as those launched in China that track the trustworthiness of people and businesses, are classified as "contravening the Union values" and are going to be banned. 

The proposal also wants to prohibit AI systems that cause harm to people by manipulating their behavior, opinions or decisions; exploit or target people's vulnerabilities; and for mass surveillance. 

But the rules carve out an exception allowing authorities to use the tech if they're fighting serious crime. The use of facial recognition technology in public places, for example, could be allowed if its use is limited in time and geography. The Commission said it would allow for exceptional cases in which law enforcement officers could use facial recognition technology from CCTV cameras to find terrorists, for example. 

More here. The BBC’s report adds:

Michael Veale, a lecturer in digital rights and regulation at University College London, highlighted a clause that will force organisations to disclose when they are using deepfakes, a particularly controversial use of AI to create fake humans or to manipulate images and videos of real people.

He also told the BBC that the legislation was primarily "aimed at vendors and consultants selling - often nonsense - AI technology to schools, hospitals, police and employers".

But he added that tech firms who used AI "to manipulate users" may also have to change their practices.

With this legislation, the EC has had to walk a difficult tightrope between ensuring AI is used for what it calls "a tool... with the ultimate aim of increasing human wellbeing", and also ensuring it doesn't stop EU countries competing with the US and China over technological innovations.

So many minefields opened up here. For example, for a while we’ve been wondering whether regulations might be passed that compel surveillant AI systems to share the data they take from human interactions with those same humans and their communities.

What would we find out about our needs and wants if we could see the same information about ourselves, or “social graph”, as Facebook or Google shows to advertisers?

Yet it seems the EU wants to put limits on exactly that kind of data processing by AIs. The suggested list of banned AI systems includes:

  • those designed or used in a manner that manipulates human behaviour, opinions or decisions ...causing a person to behave, form an opinion or take a decision to their detriment

  • AI systems used for indiscriminate surveillance applied in a generalised manner

  • AI systems used for social scoring

  • those that exploit information or predictions about a person or group of persons in order to target their vulnerabilities

Will such regulations compel, within the European area, a different kind of AI application?

If so, they might want to look at Dark Matter and Nesta’s Civic AI project, and imagine how its AI designs would be regarded under the “exploiting” or “causing detriment” tests (which, if found to apply, could mean the EU levying a fine of up to 4% of a company’s global turnover).

As DM and Nesta define their mission:

To address the climate crisis we need to increase the capacity for communities to organise and adapt to a new reality. This requires better tools and methods for mobilising large groups of people to take action, reducing associated costs, and advancing the value of collaboration.

 CivicAI explores how AI can enhance collective intelligence in relation to climate crisis mitigation and adaptation. We see significant opportunities in several key areas, which have been illustrated in more detail through 3 distinct use cases.

Assisted civics

Minimising the time burden and augmenting civic participation whilst maximising common value.

Impact modelling

Measuring and simulating the impact of individual and collective climate actions.

Collective awareness

Understanding the impact of climate actions on ecosystems and future generations to aid collective decision-making.

The graphic below repays study (full download here). It shows how a community might express its needs and agendas, which can then be fed into AI systems that build simulations of a situation (say, a local energy grid), run tests, and consult again with citizens.

Is this the kind of patient, thoughtful sequencing of the relationship between humans and AI that the EU regulations will compel? Is a Brexited Britain going to align itself with this European “third way”? Or will it sample opportunistically from all three regimes, European, American and Chinese?

More reporting and analysis on this development here from New Scientist, Forbes and Lexology.