Extended intelligence, not artificial intelligence: Joi Ito on why our tech visionaries must be humbler in the face of reality


Fascinating Wired column by the head of the MIT Media Lab, Joi Ito. The Media Lab has been the creative heart of Western digital culture for decades, so Ito’s plea to the current technological masters of the universe, to abandon their computation-driven arrogance about how the world works, has real force.

He wants them to accept a “humility of design”, faced with what the sciences of complexity and evolution tell us: that we are always participants in the creation of our reality, not just controllers of it.

This may seem a narrowly scientific or technical point, but thinking this way will have real consequences for how powerful technologies like machine learning and artificial intelligence enter our everyday lives.

Indeed, Ito prefers the concept of “extended intelligence” (meaning extended human intelligence). This may be the metaphor that helps us shape institutions and laws that make the best of these powers, rather than fearing them as our displacers.

An extract below:

While one of the key drivers of science is to elegantly explain the complex and increase our ability to understand, we must also remember what Albert Einstein said: “Everything should be made as simple as possible, but no simpler.”

We need to embrace the unknowability – the irreducibility – of the real world that artists, biologists and those who work in the messy world of liberal arts and humanities are familiar and comfortable with.

Today, it is obvious that most of our problems – for instance, climate change, poverty, chronic disease or modern terrorism – are the result of our pursuit of the dream of exponential growth.

They are extremely complex problems produced by tools used to solve past problems, such as endlessly pushing to increase productivity or to exert control over systems that have, in fact, become too complex to control.

In order to effectively respond to the significant scientific challenges of our times, I believe we must respect the many interconnected, complex, self-adaptive systems across scales and dimensions that cannot be fully known by, or separated from, observer and designer. 

In other words, we are all participants in multiple evolutionary systems with different fitness landscapes at different scales, from our microbes to our individual identities to society and our species. Individuals themselves are systems composed of systems of systems, such as the cells in our bodies that behave more like system-level designers than we do.

As Kevin Slavin says in his 2016 essay Design as Participation: “You’re not stuck in traffic, you are traffic.”

Biological evolution of individual species (genetic evolution) has been driven by reproduction and survival, instilling in us goals and yearnings to procreate and grow. That system continually evolves to regulate growth, increase diversity and complexity, and enhance its own resilience, adaptability and sustainability.

We could call it “participant design” – design of systems as and by participants – that is more akin to the increase of a flourishing function. Imagine that flourishing could become a measure of vigour and health rather than scale, money or power.

Machines with emergent intelligence, however, have discernibly different goals and methodologies. As we introduce such machines into complex adaptive systems such as the economy, the environment or health, I see them augmenting, not replacing, individual humans and, more importantly, augmenting such systems. 

Here is where the problematic formulation of “artificial intelligence” as defined by many becomes evident. It suggests forms, goals and methods that stand outside of interaction with other complex adaptive systems. 

Instead of thinking about machine intelligence in terms of humans vs machines, we should consider the system that integrates humans and machines – not artificial intelligence but extended intelligence.

Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems.

If you’d like to dive more deeply (and technically) into Ito’s case, go to Resisting Reduction, his towering crowd-edited essay in MIT’s Journal of Design and Science.