"We must build tech we trust... and trust is about confident vulnerability," say Kevin Werbach & Azeem Azhar
In the spirit of “don’t ask for permission, ask for forgiveness” (or, alternately, “proceed until apprehended”), we’d like to run this great excerpt from Azeem Azhar’s Exponential View newsletter, which we recommend highly! (Sign up here, and here’s a topic overview.)
Azeem not only provides an overview of the cutting-edge technologies of the moment - like AI and bioscience - but also humanely grapples with their environmental and social challenges. (Azeem does a weekly count, for example, of the carbon in the atmosphere, heading for 450ppm - a number that, creepily and relentlessly, ascends every week.)
This stretch - actually from one of his guest newsletter editors, Kevin Werbach - is a great short exposition (and link-laden utility) on the question of trust in the current age. (Werbach’s new book is The Blockchain and the New Architecture of Trust.) It’s packed with insight:
Most of the great challenges we face can be expressed as failures of trust. We don’t believe our leaders and scientists and journalists; we watch corporations exploit those they claim to serve; and we see our fates increasingly in the hands of inscrutable machines.
Weaponization of information through bots, fake news, and recommendation algorithms (plus good old-fashioned propaganda) undermines any consensus about the very nature of truth. Trust is the glue that binds communities and societies toward a common purpose; without it, we are adrift.
What to do? Perhaps, as researchers suggested last week, mice can help ferret out deepfakes, but we shouldn’t hold our breath for magic technical solutions.
The alternative - governments directing content filtering by social media platforms, already the norm in China and perhaps soon to be ordered by the White House - will only worsen the trust crisis.
We need to step back. What is trust? And how can we build technological systems that are trust-generating rather than trust-destroying?
As I explain in my book, trust is confident vulnerability. It’s more than a mere willingness to interact. Facebook’s user numbers and profits haven’t dropped despite its many scandals, because people value convenience and lack alternatives; that doesn’t mean they trust it.
Revelations this week that Facebook paid contractors to listen in and transcribe users’ audio messages didn’t help. Small wonder, then, that Facebook’s Libra cryptocurrency was met with a hail of criticism, even though Libra’s design is, on its face, trust-enhancing.
Trust and distrust aren’t entirely rational. That’s something the autonomous vehicle community is learning as well. Statistically lower accident rates in trials won’t, by themselves, convince users to take the leap or local officials to open up public roads.
We must, in the words of a group of internet CEOs and activists that issued a manifesto this Tuesday on accountability for digital platforms, build tech we trust. That means changing our technologies at a more fundamental level, building in human and regulatory structures.
Techniques such as differential privacy and federated learning allow modern AI to function without passing control over data to potentially untrustworthy actors. Such techniques are seeing rapid adoption in contexts such as medical imaging, as this recent article discusses.
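To make the differential privacy idea concrete, here is a minimal sketch of the classic Laplace mechanism - the simplest building block behind such systems, not the implementation of any particular medical-imaging deployment. The function names and parameters (`dp_count`, `epsilon`) are illustrative:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw one sample from a Laplace(0, scale) distribution
    via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(values, threshold, epsilon=1.0):
    """Release a count of values above a threshold with epsilon-DP.

    A counting query has sensitivity 1: adding or removing one
    person's record changes the count by at most 1, so Laplace
    noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for v in values if v > threshold)
    return true_count + laplace_noise(1.0 / epsilon)
```

A smaller `epsilon` means more noise and stronger privacy; the analyst sees only the noisy count, never the raw records - which is the sense in which control over the underlying data never passes to the querying party.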
As many countries develop AI ethics frameworks—including, late to the party, the US—and the regulatory conversation moves beyond defining rules such as GDPR to means of compliance, such approaches deserve greater attention. (Those interested should fly to Barcelona in January for the ACM FAT* Conference, home base of the exploding algorithmic fairness, accountability, and transparency community.)
But the problem isn’t just the lack of trust; it’s also misplaced trust in the untrustworthy. The disgraced Chinese scientist behind the ‘CRISPR babies’ built a circle of trust of influential experts to support his experiments, but it crumbled quickly when the truth got out.
And researchers recently showed that geographic clusters of extremely old people, a phenomenon of great scientific interest, are largely artifacts of poor record-keeping for birth certificates. Garbage in, garbage out, as the global financial crisis and the LIBOR scandal reminded us as well.
Which brings us to blockchain. Yes, it’s an overhyped playground for speculators, cranks, and criminals. (A new report from CipherTrace says thieves and scammers have already stolen $4.3 billion of cryptocurrency this year.)
It’s also our greatest hope to re-architect trust in the veracity of information. Blockchain can replace trust in those who record or verify with trust in a decentralized network secured through cryptography. IBM’s Food Trust was one of the first production enterprise blockchain systems (with Walmart).
The French grocery chain Carrefour recently announced that fruits and vegetables whose provenance is tracked through the system sell better.
The biggest impact is in China, a famously low-trust environment. The practical challenge for Food Trust, and other enterprise blockchains, is to convince competitors like Carrefour and Walmart to trust the platform itself.
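The core mechanism that makes such provenance records trustworthy is hash chaining: each record is hashed together with the hash of the record before it, so altering any earlier entry invalidates everything after it. Below is a minimal sketch of that idea - not the implementation of Food Trust or any real platform:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first block

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Link records into a tamper-evident list of (record, hash) pairs."""
    chain, prev = [], GENESIS
    for record in records:
        prev = block_hash(record, prev)
        chain.append((record, prev))
    return chain

def verify_chain(chain):
    """Recompute every hash; return False if any record was altered."""
    prev = GENESIS
    for record, stored in chain:
        if block_hash(record, prev) != stored:
            return False
        prev = stored
    return True
```

Change any field in any earlier record and `verify_chain` fails - that is the sense in which trust in whoever recorded the data is replaced by trust in the cryptography. (Real blockchains add consensus among many nodes so no single party holds the chain.)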
Although critics appropriately note the paucity of high-volume production systems or consumer applications with significant usage, there are too many experiments and pilots underway to count, in every conceivable industry.
A report this week described how Berkeley AI researcher Dawn Song is building a blockchain-based system to incentivize sharing of medical data to power health AI systems, protected with differential privacy.
Developing technologies of trust will be a powerful theme in the coming years. We don’t have a choice.