We're at an "inflection point" with artificial intelligence. We have to consciously steer and design it to empower all people

From Wired

Regular readers of A/UK will know that we don't hold up climate meltdown as the only major crisis facing humanity (with Covid as an instance within it). We are as interested in human ingenuity as in human toxicity, and so keep an eye on the disruptive progress of radical innovations - like artificial intelligence.

Most directly, we will need the computational power of machines to track our carbon usage and other impacts on the biosphere. Our survival depends on computation being at the service of humanity.

A grand and authoritative measure of this is The Stanford One Hundred Year Study on Artificial Intelligence (AI100). The first AI100 report, published five years ago, suggested that AI presents “no threat to humanity”. The new report, out a few weeks ago, is titled “Gathering Strength, Gathering Storms”, and its key paragraph runs like this:

The field’s successes have led to an inflection point: It is now urgent to think seriously about the downsides and risks that the broad application of AI is revealing…. AI research has traditionally been the purview of computer scientists and researchers studying cognitive processes... it has become clear that all areas of human inquiry, especially the social sciences, need to be included in a broader conversation about the future of the field.

An elegantly phrased but clear warning. On what basis? This, from the conclusion:

The field of artificial intelligence has made remarkable progress in the past five years and is having real-world impact on people, institutions and culture. The ability of computer programs to perform sophisticated language- and image-processing tasks, core problems that have driven the field since its birth in the 1950s, has advanced significantly.

Although the current state of AI technology is still far short of the field’s founding aspiration of recreating full human-like intelligence in machines, research and development teams are leveraging these advances and incorporating them into society-facing applications.

For example, the use of AI techniques in healthcare is becoming a reality, and the brain sciences are both a beneficiary of and a contributor to AI advances. Old and new companies are investing money and attention to varying degrees to find ways to build on this progress and provide services that scale in unprecedented ways.

The field’s successes have led to an inflection point: It is now urgent to think seriously about the downsides and risks that the broad application of AI is revealing.

  • The increasing capacity to automate decisions at scale is a double-edged sword; intentional deepfakes or simply unaccountable algorithms making mission-critical recommendations can result in people being misled, discriminated against, and even physically harmed.

  • Algorithms trained on historical data are disposed to reinforce and even exacerbate existing biases and inequalities. Whereas AI research has traditionally been the purview of computer scientists and researchers studying cognitive processes, it has become clear that all areas of human inquiry, especially the social sciences, need to be included in a broader conversation about the future of the field.

  • Minimizing the negative impacts on society and enhancing the positive requires more than one-shot technological solutions; keeping AI on track for positive outcomes relevant to society requires ongoing engagement and continual attention.

Looking ahead, a number of important steps need to be taken.

  • Governments play a critical role in shaping the development and application of AI, and they have been rapidly adjusting to acknowledge the importance of the technology to science, economics, and the process of governing itself.

  • But government institutions are still behind the curve, and sustained investment of time and resources will be needed to meet the challenges posed by rapidly evolving technology.

  • In addition to regulating the most influential aspects of AI applications on society, governments need to look ahead to ensure the creation of informed communities. Incorporating understanding of AI concepts and implications into [primary and secondary] education is an example of a needed step to help prepare the next generation to live in and contribute to an equitable AI-infused world.

  • The AI research community itself has a critical role to play in this regard, learning how to share important trends and findings with the public in informative and actionable ways, free of hype and clear about the dangers and unintended consequences along with the opportunities and benefits.

  • AI researchers should also recognize that complete autonomy is not the eventual goal for AI systems. Our strength as a species comes from our ability to work together and accomplish more than any of us could alone. AI needs to be incorporated into that community-wide system, with clear lines of communication between human and automated decision-makers.

At the end of the day, the success of the field will be measured by how it has empowered all people, not by how efficiently machines devalue the very people we are trying to help.

More here.

The report also poses authoritative questions throughout, each with an excellent summary - like the three below - which open out into deep and detailed background discussion:

SQ3. What are the most inspiring open grand challenge problems?

Summary: Recent years have seen remarkable progress on some of the challenge problems that help drive AI research, such as answering questions based on reading a textbook, helping people drive so as to avoid accidents, and translating speech in real time. Others, like making independent mathematical discoveries, have remained open.

A lesson learned from social science- and humanities-inspired research over the past five years is that AI research that is overly tuned to concrete benchmarks can take us further away from the goal of cooperative and well-aligned AI that serves humans’ needs, goals, and values.

A number of broader challenges should be kept in mind: exhibiting greater generalizability, detecting and using causality, and noticing and exhibiting normativity are three particularly important ones.

An overarching and inspiring challenge that brings many of these ideas together is to build machines that can cooperate and collaborate seamlessly with humans and can make decisions that are aligned with fluid and complex human values and preferences. 

SQ9. What are the most promising opportunities for AI?

Summary: AI approaches that augment human capabilities can be very valuable in situations where humans and AI have complementary strengths. An AI system might be better at synthesizing available data and making decisions in well-characterized parts of a problem, while a human may be better at understanding the implications of the data.

It is becoming increasingly clear that all stakeholders need to be involved in the design of AI assistants to produce a human-AI team that outperforms either alone. AI software can also function autonomously, which is helpful when large amounts of data need to be examined and acted upon.

Summarization and interactive chat technologies have great potential. As AI becomes more applicable in lower-data regimes, predictions can increase the economic efficiency of everyday users by helping people and businesses find relevant opportunities, goods, and services, matching producers and consumers.

We expect many mundane and potentially dangerous tasks to be taken over by AI systems in the near future. In most cases, the main factors holding back these applications are not in the algorithms themselves, but in the collection and organization of appropriate data and the effective integration of these algorithms into their broader sociotechnical systems. 

SQ10. What are the most pressing dangers of AI?

Summary: As AI systems prove to be increasingly beneficial in real-world applications, they have broadened their reach, causing risks of misuse, overuse, and explicit abuse to proliferate.

One of the most pressing dangers of AI is techno-solutionism, the view that AI can be seen as a panacea when it is merely a tool.

There is an aura of neutrality and impartiality associated with AI decision-making in some corners of the public consciousness, resulting in systems being accepted as objective even though they may be the result of biased historical decisions or even blatant discrimination.

Without transparency concerning either the data or the AI algorithms that interpret it, the public may be left in the dark as to how decisions that materially impact their lives are being made.

AI systems are being used in service of disinformation on the internet, giving them the potential to become a threat to democracy and a tool for fascism. Insufficient thought given to the human factors of AI integration has led to oscillation between mistrust of the system and over-reliance on the system.

AI algorithms are playing a role in decisions concerning distributing organs, vaccines, and other elements of healthcare, meaning these approaches have literal life-and-death stakes. 

More here.