The big players in AI want the state to regulate them (like nukes or bio-tech). Yet will open-source LLMs’ democratic potential also be part of the mix?

An X tip-off from our great London Futurists’ friend David Wood - about the Future of Life Institute’s latest call for three laws to regulate the development of AI. The “legislative action items” are in a still from the video, below:

This is the automated transcript of the video above:

In March 2023 an open letter sounded the alarm on the training of giant AI experiments. It was signed by over 30,000 individuals, including more than 2,000 industry leaders and more than 3,000 experts.

Since then there has been growing concern about out-of-control AI development. [Background slide: 86% of Americans believe AI could accidentally cause a catastrophic event] [News anchor]: “Advanced AI could pose a profound risk to society and humanity.” [News anchor]: “They say we could potentially face a dystopic future or even extinction.”

People are right to be afraid. These advanced systems could make humans extinct, and with threats escalating, it could be sooner than we think. Autonomous weapons, large-scale cyber attacks, and AI-enabled bioterrorism endanger human lives today.

Rampant misinformation and pervasive bias are eroding trust and weakening our society. We face an international emergency.

AI developers are aware of and admit these dangers. [Tristan Harris]: “And we're putting it out there before we actually know whether it's safe.” [Sam Altman]: “And the bad case—and I think this is like important to say—is like lights out for all of us.” [US Congress committee]: “A straightforward extrapolation of today's systems to those we expect to see in 2 to 3 years suggests a substantial risk that AI systems will be able to fill in all the missing pieces, enabling many more actors to carry out large-scale biological attacks.” [News report]: “In 6 hours the computer came up with designs for 40,000 highly toxic molecules.”

They remain locked in an arms race to create more and more powerful systems with no clear plan for safety or control. [Sundar Pichai]: “It can be very harmful if deployed wrongly and we don't have all the answers there yet and the technology is moving fast. So does that keep me up at night? Absolutely.”

They recognize a slowdown will be necessary to prevent harm, but are unable or unwilling to say when or how. [Background slide: “82% of voters don’t trust tech executives to regulate AI… 56% support a federal agency regulating AI”]. There is public consensus, and the call is loud and clear: Regulate AI Now.

We've seen regulation driven innovation in areas like pharmaceuticals and aviation—why would we not want the same for AI? [Sam Altman]: “There are major downsides - we have to manage to be able to get the upsides.”

There are three key areas in which we are looking for US lawmakers to act:

One: immediately establish a registry of giant AI experiments, maintained by a US federal agency.

Two: build a licensing system that makes labs prove their systems are safe before deployment.

Three: take steps to make sure developers are legally liable for the harms their products cause.

Finally, we must not stop at home—this affects everyone. At the upcoming UK Summit, every concerned nation must have a seat. We are looking to create an international, multi-stakeholder auditing agency.

This kind of international cooperation has been possible in the past: we coordinated on cloning and we banned bioweapons. And now we can work together on AI.

Knowing when we need the state to regulate for collective risks and dangers, and when it should leave well alone so that communities can develop and flourish by themselves, is a call we are acutely aware needs to be made - and made with sophistication.

So alongside this call, we notice the rise of an open-source front in the development of current AI, as reported on here in Slate by Bruce Schneier and Jim Waldo, and as exemplified by Meta’s Llama:

[Open source LLMs - large language models] will wrest power from the large tech corporations, resulting in both much more innovation and a much more challenging regulatory landscape. The large corporations that had controlled these models warn that this free-for-all will lead to potentially dangerous developments, and problematic uses of the open technology have already been documented.

But those who are working on the open models counter that a more democratic research environment is better than having this powerful technology controlled by a small number of corporations.

…We have entered an era of LLM democratization. By showing that smaller models can be highly effective, enabling easy experimentation, diversifying control, and providing incentives that are not profit motivated, open-source initiatives are moving us into a more dynamic and inclusive A.I. landscape.

This doesn’t mean that some of these models won’t be biased, or wrong, or used to generate disinformation or abuse. But it does mean that controlling this technology is going to take an entirely different approach than regulating the large players.

More here. We’re interested in communities taking control of their own data, and being able to see the patterns of their usage - the social graph that is usually sold to advertisers. But we’re also interested in some kind of common control over the AIs that process this usage. How do these voices get their say?