AI is both eminently regulable and amazing in its simulatory powers. But we shouldn't let it treat us like chimps

We’re trying to keep up with the firehose of material on AI - and trying to steer between the tales of utopia and dystopia, abundance and inequality, that wreathe this technology. We’re as alive to the necessity of harnessing it to a pro-human agenda as we are to the possibility that we may be in the presence of a new consciousness.

First, and very articulate about the harnessing that could and should happen, is this tweet thread from the economist Daron Acemoglu (on Twitter at @DAcemogluMIT):

…I believe that there are almost “mystical” claims about AI, and especially generative AI (hence my language of myths).

I am convinced that generative AI is a very promising technology—but only if it is used in the correct way. I’m convinced it isn’t—and the myths that I have argued against are partly responsible for this distorted path.

If we give up the idea that generative AI can create consciousness or human mind-like behaviors, or the conceit that we are on the cusp of ultraintelligent machines, the AI discussion can be placed on a more productive grounding.

If we give up the utopia that generative AI will create superabundance and if we are more upfront about its shortcomings (as well as its impressive capabilities), we can have a more productive conversation about what our aspirations should be.

If we admit that AI can be and should be regulated (including slowing down its uncontrolled rollout and not repeating ever again the type of hype ChatGPT generated), that would be an important step towards a more productive discussion on regulation.

The heart of the matter is that it is possible to have generative AI become a tool for better human decision-making. This is particularly important because we are in the midst of a trend towards more and more knowledge work, which will most likely continue in the decades to come.

Generative AI could provide complementary tools to knowledge workers. These would create new tasks (for educators, nurses, creative workers, tradespeople and even blue-collar workers) and provide inputs into better decision-making for knowledge work.

But this is not the direction we are traveling in. Rather, the current approach is repeating the same mistakes that technologists and business people made with digital technologies — excessive automation (and ignoring creation of new human tasks) and centralization of information.

This is both because of the vision of the tech leaders (the craze for autonomous machine intelligence, and the mistaken view that downplays the value and versatility of human skills) and the industry’s structure (an oligopoly, morphing into a duopoly for foundation models).

Why the centralization of information and the possible duopoly of Alphabet and Microsoft are so pernicious is explained in the NYT op-ed by me and @baselinescene, titled “Big Tech Is Bad. Big A.I. Will Be Worse”.

In our book, Power and Progress, we also propose several regulatory steps to prevent this situation. But the most important one is to start articulating a shared aspiration: a future direction of technologies and AI that is more pro-human — empowering workers and citizens.

The regulatory ideas we propose include:

(1) digital advertising taxes to change the business model of tech platforms;

(2) regulation of data use and well-defined property rights over data (including data unions), so that large language models cannot expropriate others’ creative work;

(3) potential breakup of Big Tech and moratorium on their M&A (to diminish their control over the future of technology and their huge social and economic power);

(4) leveling the playing field between capital and labour by increasing taxes on capital and reducing payroll taxes;

(5) government subsidies and competitive prizes for using (generative) AI in a more pro-human way, for example for creating new tasks, new work and new ways of decentralizing information;

(6) institutional changes to increase worker voice in the direction of technology.

How digital technologies were misused, how AI is heading in the same direction, and further justifications for these policies (as well as ways in which they may or may not go wrong) are all discussed in our book, Power and Progress.

More here.

We should also dwell on what AI promises to do that is completely unprecedented. Below is a simulation of 44 million atoms:

And to finish off, our friend David Wood, head of London Futurists - whose moment of transhumanism has most certainly come - posts a funny Midjourney cartoon, with appropriate speech bubbles: