Harari on the Dangers of AI

Also: reflecting on Arthur C. Clarke and John Brockman, wondering if there is a permanent limit to human cultural education.

NY Times, Yuval Noah Harari, Tristan Harris, and Aza Raskin, 24 March 2023: You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills

Harari and his collaborators discuss the dangers of AI, specifically the recent chatbots, dangers I am coming to understand quite differently from the way the presumed dangers of AI were described until recently. The current problem is that these new AI apps *aren’t* intelligent — they’re merely reflecting back everything, wisdom and nonsense, that they absorb from the web. They are the social media algorithms of Facebook ramped up. Harari et al begin:

Imagine that as you are boarding an airplane, half the engineers who built it tell you there is a 10 percent chance the plane will crash, killing you and everyone else on it. Would you still board?

In 2022, over 700 top academics and researchers behind the leading artificial intelligence companies were asked in a survey about future A.I. risk. Half of those surveyed stated that there was a 10 percent or greater chance of human extinction (or similarly permanent and severe disempowerment) from future A.I. systems. Technology companies building today’s large language models are caught in a race to put all of humanity on that plane.

The specter of A.I. has haunted humanity since the mid-20th century, yet until recently it has remained a distant prospect, something that belongs in sci-fi more than in serious scientific and political debates. It is difficult for our human minds to grasp the new capabilities of GPT-4 and similar tools, and it is even harder to grasp the exponential speed at which these tools are developing more advanced and powerful capabilities. But most of the key skills boil down to one thing: the ability to manipulate and generate language, whether with words, sounds or images.

A.I. could rapidly eat the whole of human culture — everything we have produced over thousands of years — digest it and begin to gush out a flood of new cultural artifacts. Not just school essays but also political speeches, ideological manifestos, holy books for new cults. By 2028, the U.S. presidential race might no longer be run by humans.

“Political speeches, ideological manifestos, holy books for new cults”!

For thousands of years, we humans have lived inside the dreams of other humans. We have worshiped gods, pursued ideals of beauty and dedicated our lives to causes that originated in the imagination of some prophet, poet or politician. Soon we will also find ourselves living inside the hallucinations of nonhuman intelligence.

This raises the question of what “intelligence” means, of course. This is yet another subject that is part of the current zeitgeist, along with fake news, social media bubbles, and their results in politics and culture. The new AI seems to work like the worst aspects of social media. Or like “news” organizations that feed their audiences the conspiracy theories they want (see yesterday’s post).

Social media was the first contact between A.I. and humanity, and humanity lost. First contact has given us the bitter taste of things to come. In social media, primitive A.I. was used not to create content but to curate user-generated content. The A.I. behind our news feeds is still choosing which words, sounds and images reach our retinas and eardrums, based on selecting those that will get the most virality, the most reaction and the most engagement.

While very primitive, the A.I. behind social media was sufficient to create a curtain of illusions that increased societal polarization, undermined our mental health and unraveled democracy. Millions of people have confused these illusions with reality. The United States has the best information technology in history, yet U.S. citizens can no longer agree on who won elections.

“Millions of people have confused these illusions with reality.”

Large language models are our second contact with A.I. We cannot afford to lose again. But on what basis should we believe humanity is capable of aligning these new forms of A.I. to our benefit? If we continue with business as usual, the new A.I. capacities will again be used to gain profit and power, even if it inadvertently destroys the foundations of our society.

So now I’m actually concerned. If mere social media, with algorithms on Facebook that promote anger responses over likes, has exacerbated the political divide in the US, how much worse will it get with AI chatbots offering up full essays, about anything you ask them, that confirm your worst suspicions?

I’ve seen a bit of this coming. The current AI chatbots were anticipated by the predictive tools in Microsoft Word (and no doubt other MS Office apps). You start typing a sentence and Word offers a few following words, based on what other people go on to say after a similar beginning. Its offer is suitable about 1/3 of the time, and saves me a bit of typing. But it was obvious as soon as this feature appeared that most offered completions were off-base and irrelevant. At least to me.
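The mechanism behind such completions can be illustrated with a toy sketch: count which word most often follows which in some body of text, then suggest the likeliest continuation. What follows is a minimal bigram model in Python, far cruder than anything Word or GPT-4 actually does, using a made-up corpus purely for illustration:

```python
from collections import Counter, defaultdict

def build_model(corpus):
    """Count, for each word, which words follow it and how often."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, word):
    """Suggest the most common continuation seen after `word`, if any."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran on the grass"
model = build_model(corpus)
print(suggest(model, "the"))  # "cat" — it followed "the" most often above
```

The point of the sketch: the program has no idea what a cat is; it only echoes back the statistics of what it has read. Scaled up to the whole web, that is the sense in which the chatbots reflect both wisdom and nonsense.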


Last night, I was browsing through Arthur C. Clarke’s nonfiction book The Promise of Space (1968) — one of the earliest first edition hardcover nonfiction books I acquired — with detailed descriptions and diagrams of the spacecraft that were to go to the Moon. A pinnacle achievement of the human species. And I reflected on the opposite extreme: how so many ordinary people walking the streets don’t *know anything*, as shown in YouTube videos. (Which, admittedly, highlight the worst cases.) What year was the War of 1812? How many dimes in a dollar? How many states in the US? Some people, apparently, fumble to answer these questions; they don’t know, and don’t care. They live their daily lives without any awareness of the broader world. (Didn’t they go to school? Are they an indictment of public education?)

Then today I summarized some of the responses in the John Brockman book THE LAST UNKNOWNS, in the previous post. And I realized how only a tiny minority of humans have ever been educated, or have appreciated the advances that science and technology have made over the past several hundred years.

The large, complex missions of Apollo were so far outside the everyday experience of most people that many of them believed they didn’t actually happen.

So — cuing comments to the Brockman book — is there a permanent limit to human cultural intelligence? A limit that might be overcome through rigorous education of each new generation — but an education that conservatives always oppose, preferring cultural myths that promote tribal survival?

And, my ultimate theme, does science fiction in any way inform this issue? I think it does, because it doesn’t take traditional verities for granted, and is open to exploration and discovery, and of course education.

This entry was posted in Arthur C. Clarke, Culture, Technology.
