- Ted Chiang characterizes the current examples of “artificial intelligence” as “applied statistics”;
- How Carl Sagan was wrong about the “reptilian brain”, in The Dragons of Eden;
- And an interview with Peter Singer, the controversial philosopher, author of The Expanding Circle and The Most Good You Can Do
Financial Times, Ted Chiang: “The machines we have now are not conscious”
This is a paywalled piece on the Financial Times’ website, but I think it allows one or two free views before insisting on a subscription. Mike Glyer, on his File 770 site, listed this as the first item in his June 4th Pixel Scroll post, with this excerpt, in which the interviewer describes meeting Chiang for lunch:
(1) CHIANG CALLS AI “A POOR CHOICE OF WORDS”. Behind a paywall in the Financial Times: “Sci-fi writer Ted Chiang: ‘The machines we have now are not conscious’”.
…Before we have had a chance to order, the proprietor, who also doubles as the waiter, turns up with two steaming bowls of peppery red lentil soup. The flavours instantly awaken my taste buds: salty and pungent. As we dive in, Chiang, in his contemplative way, takes issue with my observation that his fictional worlds and the one we’re inhabiting are getting uncomfortably close together.
“The machines we have now, they’re not conscious,” he says. “When one person teaches another person, that is an interaction between consciousnesses.” Meanwhile, AI models are trained by toggling so-called “weights” or the strength of connections between different variables in the model, in order to get a desired output. “It would be a real mistake to think that when you’re teaching a child, all you are doing is adjusting the weights in a network.”
Chiang’s main objection, a writerly one, is with the words we choose to describe all this. Anthropomorphic language such as “learn”, “understand”, “know” and personal pronouns such as “I” that AI engineers and journalists project on to chatbots such as ChatGPT create an illusion. This hasty shorthand pushes all of us, he says — even those intimately familiar with how these systems work — towards seeing sparks of sentience in AI tools, where there are none.
“There was an exchange on Twitter a while back where someone said, ‘What is artificial intelligence?’ And someone else said, ‘A poor choice of words in 1954’,” he says. “And, you know, they’re right. I think that if we had chosen a different phrase for it, back in the ’50s, we might have avoided a lot of the confusion that we’re having now.”
So if he had to invent a term, what would it be? His answer is instant: applied statistics….
He makes excellent points about how we anthropomorphize the actions of machines. (That doesn’t necessarily mean the actions of these machines, as they infiltrate themselves into our social media, might not be dangerous.)
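Chiang’s “applied statistics” framing, and the FT piece’s description of training as “toggling weights … to get a desired output,” can be illustrated with a toy sketch. The following is a minimal, hypothetical example of gradient descent on a single weight; the data and names are invented for illustration and are not from the article:

```python
# Minimal sketch of "adjusting weights": fit y = w * x to toy data
# by repeatedly nudging the single weight w to reduce squared error.
# (Hypothetical illustration; real models adjust billions of weights.)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true relation: y = 2x
w = 0.0    # the "weight" starts at an arbitrary value
lr = 0.05  # learning rate: how large each adjustment is

for step in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # adjust the weight toward lower error

print(round(w, 3))  # converges near 2.0
```

No consciousness required: the procedure is just repeated statistical fitting, which is Chiang’s point about the phrase “applied statistics.”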
Scientists specialize, and when they step outside their specialty, they can get things wrong.
Big Think, Ross Pomeroy, 2 Jun 2023: What Carl Sagan got very wrong about the human brain, subtitled “You don’t have a ‘reptile brain.'”
This refers to Sagan’s 1977 book The Dragons of Eden, which I reread in 2021 and reviewed here. His basic point was that the brain has evolved, in stages. But his notion of a “reptilian brain” at the core of our own brains was criticized, even then, as being overly simplistic. As always, the more science learns, the more complex we discover reality to be.
In the decades since Sagan told his trusting readers that they have reptilian brains, the idea has been roundly disproved. Brain scans clearly show that the brain’s functions aren’t compartmentalized as MacLean theorized. Moreover, the idea fundamentally misunderstands evolutionary biology. As species diversified from their common ancestors, “complex nervous systems and sophisticated cognitive abilities evolved independently many times,” scientists explained in a 2020 review article debunking the reptile brain myth.
Science is always about provisional conclusions, open to being disproven by new evidence. Such conclusions are rarely completely disproven, but rather grow more and more complex the more scientists learn. So it’s understandable that simplistic versions of early conclusions persist. We can’t all keep up; it would take too many lifetimes.
My take on Peter Singer, a moral philosopher whose conclusions are very controversial, is that he is a person who dares to take modern scientific discoveries and the understanding of human and animal consciousness seriously, and derives conclusions therefrom. Conclusions that contradict the evolutionarily derived instincts to promote human survival at the expense of everything else, instincts institutionalized in religion and popular culture.
His books include, perhaps most famously, The Expanding Circle: Ethics, Evolution, and Moral Progress — his phrase the “expanding circle” has become common among writers who discuss morals and ethics — as well as Ethics in the Real World: 82 Brief Essays on Things That Matter, and most recently, The Most Good You Can Do: How Effective Altruism Is Changing Ideas About Living Ethically, which, as the title indicates, aligns him with the longtermism/effective altruism camp that takes the long-term consequences of our present-day actions seriously.
Let’s see what this piece says.
Vox, Dylan Matthews, 30 May 2023: One of the world’s most controversial philosophers explains himself, subtitled “The moral philosopher Peter Singer on animal welfare, the ethics of euthanasia, and more.”
The interview concludes:
I do consider the consequences of our actions as the way to determine which actions are right or wrong, and if I were persuaded that the harms are really so serious that it is better not to talk about these issues, then I wouldn’t talk about them. But I haven’t been persuaded by that. And, of course, we have to balance it against the consequences of parents thinking about the issue in a way that doesn’t leave them tortured with guilt for making what many people would think of as a morally wrong decision.
I’m interested in social reform. For example, I think switching to voluntary euthanasia or physician-assisted dying, that movement has made very significant progress in the last 40 years, and I think has greatly reduced the amount of unnecessary suffering. But some people with disabilities are opposed to that as well, because they think pressure will be put on people with disabilities to end their lives.
That would be a serious consideration if there were clear evidence that that’s the case. But I really haven’t seen the evidence, either about the speech harms that you’re referring to or about pressure on people with disabilities to end their lives. So I continue to advocate for physician-assisted dying.
In general, I think that freedom of thought and expression is really important. I think that people have become, perhaps, overly sensitive in the last couple of decades about speech harm. It’s often said but rarely backed up with firm evidence about how serious it is. So that’s why I haven’t stopped talking about these issues.