Longtermism (and Science Fiction), part 2

More about “longtermism,” Effective Altruism, how they relate to science fiction, and whether or not books can be reduced to six-paragraph summaries.

\\

Let’s go back through some of the links I’ve collected on these topics, and see how many I get through.

Note the subtitle of this one. “Profound philosophical errors”? What could these be?

Vox, Dylan Matthews, 12 Dec 2022: How effective altruism let Sam Bankman-Fried happen, subtitled “Profound philosophical errors enabled the FTX collapse.”

Reading through this… The emerging theme isn’t just the philosophy of EA, but how it advocates, almost as a duty, making lots of money in order to give it away. (As if that were so easy to do.)

For as much good as I see in that movement, it’s also become apparent that it is deeply immature and myopic, in a way that enabled Bankman-Fried and Ellison, and that it desperately needs to grow up. That means emulating the kinds of practices that more mature philanthropic institutions and movements have used for centuries, and becoming much more risk-averse. EA needs much stronger guardrails to prevent another figure like Bankman-Fried from emerging — and to prevent its tenets from becoming little more than justifications for malfeasance.

Despite everything that’s happened, this isn’t a time to give up on effective altruism. EA has quite literally saved lives, and its critique of mainstream philanthropy and politics is still compelling. But it needs to change itself to keep changing the world for the better.

The writer acknowledges that this very article is part of a running series at Vox called “Future Perfect,” which was inspired by EA and which was partly funded by Bankman-Fried’s family foundation! But given only what they knew at the time, they don’t regret having taken the funding. Moving on.

The problem isn’t longtermism …

Ah, yes: the long-term trajectory of humanity, the trillions upon trillions of beings who could one day exist, dependent on our actions today. It’s an impossible concept to express without sounding unbelievably pretentious, but it’s become a growing focus of effective altruism in recent years.

Many of the movement’s leaders, most notably Oxford moral philosopher Will MacAskill, have embraced an argument that because so many more humans and other intelligent beings could live in the future than live today, the most important thing for altruistic people to do in the present is to promote the welfare of those unborn beings, by ensuring that future comes to be by preventing existential risks — and that such a future is as good as possible.

MacAskill’s book on this topic, What We Owe the Future, received one of the biggest receptions of any philosophy monograph in recent memory, and both it and his more technical work with fellow Oxford philosopher Hilary Greaves make pointed, highly contestable claims about how to weigh future people against people alive today.

And so on. Long article.

… the problem is the dominance of philosophy

Even before the fall of FTX, longtermism was creating a notable backlash as the “parlor philosophy of choice among the Silicon Valley jet-pack set,” in the words of the New Republic’s Alexander Zaitchik. Some EAs like to harp on mischaracterizations by longtermism’s critics, blaming them for making the concept seem bizarre.

That might be comforting, but it’s mistaken. Longtermism seems weird not because of its critics but because of its proponents: it’s expressed mainly by philosophers, and there are strong incentives in academic philosophy to carry out thought experiments to increasingly bizarre (and thus more interesting) conclusions.

This means that longtermism as a concept has been defined not by run-of-the-mill stuff like donating to nuclear nonproliferation groups, but by the philosophical writings of figures like Nick Bostrom, MacAskill, Greaves, and Nick Beckstead, figures who have risen to prominence in part because of their willingness to expound on extreme ideas.

These are all smart people, but they are philosophers, which means their entire job is to test out theories and frameworks for understanding the world, and try to sort through what those theories and frameworks imply. There are professional incentives to defend surprising or counterintuitive positions, to poke at widely held pieties and components of “common sense morality,” and to develop thought experiments that are memorable and powerful (and because of that, pretty weird).

This isn’t a knock on philosophy; it’s what I studied in college and a field from which I have learned a tremendous amount. It’s good for society to have a space for people to test out strange and surprising concepts. But whatever the boundary-pushing concepts being explored, it’s important not to mistake that exploration for practical decision-making.

When Bostrom writes a philosophy article for a philosophy journal arguing that total utilitarians (who think one should maximize the total sum of happiness in the world) should prioritize colonizing the galaxy, that should not, and cannot, be read as a real policy proposal, not least because “colonizing the galaxy” probably is not even a thing humans can do in the next thousand years. The value in that paper is exploring the implications of a particular philosophical system, one that very well might be badly wrong. It sounds science fictional because it is, in fact, science fiction, in the ways that thought experiments in philosophy are often science fiction.

Bada boom: this seems like the essence of the criticism of EA. It’s outlandish, in the way philosophical thought experiments, and science fiction, often are outlandish. That’s fine for thought experiments, but not for public policy, or as a rationale for investment firms.

The passages above echo my proposition that science fiction is more like philosophy than it is like science.

Very long article, as I said, and there are parallel “the problem isn’t…”/”the problem is…” sections on billionaires and utilitarianism.

\\\\

Any others?

NYT Books, Jennifer Szalai, 9 Dec / 13 Dec 2022: How Sam Bankman-Fried Put Effective Altruism on the Defensive, subtitled “The movement’s leading intellectuals told us to fear killer robots. They didn’t think to mention the crypto billionaire next door.”

This is a piece by one of NYT’s leading book reviewers, discussing EA in general and Bankman-Fried in particular, and then considering several books on the topic: MacAskill’s What We Owe the Future, Toby Ord’s The Precipice, and Nick Bostrom’s Superintelligence.

\

Guardian, John Naughton, 4 Dec 2022: Longtermism: how good intentions and the rich created a dangerous creed, subtitled “Tech billionaires seem intent on giving away a lot of money. But are those who support ‘effective altruism’ ignoring the very real problems of today?”

This captures the criticism of EA that it prioritizes all the people who might live in the future over the people living today.

\

NYT, Ezra Klein, 4 Dec 2022: The Big Thing Effective Altruism (Still) Gets Right

This is a holiday piece about the best places one might donate to, with several personal recommendations he names at the beginning of the article. Then he goes on to discuss EA and Bankman-Fried. One of his pieces of advice is, “Be more skeptical of thought experiments,” citing Peter Singer’s scenario about a drowning child.

\

Finally for tonight is this odd, topic-adjacent piece.

Washington Post, Molly Roberts, 29 Nov 2022: Opinion | Sam Bankman-Fried doesn’t read. That tells us everything.

This plays on a comment by SBF: “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. … If you wrote a book, you f—ed up, and it should have been a six-paragraph blog post.”

Roberts writes about the *experience* of reading a book: how you read to read, not to have read. Then she goes on to link the goals of EA with the sins of SBF very directly. Is this what happened? SBF denies it. But it’s plausible.

SBF is a believer in effective altruism. This school of thought seeks to, by its own definition, use “evidence and reason to figure out how to benefit others as much as possible.” More specifically, SBF had devoted his life toward “earning to give.” You pick a career not because you care about the career itself, or even think it’s good for the world on its own merits. You pick a career because it will make you a massive amount of money, and you can spend that money on something that is good. So SBF picked finance, and after that he picked crypto.

Earning to give, in other words, is at its most extreme all about the end, means be damned.

The trouble is, if you don’t care about what you do, if part of you disdains what you do, your only concern is what the work gets you. That’s the case whether your aim is to keep all the riches for yourself or to do precisely the opposite. SBF was also immersed in a type of effective altruism known as longtermism, where that ultimate outcome you’re seeking is hundreds of thousands or even millions of years away, so instead of buying bed nets for children dying of malaria today, you’re trying to prevent the hypothetical next pandemic or the overheating of the earth. The risk, of course, is that this obsession with the future further disconnects you from the present. Why not scam a few bucks today to save a few billion lives in the 23rd century? That’s not just skipping to the end of the book — it’s skipping to the end of the entire series.

“Why not scam a few bucks today to save a few billion lives in the 23rd century?” Do we think SBF really thought that? I suspect not; my impression from the brief interviews I’ve seen is that he’s too young and idealistic, not nearly as cynical as the writer of this article.

\\\

Now, quite aside from EA and SBF, I wondered to what extent his idea that a book could be reduced to a six-paragraph blog post was plausible. I think it’s plausible to summarize a book after you’ve read it, but not as a replacement for reading it. That is, a six-paragraph, or even one-paragraph, summary of a book (as on my nonfiction reviews page) can be extremely valuable for reminding people without eidetic memories, like me, of what they spent so many hours reading. Or it can serve as a simple summary of sources for a research project.

But you have to spend those hours for the ideas to sink in. Remember: humans don’t take well to dry facts, let alone statistics. Humans respond to stories, to anecdotes, to expressions of how an idea plays out in the real world. In fact, I’ve noted recently how some books capture their entire thesis in the introduction, and maybe the conclusion, and fill out the rest with case studies and anecdotes. There *are* some books that could have been, say, magazine articles. But not the best ones. That’s why some of the best ones, by Pinker and Wilson and others, are so dense with ideas that I have yet to summarize them completely here on my blog… Those are the books most worth reading, and the least reducible to six-paragraph summaries.
