Longtermism and Science Fiction, part 1

I’ve discussed “longtermism” before, via a book by Ari Wallach called LONGPATH, reviewed here about five weeks ago. Its ideas seemed entirely reasonable: consider the long-term consequences of decisions rather than reacting in the moment. Consider the big picture. (The science-fictional perspective.)

However, the term “longtermism” has recently been applied to a grander, more controversial topic, linked with the idea of “effective altruism.” This entered general public discourse (though the idea’s been around for years) with the publication of a book (on the same day as Wallach’s) called WHAT WE OWE THE FUTURE by William MacAskill, one of the leaders of the philosophical movement called “effective altruism” (EA), which I gather is the idea of putting your money (your charitable donations, say) where it will do the most long-term good. The movement has come under increased scrutiny, and opprobrium, in just the past couple of weeks, since MacAskill is apparently pals with Sam Bankman-Fried, whose cryptocurrency company FTX just went spectacularly bankrupt. So now the people who thought EA was somehow sinister have even more reason to dismiss the idea as outlandish and irresponsible.

I’m 80 pages into MacAskill’s book (which so far isn’t controversial so much as tedious) and haven’t gotten to the good parts. I’ve collected links to numerous recent articles about EA. I posted about a guest essay by MacAskill in the NYT back in August and didn’t find anything to object to.

But others have, and today I’ll finally start going through more recent commentaries and critiques of longtermism, beginning with one that just appeared yesterday, from a writer I’ve followed for many years.

Adam Lee, OnlySky, 12 Dec 2022: Longtermism: the good, the bad and the ridiculous

The “overview” squib at the top reads “Planning for the future is wise and necessary. But some philosophers stray into religious fantasy when they argue that a sci-fi utopian dream supersedes our moral obligations to the present.”

Lee begins by quoting MacAskill, and also Nick Bostrom, on matters like existential risk. Then he makes his key point.

To be clear, long-term thinking is absolutely a good thing and very much needed. Most of our problems, including all the most serious ones, exist because humans are prone to neglecting the future. Too many people are resistant to thinking about the consequences of their actions beyond the next few years—sometimes, beyond the next quarterly income statement. Longtermism ought to be a set of corrective lenses for this moral short-sightedness.

However, many longtermists make the opposite error. They take an imaginative leap ludicrously far into the future, arguing that the highest priority is making the decisions that will be best for our descendants across “millions, billions, and trillions of years” (as longtermist Nicholas Beckstead puts it). This is well beyond where evidence can guide us.

I’ve gathered this much. Apparently MacAskill and the other longtermists speculate that humanity might expand into minds uploaded into virtual reality, and grow into the trillions, or trillions of trillions. With so many *potential* future human entities, surely our priorities should be to support them, rather than the relatively trivial few billion currently living on Earth.

And this is the core of the outrage in these recent articles.

I do have a legitimate objection to this, simply based on the track record of past futurists, who made grand pronouncements about, say, the world of 1990! Or 2001! which turned out to be woefully off-base. Way too optimistic in some ways (look at 2001’s lunar base and trip to Jupiter, though admittedly that’s fiction and wasn’t intended as prophecy) or too naive (people of the future wearing Victorian frocks). Examples are endless, and books have been written about the futures that haven’t come to pass. So it’s naive, even impertinent, to project humanity’s future into the millions or billions of years — and then to make financial investments based on such potential outcomes.

Lee makes similar comments about the inability to imagine the future. Could people of 1000 CE have possibly imagined the world of today? He goes on to compare longtermism with Pascal’s Wager.

That’s why I’m skeptical of longtermists who claim, with complete confidence, that the future of humanity consists of mind uploading, simulated worlds, superintelligent AI, and space colonization.

Any or all of these things might happen. I’m not denying that. What I am denying is that this is the only future that can or should exist.

Especially, I deny that our sole moral obligation is to bring it into being. The dark side of longtermist thinking is the view that no evil that’s happening to living people today really matters. War, tyranny, poverty, pandemics, even climate change are all irrelevant, just so long as humanity as a whole survives.

[…]

The flaw in Pascal’s Wager is that it doesn’t work when there are multiple options for which god you should believe in. These longtermists’ logic has the same hole.

They assume that there’s only one possible future we should consider, and that their plan of action is what will take us there. The probabilities become undefined if either of these assumptions is untrue.

Concluding:

When evidence can’t guide us—such as when we speculate about things that haven’t yet happened—we should be more humble, and our conclusions more tentative. When it makes dogmatic claims to know what our future holds, longtermism goes against that skeptical principle. It strays into the realm of religious faith.

\\\

Now, the mainstream of thinking in science fiction over the past hundred years is that humanity’s destiny is bound up with the expansion of the race into space, colonizing other planets, and creating a galactic federation, with or without similar alien races. This is the premise of Star Trek, for example, which drew on themes common in much science fiction published in earlier decades. It was such a common assumption that Donald A. Wollheim, one of the earliest science fiction publishers, wrote a whole book about it: THE UNIVERSE MAKERS (reviewed here).

The EA advocates seem only to have shifted this expansion into the realm of virtual reality, with minds uploaded into cyberspace.

Meanwhile some science fiction writers (e.g. KSR) have come to see interstellar colonization as a fantasy, likely never to happen, as the field is increasingly bound by the established discoveries of science, e.g. that faster-than-light travel is probably impossible, given our understanding of the structure of the universe.

This is how the speculations of science fiction give way to reality.

\\\

In subsequent posts I’ll explore the commentaries that somehow link longtermism to the failure of FTX. Why should there be a connection…? And does the failure of a financial firm somehow discredit the idea of thinking about the future?
