Longtermism, or How to Get-Out-Of-Caring While Feeling Moral and Smart

Should you read What We Owe the Future before you die or should you die first?

by Jon Shaffer

There is a lot to agree with in William MacAskill’s new book, What We Owe the Future, until you seriously engage with his argument. Longtermism, the moral theory sketched in MacAskill’s upbeat, almost cheerful prose, is presented in ways that feel uncontroversial, self-evident. But the book is built on fanciful math, an antisocial misunderstanding of how political power shapes society, and a willingness to give up on the impoverished and oppressed people of today in favor of make-believe trillions who may or may not exist in the distant future.

What We Owe the Future (WWOF from here on) tries to make us care about many trillions of people in the distant future at the expense of those living amongst us, right here, right now. Amongst the book’s uncontroversial positions: future people matter. Sure! What we choose to do today will likely affect those future people in ways that could be positive or negative. No doubt. Therefore, we should do all we can to prevent catastrophe and make the future as good as possible. Totally.


MacAskill begs us to ask questions like: Do you care about the specter of climate catastrophe? Definitely. World War III complete with nuclear annihilation? Yikes, yeah. How about population stagnation and potential collapse because rich people stopped having enough babies? Wait, huh? What do you think about lowering the probability of complete human extinction by 0.0001% at the expense of allowing 100 million people to die in genocidal neglect? Damn… stop, no. It is this lull of imbricated logical precision and cold, uncaring moral hollowness that frightens me most about longtermism. Reasonable points, sensible concerns taken to…deeply fucked galaxy-brain conclusions.

The problems with this book lie in whose wellbeing it argues for and against and why. The ideas advanced by WWOF are set squarely against the flesh and blood human beings living and dying today, right now, in grinding poverty, deep oppression, extant misery. The people sleeping outside, hungry, in the alleyway next to William MacAskill’s home in Oxford are not this book’s primary concern. Yes, of course their wellbeing matters, MacAskill and his lot may concede, but have you ever imagined the sheer trillions of people that could exist in the future?

Paul Farmer, a medical anthropologist and cofounder of Partners in Health, once said that, “The idea that some lives matter less is the root of all that is wrong with the world.” It is also the root of everything wrong with longtermism, which seeks to “rigorously” and “scientifically” quantify just how much less valuable the lives of the impoverished and oppressed really are when weighted against an unbounded future fantasia. 

MacAskill certainly seems to believe some lives matter more than others, but he’s too smart to say that outright. He knows that line of argument is a political dead-end. 

Instead, MacAskill pretends to have a science of shaping the future, one where he and his utility-maximizing minions have the computational and scientific gravitas to appeal to our universal rationality. In doing so, MacAskill shifts our focus, throughout the book, away from the manifest suffering happening right now that we could, absolutely, allay, to the hypothesized, ‘calculated’ suffering of potential people who may never exist.

Let’s take a clear-eyed look into the argument. To do so, into the weeds we go. Here is an outline of WWOF’s argument:

  1. Morality is about taking the actions that consequentially maximize goodness and minimize badness. 

  2. All of the contemporary goodness or badness in the world is infinitesimal compared to the enormous volume of goodness or badness that may be experienced by humans in the future, near and distant. 

  3. Therefore, we must train most of our attention and resources on defending a seemingly endless human potential through protecting against:

    • What he calls “values lock-in” (AI takeover)

    • Civilizational collapse (population stagnation and/or pandemic)

    • Climate catastrophe (we’re in it)

Let’s take these points seriously. Maximizing goodness and minimizing badness is all well and good, but for whom, and how would we know?

The magic trick behind utilitarian thinking requires three intellectual sleights of hand – power moves of misdirection and obfuscation – to justify its arguments. Power-move number one is commensuration: the social act of equating two qualitatively different things to one another for the purposes of calculation (I’ve got 7 apples, you’ve got 4 oranges, we have 11 fruits). Any claim to maximize goodness and minimize badness in the world requires inventing some unit of equivalence. Utilitarians approach this in many different ways but the rub is that reasonable (and important!) differences in what people might consider good, valuable, meaningful, worth-living-for get flattened out into whatever the philosopher says is good.


Once you’ve convinced yourself that there could be some commensurable measure of wellbeing, goodness, happiness for all humans—the jargon here is “utility quanta”—then power-move number two is utilizing tools of calculation to produce ever more elaborate “models” of overall population wellbeing or goodness. 

Power-move number three: using invented probabilities and “expected value” to make up arguments about what is “likely” or “not likely” to happen in the near or distant future. By making claims to highly uncertain probabilities that no amount of research or calculation could deliver with any accuracy, longtermism relies on some real crystal ball shit (I mean no offense to our clairvoyant comrades).

Together, these power moves make a potent mixture driving the really troubling arguments of the book. If the population of the future is unimaginably vast, and morality is only about maximizing total all-time goodness or badness, then the wellbeing of a nearly infinite future population will squash any “rational” calculation of what utility could be gained by investing scarce resources on people suffering harms in the present moment. 
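The arithmetic behind this squashing is worth seeing laid out plainly. Here is a toy sketch in Python; every number in it is invented for illustration (longtermists’ actual figures vary wildly, which is part of the problem), but the structure of the expected-value argument is the same:

```python
# Toy illustration of the expected-value "power move" described above.
# All figures are made up for illustration; the point is the structure
# of the calculation, not any particular number.

def expected_value(probability: float, people_affected: float) -> float:
    """Expected 'utility', crudely: probability times lives at stake."""
    return probability * people_affected

# Intervention A: save 100 million living people, with certainty.
present = expected_value(probability=1.0, people_affected=1e8)

# Intervention B: shave a sliver (0.0001%, i.e. one in a million) off
# extinction risk, "protecting" a hypothesized 10^16 future people.
speculative = expected_value(probability=1e-6, people_affected=1e16)

print(f"Present intervention:     {present:.0e} expected lives")
print(f"Speculative intervention: {speculative:.0e} expected lives")
print(f"Ratio: {speculative / present:.0f}x in favor of the speculative")
```

Because both the future headcount and the probability are chosen by the person running the numbers, the speculative option can be made to win by any margin you like; crank the future population up to 10^20 and the ratio grows by another factor of ten thousand. That is the whole trick.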

By this logic even near-term pandemic prevention, combating climate change, political organizing to protect democracy, or investing in equitable health systems wouldn’t make the longtermist cut. Oxford Professor Nick Bostrom, director of the Future of Humanity Institute, wrote in 2001, “Tragic as such events are to the people immediately affected, in the big picture of things—from the perspective of humankind as a whole—even the worst of these catastrophes are mere ripples on the surface of the sea of life.” This ought to be unacceptable on both moral and political grounds. We shouldn’t accept mass death and suffering, because we know that lives can be saved, suffering palliated. We shouldn’t cede political ground nor political power to those who would justify such willful neglect.


Is this a book I should read before I die or should I die first?

What We Owe the Future most certainly is a book you should die before reading. Instead use your precious few years on this earth to materially care for the worst off and to organize against the powerful people who would have us believe that there is no moral alternative to abandoning our neighbors — distant or close. WWOF, and longtermism as an ideological movement, is grounded in a deep paranoia of privilege, captured and steered by the robber barons of our age. It should be rejected.


Jon Shaffer is a sociologist and organizer based in Baltimore who studies global health organizations, social movements, and their struggles over scientific knowledge.

