‘Longtermism’ and ‘Effective Altruism’ have become buzzwords within certain circles. While concern for future people sounds morally intuitive, nothing concrete is being offered by those who claim we should care for the distant future, writes Ben Chugg.
In the 1970s, the philosopher Peter Singer catalyzed a moral revolution by arguing that we should do more to help the global poor. His essay, Famine, Affluence, and Morality, argued that distance should not affect our moral decision-making. A malaria-stricken child is equally deserving of our attention whether they are our next-door neighbor or 20,000 kilometers away.
Singer’s arguments made, and continue to make, a huge impact. He founded The Life You Can Save, which has directed thousands of people to make more globally conscious choices with their donations. He influenced the Giving Pledge, which commits billionaires to giving away a majority of their wealth. He is widely acknowledged as the father of the animal welfare movement and widely cited as the most influential living philosopher. It was no surprise when, in 2021, he was awarded the Berggruen Prize for his work on practical ethics.
Perhaps most significantly, Singer inspired the Effective Altruism (EA) movement, which describes its aim as “using evidence and reason to figure out how to benefit others as much as possible, and taking action on that basis.” Since its inception in the early 2010s, EA has directed millions of dollars of funding towards causes such as preventing blindness from trachoma, curing obstetric fistula, providing anti-malarial bednets, and sending direct cash transfers to the world’s most impoverished. EA has campaigned for better conditions for animals in factory farms and inspired some of the world’s wealthiest individuals to donate more of their wealth.
In the last few years, however, EA has slowly moved in another direction — one that is less convinced by Singer’s arguments to help the global poor. Indeed, we might conceive of Singer’s influence as the first wave of EA. The second wave is focused on longtermism, the idea that “positively influencing the far-future is a key moral priority of our time.”
Imagine someone from the year 1000 trying to influence the present. Our circumstances would be beyond imagination, our problems unrecognizable, our desires mysterious.
Longtermism argues that, just as we shouldn’t care about where a person lives, we also shouldn’t care about when a person lives. And because there may be vastly more people alive in the future than there are today, adherents of longtermism — longtermists — conclude that the future is where the weight of our moral obligations lies. If humans survive even as long as a typical mammalian species, there could be trillions upon trillions of future humans. By sheer force of numbers, then, our moral concern should be largely focused on the future.
Perhaps the leading spokesperson for longtermism is William MacAskill, a young, energetic professor of philosophy at Oxford who was one of EA’s founding members. MacAskill makes the case for longtermism in his book, What We Owe the Future. Like Singer, he is trying to shake the moral ground beneath our feet.
"Future people count. There could be a lot of them. We can make their lives go better. This is the case for longtermism in a nutshell. The premises are simple, and I don’t think they’re particularly controversial. Yet taking them seriously amounts to a moral revolution—one with far-reaching implications for how activists, researchers, policy makers, and indeed all of us should think and act." (WWOTF, Chap 1; emphasis mine)
Longtermism does not simply take into account the welfare of our children, grandchildren, and great-grandchildren. It takes into account all future people — those living thousands, millions, and billions of years from now. While noble in spirit, this focus on untold numbers of future generations is precisely where longtermism runs into trouble.
The impenetrable wall of uncertainty that assails us as we attempt to peer into the distant future overshadows the entire project, rendering much of longtermism unfortunately vacuous.
Imagine someone from the year 1000 trying to influence the present. Our circumstances would be beyond imagination, our problems unrecognizable, our desires mysterious. Even someone from 1900 would scarcely recognize today’s world. The Great War had yet to begin; there was no internet; Henry Ford had yet to begin mass-producing automobiles; women did not have the right to vote in Canada, the United States, or the UK; Einstein had yet to overturn Newtonian physics; and the atomic bomb had yet to be invented. Any targeted intervention to influence the future would require someone to foresee such developments, as each plays a role in determining what problems we face today. Such foresight is unobtainable at a scale of 100 years — and longtermism is concerned with scales quite literally thousands of times greater.
The result is a philosophy that is grandiose in ambition but sparse in useful insights. What We Owe the Future offers a sweeping overview of all the ways humanity’s future might suffer: from the extinction of the species to the stagnation of economic growth, the end of moral progress, and the rise of a global totalitarian dictatorship. These are all, of course, bad things. But what does longtermism tell us to actually do?
The concrete problems the book focuses on are familiar ones: climate change, nuclear war, and pandemics. But such issues do not need longtermism to justify their importance. Indeed, I daresay that most people working on these problems were doing so before Oxonian academics highlighted them. Moreover, each issue is complex, and trying to handle all of them in a single book leaves little room for depth and nuance.
What is new in longtermism is inactionable, and what is actionable does not require longtermism.
What We Owe the Future does depart from existing concerns in one substantial way: its worry about superintelligent artificial intelligence run amok. MacAskill argues that such systems, if developed, could pose an existential threat or aid in the rise of malevolent actors. To be clear, there is no known path between the narrow systems being developed today and systems as generally intelligent as humans. For this reason, malevolent superintelligence is not a primary concern among most experts in machine learning. MacAskill seems to recognize this and hedges his bets accordingly. The book takes no firm stance on whether superintelligent AI would be good or bad. Instead of committing to any specific suggestion, which might be debated and refuted, the reader is counseled to “take robustly good actions, build up options, and learn more” in light of this deep uncertainty. Such vague recommendations are hardly the clarion call we’d expect of a moral revolution.
And this sums up the paradox at the heart of longtermism: Any actionable problem it suggests we tackle today is justifiable without making abstract philosophical arguments about the moral value of an astronomical number of future people. And it has to be this way, for the only problems of which we’re aware are the ones that affect us today. Thus, what is new in longtermism is inactionable, and what is actionable does not require longtermism.
Concern with the entire purview of the future entitles one to forget the suffering of the present. Longtermism has nothing concrete to say regarding current emergencies.
What We Owe the Future is at its best in the few moments when it forgets its own purpose and slips into discussing novel solutions to existing problems. For instance, MacAskill discusses allowing vaccines to be sold on the market during Covid-19, buying coal plants but leaving the coal in the ground, and the plausibility of charter cities. A book filled with such ideas would be hugely valuable, for we could debate and test them. Unfortunately, too little time is spent on such issues, and too much on debating ethical abstractions.
Concern for the future of humanity is valuable. But the way to a better future lies not in trying to reason through problems thousands of years ahead of us. It lies in making incremental progress, generation by generation. We’ve been gifted a better world today not because those in the past were concerned with predicting our problems, but because they solved the problems in front of them. Progress comes from trial and error — from recognizing problems, trying to solve them, failing, and trying again. Relegating our concern to the distant future means we lose this feedback mechanism: how do we know if our actions are helping people thousands of years from now?
Moreover, concern with the entire purview of the future entitles one to forget the suffering of the present. Longtermism has nothing concrete to say regarding current emergencies. How can we best help women in Iran, or support Afghan refugees? How can we resolve the Russia-Ukraine crisis, or best handle the Israeli-Palestinian conflict? A moral philosophy that favors idealism over practice and provides no guidance on the pressing problems of today barely deserves the title. As Peter Singer wrote:
[E]thics is not an ideal system that is noble in theory but no good in practice. The reverse of this is closer to the truth: an ethical judgement that is no good in practice must suffer from a theoretical defect as well, for the whole point of ethical judgements is to guide practice. (Practical Ethics, 1993. Chap 1, pg 2.)
One wonders how he feels about the state of effective altruism today.