Towards Ineffective Altruism
There are boundless ways of doing good that are fundamentally immeasurable or, if they are measurable, may not be optimized.
Today’s essay is long so I’ll keep this intro short: read it! to the end! happy sunday longread!
📈 towards ineffective altruism
By Hal Triedman, edited by Archana Ahlawat
Do not allow your dreams of a beautiful world to lure you away from the claims of men who suffer here and now. Our fellow men have a claim to our help; no generation must be sacrificed for the sake of future generations, for the sake of an ideal of happiness that may never be realized.
– Karl Popper, “Utopia and Violence”
This tweet from Timnit Gebru has been living in my head rent-free for the past month. Recently, she and other critics of big tech (as well as former longtermist Phil Torres) have been loudly sounding the alarm about effective altruism and longtermism on Twitter and in various publications.
These ideologies scare me, and I want to engage with them seriously — not because I believe in them, but because they are seemingly rational, relying on the language of science, moral philosophy, and statistics. They are increasingly influential among policymakers, intellectuals, well-funded institutions, and the richest men in the world. Their ubiquity makes them pernicious and hard to combat. To take them on, we must critique their philosophical foundations, their rhetoric, and their material impacts simultaneously.
Some definitions
At its most basic, the effective altruism movement makes a generally utilitarian argument about how the world’s privileged people should spend their time and money if they want to maximize their positive impact on the world.
Effective altruism was born mostly at Oxford in the late 1990s and early 2000s, at around the same time that the internet industry in Silicon Valley was experiencing its first cycle of boom and bust. The dominant ideologies of both come from the business culture of the time, and the two have grown closer together since. As Nadia Asparouhova writes in her recent piece on "Idea Machines": "Effective altruism is often associated with tech, but it's genetically more similar to McKinsey."
Nowadays, effective altruism's epistemology and tools often parallel those of the tech industry. At its heart, it is driven by the principle of maximization and informed by statistical analysis. With these methods, effective altruists argue, for example, for maximizing disability-adjusted life-years by allocating time and money towards initiatives that provide a mosquito net for a child in a poor country (rather than providing direct donations to the child's family).
At first glance, this all seems straightforward and uncontroversial, even if it speaks of “doing good” in the terms of a business investment. If we want to make the world a better place by giving money away, of course we should maximize the good that each dollar does, you might say. And besides, how bad can an ideology be if its principal goal is to give billions of *maximally effective* dollars away to charity each year?
These are fair points, and I don’t entirely disagree with them. Billions of dollars per year from the wealthy tech elite used to convince people to go vegan or to give to non-religious health NGOs or to end factory farming is not, on its face, a bad thing.
But effective altruism is just the tip of the utilitarian iceberg. Beneath the visible argument that giving must be optimized in order to be “good,” there are an array of ideologies in close contact with effective altruism that are far stranger, more ethically dubious, and highly influential. Foremost among them is the ideology of longtermism — an ideology that Phil Torres (a former longtermist himself) has described as “one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about.”
Longtermism originates from the same spaces and places as effective altruism, including the rationalist community and online blogs like LessWrong. If we can survive the next few hundred years, colonize other worlds, and learn to simulate conscious beings with computers, longtermists say, there could be an enormous number of people in the future. The high end of the range is 10^58 (10 billion trillion trillion trillion trillion), but most say there could be at least quadrillions or quintillions. If all beings are equally important, regardless of when or where they exist, then doing something right now that has a tiny probability (say, a one in one quintillion chance) of affecting a tiny fraction of those future people (0.0000000000000000000000000001% — that's 27 zeros) could still, in expectation, change the lives of some 10 billion people, more than the nearly 8 billion people existing on the planet today.
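The arithmetic behind this move is simple enough to check. Here's a minimal sketch using Python's exact rational arithmetic, with the round numbers above (a 10^58 upper bound, a one-in-a-quintillion chance, a 10^-30 share) taken as the assumptions:

```python
from fractions import Fraction

# Bostrom-style upper bound on the number of future people
future_people = 10**58

# A one-in-one-quintillion chance that the action matters at all
p_action_matters = Fraction(1, 10**18)

# The tiny share of future people affected if it does
share_affected = Fraction(1, 10**30)

expected_people_affected = future_people * p_action_matters * share_affected
print(expected_people_affected)  # 10000000000, i.e. 10 billion
```

Because the exponents simply add (58 − 18 − 30 = 10), nearly any intervention with a nonzero chance of touching a nonzero share of 10^58 people swamps anything done for the roughly 8 billion people alive today.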
From this time-and-space-agnostic view, the current state of the world and the humans in it begins to seem minuscule, a grain of sand on the beach of a future that may span galaxies and trillions of years.
With this perspective, new priorities emerge. Instead of focusing on the material inequities of our world, longtermists think that the way to do the most good in the long-term is to focus on the things that could prevent this unthinkably large set of futures from coming to pass. Thus, we ought to focus on studying and reducing existential risks — potential developments that could wipe humanity out completely or permanently constrain humanity before it achieves its full potential. Existential risks include global totalitarian governments, deadly pandemics, asteroids, nuclear wars, misaligned hyper-intelligent AI systems that destroy human civilization, and other unspecified horrors.
“Strong longtermism” is a variant of longtermism advanced by Hilary Greaves and William MacAskill that argues that, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1000) years, focusing primarily on the further-future effects.” An extended quotation from their paper is illustrative of the impacts that these ideologies can have. Let’s say Shivani, a philanthropic donor, wants to donate $10,000 to the cause that will do the most good:
Suppose, for instance, Shivani thinks there’s a 1% probability of a transition to a world government in the next century, and that $1 billion of well-targeted grants… would increase the well-being in an average future life, under the world government, by 0.1% with a 0.1% chance of that effect lasting until the end of civilisation, and that the impact of grants in this area is approximately linear with respect to the amount of spending. Then, using [a] figure of one quadrillion lives to come, the expected good done by Shivani contributing $10,000 to this goal would… be 100 lives. In contrast, funding for the Against Malaria Foundation, often regarded as the most cost-effective intervention in the area of short-term global health improvements, on average saves one life per $3500.
In simple terms, Shivani's donation buys 35 expected future lives for each current life it could save. In this instance, the premier example of donating based on effective altruist principles is utterly ineffective compared to the logic of longtermism and existential risk. The idea that studying existential risk and reducing it by a fraction of a percent could improve the lives of untold future millions is a powerful one.
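Greaves and MacAskill's numbers can be reproduced directly. A quick sketch of the expected-value calculation behind the Shivani example, with the inputs exactly as given in the quote above:

```python
# Inputs from the Greaves–MacAskill thought experiment
p_world_government = 0.01   # 1% chance of a world government this century
wellbeing_gain = 0.001      # grants raise average well-being by 0.1%
p_effect_lasts = 0.001      # 0.1% chance the effect lasts
future_lives = 1e15         # one quadrillion lives to come
grant_pool = 1e9            # $1 billion of well-targeted grants
donation = 10_000           # Shivani's donation

# Expected life-equivalents from the longtermist grant
# (impact is assumed linear in spending)
expected_future_lives = (p_world_government * wellbeing_gain * p_effect_lasts
                         * future_lives * (donation / grant_pool))

# Expected lives saved via the Against Malaria Foundation at $3,500 per life
amf_lives = donation / 3500

print(round(expected_future_lives))             # 100
print(round(expected_future_lives / amf_lives)) # 35
```

$10,000 buys 100 expected future lives versus roughly 2.9 present ones: the 35-to-1 ratio that makes the "most cost-effective" short-term charity look ineffective by comparison.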
Of course, longtermism is not the principal motivation for most effective altruists, and there are gradations of how far one can subscribe to this argument. Regardless, in recent years, it has increasingly begun to drive giving, set priorities, and define the movement as a whole, prompting some to ask if effective altruism is just longtermism now.
Why understanding effective altruism and longtermism is important
Effective altruism and longtermism are ideologies that are increasingly influential among the richest men and the most prestigious institutions in the world, shaping policy and capital allocation. The movement has shifted to pushing young adherents towards careers in government, with a focus on reducing existential risks through policy. These recent developments take our discussion away from questions of rhetoric and morals (for the moment) and squarely into material considerations.
Longtermism and existential risk are particularly influential ideologies among those who made fortunes in technology and in elite institutions. Elon Musk has cited the work of Nick Bostrom (who coined the term existential risk in 2002) and has donated millions to the Future of Humanity Institute (based at Oxford) and the affiliated Future of Life Institute. Jaan Tallinn, a founder of Skype worth an estimated $900 million in 2019, also cofounded the Centre for the Study of Existential Risk at Cambridge, and has donated more than a million dollars to the Machine Intelligence Research Institute (MIRI). Vitalik Buterin, a cofounder of the Ethereum cryptocurrency, has donated extensively to MIRI as well. Peter Thiel, the radical libertarian donor, early Trump supporter, and funder of JD Vance's Ohio Senate campaign, delivered the keynote address at the 2013 Effective Altruism summit.
Longtermism is also increasingly popular among rank-and-file effective altruists, to the point where many consider the two to be synonymous. According to data from the Open Philanthropy Grants database, in 2021 effective altruists donated $92 million to AI risk research, $21 million to biosecurity and pandemic preparedness, and $10.5 million to global catastrophic risk research. Altogether, this nearly $125 million towards longtermist existential risk research represents a larger slice of donations than any other individual cause. And the allure of AGI (Artificial General Intelligence) — a major focus/fear of effective altruism and longtermism — is especially clear in industry, where multiple startups and big tech companies pour billions of dollars into research and development.
These bureaucrats, donors, research institutes, and companies are by no means an ideological monolith, nor do they necessarily represent the beliefs of the average effective altruist. However, this web of entities has one key feature — intellectual, institutional, and financial capital. A relatively small cadre of longtermist academics housed within and legitimized by influential institutions can advance ideas that guide how governments and venture capitalists think about and shape the future.
Towards ineffective altruism
So far, in the spirit of critique, I’ve laid out the philosophical underpinnings of the effective altruism and longtermism movements and the material superstructures that have arisen from those foundations over the past two decades.
It seems to me that the seemingly limitless bounds of longtermism ultimately amount to a moral carte blanche for anything we do (except making the species extinct). It's easy to see how this position can lead to reprehensible outcomes. Just this week, 80,000 Hours released a piece arguing that effective altruists should not focus their careers on climate change — a process that will uproot hundreds of millions of mostly non-white poor people and cause billions to experience chronic water scarcity — because it has only a low chance of becoming uncontrollable and turning Earth into Venus. Other longtermists worry that their ideology would provide rationalizations for genocide if political leaders took it literally. Mathematical statistician Olle Häggström, usually a proponent of longtermism, imagines
a situation where the head of the CIA explains to the US president that they have credible evidence that somewhere in Germany, there is a lunatic who is working on a doomsday weapon and intends to use it to wipe out humanity, and that this lunatic has a one-in-a-million chance of succeeding. They have no further information on the identity or whereabouts of this lunatic. If the president has taken Bostrom’s argument to heart, and if he knows how to do the arithmetic, he may conclude that it is worthwhile conducting a full-scale nuclear assault on Germany to kill every single person within its borders.
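Häggström's reductio is, again, just arithmetic. A back-of-the-envelope sketch, assuming Bostrom's 10^58 figure for the value of the future and a present-day German population of roughly 83 million (both are my assumptions for illustration, not numbers from the quote itself):

```python
from fractions import Fraction

future_lives_at_stake = 10**58           # Bostrom-style upper bound
p_lunatic_succeeds = Fraction(1, 10**6)  # one-in-a-million doomsday chance
germany_population = 83_000_000          # rough present-day figure (assumption)

# Expected future lives preserved by eliminating the one-in-a-million risk
expected_lives_saved = future_lives_at_stake * p_lunatic_succeeds

# Naive expected-value reasoning endorses the strike by ~44 orders of magnitude
print(expected_lives_saved > germany_population)  # True
```

Once the future's value is effectively unbounded, any finite present-day atrocity can be made to look like a bargain; that is the moral carte blanche at work.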
Besides the moral hazards of advocating these positions, these ideologies provide an overly simplistic formula for doing good: 1) define “good” as a measurable metric, 2) find the most effective means of impacting that metric, and 3) pour capital into scaling those means up.
But following the formula of effective altruism is clearly not all that being good requires. There are boundless ways of doing good that are fundamentally immeasurable or, if they are measurable, may not be optimized. Nevertheless, this universe of actions demands our consideration. To follow in the footsteps of Timnit Gebru (and to be purposefully contrarian), let’s call the philosophy of seriously considering the merits of doing good immeasurably or suboptimally ineffective altruism.
Ineffective altruism might look like giving $10 to a houseless person who asks for it. It might look like organizing to ensure that as many people as possible have access to basic material needs like food, housing, and healthcare. It might look like the ephemeral work of knitting a social and political community together. After all, how can one quantify the resiliency of a particular neighborhood? None of these actions would be particularly "effective," and yet they might have a more tangible impact than unknowably reducing an existential risk by some fraction of a percentage point. They also reflect an understanding of one's responsibilities to one's community, and of how strengthening community matters for our shared future, even if it can't be measured.
Ineffective altruism eschews metrics, because "What does doing good look like?" should be a continuously-posed question rather than an optimization problem. As an ideology of allocating resources, it is explicitly political, rather than cloaking itself in the discourse of science and rationality. It allows us to get outside of the concept of altruism entirely — a concept that feels limiting in its focus on the actions of the individual — and instead consider a paradigm of collective, democratic mutual aid. Most importantly, ineffective altruism allows us to ask harder questions than effective altruism does: questions about who and what we value.
What might “moral good” look like outside of market-derived values (like the maximization principle)? How can we collectively decide to allocate resources? How can we build societies based on principles that cannot be measured, like mutual respect and solidarity? How can we eliminate material misery from the world? What might we do to ensure the flourishing of future generations, rather than just their survival? How can we depart from a society where those who have the privilege to choose to care about others can, and move towards a society where everyone has the power to care about others and must?
People all over the world have been attempting to answer these questions for generations. After massive street protests in 2019 in Chile, 80% of the population voted to redraft the nation's constitution — an effort that is currently in progress and will be finalized this September. In Taiwan, Digital Minister Audrey Tang is building effective tools for forging consensus and making decisions online. Tang helped enable a highly effective set of COVID-19 policies that kept the disease largely outside Taiwan for more than two years, influenced what digital democracy looks like on the island, and inspired other online civic processes around the world. And in the United States, the last few years have seen rising interest in small-d democratic institutions like labor unions and mutual aid organizations. These efforts may be inefficient or messy or unpredictable, but they are good in part because of those facts, not in spite of them.
As we get some distance from effective altruism and longtermism, we can also begin to consider other ways of thinking about the long-term future. Our conceptions of the future inform our actions today, and the future is much too important to cede to an ideology with the ethos and rhetoric of longtermism. Seventh-generation decision making, for example, is an indigenous principle enshrined in the Constitution of the Iroquois Confederacy. It mandates that Iroquois leaders consider the effects of their actions over seven generations, encompassing hundreds of years. Seven generations is a long time, but it is also a finite amount of time. Although this framework prioritizes long-term thinking, it doesn't bring the weight of infinity to bear on the present. And unlike longtermism, the seventh-generation principle doesn't pretend to be scientific. It doesn't rely on unfalsifiable guesses about a future we can't even imagine to assign expected values to different political decisions; rather, it makes thinking about the future a moral imperative.
Philosopher Karl Popper wrote about the dangers of an exclusive focus on the utopian ideal of the far future over the material concerns of the present day:
We must not argue that a certain social situation is a mere means to an end on the grounds that it is merely a transient historical situation. For all situations are transient. Similarly we must not argue that the misery of one generation may be considered as a mere means to the end of securing the lasting happiness of some later generation or generations; and this argument is improved neither by a high degree of promised happiness nor by a large number of generations profiting by it. All generations are transient. All have an equal right to be considered, but our immediate duties are undoubtedly to the present generation and to the next.
When we critically examine effective altruism and longtermism, we can see them as falsely utopian ideologies cloaked in the opaque vocabulary of science and math. Let’s instead strive for a world where altruism doesn’t have to be maximally effective for it to be worthy, where doing good doesn’t have to be optimized, where morals aren’t a function of the market.
Reboot is a volunteer-run labor of love. To receive future essays and support our work, consider becoming a free or paid subscriber!
Hal Triedman is a technologist, journalist, and political organizer. He works as a privacy engineer at "the last best place on the internet" — Wikipedia — where he is working on differential privacy, algorithmic accountability, anti-censorship, and other issues at the junction of technology and policy. (Twitter, Goodreads)
Archana Ahlawat is a software engineer, investigative researcher, and writer. She is an incoming fellow at the Princeton Center for Information Technology Policy and has previously built social movement technology, worked on data analytics products, and researched surveillance technology. (Twitter, Goodreads)
🌀 microdoses
Tressie McMillan Cottom in the NYT on cancelling student loans! (this link should work even if you don’t have a sub)
aaahsdhfjahahahas this is satire right
did you hear there was a ML conference deadline last week
:/
💝 closing note
For some reason EA discourse always gets intense (and sometimes personal?!) — and I’m honestly surprised it took us this long to publish an essay on it. As always — would LOVE to hear responses and discussion, even followup essays!
Trying (not) to count the uncountable,
Reboot team
Just a few thoughts, forgive the long response:
1. EA isn’t just longtermism. I’m a moderately engaged member of the EA community, and the last EA event I went to was about ¼ Global Health, ¼ longtermist, ¼ meta-EA stuff and the rest other areas like animal welfare. If you look at EA funding, Global Health and Development and animal welfare are still very high, and that's what I work on mainly.
2. We don’t need to be longtermists to care about catastrophic and existential risks. Even if we just care about people alive today, or even just our own kids, then engineered pandemics, misaligned AI, and nuclear weapons are still major risks to their well-being in their lifetimes, and are very neglected areas. Nuclear risk reduction is critically underfunded (see Joan Rohlfing), and EAs like to quote the fact that, as of a couple of years ago, the global biological weapons convention had less funding than an average McDonald's. AI safety is less neglected, largely due to an influx of EA money, but as you note, AI safety funding is a rounding error next to the colossal commercial funding pushing for faster and better AI.
3. Interestingly, your supposedly ‘ineffective altruist’ question of ‘how can we ensure the flourishing of future generations, rather than just their survival?’ is a very longtermist question! If you’re not a longtermist, and just want to make sure that people alive today and in the near future don’t die before their time, you should probably only focus on catastrophic risks, but shouldn’t care that much about how the world flourishes a few hundred years from now. If you are a longtermist, the question of how to ensure long-term flourishing becomes far more important.
4. In my limited experience, EAs tend to have fairly diverse moral views; I generally see the world in terms of a more consequentialist ‘effective altruism’, in parallel with a ‘virtue ethics’ that governs the way that I behave as part of society. Most EAs would agree that we have other moral responsibilities: being honest, being respectful and kind to strangers, being loyal to our friends and family, and being a good citizen/community member. In The Precipice, possibly the most influential book on longtermism and EA, Toby Ord uses the virtue ethics framework to talk about civilizational virtues that humanity can nurture, and I think we should embrace that framing a bit more in EA.
5. EAs tend to be very aware of the opportunity costs of ineffective altruism. If I were to give a few thousand pounds to a local homeless charity (in a rich country) or some of the other causes you mention/ link to (e.g. bail funds for protestors), I know that that is a conscious decision not to save lives in a low-income country or reduce significant amounts of animal suffering. Of course, it's the same opportunity cost if you're donating to an x-risk charity, but I'd have thought that someone who made the argument that we should prioritise climate change more because it is likely to affect more people in the global south in the near-mid term future, would also think that we should prioritise saving more lives in the global south through health interventions (or cash transfers), over less effective rich-world charities.
6. Your claim that ‘ineffective altruism… might look like organizing to ensure that as many people as possible have access to basic material needs like food, housing, and healthcare’ doesn’t sound like ineffective altruism to me. It sounds like effective altruism. You’ve even added the optimising terminology in your example. Similarly, you pose the question ‘how can we eliminate material misery from the world?’. All you’re lacking is the phrase ‘most effectively/ quickly’, and you’ve asked a classic EA question. If you don't care about effectiveness or speed, I have trouble understanding your worldview.
7. I’d argue that you can measure most of the things you claim not to be able to measure. You can measure mutual respect, social cohesion and resilience (it's more complex but a quick google reveals countless articles on ways to measure them). EAs have a strong quantitative bias when it comes to measurement, and I’m someone who advocates for more use of qualitative methods, but I understand why we have this bias, especially with our background in the charity space, where many charities can use imprecise methodology and vague terminology to mask ineffectiveness.
8. You quote Audrey Tang as a counterpoint to EA, but Audrey seems like a very EA person. I don’t think she identifies as EA, but she has been on the 80,000 hours podcast, attended EA events, and many EAs interested in improved decision-making talk about her ideas.
9. I disagree strongly that ‘effective altruism and longtermism can be seen as falsely utopian ideologies cloaked in the opaque vocabulary of science and math’, especially the 'opaque' claim. Except for a few obscure philosophy papers, most of the 'core texts' for EA (Doing Good Better, The Precipice) require little more than a secondary school education. Most EA reasoning is very transparent, and it's definitely considered an 'epistemic virtue'. I understand that most people struggle with maths and stats, but if you understand these concepts to a fairly basic degree, a good cost-effectiveness analysis is a very transparent tool. I'd argue that little could be clearer than a well-phrased comparison between the effectiveness of a high-impact global health charity and a low-impact rich-world charity.
10. You imply that ineffective altruism would allow us to ask hard questions about who or what we value... I have never been, and can’t imagine being part of a community that cares more about asking these hard questions than the EA community. Questions about the nature of consciousness, how to think about suffering, human and non-human minds, questions that 99% of the world ignore, are things that seriously inform our discussions and worldviews. Most EAs agree with the idea of 'EA as a question' and can be obsessive about asking these questions.
11. The idea that ‘the seemingly limitless bounds of longtermism are ultimately a moral carte blanche on anything we do’ is definitely a challenging thought, which many EAs struggle with. You could reverse it: the limited bounds of near-termism are similarly a moral carte blanche on anything we do that could affect the long-term future, and it still feels uncomfortable. There are some difficult moral issues in how to prioritise longtermism and shorter time horizons (I personally work more on short-term cause areas), but I think the idea ‘Let’s just make sure we don’t destroy ourselves over the next century’ is relatively uncontroversial, and this is where the vast majority of longtermist funding and effort are going. EAs and longtermists can be the weirdos considering the possible welfare of infinite digital minds a century or so from now, but making sure we don't destroy ourselves is our main priority, and I'd hope that this is something we can all get on board with.
Interesting piece. If I may mildly paraphrase:
1) conventional charitable giving has poor accountability or legibility so it often does little-to-no Good however measured,
2) EA creates a layer of math for accountability and to insist Good is created by at least some measure,
3) longtermism stretches that math so much farther that accountability and Goodness creation become, once again, illegible,
4) iEA is a sort of reminder that Good creation is often an illegible p2p matter of communal responsibility: a centralized quantitative account can't encompass that model.
This strikes me as a story of the perils in choosing a metric. There's wisdom in adding mathematical legibility, but also in how metrics are quantitative means to a qualitative end.
Is that fair?