⚡️ Replies To: Towards Ineffective Altruism
What does it mean to make (and measure) change?
A few weeks ago, we published Hal’s essay, Towards Ineffective Altruism, and it generated lots of discussion (!) amongst our community members and readers. Today, we are excited to share a range of responses to the essay: from those within the EA community, from those adjacent to it, and from critics building on Hal’s piece.
🗯 reboot replies: towards ineffective altruism
Edited by Archana Ahlawat
in defense of EA
Aaron on how EA addresses current harms, from a non-longtermist perspective
I've been involved in the EA community for 10 years, and honestly I find it a particularly onerous and demanding philosophy, so I'm always looking for reasons to weasel out of it. 😅
Still, after all these years, I can't find a compelling counter to the argument that people are dying now, and you can do something about it. I've pledged to donate any income I earn in excess of $70k, not because I want to, but because I feel I ought to. I donate to GiveWell's Maximum Impact Fund (which largely gets distributed in the form of mosquito nets for children in the developing world), and I rest a little easier knowing that I'm doing my part to decrease the amount of suffering in the world.
Much of the article was about longtermism, which is still a minority view among EAs. It's a recent development in the movement, and its novelty and newfound funding are generating a lot of talk, but by and large, most people entering the EA space do so with a lens toward global health and animal welfare, and that's still where the majority of its funds are distributed.
Luke on the importance of paying attention to long-term existential risks
One part that stood out to me is the implication that people should not act based on minuscule probabilities of affecting tiny percentages of future people. Perhaps you think that longtermists are all acting based on crazy small odds, making plays based on numbers that are impossible to conceive. However, the average longtermist believes that the odds of a humanity-destroying event occurring this century are about one in six. I think it is likely that most individual actions that EAs take have a low chance of affecting the long-term future, but the odds of existential doom that we want to reduce are a much more understandable and sobering one in a handful.
A gut reaction to these odds is to dismiss them as impossible, or to internally anchor on a number that feels more intuitive. But in classic EA tradition, I challenge you to force yourself to come up with numbers using what you already know. What do you think are the chances of nuclear war wiping out humanity before the year 2100? What are the odds that we create superintelligent AI, and what are the odds that it kills or otherwise permanently cripples humanity? If your estimated chances of these plausible future events are substantially different from current prevailing EA wisdom, having that conversation would be useful for EAs and non-EAs alike.
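To see how quickly even conservative estimates compound, here is a toy calculation (the numbers are placeholders for illustration, not anyone's actual forecasts): if you put the chance of nuclear war wiping out humanity by 2100 at 1%, and the chance of a catastrophically misaligned superintelligent AI at 10%, then treating the two risks as roughly independent gives a combined risk of 1 − (1 − 0.01)(1 − 0.10) ≈ 0.109, already more than one in ten from just two scenarios.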
on positionality and changemaking
Scott on effectiveness as unique and individual
The biggest risk of optimizing our altruism through a single theoretical framework and shared movement such as effective altruism is convergence: we all address the same causes as everyone else, leaving untapped the ways each of us is uniquely positioned to impact our own communities.
Doing good should be less of an optimization problem and more like an orchestra: people playing the parts they are best suited to play. Contributing to a less "effective" cause might actually be more effective if it sparks your passion enough to go the extra mile, or if it is something you are uniquely able to impact.
One thought experiment is to apply Kant's Categorical Imperative (what if everyone did this?) and imagine two competing realities. In one, I fear the same few causes being overfunded and glorified; in the other, I imagine people engaging in local causes in their own backyards: the urbanist who fights for bike infra, the mom who volunteers to save wildlife, the startup founder who solves a problem for people.
The prescriptions of EA are a useful starting point in the absence of specific convictions — if you're unsure which instrument to play. But I challenge the reader to find the issues that matter to them and to avoid outsourcing doing good. Altruism isn't a checkbox; it's a way of life.
Samson on community context and local impact
Reading EA criticism recently, I found a question emerging beyond the usual “What is good?”: Good for whom? This is the question posed by many critical social science theories: what invisible power structures lie beneath supposedly transparent, objective definitions of value? These structures elevate the privileged and further marginalize the underprivileged, and they are the root cause of much of the poverty and violence that altruists aim to combat.
Lasting structural change comes not from altruists without context and community, but from those taking ownership of their positionality and driving non-altruistic change within their communities. "Your positionality, experiences and worldview limit the change you can cause. You always need to start there," a mentor of mine once said.
Changemaking with positionality in mind looks like helping organize mutual aid for peers and workers on campus, making Good in your own community, in the power structures in which you participate as a privileged student. It looks like treating prison abolition and preventing ICE deportations not as abstract issues attached to the political climate of the country, but as efforts to materially protect members of your community — your family, your neighbors, your city — against state violence.
Altruism is only one position — specifically, a “disinterested and selfless” one — among many. If it is treated as universally the *best* position to take on, then the possibility for mutual aid, community ownership, democracy, equality, and so much else that we value dies — replaced with the individualistic neoliberal shell that has failed us time and time again.
on (the aesthetics of) measurability
John on the importance and risks of metricization
I believe it is always possible to decouple 'what is good?' from 'how do we measure it?', and that this decoupling is essential if we want to produce interventions, programs, and policies that are effective and amenable to monitoring and evaluation. The difficulty lies in answering that first question and arriving at a description of the values shared by a group of people.
As an example, Hal asks, "how can we build societies based on principles that cannot be measured, like mutual respect and solidarity?", but I would argue that we can develop definitions and metrics for these principles. An initial approach to this might be through surveys with questions like, "how many disagreements did you have in the last week?", "did you get angry during a conversation recently?", and "how have you supported people in your communities this month?".
A potential hazard lies in focusing too closely on the metrics. Trying to optimize them too rigorously can have negative consequences: society is a complex system, so there are inevitable trade-offs which may go unobserved once a policy or action is implemented. In particular, extreme optimization usually comes at the cost of resilience, and systems become more prone to catastrophic failure when something goes wrong. This means we should be mindful not to become myopic in pursuing our goals.
Deb on the deficiency of relatable and grounded aesthetics in EA
Aesthetics play a much more important role in the argument for ineffective altruism than Hal’s piece might suggest. EA is rife with tales of adherents who will forgo any aesthetic pleasure in their own lives to benefit some marginal or abstract good.
In Michael Nielsen’s recent EA piece, he writes that “EA tends toward a very instrumental and particular aesthetic. Interesting to consider in the frame of art: historically, primarily instrumental approaches to art nearly always result in bad art.” Taking EA to its extreme, one might imagine an EA confronted with a piece of art saying, “what a waste of our marginal and scarce resources.” This is an incredibly difficult stance to argue with, simply because it’s not the experience most people have when they look at art. There’s no common ground for discussion, because discussion requires aesthetics, and aesthetics requires feelings, taste, and imagination.
EA’s instrumental, ends-justify-the-means aesthetic is not grounded in the way most people actually experience and relate to the world. What Nielsen calls “EA judo” - a phenomenon wherein debates over the EA worldview are absorbed into debates over particular “most good” strategies - seems to be a direct consequence of EA’s lack of relatable aesthetics. As long as EA’s proponents fail to consider the impact of shared aesthetics rooted in actual subjective human experience, it will remain insufficient and unconvincing as a social movement.
Jessica on quantifying life
Something I find compelling about “ineffectiveness” is that it moves beyond abstract human lives as the core unit of analysis, in the way that EA tends to use them. In many ways I’m sympathetic to this approach — that no individual life is intrinsically more valuable than another — yet it’s also this exact framing that can be dangerous in practice. People are different, experience life differently, and move through the world with different levels of power — and the EA-flavored aggregation of human experience into just a metric of lives papers over these differences.
The most "effective" thing to do will almost never be to address inequality at any scale other than the most absolute as measured by a raw body count — hence mosquito nets — nor will it ever be to address subjective experience, or differentials in subjective experience. Working on racial justice, or making sure our neighbors are housed and fed, will never be the most “effective”. At its extreme, the generic abstraction of “human lives” makes it easy to ignore justice and distributional concerns, to slip onto a path to what I see as reprehensible outcomes: I can't help but think about how easily a rhetoric of “effectiveness” for the “greater good of humanity” can be reappropriated.
Ineffective altruism to me, then, is also fundamentally about going beyond sterile computations of human beings as atomic units, and thinking more deeply about relations between people, relations between individuals and institutions. And I would like to consider subjective experience, which I personally find to be crucially constitutive of what makes life wonderful: how it feels to take a walk in the park, how it feels to go dancing with friends.
contributor bios
Aaron Mayer is the founder of Impact Labs, a community organization dedicated to the use of technology for social good. He now runs a crypto hedge fund and donates the majority of his performance fee to GiveWell. ✨
Luke Sallmen loves people. He works on reducing AI x-risk at Redwood Research.
Scott Fitsimones is an urbanist, technologist, and startup founder. He is passionate about building better cities and using blockchain to create fairer systems. He is the founder of CityDAO, a community-owned experiment in blockchain-based land ownership that owns 40 acres of land in Wyoming.
Samson Zhang is a Physics and Media Studies major at Pomona College in LA and an editor of the nonprofit AAPI news publication The Yappie. Originally from New York, they also enjoy building notetaking and blogging webapps for themself and friends.
John Preston (they/them) is an Analyst at the UK Ministry of Justice and a PhD student developing co-production approaches to policy simulation for local whole-systems approaches to public health. They are interested in reducing and eliminating marginalization, and in improving the quality and transparency of decision-making in socially powerful organizations.
Deblina Mukherjee is a recent graduate from the University of Chicago.
Jessica Dai is a soon-to-be PhD student at Berkeley and (usually) editor of the Reboot newsletter.
Reboot publishes free essays on tech, humanity, and power every week. If you liked this or want to keep up with the community, subscribe below ⚡️
🌀 microdoses
on the topic of longtermism, x-risk, and AI safety… a themed set of microdoses today. context: a google engineer claims that LaMDA, a language model, is “sentient” (or at least exhibits some signs of sentience?). these are all threads — click through them and read past the first tweet!!
💝 closing note
More book events coming soon :)
Practicing for the Turing test,
Reboot team