15 Comments

Just a few thoughts, forgive the long response:

1. EA isn’t just longtermism. I’m a moderately engaged member of the EA community, and the last EA event I went to was about ¼ Global Health, ¼ longtermist, ¼ meta-EA, and the remaining quarter other areas like animal welfare. If you look at EA funding, Global Health and Development and animal welfare still receive a very large share, and that's what I mainly work on.

2. We don’t need to be longtermists to care about catastrophic and existential risks. Even if we just care about people alive today, or even just our own kids, then engineered pandemics, misaligned AI, and nuclear weapons are still major risks to their well-being in their lifetimes, and are very neglected areas. Nuclear risk reduction is critically underfunded (see Joan Rohlfing), and EAs like to quote the fact that, as of a couple of years ago, the Biological Weapons Convention had a smaller budget than an average McDonald's restaurant. AI safety is less neglected, largely due to an influx of EA money, but as you note, AI safety funding is a rounding error compared to the colossal commercial funding pushing for faster and more capable AI.

3. Interestingly, your supposedly ‘ineffective altruist’ question of ‘how can we ensure the flourishing of future generations, rather than just their survival?’ is a very longtermist question! If you’re not a longtermist, and just want to make sure that people alive today and in the near future don’t die before their time, you should probably focus only on catastrophic risks, and needn't care much about how the world flourishes a few hundred years from now. If you are a longtermist, the question of how to ensure long-term flourishing becomes far more important.

4. In my limited experience, EAs tend to have fairly diverse moral views; I generally see the world in terms of a more consequentialist ‘effective altruism’, in parallel with a ‘virtue ethics’ that governs the way I behave as part of society. Most EAs would agree that we have other moral responsibilities: being honest, being respectful and kind to strangers, being loyal to our friends and family, and being a good citizen and community member. In The Precipice, possibly the most influential book on longtermism and EA, Toby Ord uses the virtue ethics framework to talk about civilizational virtues that humanity can nurture, and I think we should embrace that framing a bit more in EA.

5. EAs tend to be very aware of the opportunity costs of ineffective altruism. If I were to give a few thousand pounds to a local homeless charity (in a rich country) or to some of the other causes you mention or link to (e.g. bail funds for protestors), I know that that is a conscious decision not to save lives in a low-income country or reduce significant amounts of animal suffering. Of course, the same opportunity cost applies if you're donating to an x-risk charity. But I'd have thought that someone who argues we should prioritise climate change because it is likely to affect more people in the global south in the near-to-mid-term future would also think we should prioritise saving lives in the global south through health interventions (or cash transfers) over less effective rich-world charities.

6. Your claim that ‘ineffective altruism… might look like organizing to ensure that as many people as possible have access to basic material needs like food, housing, and healthcare’ doesn’t sound like ineffective altruism to me. It sounds like effective altruism. You’ve even included optimising language (‘as many people as possible’) in your example. Similarly, you pose the question ‘how can we eliminate material misery from the world?’. All you’re lacking is the phrase ‘most effectively/quickly’, and you’ve asked a classic EA question. If you don't care about effectiveness or speed, I have trouble understanding your worldview.

7. I’d argue that you can measure most of the things you claim not to be able to measure. You can measure mutual respect, social cohesion, and resilience (it's more complex, but a quick Google search reveals countless articles on ways to measure them). EAs have a strong quantitative bias when it comes to measurement, and I’m someone who advocates for more use of qualitative methods, but I understand why we have this bias, especially given our background in the charity space, where many charities use imprecise methodology and vague terminology to mask ineffectiveness.

8. You quote Audrey Tang as a counterpoint to EA, but Audrey seems like a very EA person. I don’t think she identifies as EA, but she has been on the 80,000 Hours podcast, attended EA events, and many EAs interested in improved decision-making talk about her ideas.

9. I disagree strongly that ‘effective altruism and longtermism can be seen as falsely utopian ideologies cloaked in the opaque vocabulary of science and math’, especially the 'opaque' claim. Except for a few obscure philosophy papers, most of the 'core texts' of EA (Doing Good Better, The Precipice) require little more than a secondary school education. Most EA reasoning is very transparent, and transparency is definitely considered an 'epistemic virtue'. I understand that most people struggle with maths and stats, but if you understand these concepts to a fairly basic degree, a good cost-effectiveness analysis is a very transparent tool. I'd argue that little could be clearer than a well-phrased comparison between the effectiveness of a high-impact global health charity and a low-impact rich-world charity (see the illustrative sketch at the end of this comment).

10. You imply that ineffective altruism would allow us to ask hard questions about who or what we value... I have never been, and can’t imagine being, part of a community that cares more about asking these hard questions than the EA community. Questions about the nature of consciousness and how to think about suffering in human and non-human minds, questions that 99% of the world ignores, seriously inform our discussions and worldviews. Most EAs agree with the idea of 'EA as a question' and can be obsessive about asking these questions.

11. The idea that ‘the seemingly limitless bounds of longtermism are ultimately a moral carte blanche on anything we do’ is definitely a challenging thought, which many EAs struggle with. You could reverse it: the limited bounds of near-termism are similarly a moral carte blanche on anything we do that could affect the long-term future, and that feels just as uncomfortable. There are some difficult moral issues in how to prioritise longtermism against shorter time horizons (I personally work more on short-term cause areas), but I think the idea ‘let’s just make sure we don’t destroy ourselves over the next century’ is relatively uncontroversial, and this is where the vast majority of longtermist funding and effort are going. EAs and longtermists can be the weirdos considering the possible welfare of infinite digital minds a century or so from now, but making sure we don't destroy ourselves is our main priority, and I'd hope that this is something we can all get on board with.
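To make point 9 concrete, here is a minimal sketch of the kind of transparent comparison I mean. Every name and figure below is invented purely for illustration; none of it is real charity data:

```python
# Hypothetical cost-effectiveness comparison. The charity names and all
# figures below are invented for illustration, not real data.
charities = {
    "hypothetical global health charity": {"cost": 1_000_000, "lives_saved": 200},
    "hypothetical rich-world charity": {"cost": 1_000_000, "lives_saved": 4},
}

for name, c in charities.items():
    # cost per life saved = total programme cost / lives saved
    print(f"{name}: ${c['cost'] / c['lives_saved']:,.0f} per life saved")
```

The whole analysis is a division; anyone can check the inputs and redo the arithmetic, which is exactly what I mean by transparency.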

May 22, 2022 · Liked by Archana Ahlawat, Hal Triedman

Interesting piece. If I may mildly paraphrase:

1) conventional charitable giving has poor accountability or legibility, so it often does little to no Good, however measured,

2) EA adds a layer of math for accountability, and to insist that Good is created by at least some measure,

3) longtermism stretches that math so much farther that accountability and Goodness creation become, once again, illegible,

4) iEA is a sort of reminder that Good creation is often an illegible, peer-to-peer matter of communal responsibility: a centralized quantitative account can't encompass that model.

This strikes me as a story about the perils of choosing a metric. There's wisdom in adding mathematical legibility, but also in remembering that metrics are quantitative means to a qualitative end.

Is that fair?

May 22, 2022 · Liked by Archana Ahlawat, jessica dai, Hal Triedman

I think there's room for a hybrid path between the best of near-term, data-driven change and the 'ineffective' stuff you've listed, but I really loved this article! I didn't know anything about longtermism, but I haven't liked EA for a while.


I think you are throwing out the baby with the bathwater. Even if you don't agree with longtermism, you don't have to abandon effectiveness.

Effective altruism has three main areas: longtermism, animal welfare, and global wellbeing. No matter which area you want to focus on, I think it is vital to focus on effectiveness. Let's say you want to do something good with $10. If giving $10 to a person without a house helps her less than donating $10 to the Deworm the World Initiative helps the child who would receive deworming pills, wouldn't it probably be better to donate the money there?

To agree that helping many people is better than helping a few, you don't have to be a rich, white, capitalist utilitarian (the effective altruism community is happy to admit that we are not diverse enough and are mostly rich, white, male utilitarians). Basically every moral theory agrees that doing more good is better than doing a little good; whether you believe in virtue ethics, deontology, or something else, more good is probably better than a little good. So effective altruism just says: do what does the most good.

EA started by comparing charities through randomised controlled trials and looking at which charities perform better. There are some problems with randomised controlled trials, though. For example, influencing policy is hard to evaluate through these experiments, but the EA movement has thought of that too.

Take Audrey Tang, for example, whom you mentioned in your post. She was interviewed on an EA podcast recently, and what she does, strengthening democracy, is one of EA's priorities. I haven't listened to the interview with her yet, but I would guess that her work can be considered as falling under the umbrella of EA.

In some sense, everyone who wants to help other people or animals, and who doesn't just do what they intuitively think of but what actually helps others the most, is an effective altruist.

If you are a biologist researching how to stop the next pandemic, if you are performing very cost-effective surgeries restoring people's eyesight, if you are doing a normal job and just donating 10% of your income to great charities, or if you are an advocate for criminal justice reform: if you have spent time methodically thinking about what does the most good and you are doing that, you are an effective altruist.

Ineffective altruism would be terrible. It would consist of building opera houses in Nigeria instead of giving people seasonal malaria treatment; of buying homeless people chocolate instead of investing in healthcare; of lobbying for more parks instead of lobbying for criminal justice reform.

I don't want to speculate too much about you, but maybe you just don't like the aesthetic of EA: it is fairly left-wing, but probably still further right than you; its members are mostly very privileged and white; and maybe they are the kind of people who like Elon Musk, as opposed to the people who don't. But that shouldn't stop you from embracing effectiveness.

-sorry for any spelling mistakes ; )


Hi Hal - just wanted to point out a small factual error. You write "80,000 Hours released a piece that argues for effective altruists to not focus their careers on climate change", but the article you've linked to says "we’d love to see more people working on this issue"! (They're definitely arguing that there are *better* things for many people to be doing, but that's not the same as saying people shouldn't work on climate change.)


Thanks for the article, Hal. I had a few thoughts while reading that I’d like to comment on. To be transparent, I’ve been involved with EA for a while, so that’s the personal background this is coming from. Down to discuss any thoughts/questions below.

In no particular order:

- I think it’s accurate that longtermism is very popular in EA right now. A lot of the energy and focus at EA global conferences right now is directed towards x-risks, which longtermists tend to care most about.

- I am also concerned/confused about big tech companies investing so heavily in fundamental AI research / AGI research. I would think that the proper vehicle for doing AI safety research would be a non-profit research institute, and I am skeptical of the reasons for so much AI safety research occurring in for-profit institutions. I’ve thought relatively little about this though.

- “Billions of dollars per year from the wealthy tech elite used to convince people to go vegan or to give to non-religious health NGOs or to end factory farming is not, on its face, a bad thing.” - statements like this seem weirdly pessimistic to me. It is an amazing thing that people give billions of dollars toward non-religious health NGOs and ending factory farming!

- “After all, how can one quantify the resiliency of a particular neighborhood?” - I have trouble understanding what people think is *prohibitively* difficult about doing things like this. What many social scientists would suggest in this situation is to come up with a proxy for resiliency (maybe using a survey to measure self-reported feelings of resilience, or stability, or well-being, or togetherness), then figure out how to measure the effect of the “ephemeral work of knitting” intervention on that proxy metric. Maybe it’s a randomized controlled study, maybe an observational study (a minimal sketch of what this could look like follows this list). I often have the feeling, when someone suggests that something isn’t measurable, that there are usually clear but difficult paths to measuring it.

- I think the point about falsifiability is one of the best critiques of longtermism and the case for x-risk! I’m very sympathetic to anyone who doesn’t think x-risks are a big deal because they think it is hard to falsify any estimate of their risk/tractability/value. Of course, everyone bears the burden of providing good evidence for the value of their favored problems to work on. This is why any social movement/ideology/idea, ineffective altruism included, has to justify its beliefs. I get the sense that ineffective altruism (just using the term, not taking it too literally) will have a hard time convincing anyone of its value if it considers the problems it works on to have (literally) immeasurable importance. How do you convince someone to work on, e.g., “organizing to ensure that as many people as possible have access to basic material needs” rather than, say, union-busting to protect the interests of corporations? In other words, how do you convince people to work on the things you want them to work on, when they currently care about something that you think is bad?

- “They also show an understanding of one’s responsibilities to their community, how strengthening community is also important for our shared future, even if it isn’t measurable.” - Right, so based on this I think you care about responsibilities/duties. How do those weigh against other concerns, like how much you can improve human welfare through your actions? Can you trade off “this action is more aligned with my duties” against “this action is better for human welfare”? I actually knew someone once who was a deontologist and into effective altruism, so trust me, I’ll believe whatever answer you give to this.

- “Seven generations is a long time, but it is also a *finite* amount of time.” - Is the finite vs. infinite distinction meant to be the important thing here? Maybe I’m not getting the point. Otherwise, it seems like if you endorse this principle, you do care about future people to some extent. But why? Why not adopt Popper’s stance that we *shouldn’t* weigh the interests of future people against present people?
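To make the resiliency-measurement bullet above concrete, here is a minimal sketch of the kind of study I have in mind, using simulated survey data (everything here, including the effect size, is invented for illustration):

```python
# Minimal sketch: estimating an intervention's effect on a proxy for
# neighborhood resiliency (say, a 1-10 self-reported survey score).
# All data below is simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated survey scores from randomly assigned neighborhoods.
control = rng.normal(loc=5.0, scale=1.5, size=200).clip(1, 10)
treated = rng.normal(loc=5.6, scale=1.5, size=200).clip(1, 10)  # assumed effect

effect = treated.mean() - control.mean()  # difference in means
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"estimated effect on the proxy: {effect:.2f} points (p = {p_value:.3f})")
```

A proxy like this never captures the construct perfectly, but it shows that the path from "unmeasurable" to "measurable, with caveats" is usually shorter than people assume.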

May 27, 2022 · edited May 27, 2022

I think you've set up an interesting and productive dichotomy between effective and ineffective altruism, but I part ways with you at the insistence that we have to pick one or the other. They're both legitimate impulses: on the one hand, I don't want to donate to malaria prevention only to have most of the money siphoned off to pay self-dealing consultants; on the other hand, some of Nick Bostrom's conclusions seem obviously crazy and inhumane.

Wisdom here, I suggest, consists of not becoming so committed to an abstract view of the good that you lose sight of what's concretely helpful. Prioritizing the future inhabitants of Dyson spheres over poverty today counts as losing sight of that, but so does opposing pandemic prevention because the tech bros like it too much. (Not to imply that you are doing this second thing, but I've certainly seen some Twitter discourse that is.)

May 23, 2022 · edited May 23, 2022

Thanks for the article! Would you mind explaining in a bit more detail what you oppose about maximization itself?
