Discussion about this post

Torches Together:

Just a few thoughts, forgive the long response:

1. EA isn’t just longtermism. I’m a moderately engaged member of the EA community, and the last EA event I went to was roughly ¼ Global Health, ¼ longtermist, ¼ meta-EA, and the rest other areas like animal welfare. If you look at EA funding, Global Health and Development and animal welfare still receive a large share, and those are mainly what I work on.

2. We don’t need to be longtermists to care about catastrophic and existential risks. Even if we only care about people alive today, or even just our own kids, engineered pandemics, misaligned AI, and nuclear weapons are still major risks to their well-being within their lifetimes, and they remain very neglected areas. Nuclear risk reduction is critically underfunded (see Joan Rohlfing), and EAs like to quote the fact that, as of a couple of years ago, the Biological Weapons Convention had a smaller budget than an average McDonald’s. AI safety is less neglected, largely due to an influx of EA money, but as you note, AI safety funding is still a rounding error next to the colossal commercial spending pushing faster and better AI.

3. Interestingly, your supposedly ‘ineffective altruist’ question of ‘how can we ensure the flourishing of future generations, rather than just their survival?’ is a very longtermist question! If you’re not a longtermist and just want to make sure that people alive today and in the near future don’t die before their time, you should probably focus only on catastrophic risks, and shouldn’t care much about how the world flourishes a few hundred years from now. If you are a longtermist, the question of how to ensure long-term flourishing becomes far more important.

4. In my limited experience, EAs tend to have fairly diverse moral views; I generally see the world in terms of a more consequentialist ‘effective altruism’, in parallel with a ‘virtue ethics’ that governs the way I behave as part of society. Most EAs would agree that we have other moral responsibilities too: being honest, being respectful and kind to strangers, being loyal to our friends and family, and being a good citizen and community member. In The Precipice, possibly the most influential book on longtermism and EA, Toby Ord uses a virtue ethics framework to talk about civilizational virtues that humanity can nurture, and I think we should embrace that framing a bit more in EA.

5. EAs tend to be very aware of the opportunity costs of ineffective altruism. If I were to give a few thousand pounds to a local homeless charity (in a rich country) or some of the other causes you mention or link to (e.g. bail funds for protestors), I know that this is a conscious decision not to save lives in a low-income country or reduce significant amounts of animal suffering. Of course, the same opportunity cost applies if you're donating to an x-risk charity. But I'd have thought that someone who argues we should prioritise climate change because it is likely to affect more people in the global south in the near-to-mid-term future would also think we should prioritise saving lives in the global south through health interventions (or cash transfers) over less effective rich-world charities.

6. Your claim that ‘ineffective altruism… might look like organizing to ensure that as many people as possible have access to basic material needs like food, housing, and healthcare’ doesn’t sound like ineffective altruism to me. It sounds like effective altruism. You’ve even added the optimising terminology (‘as many people as possible’) in your example. Similarly, you pose the question ‘how can we eliminate material misery from the world?’. All you’re lacking is the phrase ‘most effectively/quickly’, and you’ve asked a classic EA question. If you don't care about effectiveness or speed, I have trouble understanding your worldview.

7. I’d argue that you can measure most of the things you claim can’t be measured. You can measure mutual respect, social cohesion, and resilience (it's more complex, but a quick Google search reveals countless articles on ways to measure them). EAs have a strong quantitative bias when it comes to measurement, and although I’m someone who advocates for more use of qualitative methods, I understand why we have this bias, especially given our background in the charity space, where many charities use imprecise methodology and vague terminology to mask ineffectiveness.

8. You quote Audrey Tang as a counterpoint to EA, but Audrey seems like a very EA person. I don’t think she identifies as an EA, but she has been on the 80,000 Hours podcast, attended EA events, and many EAs interested in improving decision-making discuss her ideas.

9. I disagree strongly that ‘effective altruism and longtermism can be seen as falsely utopian ideologies cloaked in the opaque vocabulary of science and math’, especially the 'opaque' claim. Except for a few obscure philosophy papers, most of the 'core texts' of EA (Doing Good Better, The Precipice) require little more than a secondary school education. Most EA reasoning is very transparent, and transparency is definitely considered an 'epistemic virtue'. I understand that most people struggle with maths and stats, but if you understand these concepts even to a fairly basic degree, a good cost-effectiveness analysis is a very transparent tool. I'd argue that little could be clearer than a well-phrased comparison between the effectiveness of a high-impact global health charity and a low-impact rich-world charity (see the toy sketch after this list).

10. You imply that ineffective altruism would allow us to ask hard questions about who or what we value... I have never been part of, and can’t imagine being part of, a community that cares more about asking these hard questions than the EA community. Questions about the nature of consciousness, how to think about suffering, and human and non-human minds, questions that 99% of the world ignores, seriously inform our discussions and worldviews. Most EAs agree with the idea of 'EA as a question' and can be obsessive about asking these questions.

11. The idea that ‘the seemingly limitless bounds of longtermism are ultimately a moral carte blanche on anything we do’ is definitely a challenging thought, one that many EAs struggle with. You could reverse it: the limited bounds of near-termism are similarly a moral carte blanche on anything we do that could affect the long-term future, and that feels just as uncomfortable. There are some difficult moral issues in how to prioritise between longtermism and shorter time horizons (I personally work more on short-term cause areas), but I think the idea ‘let’s just make sure we don’t destroy ourselves over the next century’ is relatively uncontroversial, and that is where the vast majority of longtermist funding and effort is going. EAs and longtermists can be the weirdos considering the possible welfare of infinite digital minds a century or so from now, but making sure we don't destroy ourselves is our main priority, and I'd hope that this is something we can all get on board with.
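To make the comparison in point 9 concrete (and the opportunity cost in point 5), here is a toy sketch in Python. The donation size and cost-per-life figures are hypothetical placeholders chosen for illustration, not real charity evaluations:

```python
# Toy cost-effectiveness comparison. All figures are hypothetical
# placeholders, not real charity evaluations.

donation_gbp = 3_000  # "a few thousand pounds", as in point 5

# Assumed cost (£) to save one life via each option; illustrative only.
cost_per_life = {
    "high-impact global health charity": 4_000,
    "low-impact rich-world charity": 400_000,
}

lives_saved = {name: donation_gbp / cost for name, cost in cost_per_life.items()}

for name, lives in lives_saved.items():
    print(f"£{donation_gbp:,} to a {name}: ~{lives:.3f} lives saved")

# The opportunity cost of choosing the second option over the first is
# simply the difference in outcomes.
gap = (lives_saved["high-impact global health charity"]
       - lives_saved["low-impact rich-world charity"])
print(f"Lives forgone by choosing the rich-world option: ~{gap:.3f}")
```

Under these assumed numbers the same donation saves roughly a hundred times more lives at the first charity; the point is not the specific figures but how legible the trade-off becomes once both options are stated in the same units.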

Wyatt Allen:

Interesting piece. If I may mildly paraphrase:

1) conventional charitable giving has poor accountability and legibility, so it often does little to no Good, however measured;

2) EA adds a layer of math for accountability, to insist that Good is created by at least some measure;

3) longtermism stretches that math so much farther that accountability and the creation of Good become, once again, illegible;

4) iEA is a sort of reminder that creating Good is often an illegible, peer-to-peer matter of communal responsibility: a centralized quantitative account can't encompass that model.

This strikes me as a story about the perils of choosing a metric. There's wisdom in adding mathematical legibility, but also in remembering that metrics are quantitative means to a qualitative end.

Is that fair?

