Effective altruism

My problem with effective altruism is when it’s taken to extremes and used to justify almost any behavior.
“Instead of donating 10 euros now, I invest that money in company X, even though I know they cause harm, because in 10 years I’ll have more money to donate.”
This logic makes short-term suffering acceptable – even logical – because it supposedly enables long-term benefit.
It ‘justifies’ living your life as a ruthless capitalist, making money whatever the cost, as long as you ‘intend’ to spend it on good causes at the end of your life.
Worse: you frame yourself as a better person than the modest loser who donates time and money to charity right now.
Worse still is the idea of taking future humans into account, as Musk sometimes does: “For the benefit of the future human race, I will spend my money on whatever project I deem worthy instead of trying to solve current human suffering.”
There is value in some of effective altruism’s ideas. But beware that in a lot of cases, they function as a veiled justification for greed and hedonism.
There is already enough money and food in the world for no-one to live in poverty, war or famine. It’s just distributed unfairly. We don’t need to wait to do good. We don’t need the greedy to accumulate more wealth on the promise that they’ll distribute it fairly ‘later’.

This is a strong critique, and it targets a real fault line inside effective altruism (EA), not a strawman.

A few ways to sharpen and situate what you’re saying—without discarding EA wholesale:

  1. You’re critiquing a moral permission slip, not cost-effectiveness itself. At its best, EA asks: given limited resources, how can I reduce suffering the most? What you’re objecting to is when that turns into: as long as my spreadsheet points upward, my present actions are morally insulated. That’s a category mistake—optimisation replacing moral constraint.

  2. Long-termism becomes dangerous when it severs responsibility from causation. The “invest in harm now for greater good later” logic treats present victims as expendable inputs. This mirrors classic problems in utilitarianism: once outcomes dominate, means quietly disappear. Intention plus uncertainty is doing far too much moral work.

  3. ‘Earn to give’ collapses when it ignores moral side-constraints. There’s a big difference between:
     • earning money in morally neutral ways and donating later, and
     • actively participating in harm while promising future compensation.

The second isn’t altruism with a delay—it’s moral debt financing, with no guarantee the debt is ever repaid.

  4. Status inversion is a real and corrosive effect. You’re right that some EA rhetoric implicitly frames:
     • the person relieving suffering now as naive or inefficient, and
     • the person accumulating power and capital as morally superior.

That inversion isn’t just insulting—it actively discourages solidarity, humility, and collective action.

  5. Long-termism can become elite moral paternalism. Appeals to “future humanity” (as with Musk-style reasoning) often:
     • override the expressed needs of current people,
     • concentrate moral authority in the hands of the powerful, and
     • assume that today’s elites will choose wisely for tomorrow’s billions.

That’s not neutral forecasting—it’s a political stance disguised as ethics.

  6. Your final point is crucial: scarcity is not the core problem. If poverty and famine were primarily problems of insufficient future wealth, then delay might make sense. But they’re largely problems of:
     • distribution,
     • power,
     • institutions, and
     • political will.

Deferring action while wealth concentrates further is not prudence—it entrenches the very structures causing harm.

Where this leaves EA (charitably): Effective altruism is most defensible when it:
  • treats non-harm as a baseline constraint,
  • values present suffering as morally urgent, not discountable,
  • rejects moral self-licensing via future intentions, and
  • recognises that justice and distribution aren’t optional “inefficiencies.”

In short: Doing good later does not excuse doing harm now. Optimisation is not absolution.

If you want, I can help you rewrite this as: • a tighter philosophical critique, • an essay-style argument, or • a shorter version suitable for discussion or publication.

