I find there's something... untrustworthy about effective altruism as a movement, but I've yet to be able to describe exactly what. Perhaps it's that effective altruists seem to spend so much time talking about how they're going to make money in order to give it away, and relatively less time talking about where they're going to give it.
There’s a psychological effect where people who say they’re going to do something good (“I’m going on a diet next week”, “This year I’ll start working out”, “I’m going to cut back on drinking after tonight”, etc.) get the dopamine reward as though they’ve already done the thing. For this reason I avoid ever telling anyone when I’m making a healthy change to my lifestyle. I’ll even outright deny I’m doing anything new. I don’t want to get the instant gratification for something that necessarily takes work to do right.
I think the EA people are trying to cash out early on a potential future. And then once they get patted on the back they don’t really need to give anyone anything.
Just to add on to this, I think one of the things that I find kind of unsavory about EA people is that they occasionally have the idea that they will do these high paying jobs and maximize their compensation with the justification of the good they are doing. But the irony is that those high paying jobs do a lot to suck away value from the economy, and the core idea of EA is that it's supposed to give the best "bang for your buck" - so it's effectively an excuse to give less away. It's more like good PR.
Granted this is not everyone, but it's enough that it's bothersome.
On the other hand, there are high-profile people in EA who give a substantial percentage of their income every year.
(I would also ask what's the base rate on this. What do average people do?)
I think the average for Americans is something like 1-2% of pre-tax income.
Scott Alexander wrote a blog post defending EA today that has one statistic:
I checked this in an old SSC survey, and the non-EAs (n = 3118) donated an average of 1.5%, compared to the EA (n = 773) donating an average of 6%.
Of course that's among his readers, so it probably doesn't generalize.
I think if you surveyed certain religious communities then you'd see higher-than-average charitable donations too. That doesn't necessarily mean they're good. A statistic like this isn't going to convince a suspicious person that they're not some kind of cult; if anything it's evidence that EAs act differently than "normal" people, and different is often seen as bad.
Even the Wikipedia page for them goes over plenty of ways EA has gone wayward: becoming a culty "movement" that ends up serving as cover for bad behavior, its alignment with Silicon Valley, and how it went from frugality to opulence with the arrival of big donors. Its insistence on focusing on the wrong types of "AI safety" (the big scary apocalypse vs. valid short/immediate-term harms) doesn't help either.
Personally, I don't like its seemingly very cold and hyper-rationalist approach that leads to hypothetical philosophical conclusions like this:
In a 2015 debate, when presented with a similar scenario of either saving a child from a burning building or saving a Picasso painting to sell and donate the proceeds to charity, MacAskill responded that the effective altruist should save and sell the Picasso.
While I can understand the type of thinking that results in such answers, I'm uncomfortable with that type of thinking always influencing altruistic decisions. Even in a hypothetical problem presented in a debate, the EA focus is money and what hypothetically can be accomplished with it long-term rather than preventing an immediate harm. I think altruism is more complex than a cold intellectual math problem to be solved.
I find these kinds of hypotheticals somewhat unfair because of how unrealistic they're likely to be. The actual odds anything remotely like that could happen are small, and it's one of those "gotcha" style scenarios to show the flaw in some belief, which is pretty easy to do with almost anything.
That said, it's just as easy to say "save the kid because they'll likely create more value than the Picasso over the course of their life", and it just shows how out of touch these people are that they can't even come up with such an answer.
That’s a good point, too! The painting already exists; the only way to get money from it is to extract that money from someone else. The painting itself provides/creates no value, so the choice should always be the child, because they can create new things of value rather than just extracting value from a thing that already exists.
It’s also not a very interesting scenario. When will you be able to sell a painting because you saved it from a fire? I’d rather we test someone’s ethics by giving them realistic problems to solve.
I like all of the responses thus far, but this is particularly good.
I think the main premise of effective altruism is bent. Generally, to amass the kind of wealth effective altruism idealizes, you have to have a pretty precarious relationship with workers' rights, monopoly business practices, and corporate regulatory capture - all things that generally make life worse for the "common person" of a country. Donating through a charity means a tax write-off, a say in which "issues" are worthy of funding, and the benefits afforded to those who "give" - political sway, gifts, positions, etc.
It all reeks of the "gifts to humanity" from Getty, Carnegie, and Rockefeller. Their donations in the forms of libraries, theaters, and museums would have been better spent as additional wages to their severely underpaid employees. I'm not in for another era where capital skewers the middle and lower classes with justifications of benevolence and collective benefit. SBF is absolutely the poster child this movement deserves.
I think it’s fair to say that a lot of charity work takes wealth as a given - they’re starting with the world as it is and thinking about how to make it better, in an incremental way.
There are incentives for people who are very interested in charity to end up working with wealthy people who have a lot of money to give away. Here’s how it works: GiveWell uses “money moved” as a key metric. They had some success publishing recommendations for where to make charitable donations. They can measure the donations that flow through them and also ask donors to report what they give on their own.
So far, so good. But then a billionaire comes along (in particular, Dustin Moskovitz, a co-founder of Facebook, along with his wife) who is interested in what they’re doing and asks for advice about how to give his fortune away, and then it turns out that’s going to move more money than anything else they can do. So they set up a whole separate organization (Open Philanthropy) that’s funded by the billionaire. This was so that GiveWell could remain focused on its original mission of advising smaller donors.
Wanting to control more money is hardly unique to effective altruism. Other big charities are largely funded by wealthy donors too. There are charities that have more small donors, but it often happens that funding from big donors swamps the small ones. Sometimes they use “matching funds” to convince small donors to contribute too.
I’m not sure what’s to be done about that? Should they not take the money from big donors? Some politicians do make a pledge to take campaign contributions in small donations only, but I think that’s because they hope it will convince people to vote for them, and a charity doesn’t have that problem.
Even if you don’t like where the money came from, the charity isn’t in control of that. The framing is, “here’s a large pot of money. What can we do?” Especially for the older charities, because the original founders (and the workers) are dead and the businesses they started evolved into something else. The big pot of money is still there, though.
The impulse to go where the money is isn’t limited to charity work - it’s also a reason people go into government, because governments control far larger budgets than charities. There are people who think, with some justification, that they can have the most impact that way. But then you’re subject to the whims of legislators.
Just being around large pots of money has its temptations, and FTX does show how it can go very badly, even for people who didn’t do anything unethical themselves. Some EA people thought they could advise Sam Bankman-Fried on how to give away a lot of money without worrying overly about where it came from. They thought it was the normal charity deal rather than fraud. Some people lost their jobs at non-profits because they started charities based on promises and their funding got cancelled when FTX went bankrupt.
If you want to press “undo” on a big pot of money, I can understand the impulse, but I don’t see it as the best use of funds. Pressing “undo” is what the new owners of FTX are doing when they try to claw back funds and return it to the original investors. I imagine they get some satisfaction in that, serving justice in that way, and we do have bankruptcy courts for good reasons. But I don’t see cryptocurrency investors as particularly sympathetic. If the money instead went to poor people in Africa via GiveDirectly, I think I would be mostly okay with that?
Similarly for other charities. Maybe an impulse for justice suggests tracking down the descendants of oil workers and steelworkers and giving them money? Or maybe the Gates Foundation should find all the people who bought Microsoft software and give them back some money, instead of funding vaccine research or whatever? I don’t know, it seems pretty dubious. Occasionally I get a small amount of money back from some class action lawsuit and I’m, like, sure, whatever.
If you don’t entertain ideas about undoing past injustices and instead focus on making the future a little less unjust, it comes down to having a big pot of money and deciding what to do with it, without exploring the history of how that pot of money came to be.
And that’s, like, the original mission that got EA started. Maybe college endowments and enormous art museums aren’t the best use of funds? EA is against that.
Figuring out how to be around money and wealthy people without being corrupted by it is apparently pretty hard. Still, at least they’re trying. It’s much more impact than my personal donations will have.
I have what I think will be a frustrating perspective for most people: Charity is pointless and ineffective.
I think the best use of funds for billionaires that actually want to do good would be to put their money into lobbying for a much more progressive tax - i.e. a more aggressive tax like the Wealth Tax of 1935 (75% rate above $1M) - and more aggressive taxes on generational wealth. Strong and consistent federal funds are the only viable solution for the majority of the large-scale problems we have in the US: mitigating the climate crisis, tackling our homelessness epidemic, funding welfare/food-insecurity programs, providing cost-effective healthcare, etc. In those arenas charities are like bandaids on gashes. Small groups - and by small I mean under $1 billion in annual expenditures - can only do so much.
Yeah, I think that’s pretty extreme. Why is government spending useful, but at the same time, voluntary donations are useless? Is it a matter of scale?
I think scale is the largest component of it. There are some effective non-profits, like Blue Forest Conservation, that focus on accruing diverse pots of funding to fully fund projects that otherwise couldn't be carried out. But if you had an adequately funded Department of the Interior, it could do that with an even lower expectation of returns.
I agree with the sentiment that you and others discussed in the capitalism/socialism thread, where public/private partnerships are key. I think the problem is that we have a number of issues that don't work when addressed through a capitalist lens, because they're inherently too expensive for any single organization to get off the ground, and in most cases making them profitable makes them inaccessible to the majority of people. I'm thinking about transit infrastructure, healthcare, and to some degree housing. And while there are non-profits functioning in that space, some very effectively like Blue Forest, I think that has more to do with a scarcity of funding in federal programs than with the efficiency of an NGO.
Beyond the benefit of scale, there is also a question of sustainability. Non-profits work on limited budgets and can rarely plan for a 10-year initiative, let alone a 100-year one. For sectors where longevity makes sense, particularly transit infrastructure, programs need the security that the funds and support will be around past year 1, 2, or sometimes even 10. I worked in non-profits for the first half of my career and the number of dead-end projects was astonishing. Folks we had partnered with were interested in continuing or expanding initiatives, but there wasn't sustained funding or funder interest.
Lastly, non-profit designation can act as a method of tax dodging and of gaining influence (socially and politically). My vote is to raise revenue through progressive taxes similar to those of the '30s and '40s and let federal and state programs run at full capacity.
It’s true that government operates at a larger scale than even large private charities. We aren’t likely to see charities building subway systems. I think there are still plenty of useful things to be done at smaller scales, though, so “pointless and ineffective” is a bit much.
Often charities do things that government doesn’t do for one reason or another. Maybe it’s too experimental. Private charities can do pilot studies on approaches that governments eventually start doing after they’re proven.
Maybe these are projects that don’t need to last forever, either? And influence can go either way. Activism isn’t always bad.
I think it’s better to zoom in a little and say that some charities are better than others. Maybe we can agree that the average charity is pretty bad? But I think that’s all the more reason to be critical of them and search for the best ones, rather than condemning the whole category.
"She doth protest too much" - If someone spends a lot of effort to present themselves as very moral, you tend not to trust them. That and, the fact that it involves people justifying why they...
"She doth protest too much" - If someone spends a lot of effort to present themselves as very moral, you tend not to trust them.
That, and the fact that it involves people justifying why them personally having lots of money is a common moral good.
This doesn't match my impression. I get better information about charities from EA-aligned organizations than from any other source. There are other organizations that do it now, but GiveWell was the first I'd seen to take charity evaluation seriously and do it in public. [1][2]
Besides organizations, you can also look at the EA forum and see that there are several posts about specific charities.
Most people don't spend much time evaluating charities at all, and if they do, they don't post about it. (And I'll include myself in that; I'm basically cribbing off GiveWell.)
[1] Many charitable foundations have staff doing this, but they tend to keep their evaluations confidential to avoid upsetting the charities.
[2] Charity Navigator is well-known for rating charities, but its evaluation criteria were very crude back when GiveWell started; it was largely about administrative overhead.
I have the same problem with it as I do most self identifying "ists". The moment you've made it a part of your core ethos/identity, I feel like you're no longer actually being reasonable and instead overly invested in some supposed philosophical concept, often to make you feel better about something else.
This post is by William MacAskill who is a leader in the effective altruist community. I think it’s interesting to get some insight into how bad decisions are made. It seems that apparent success causes people to forgive a lot:
It’s true that a number of people, at the time, were very unhappy with Sam, and I spoke to them about that. They described him as reckless, uninterested in management, bad at managing conflict, and being unwilling to accept a lower return, instead wanting to double down. In hindsight, this was absolutely a foreshadowing of what was to come. At the time, I believed the view, held by those that left, that [Alameda] had been a folly project that was going to fail.
…
When, instead, it and FTX were enormously successful, and had received funding from leading VCs like Blackrock and Sequoia, this suggested that those earlier views had been mistaken, or that Sam had learned lessons and matured over the intervening years. I thought this view was held by a number of people who’d left Alameda; since the collapse I checked with several of those who left, who have confirmed that was their view.
This picture was supported by actions taken by people who’d previously worked at Alameda. Over the course of 2022, former Alameda employees, investors or advisors with former grievances against Sam did things like: advise Future Fund, work as a Future Fund regranter, accept a grant from Future Fund, congratulate Nick on his new position, trade on FTX, or even hold a significant fraction of their net worth on FTX. People who left early Alameda, including very core people, were asked for advice prior to working for FTX Foundation by people who had offers to work there; as far as I know, none of them advised against working for Sam.
I was also in contact with a few former Alameda people over 2022: as far as I remember, none of them raised concerns to me. And shortly after the collapse, one of the very most core people who left early Alameda, with probably the most animosity towards Sam, messaged me to say that they were as surprised as anyone, that they thought it was reasonable to regard the early Alameda split as a typical cofounder fallout, and that even they had come to think that Alameda and FTX had overcome their early issues and so they had started to trade on FTX.
Having access to (former) insiders is not necessarily an advantage. While you can get early warnings, relying on others in this way also exposes you to groupthink.
He writes about lessons learned. Here’s one:
Against “EA exceptionalism”: without evidence to the contrary, we should assume that people in EA are about average (given their demographics) on traits that don’t relate to EA. Sadly, that includes things like likelihood to commit crimes. We should be especially cautious to avoid a halo effect — assuming that because someone is good in some ways, like being dedicated to helping others, then they are good in other ways, too, like having integrity.
I think this is very telling of the patterns of thought these people are vulnerable to:
MacAskill on why he wanted SBF and Musk to talk: Sam also thought that the blockchain could address the content moderation problem. He wrote about this here, and talked about it here, in spring and summer of 2022. If the idea worked, it could make Twitter somewhat better for the world, too.
Commenter: I think this is an indication that the EA community may have had a hard time seeing through tech hype. I don't think this is a good sign now that we're dealing with AI companies who are also motivated to hype and spin.
The linked idea is very obviously unworkable. I am unsurprised that Elon rejected it and that no similar thing has taken off.
1. As usual, it could be done cheaper and easier without a blockchain.
2. Twitter would be giving people a second place to see their content where they don't see Twitter's ads, thereby shooting itself in the foot financially for no reason.
3. While Facebook and Twitter could maybe cooperate here, there is no point in an interchange between other sites like TikTok and Twitter, as they are fundamentally different formats.
4. There's already a way for people to share tweets on other social media sites: it's called "hyperlinks" and "screenshots".
5. How do you delete your bad tweets that are ruining your life if they remain permanently on the blockchain?
What pattern do you mean?
Both MacAskill and the person who wrote that comment are members of the EA community. As you can see, there is no EA consensus on cryptocurrency, any more than there is on Tildes.
We already have #5 without the blockchain. As you pointed out in #4, they live on as screenshots.
(Now I wonder how long until there is a high-profile case of someone submitting faked screenshots to the blockchain. I hope soon.)
To be fair, screenshots can say anything whereas a permanently live link provides authenticity, forever.