18 votes

Defective altruism - the repugnant philosophy of “Effective Altruism” offers nothing to movements for global justice

63 comments

  1. [25]
    Macil
    (edited )
    Link
    • Exemplary

    A good chunk of this article could be summarized as "EA encourages capitalist-friendly liberals trying to work within the system to improve the world instead of changing the system like socialists", except that probably defangs it much more than the author wants it to be.

    I'm not personally the biggest fan of EA's near-exclusive focus on existential threats, but I like knowing there is at least one organization dedicated to that. It seems like the sort of thing that a healthy civilization would devote some fraction of its effort to. I'm also glad that EA is not the only social movement trying to improve the world; there's no social movement whose principles or effectiveness I'm absolutely sure of.

    In the past, I’ve talked to people who think that while some of Effective Altruism is kooky, at least EAs are sincerely committed to improving the world, and that’s a good thing. But I’m afraid I don’t agree. Good intentions count for very little in my mind. Lots of people who commit evil may have “good intentions”—perhaps Lyndon Johnson really wanted to save the world from Communism in waging the criminal Vietnam war, and perhaps Vladimir Putin really thought he was invading Ukraine to save Russia’s neighbor from Nazis.

    So what exactly is the evil that EA is doing? Donating too much to AI safety research and malaria nets? Failing to be part of "Fight For 15"? (... I don't mean to be too dismissive of that cause in particular; it's just one the author called out EA for not participating in, and I find the idea of condemning someone over which of these causes they donate to silly.)

    It doesn’t really matter whether you mean well if you end up doing a bunch of deranged stuff that hurts people, and I can’t praise a movement grounded in a repulsive utilitarian philosophy that has little interest in most of the world’s most pressing near-term concerns and is actively trying to divert bright young idealists who might accomplish some good if they joined authentic grassroots social movements rather than a billionaire-funded cult.
    The truth about doing good is that it doesn’t require much complicated philosophy. What it requires is dedication, a willingness to do things that are unglamorous and tedious. In a movement, you do not get to be a dazzling standout individual who single-handedly changes the world. It’s unsexy. You join with others and add your small drop to the bucket. It’s not a matter of you, but we. There’s already a form of effective altruism, and it’s called socialism. Socialists have been working for a better world for a very long time. Those who want to do “the most good they can” and serve the interests of humanity long-term are welcome to come on board and join the struggle.

    The big evil EA does is ... distract people from socialism.

    I really dislike this antagonistic zero-sum approach to social movements. If you want to sell me on an idea of socialism, just do that instead of arbitrarily pitting it against an unrelated social movement that's not in opposition to it.

    (Also uh maybe this is a cheap shot, but I find it funny the author acts like he's definitely figured out the one acceptable route to social progress when he's a NIMBY and also fired his staff after workplace organizing. Both of these strike me as concretely contributing to much bigger social issues than some nerds being maybe too focused on possibly hypothetical future existential risks!)


    Personally, I don’t really give this kind of importance to stopping human extinction. (Let the manatees have the planet, I say!) It’s more important for me to make sure the humans that do exist have good lives than to make sure humanity keeps existing forever.

    This sounds like a far more alien perspective to me than any of the EA positions as he presents them. (Sure, I agree there exists some trade-off where it might be better to have a short-lived happy civilization than an eternal civilization that never saw happiness and somehow had no chance to ever do so even in the infinity of time, but in general I'm pretty comfortable naming "human extinction" as a realistic scenario to be desperately avoided. I imagine most EAs, including many self-proclaimed longtermists, would agree that a strict concept of longtermism that only prioritizes sheer numbers of people is incomplete and doesn't handle this hypothetical well.)

    I think the article, in trying to explain EA's problems, assigns too much political importance to thought experiments that over-stretch an idea, when "EAs prefer the safe, predictable, and quantifiable route of working within the system" is a much simpler explanation for its lack of radical politics.

    Nick Bostrom, another EA-aligned Oxford philosopher whose other bad ideas I have criticized before, says that truly maximizing the amount of well-being would involve the “colonization of the universe,” and using the resulting Lebensraum to run colossal numbers of digital simulations of human beings.

    I wish "colonization movements on Earth were bad and Nazi-like against indigenous peoples, and space colonization has the same word so it must be bad in a similar way" was logic saved for an Onion article.

    21 votes
    1. [24]
      psi
      Link Parent
      • Exemplary

      So what exactly is the evil that EA is doing? Donating too much to AI safety research and malaria nets?

      Well... maybe? I'm going to paint with a broad brush here, which is admittedly a bit unfair, but in my experience techbros/Silicon Valley/HN-types overestimate the value of their judgements; and since effective altruism is a utilitarian-based philosophy, I don't think my criticism is an ad hominem. Indeed, if someone is to practice effective altruism, they must necessarily be effective at exercising good judgement.

      So let me elaborate on the point because I think software developers/techbros/Silicon Valley/HN-types too often fall into the trap of xkcd/793. (Of course, not all effective altruists are software developers, but I'm concentrating on these folk since they're overrepresented in the effective altruist movement.)

      1. Software developers, on the whole, are compensated extraordinarily well compared to most other professions. Since we live in a society, this additional compensation affords them additional class privileges, and their opinions are often elevated over those of "lower class" individuals. However, that doesn't mean that a software developer is more qualified to discuss existential risks than a barista -- it's just an appeal to authority.

      2. Software development rarely has "wrong" answers, at least in the sense that there are infinitely many different ways to approach a problem. A developer can argue with a coworker about how to best implement some feature, but likely either person's implementation would ultimately work; it's simply a matter of managing drawbacks. I think this leads to a sort of intellectual complacency -- if you are never wrong, how can you recognize your own philosophical shortcomings?

      As an example of poor judgement, let's consider AI safety, which I think is a classic example of people overemphasizing a problem because it lies within their domain of expertise. Sure, a runaway AI could lead to apocalypse, but it hinges on so many assumptions that it's impossible to quantify the likelihood of it actually happening (an effective altruist in the article supposes it could be 2% -- but frankly we don't even know whether it's 20% or 0.02%). On the other hand, there are real, cognizable threats that exist right now: namely climate change and nuclear war. (And worse yet, these issues aren't even uncorrelated! As climate catastrophe approaches, resource shortages will fuel armed conflicts, raising the probability of nuclear confrontation.)

      • To be more specific, the concern over AI safety is an abuse of expectation values. When we're talking about existential threats, we don't live in the limit of large numbers -- either the existential threat will happen, or it won't, which means we either need to treat the threat as inevitable or impossible. Meanwhile, actual human suffering does exist in the limit of large numbers -- the reduction in suffering is directly proportional to the donation (rough sketch with made-up numbers below). So yea to malaria nets, nay to AI safety.
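
      To make that concrete, here's a rough back-of-the-envelope sketch. Every number in it is made up; it's only meant to show the shape of the argument: the "expected value" of an existential-risk donation swings by orders of magnitude with a probability nobody can estimate, while the benefit of a repeatable intervention concentrates tightly around its mean.

        import random

        random.seed(0)

        # One-shot existential bet: the "expected lives saved" is dominated by a
        # probability nobody can pin down, and the event happens (or not) exactly
        # once, so there is no averaging over repeated trials to fall back on.
        future_lives = 8e9
        risk_reduction = 0.01        # made-up: a donation trims p(doom) by 1% of itself
        for p_doom in (0.0002, 0.02, 0.2):
            ev = future_lives * p_doom * risk_reduction
            print(f"p(doom)={p_doom}: 'expected' lives saved = {ev:,.0f}")

        # Repeatable intervention: many small, independent effects, so the realized
        # benefit concentrates around donations * effect (law of large numbers).
        effect_per_net = 0.002       # made-up chance that a single net averts one death
        nets = 100_000
        averted = sum(random.random() < effect_per_net for _ in range(nets))
        print(f"nets={nets}: expected {nets * effect_per_net:.0f} averted, got {averted} in this run")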

      However, my largest reservation against effective altruism is that it feels like a post-hoc rationalization for doing the bare minimum. Why should you be a public defender when you could be a Wall Street banker and donate some portion of your salary? (Has anyone considered that Wall Street bankers might, in fact, be part of the problem?) Frankly, it's an excuse to be complacent with one's own moral shortcomings, as any act of immorality is permissible, so long as you're willing to offset your "moral footprint".

      12 votes
      1. [4]
        skybrian
        Link Parent

        I think that “software developers/techbros/Silicon Valley/HN-types” is so vague that it isn’t even an ad hominem attack. You’re lumping together a lot of people who don’t agree on anything or have any common characteristics.

        The employees of big tech firms (okay, Google is where I worked) tend to resemble the United Nations. I have met libertarians and woke activists and conservatives and feminists. Germans and Indians and Chinese and Ukrainians and Russians. More trans people than I met anywhere else.

        13 votes
        1. [3]
          psi
          Link Parent

          I think that “software developers/techbros/Silicon Valley/HN-types” is so vague that it isn’t even an ad hominem attack. You’re lumping together a lot of people who don’t agree on anything or have any common characteristics.

          I think you have a fair point, as I did write that in a way that was uncharitably vague. So instead of parsing "software developers/techbros/Silicon Valley/HN-types" as a list, parse it as a fuzzy diagram -- someone who could roughly be described as possessing all those qualities.

          So let me be more specific and choose one of those descriptors: Hacker News-types (with the caveat that I also use HN). I don't think anyone would consider Tildes and HN comparable in terms of inclusivity. I've seen HN users downplay a black person's experience because it doesn't conform with their meritocratic world view. I've been told that if my advisor ever demanded anything unreasonable (not that she would have -- she was great!), I would only have myself to blame, as I always have the option of restarting my graduate career (yeah, right). Hell, just today I saw someone advocating for eugenics with respect to type 1 diabetics (without even understanding why eugenics wouldn't prevent type 1 diabetes anyway).

          That's the sort of thing I take issue with -- these pie-in-the-sky ideals that don't match people's lived experiences. Specifically, it's a lack of epistemic modesty, i.e. an unwillingness to acknowledge the limits of one's knowledge. That's why I roll my eyes when someone lists AI as the foremost existential threat: it demonstrates an intellectual blind spot.

          9 votes
          1. [2]
            skybrian
            Link Parent

            Sure, there are plenty of ignorant (and yet confident) comments on Hacker News. However, I think it’s a mistake to think we actually know the people who make them. When they’re not usernames you recognize, it’s easy to confuse the opinions of different commenters who probably disagree with each other, and assume that people who said something ignorant on one subject are terrible on all subjects.

            One way to think of it is that the “hive mind” (in effect) for some forums can be a lot different from the people who make the comments due to social dynamics. The people might behave differently in a different environment?

            Also, I sometimes wonder if some of the more ignorant opinions are coming from people who are younger than they appear. Like, you might be arguing with a teenager or something?

            Maybe it would be better to argue against the text? Something like “people often say X on Hacker News” doesn’t try to characterize the people behind it.

            8 votes
            1. psi
              Link Parent

              To be honest, I don't disagree with you; it's more of the effective altruist vibe that I take issue with -- and again, I do think you need to trust the effective altruist activists if you're to trust the movement -- but I agree that a vibe is too nebulous to capture fairly. (Of course, this is why I also gave the specific example regarding AI safety.)

              1 vote
      2. [19]
        FlippantGod
        Link Parent

        However, my largest reservation against effective altruism is that it feels like a post-hoc rationalization for doing the bare minimum.

        Isn't it an example of specialization? Some people will become doctors, some people will become social services workers, some people will donate a lot of money required for charitable activities.

        6 votes
        1. [18]
          psi
          Link Parent

          I don't think it's quite the same thing as specialization. My understanding is that, given a choice between becoming a soup kitchen manager and a Wall Street banker, under an effective altruist framework the more ethical choice is to become a Wall Street banker, as your charitable donations let you reduce human suffering by a greater amount (even if that means, for example, you "only" donate a few million dollars per year and keep hundreds of millions for yourself). It's a rather convenient conclusion that discourages someone from choosing a profession involving greater sacrifice.

          (Again, I don't want to denigrate donating to charity -- obviously that's great! But when we evaluate how "moral" someone's behavior is, we should also consider the cost to themselves. Biden calling out Trump for Jan 6 is well-deserved, but Liz Cheney's condemnation is far more honorable.)

          3 votes
          1. [5]
            EgoEimi
            Link Parent

            I think that's the problem of sacrifice vs. impact, and more broadly: feeling good vs. doing good.

            Sacrifice is a positive social signal. People like people who are willing to give up their self-interest to contribute to the greater whole. But sacrifice doesn't necessarily translate to impact.

            It is extreme sacrifice to become a monk or a nun, give up worldly pleasures for an ascetic and celibate lifestyle, and pray for the soul of mankind — but it has no impact.

            On the flip side, there's cultural stigma against profit as a motive. But private businesses provide a lot of utility to their communities.

            Grocery Outlet is a for-profit chain of grocery stores that sell overstocks and closeouts of groceries very cheaply. They have extremely efficient supply chains and procurement. They provide an immense amount of value to low-income families in my area. You can load up a shopping cart with food to feed a family for weeks for less than $100.

            In contrast, I see a lot of non-profit, ostensibly-for-the-people food collectives that aim to promote food justice or whatever. The most recent one I came across was the Berkeley Student Food Collective. Invariably they prioritize mission over business. Invariably they fail to create the systems to efficiently procure and manage inventory at scale, so they end up with tiny yet overpriced product selections that serve no one. I would bet that a single Grocery Outlet store has 1000x more positive impact on poor families than all the food collectives in Oakland and Berkeley combined.

            7 votes
            1. NaraVara
              Link Parent

              It is extreme sacrifice to become a monk or a nun, give up worldly pleasures for an ascetic and celibate lifestyle, and pray for the soul of mankind — but it has no impact.

              This isn't really what monks do, though. Monasteries are usually large enterprises that grow food, provide spiritual counsel and services to the community around them, maintain and preserve old knowledge or old artifacts as well as the folkways that give them context and meaning, and otherwise advance the interests of their spiritual movement. You can't just measure impact based on the stuff that shows up in an accounting ledger.

              10 votes
            2. MimicSquid
              Link Parent

              I've worked with a person in the non-profit space who exemplifies this 1000%. They're independently wealthy and over the last 15 years have sunk 15+ million into various self-directed efforts to make healthy food more available to poor people. As far as I can tell, what they've actually done is pay a series of business people to develop plans that get abandoned for the next socially conscious trend, administrative people to make powerpoints talking about said plans, and accountants to file reports and taxes for organizations that never end up doing anything. They've talked a great game over all that time, but the only good they've done is make sure that some people have money to live on, and it's not the people they say they're helping.

              While this is a condemnation of mission over effectiveness, it's also kind of a condemnation of wealthy individuals claiming they know how to help other people.

              5 votes
            3. [2]
              psi
              Link Parent

              I think that's the problem of sacrifice vs. impact, and more broadly: feeling good vs. doing good.

              I would disagree here -- I don't think feeling good vs doing good is an abstraction of sacrifice vs impact; I think they're (nearly) orthogonal concepts. For example, donating to a charity and working at a charity both will result in "feeling good", but the latter will likely result in greater (financial) sacrifice, since you're likely limiting your future earnings.

              It is extreme sacrifice to become a monk or a nun, give up worldly pleasures for an ascetic and celibate lifestyle, and pray for the soul of mankind — but it has no impact. [...] I would bet that a single Grocery Outlet store has 1000x more positive impact on poor families than all the food collectives in Oakland and Berkeley combined.

              Absolutely, if we want to evaluate how ethical someone's behavior is, we should consider both impact and sacrifice; I certainly didn't mean to suggest that impact should be excised from the equation. But to push back a bit, I'm not sure how much moral weight we should ascribe to a grocery outlet without knowing specifics. If the business only exists to make a profit, and it only incidentally helps people, then all the company's doing is filling a market void. Intentions, like sacrifice, matter -- it's the difference between murder and manslaughter.

              Let me give a counterexample to yours, which is more aligned with what I was imagining. Consider the difference between a Wall Street banker who donates a tenth of their salary versus a teacher who donates half. Of course the Wall Street banker has the greater impact, but I doubt anyone would consider the Wall Street banker more ethical based on that fact alone.

              1. skybrian
                Link Parent

                Here is another example of "sacrifice versus impact." Consider the difference between a non-profit and for-profit organization.

                A charity can have all the same problems as a for-profit company. They have budgets and leaders who decide how to spend them. The spending decisions can be good or bad. They can treat their workers well or poorly. Often, non-profits rely on low-paid or volunteer labor. Or they might spend lots of money on marketing and pay some people very well despite not doing all that much.

                Is a non-profit hospital better than a for-profit hospital? Hospitals seem to have similar problems due to their mission, regardless of formal structure.

                Yet, charities typically get social credit for the sacrifice of foregoing profit. Meanwhile, any good work by a for-profit company, no matter the impact, will be discounted because they were doing it to make money. (Even if the argument for how they're going to make money off of it is kind of tenuous, like "it's good marketing.")

                One of the reasons I'm a fan of GiveWell is that in a way, it's an attempt to hold charities accountable. They can only evaluate a few charities, though, and they have specific ideas of what "impact" looks like. There's certainly room for other charity evaluators that have different goals.

                (And there are other charity evaluators, but I don't know enough about them.)

                2 votes
          2. spctrvl
            Link Parent

            For me, the convenient conclusion also sets itself apart from specialization in that a lot of high-paying professions, like hedge fund manager or CEO, are the same ones that are causing the misery that EAs ostensibly seek to alleviate. It's like a paid arsonist donating a portion of their money to the fire department.

            4 votes
          3. DawnPaladin
            Link Parent

            The idea you're talking about is called "earning to give", and the EA movement has actually backed away from it a lot in recent years. Their more recent career guides focus much more on finding an important problem to work on, joining a group focused on working on that problem, and maximizing your effectiveness working on that problem.

            3 votes
          4. [10]
            FlippantGod
            Link Parent

            I don't agree that we need to evaluate to that extent how "moral" someone's behavior is, nor do I find it feasible. Being able to say, without a doubt, this made the world better, is enough for me.

            For example, in parables from the Bible, Jesus multiplies food for the people gathered to eat. As a child, I wondered why Jesus would not have spent his time on Earth multiplying food for everyone to eat. I was told that instead, Jesus taught people to fish.

            I don't think anyone would argue that Jesus as portrayed in the Bible was anything but exceptionally moral, or that he did not sacrifice everything to help others (in what Christianity would consider the most significant way). But, I think it should be self evident that he would pale in comparison and impact to a Jesus who fed everyone he possibly could have.

            This is a silly thought experiment, but I feel it demonstrates the purposelessness of measuring one's morality by the self sacrifice made to do good.

            2 votes
            1. [9]
              lou
              (edited )
              Link Parent
              I don't like the "fish" analogy. It's a weak response. So let me provide an answer that makes more sense to me, but that you'll probably find even weaker: giving bread to the entirety of humankind...

              But, I think it should be self evident that he would pale in comparison and impact to a Jesus who fed everyone he possibly could have

              I don't like the "fish" analogy. It's a weak response. So let me provide an answer that makes more sense to me, but that you'll probably find even weaker: giving bread to the entirety of humankind throughout his life would only satiate the amount of people he could get to until his death. However, providing an example of morality would guarantee, for everyone he converts to virtuous morals, an eternity of bliss in the presence of God. That's a lot more than bread.

              Of course, this argument will only hold for those who believe in some version of Christianity.

              2 votes
              1. [8]
                FlippantGod
                Link Parent

                Yes, I acknowledge that he cannot be considered more moral in the view of Christianity.

                But I argue that feeding as many people as possible (and yes, I recognize the logistics problem) would ultimately outweigh his individual impact because Christianity outlived him. So long as his morals reach people and endure, saving as many people from starvation as possible and lifting people out of subsistence survival would have a greater impact.

                I suppose it is impossible to know if constantly multiplying food would have lessened the spread or longevity of his teachings in practice, but I'd absolutely expect more people in the history of Christianity would have been impacted by a Jesus who fed as many people as possible.

                1 vote
                1. [7]
                  lou
                  (edited )
                  Link Parent

                  I'd argue that giving bread to everyone, to the detriment of the teachings, is only more moral if the hypothesis of eternal bliss is false, since even those who died of hunger might be more than compensated by, well, eternal bliss.

                  However, in practice, a good Christian should do both, since giving to the poor is not really in opposition to providing a good moral example that is conducive to virtue. In fact, they're kinda the same thing.

                  And oh, not all Christians believe the Bible is the literal word of God that must be interpreted literally and followed without reservation. That is certainly not the Catholic doctrine (even though Catholics in particular will have their own interpretations, practices, and beliefs, and the US Catholic church is its own thing for sure...).

                  2 votes
                  1. [6]
                    FlippantGod
                    (edited )
                    Link Parent

                    My argument is just that by feeding so many people, the possible impact is greater, and more people could be reached. So that numerically, either more people live good lives or more people reach what Christianity would believe to be salvation, however you want to look at it.

                    Basically, more healthy living people + more time = greater net benefit, AND/OR greater conversion to Jesus' morals in the short and long run.

                    EDIT: Sorry, got off track. The original point was that the fact that Jesus sacrificed his life, while definitely an important part of Christianity, isn't really what made his life morally good; it was his actions, and what he did with his time on Earth could lessen or increase his impact regardless of his sacrifice.

                    Perhaps feeding people would have hurt the spread of his teachings. But I don't think his death has a bearing on how morally good his life was.

                    1 vote
                    1. [5]
                      lou
                      Link Parent

                      That makes sense. It would be interesting to see this math, but the problem I see here is that giving bread is easily quantifiable, while leading people into eternal bliss is not. And you'd have to account for the indirect effects as well, since theoretically those he converts might in turn lead others to eternal bliss.

                      I guess I'm trying to say that this is not really quantifiable, and consequentialism is unlikely to provide an answer.

                      1 vote
                      1. [4]
                        FlippantGod
                        Link Parent

                        Yes, you are correct. It is not as clear as I thought that feeding people would increase his positive impact within Christianity's moral framework. But please see my edit: the original point was supposed to be that he could have had variable impact during his life, but measuring how moral that made him serves no purpose, and that his ultimate sacrifice, while religiously significant and probably responsible in part for the rapid spread of Christianity, does not affect how moral his actions were.

                        Now that I think about it, since he would be considered to ascend to heaven in glory forever, one might say he made little to no sacrifice during his time on Earth. Yet he was still morally good.

                        1. [2]
                          lou
                          Link Parent

                          That is a good discussion I'm in no way qualified to have :P

                          1 vote
                          1. FlippantGod
                            Link Parent

                            Haha yeah, this has been very interesting to talk about but I should probably stop. Thank you for pointing out some of the flaws in my rationalization, I just took it for granted that more fed people would be "more better" until now. Also you were very polite about it!

  2. [17]
    skybrian
    Link

    Over a dozen articles by Nathan J. Robinson have been shared on Tildes and I find myself reluctant to read another one, since I hardly ever like anything he writes. (I'm a bit surprised that I shared two articles of his myself, though.) Maybe someone who actually likes his writing can summarize?

    In general, I think that a focus on "Effective Altruism" as a philosophical movement (what do philosophers like Peter Singer or William MacAskill have to say) is a mistake. You can tie yourself in knots discussing philosophy so it's better not to take it too seriously. I'm also doubtful that the philosophers have better answers than the rest of us.

    Similarly, it's a mistake to treat it as a social movement - how do you live your life? Yes, young people do have important questions about what sort of lives they want to live, but I'm not sure that looking to philosophy for answers makes sense?

    A more focused name, for my interests anyway, would be "Effective Charitable Donations". The important question isn't philosophical, it's pragmatic: given some funds already allocated to charity work, what's the best way to spend the money? GiveWell still has the best answers I know to that question, for individuals anyway. You can write a check and get on with your life. It doesn't need to be a lifestyle.

    If you disagree, write a better argument in favor of some other charities. Be specific. Why are your favorite charities so great? How do you know for sure that they're effective?

    There may be different answers for governments or for billionaires, but we don't advise them, so we don't have to focus on their questions.

    What seems to have happened is that people who are serious about finding good charities tend to attract the attention of rich people who want to do charity work, so they end up advising on spending large funds. Good for them, I support giving large charities better advice. The bigger the budget, the bigger the responsibility, though.

    14 votes
    1. [16]
      Akir
      Link Parent

      Yes, young people do have important questions about what sort of lives they want to live, but I'm not sure that looking to philosophy for answers makes sense?

      I know this is being pedantic, but the question of how one should live one's life is a category covered under philosophy. You can find answers without studying philosophy, but it may be just as good at answering those personal and subjective questions.

      5 votes
      1. [14]
        skybrian
        Link Parent

        I think the problem is that philosophy studies such questions at high levels of abstraction in a quest for universal answers. This is the wrong level, because these questions are the opposite of universal. They're personal.

        Would you, say, turn to philosophy for dating advice? It's not like they even know you. You're going to get platitudes.

        6 votes
        1. [9]
          lou
          (edited )
          Link Parent

          I'm not sure about dating, but Plato's distinction between love and friendship is quite illuminating.

          Gender Trouble: Feminism and the Subversion of Identity, by Judith Butler, was profoundly formative to my views on gender and identity as a whole, and actually did help solve some of my identity troubles.

          The SEP's entry on love helped me figure out some troublesome relationships.

          J.L. Austin's How to Do Things With Words, a seminal work in philosophy of language, helped me understand the functioning of conversations and social interactions as a whole, and I became more socially competent as a consequence.

          Saul Kripke's Naming and Necessity helped me understand the concept of possible worlds, from which I came to understand fictional narratives in a way that directly impacted my artistic production.

          Alan Watts's The Way of Zen had a profound influence on how I ascribe meaning to the world around me (or, maybe, it taught me to stop seeing meaning in everything and understand that things just kinda are).

          Spinoza, Thomas Aquinas, and Augustine of Hippo showed me that faith is not in opposition to reason.

          Alongside scattered studies of elementary philosophical logic, Descartes gave me great emotional solace just because, sometimes, I like to feel that the world makes sense.

          10 votes
          1. [8]
            skybrian
            Link Parent

            It sounds like Socrates's advice might have been useful to Hippothales (or so we are told). Whether we should be studying ancient dating advice seems like quite another question? For one thing, this sounds a lot like a pedophilia situation, though I guess we'll never know for sure. And that's not really why people study the texts attributed to Plato, I don't think?

            A problem with studying such ancient texts is that we're mostly not equipped to do it, so we have to read other people's interpretations. And historians often aren't quite sure what they mean either due to how little writing has survived. Translators have to make educated guesses.

            That said, most of the other work you've linked to is considerably more recent, and if you say you've found it helpful there's no point in arguing with that.

            Other than to say: how it applies to any specific situation is up to you. I don't mean to say that there isn't interesting philosophical reading out there; I'm just trying to understand the limits. I think attempted universality and lack of specifics are limits to be aware of. (And the same is true of self-help books.) To the extent that it works, it's because it's prompted you to do your own thinking.

            3 votes
            1. [7]
              lou
              (edited )
              Link Parent

              Your criticism of philosophy as a whole seems to be rooted in a very specific and fundamentally reductive understanding of what philosophy actually is. Because this criticism is rooted in a flawed premise, it doesn't seem very productive to address it directly.

              It might be more valuable to simply ask: what is it that you call philosophy, and why do you believe it possesses those attributes that you criticize?

              Also, are you aware that your criticism of how philosophy makes universal claims is itself a universal claim about philosophy?

              6 votes
              1. [6]
                skybrian
                Link Parent

                Yes I’m aware of that. It’s not that universal claims never work (consider physics and math) but rather, the domains where they do work seem limited, and it’s useful to try to understand the boundaries, as best we can. Expecting a field to be as successful as physics in finding universal laws seems very optimistic, and philosophy is where we put all the stuff we haven’t figured out yet. (Once it’s successful it tends to become its own field.)

                It tends to be easier to poke holes in universal arguments than to find universal arguments that work, and that seems to me like a good use of philosophical argument. After you’ve torn down the over-broad claims, what remains is “well, it depends on the situation, but here are some possibilities.”

                4 votes
                1. [5]
                  lou
                  (edited )
                  Link Parent

                  The notion that the only thing philosophy does is make universal claims is very reductive. One could argue that, since the late 19th century, many philosophers became intensely preoccupied with solving localized problems that their predecessors would consider unworthy of attention, such as Austin's investigation of a very specific aspect of everyday language. Sure, philosophers do make and support universal claims, but it is not true that universal claims are the whole business of philosophy, and you would have no difficulty finding philosophers who oppose the very notion that something like universals is even possible. If you look at post-structuralism, deconstruction, and post-modernism as a whole, you'll find people questioning the very notion of truth, and their writings are an exploration into creating a rather difficult, intricate intellectual route, using poetic, numinous language to avoid and undo the essentialist traps created by centuries of intellectual activity under the principles of logic and Cartesianism. To Jacques Derrida, the charismatic figure of deconstruction, traditional essentialist philosophy failed to take into account that meaning is not just a relation between language and object, but rather something that lives in language itself.

                  4 votes
                  1. [4]
                    skybrian
                    Link Parent

                    From a quick glance, "How To Do Things With Words" (assuming that's what you're referring to) seems pretty interesting and has specific examples, so that's good. But I still think of it as in the business of "universal claims" because it seems to be about "how do humans use language" [1] and not "what did my sister-in-law mean when she said X to me last Thursday?"

                    That is, perhaps my idea of a "universal claim" is more sweeping than you're imagining, and therefore not as reductive as you think?

                    The habit of speaking generally is quite difficult to shake; almost all my online communication is of that sort.

                    I think that's true of others as well. As an example, on r/accordion we get lots of requests for more information about the brand or model of an accordion found in someone's attic, when actually the worth of a used accordion has to be judged individually, as it could be in good or bad shape based on its history. It needs to be examined in person and knowing the model hardly helps you at all.

                    In an industrial age, people are accustomed to mass-produced products being similar enough to each other that researching other instances of the same type would actually be useful. That's why we read product reviews, right? And it is often useful, but it only goes so far.

                    (And yeah, I'm still doing it, speaking generally.)

                    [1] Although, the scope is a bit unclear. All examples are in English, but do we implicitly assume that non-English languages work similarly?

                    4 votes
                    1. [3]
                      lou
                      (edited )
                      Link Parent
                      Your definition of "universal" is astonishingly broad. Forget philosophy, there's not much humans can say at all that would be specific enough for those standards, and by that logic many of the...

                      Your definition of "universal" is astonishingly broad. Forget philosophy, there's not much humans can say at all that would be specific enough for those standards, and by that logic many of the things we find useful (including science in its entirety) would be lost. I'm really curious why you developed such an inflexible fascination with a very particular notion of universality, and also with an (in my opinion) unachievable idea of ultra-specificity.

                      6 votes
                      1. [2]
                        skybrian
                        Link Parent

                        Yes, it's a very broad category. You do need a big box to fit all of philosophy, since there are philosophies of many subjects! But I still think it's an interesting boundary, because there actually are a lot of factual things on the other side that might be overlooked if you're of a scientific or philosophical mindset:

                        • Storytelling. I'm not a natural storyteller and have a tough time just coming up with examples, but some people can tell stories all day.

                        • Travel writing. Visiting places and writing about what you see.

                        • Interviewing people about their own lives. Biographies.

                        • History. Much of history is not generalizable, it's just a bunch of things that happened once and are unlikely to happen the same way again. (Historians do sometimes generalize, but they're reluctant and fairly careful about it.)

                        • Journalism.

                        • Court cases and testimony about specific events.

                        Also, much of everyday life is not about finding general rules or categories for things, it's about dealing with individual messes as they come up. (In this I've been heavily influenced by David Chapman's ideas about rational versus reasonable thought.)

                        2 votes
                        1. lou
                          (edited )
                          Link Parent

                          Stories allow for generalizations from events and characters made to compose a whole.

                          Travel writings are syntheses of impressions made to compose a whole.

                          Interviews are the reproduction of turns of conversation.

                          History is a bundle of facts made to compose a whole from which hypotheses about the past are confirmed or denied.

                          The work of the journalist is to acquire information from multiple sources and present them in a way that allows the reader to make generalizations about reality.

                          Court cases are sets of documents, testimony, and reasoning, which must be generalized in order to achieve a verdict.

                          It is not possible to say things about the world without generalization. You yourself generalized in the very comment to which I am replying. When you used the word "storytelling", you made a generalization which was necessary to refer to all the different things we call storytelling without needing to name its parts. The same goes for "travel writing", "interviewing", "history", "journalism", and "court cases". You're addressing categories, not specific instances.

                          I've read the link... I'm not sure it agrees with your reading :/

                          3 votes
        2. [2]
          Akir
          Link Parent

          I would disagree with your characterization of philosophy. There is academic philosophy, which is highly abstract and attempts to be universal, but there are works of philosophy which are not academic and are designed to be subjective. In fact, of the many works that deal with philosophy (especially when it comes to questions about how to live a life), I would say that the majority are specifically designed to be read as anything but academic philosophy. And by that, I mean books, plays, movies, etc. Even unchallenging media exists to reinforce the philosophy of the status quo at the time. Especially movies with romances in them!

          5 votes
          1. skybrian
            Link Parent

            Claims of universality are certainly not limited to philosophy. Probably the most damaging claims of that sort are elsewhere, in cultural knowledge and religion. And often these are implicit claims, because no other possibilities are mentioned.

            One way that reading widely can be helpful, even though it's not direct advice, is by learning that there are more possibilities than you might have thought. (Whether they work for you is another question.)

        3. [2]
          rich_27
          Link Parent

          I think often people find value in philosophy not because they are looking to it for concrete, specific advice, but because reading about someone's generalised perspective can allow you to think about your personal, specific situation from another perspective.

          Philosophy doesn't necessarily have to have any answers; posing questions can be enough to help the reader reframe their life and specific situations for themselves.

          5 votes
          1. skybrian
            Link Parent

            Yes, I think that's a healthy attitude to take towards it.

            I think I've been too dismissive and so we got a bit too far into an "is philosophy overrated?" discussion. But for someone who isn't looking to philosophy for specific answers, it's not a bad thing at all.

            2 votes
      2. lou
        Link Parent

        I'm not sure you're being pedantic at all. Looking to philosophy for answers makes total sense.

        2 votes
  3. spctrvl
    Link

    I do find it tragically impressive how quickly effective altruists managed to speedrun what I used to think were pretty weak criticisms of utilitarianism. The repugnant conclusion was bad enough; I had no idea people actually existed who wanted to maximize some abstract aggregate utility instead of per capita. But they've actually managed to build a utility monster in the form of this potential far future K3 civilization.

    12 votes
  4. [6]
    rich_27
    Link

    That was a very long article; I only managed to make it to the first break point. Whilst I can't comment on the remainder of the article, the first section reads as a very emotional argument against Effective Altruism based on an assumed position that utilitarianism is wrong/morally reprehensible. I found the opening paragraph, especially, to be pretty sensationalist - with the author seemingly implying that people claiming to have found a better way to do good are inherently criticising those not part of that way - which may well have coloured my opinion of the rest of the article.

    I think an article like this should first establish its base position and rationale for it (utilitarianism is bad, in this case) before analysing a subject through that lens, because otherwise you run the risk of presenting that position as a fact the reader 'should' agree with. Unless the reader is actively thinking about the underpinning assumptions as well as the conclusions presented, they might find themselves adopting that base standpoint without ever questioning it; to me, it feels like arguing in bad faith.

    As someone who developed a pretty utilitarian view without reading much on the subject, I found I disagreed with quite a few of the things the author stated without exploring or supporting:

    The EA community is rife with arguments in defense of things that conflict with our basic moral intuitions

    I didn't find anything the author presented thus far conflicted with my basic moral intuitions.

    This is because it is heavily influenced by utilitarianism, which always leads to endless numbers of horrifying conclusions until you temper it with non-utilitarian perspectives

    The phrase "which always leads to" is a very sweeping unsubstantiated generalisation. The author then takes this opinion-presented-as-fact as a firm foundation from which to continue:

    Patching up utilitarianism with a bunch of moral nonnegotiables is what everyone ends up having to do unless they want to sound like a maniac

    In addition to the foregone conclusion the first part depends on, the author also frames rationalising morality as mania.

    Neuroscientist Erik Hoel, in an essay that completely devastates the philosophical underpinnings of Effective Altruism, [...]

    Again, whether or not the essay devastates the philosophical underpinnings is a conclusion that should probably be left to the reader to draw.

    its proponents face two unpalatable options: get rid of the attempt to pursue the Quantitatively Maximum Human Good, [...], or keep it and embrace the horrible repugnant conclusions of utilitarian philosophy that no sane person can accept.

    The author continues to present utilitarianism as a cancer to be excised. I don't think utilitarianism was established to be either horrible or repugnant, nor to be a philosophy that no sane person could accept.

    The article continues in the same vein, which I will not expand on further, except to highlight the delightful irony in the author's supposition that "the movement’s intellectual core is so poisoned by bad philosophy as to be unsalvageable"!

    I should, however, add that I did enjoy the read. It was interesting to learn about a topic I had not come across before (Effective Altruism), and it certainly made me think - thanks for sharing!

    9 votes
    1. [5]
      rkcr
      Link Parent
      the first section reads as a very emotional argument against Effective Altruism based on an assumed position that utilitarianism is wrong/morally reprehensible.

      The rest of the article goes on to point out many specific parts of EA's utilitarianism that are problematic.

      It's odd to read partway through an article and then (rather thoroughly!) criticize it for not including things that it does include, just later.

      8 votes
      1. [4]
        rich_27
        (edited )
        Link Parent
        I skimmed the first paragraphs of each of the following sections to get a rough idea of what they contained; I must have missed that.

        My rationale for reading, and then writing, as much as I did was that I did not find it particularly appealing to continue reading what felt like unsubstantiated opinion presented as fact, but - having read the other comments here - it felt important to point out an issue with the article that others reading it may not pick up on.

        If there are other sections that do evaluate utilitarianism itself, rather than just criticising EA for its utilitarianism, I'd be interested to read those. Would you be able to point me to which section I should look at?

        5 votes
        1. [3]
          Greg
          Link Parent
          I didn’t find the first part of the article especially convincing, and I also thought it was a little frustrating in its generalisations (apart from anything else, why is utilitarianism needing a few moral “patches” to work well inherently a problem? Who’s arguing that axiomatic purity is a requirement here?).

          What I did find interesting is his description of longtermism, which I hadn’t come across before. If anyone truly believes in that as laid out, I’m frankly terrified of them; he explains it as the maximisation of absolute rather than per capita utility over time, to the extent that a proponent would consider it more moral to have a multiplanetary society of many billions with low quality of life than a small overall population where everyone has a very high quality of life.

          The idea that utility can be quantified and maximised resonates with me; the idea that blindly increasing it by adding more people to the calculation is in any way valuable strikes me as totally insane.

          That said, I’m not sure how widespread that belief is within the EA community, and the article doesn’t make it particularly clear, so it could be an extremely fringe interpretation. I certainly hope that’s the case, at least.

          5 votes
          1. [2]
            FlippantGod
            Link Parent
            Considering that effective altruism is supportive of family planning, birth control, and women's education, I'd say, even without further evidence, that EA generally regards slowing the global population growth rate, reducing global poverty, and improving global quality of life as high priorities.

            Also consider that EA publicly comes under fire for sending money abroad to third-world countries, because it can be more cost-effective to substantially improve conditions there than it is to improve a struggling first-world neighborhood.

            This example of a low QOL multi-planetary population is just.... That ain't it.

            5 votes
            1. skybrian
              Link Parent
              I think there is still lots of disagreement among effective altruists (itself a pretty vague category), so you could be talking about different people with differing opinions. Also, it's logically consistent to support slowing the global growth rate in the short term (say, for environmental reasons) while wanting humanity to expand across the galaxy in the long term (once some tough engineering problems are solved).

              Once you start talking in very abstract science fictional concepts, there are all sorts of possibilities that are logically consistent. That's the trouble.

              2 votes
  5. [2]
    Comment deleted by author
    Link
    1. FlippantGod
      Link Parent
      I think Utilitarianism is popularly perceived as an ethical framework through which convenient assumptions such as "this will eventually pay off" can justify anything.

      In practice, i.e. in a professional setting, the popular application is in limited scenarios where the action taken will reduce loss of life at some expense, and where lots of data exists.

      "Do we spend the money to adjust this road to make it safer? At what point can we say that this road needs to be altered more urgently than our other unsafe roads?"

      1 vote
  6. [3]
    lou
    (edited )
    Link
    Maybe it is not a coincidence that the abuses of utilitarianism come at a time when secularism has effectively excised virtue ethics from enlightened public discourse, due to its close association with outdated concepts such as traditional Christianity and moral virtue.

    However, it seems intuitively true that some things are good or bad due to their intrinsic qualities rather than their consequences, and it is dangerously dissociative to ignore the effect that our choices have on the formation of our character.

    It may prove valuable to reassess virtue ethics under a non-religious, non-moralistic framework, as a way to reconnect our ethical practices to whatever it is that makes us human.

    5 votes
    1. [2]
      NaraVara
      Link Parent
      It may prove valuable to reassess virtue ethics under a non-religious, non-moralistic framework, as a way to reattach your ethics to whatever it is that makes us worthy human beings.

      Religious/non-religious is a common dichotomy in Western philosophy but is not really part of Eastern (or even pre-Christian Western) philosophical traditions. It's going to be difficult to talk about virtue ethics without some amount of discussion of how we should conceptualize virtues and how they should work. Spirituality is a fundamental dimension of human existence; any attempt to arrive at an ethics that can't speak to that is going to have issues.

      4 votes
      1. lou
        (edited )
        Link Parent
        I agree with you, but let me translate your conclusion into something a bit more universal: religion and spirituality are popular systems that mediate our relationship with ethics, modality, and metaphysics.

        What every human being considers good is intrinsically related to what they believe is possible under the first principles that structure reality. Spirituality is one way to navigate this territory, but it is certainly not the only one.

        Every time you love someone, and act accordingly, you are making dozens of implicit, non-consequentialist, metaphysical assumptions about reality. And we all know unbelievers are capable of love.

        So I propose that it is possible to hold virtue-centric positions that are not spiritualistic. Pragmatically, every atheist and materialist I know holds numerous positions which can be classified as virtue-centric. There is seemingly no contradiction there.

        6 votes
  7. [10]
    skyfaller
    Link
    I think the simplest takedown of "effective altruism" is that it can essentially turn the potentially infinite people of the future into a "utility monster" whose interests can massively outweigh the interests of everyone alive today, even though those future people are only theoretical as of now.

    Certainly protecting future people is vital, and our current civilization tends towards dangerously short-term thinking, but one question is, how much do you discount the interests of the future people, and when? What is the discount curve? Refusing to discount them at all could result in mass death and horrifying conditions for the very real people of today.

    And since we suck at predicting the future, making decisions about theoretical futures based on almost certainly flawed predictions is an ethical nightmare.

    4 votes
    1. [2]
      skybrian
      Link Parent
      That’s only the case if certain philosophies are taken overly seriously. I think very few people would do that, unless it justifies something they’re inclined to do already? More likely, weird philosophical results become fodder for lots of philosophical discussions, not action.

      Better to judge people on what they actually do. What gets funded?

      3 votes
      1. vektor
        Link Parent
        What gets funded?

        Allow me to take the German org "effektiv spenden" as an example; it is related to the German branch of EA, but this particular org focuses on effective donations.

        Their focus lies on three topics: Climate action, developmental aid and animal protection.

        • Their recommended climate action orgs focus on policy and research; notably, fickle and ineffective (but very feel-good) direct compensation charities that grow trees in Brazil or whatever are completely missing. It's all about leveraging force multipliers: help and lobby lawmakers to write laws that effectively curb emissions, and fund research that has a high expected reduction in emissions.
        • Developmental aid focuses on direct cash donations to the world's poorest (supposedly a very effective long-term solution) and on fighting easily treated healthcare concerns (malaria, parasites). That these healthcare matters are criminally underfunded is no secret.
        • Animal welfare: their main focus is the welfare of animals in captivity, not the conservation of biodiversity in the wild. Not my cup of tea, but the methods are again focused on force multipliers: mobilizing the public to put pressure on companies to adopt better methods, and funding research to develop meat alternatives that aren't a sacrifice from the consumer's perspective.

        There's no BS here about AI safety or anything. Those are tangible problems with tangible solutions. You can dig deeper (and EA does) into these orgs to find out what exactly they fund.

        To me, the main thesis of EA philosophy is that charity work/donations should be allocated for maximum effect on relevant problems. What's wrong with filtering the list of charities for effectiveness? The choice of relevant problems is less mechanistic and more subjective, but a bit of rational discourse there about what is or isn't important according to this or that criterion also doesn't hurt. The main criticism imo shouldn't be leveled at the philosophical framework (though I suppose there are a few minutiae that aren't completely trivial to resolve if you go full utilitarianism) but more at the methods used to implement that philosophy: Are the criteria for problem selection sound? Is the evaluation of effectiveness sound? How do we assess the likelihood of undesirable events, and how they are affected by charitable organizations? How reliable is past effectiveness when predicting future effectiveness? Certainly in my mind, those questions are well within the scope of EA philosophy.

        3 votes
    2. [7]
      psi
      Link Parent
      I think the simplest takedown of "effective altruism" is that it can essentially turn the potentially infinite people of the future into a "utility monster" whose interests can massively outweigh the interests of everyone alive today, even though those future people are only theoretical as of now.

      I think it's worth clarifying that not all effective altruists subscribe to longtermism, not that I disagree with your conclusions. Longtermism is so fundamentally flawed that I didn't even bother to address it in my argument against effective altruism.

      And since we suck at predicting the future, making decisions about theoretical futures based on almost certainly flawed predictions is an ethical nightmare.

      For sure, this is really the heart of the problem. We don't even know if humanity will be around in a few hundred years, and from a certain perspective, it doesn't seem likely -- if there were to be trillions of trillions of human beings in the future, consider how unlikely it would be for us to be among the first ~100 billion based on chance alone.

      1 vote
      1. [4]
        skyfaller
        Link Parent
        Frankly, I think the drive to "optimize" inherent to "effective altruism" will tend to create other utility monsters, even if longtermism is somehow excised from the movement. Using utilitarianism as a guide to try to do more good than harm can be troubling but is probably harmless in most real-world situations. Trying to optimize utils is a disaster because the units are made up and can't actually be compared apples to apples; you can't do math with them. You will find any utility monster and feed it everything you can, becoming a paperclip maximizer.

        I hesitate to speculate as to other realistic short-term utility monsters, but an obvious one is just conventional eugenics and/or racism. If you convince yourself that, say, people who have had genetic treatments to increase intelligence just matter more than other people (perhaps because you believe they're more likely to create better intelligence-enhancing treatments), you could pour all your resources into creating Gattaca. If there are only a handful of genetically engineered children, they could be classic utility monsters.

        One of the best defenses against utility monsters is making egalitarianism a fundamental principle, which is what the "Defective altruism" article is arguing for. Socialism is the best way to guard against, well, fascism, despite the scoffing of many in this thread.

        5 votes
        1. [3]
          skybrian
          Link Parent
          In recent years, many authoritarians have tended to be right-wing. But in the 20th century, "socialism is good" didn't work very well as a heuristic. Leaders who talk a good game about socialist (or communist) principles often turn out to be untrustworthy.

          (And despite his many articles, it's unclear if Nathan J. Robinson is himself committed to socialism when it comes to the organization of Current Affairs itself.)

          So I'm quite skeptical of the notion that socialism could guard against anything. What's the argument for that? Believing hard is not enough. There has to be more than that.

          It seems like Scandinavian countries have done pretty well with socialist democracy, while countries in other parts of the world have often done badly. It's unclear why this is. Education, maybe?

          3 votes
          1. [2]
            skyfaller
            Link Parent
            I was being sloppy; I was using socialism as shorthand for socialist democracies, as opposed to communist totalitarian states run by dictators or a Party oligarchy. You are correct that leftist economics alone will not protect egalitarianism or equality, and that many communist countries are/were well known as places where "some animals are more equal than others". Some actual power of the people over their government is also necessary.

            The best way I can put it is Wilhoit's formulation of conservatism: "There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect." And his inverse statement: "So this tells us what anti-conservatism must be: the proposition that the law cannot protect anyone unless it binds everyone, and cannot bind anyone unless it protects everyone."

            If you think that capitalist societies have the same law for the rich and the poor, well, it sure isn't that way in the USA. And as the climate crisis progresses, I predict that all countries will either fall to fascism or rise to meet the mounting danger with socialism; they must eventually choose to sacrifice some portion to steal luxury for the remainder, or deliver climate justice for all and protect the vulnerable. The middle ground will vanish under the rising waters. Capitalism without at least some Green New Deal-style socialism lacks the will or ability to protect its people; there is no pure capitalist solution without human sacrifice. And none of the other aspects of the law matter to people left to die.

            Distributing resources more evenly is tricky when the people with the most resources also have the most power, and getting to a more egalitarian place while preserving democracy isn't easy. Fascists hate democracy, and capitalists will cheerfully destroy democracy (or allow it to be destroyed) if they think that's required to continue making their profits, even if they don't think of themselves as fascists. And of course if you have a revolution, there's a high chance that coup-plotters will never give power back to the people. But preserving the neoliberal status quo is truly not an option. Change is coming, one way or another.

            1 vote
            1. skybrian
              Link Parent
              Having the same laws for everyone is an important principle of justice and one that I support, but I don't expect it to result in the kind of changes you seem to be hoping for. There's a common quote about how it can easily go wrong:

              In its majestic equality, the law forbids rich and poor alike to sleep under bridges, beg in the streets and steal loaves of bread.

              It seems pretty clear that advantages correlate, and so do disadvantages. Rich people tend to get better education, often have better health, have supportive friends and family members, and so on. Getting ahead doesn't necessarily require any legal advantages, though they often have those too. It's hard to say what's causal when many graphs move in the same direction. Meanwhile, people who are suffering the most often have so many different things going wrong with their lives that it's hard to know where to begin to even stabilize things for them.

              When there are new opportunities, these advantages often snowball. A legal regime that attempts to limit them (with steeply progressive tax rates, for example) can help, but it only does so much. I think it would be very difficult to prevent Big Tech from happening in some form, even if the top companies turned out to be different than the ones we got. (There are big companies and billionaires in China, too.)

              I expect that whatever crises or disasters might happen, some people will be better able to take advantage of them than others. Relatively few of them will come from a poor background, but a few poor people becoming rich doesn't change much. Wars can cause some rich families to be ruined, but inequality will reappear, often with new people on top. And who will suffer the most? People who don't have the connections and resources to cope.

              Climate change results in lots of investment opportunities for venture capitalists. Many of those investments were in companies that went out of business, but electric cars turned out to be a good investment for Musk, and here we are. If we ever did get even more serious about climate change, I expect there would be more opportunities for companies that are well-positioned to help.

              And similarly, having a good, fair, just, efficient legal system would probably be an advantage for rich people. Highly corrupt countries tend to be poor.

              It's hard to think of any good thing that well-off people couldn't somehow take advantage of. I don't think it's a good reason to hope for wars, revolutions, or other disasters.

              1 vote
      2. [2]
        vektor
        Link Parent
        if there were to be trillions of trillions of human beings in the future, consider how unlikely it would be for us to be among the first ~100 billion based on chance alone.

        That observation relies on self-sampling. You're assuming that your consciousness was plopped at random into one of the N people that will ever live. This argument will always result in the same (relative) estimate, no matter how counterfactual: The 100th human could've made the same argument, given some stats knowledge, and concluded that since he was in the first 100 people, the chance that there will ever be more than 100,000 is exceedingly small. Turns out, we are billions by now. If you don't self-sample, this all goes away. If I reason not about my own position in human history, but about yours, the problem becomes much clearer: Of course you're not a person randomly chosen from all people ever born; you're a contemporary of mine, and therefore near me in history. Of course your existence doesn't give me any info about how many humans will ever live, because you're a biased sample.

        It's the doomsday argument and it is deeply flawed statistically, imo.

        You could argue that humanity is nearing extinction, but not using self-sampled statistics. Try the likelihood of a deadly, infectious disease, or nuclear war, or a collapse of the world economy due to climate change.

        Put differently: From the perspective of an objective outside observer, the fact that we're having this discussion in 2022 about it being unlikely that we're among the first few billion of a species that will grow to be trillions is completely irrelevant to the number of people that will ever live - because we're not randomly chosen from the set of all people that ever lived. We're chosen for this experiment because we're in 2022. People in y4k, part of a trillion-strong population, could predict that same doomsday, because how unlikely is it to be in the first trillion when there should eventually be quintillions of us?

        5 votes
        1. psi
          Link Parent
          This is why I added the weaselly "from a certain perspective" -- the doomsday argument proves to be controversial after all. :p

          This argument will always result in the same (relative) estimate, no matter how counterfactual: The 100th human could've made the same argument, given some stats knowledge, and concluded that since he was in the first 100 people, the chance that there will ever be more than 100,000 is exceedingly small.

          Yes, of course; but the point is that, given the question "How many people will ever live?", X percent of people will lie within the X percent confidence interval. Obviously the folk in the tails will be incorrect in self-locating, but the vast majority of people will not be in the tails. You are correct that at any given moment in time, you would paradoxically always arrive at the conclusion that we are near the middle. But although the vast majority of predictions at a given moment of time would be incorrect, the vast majority of predictors (i.e., people) must necessarily be correct. Asking "How long will humanity be around for?" is a different question from asking "How many people will ever live?", although we do suppose that knowing the latter will give us a rough estimate of the former.
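
          A minimal Python sketch of that calibration claim (a toy model, not from either commenter; the total population, confidence level, and number of sampled predictors below are arbitrary assumptions):

              import random

              # Toy model of the self-sampling ("doomsday") reasoning above.
              # Assume, hypothetically, that N_TOTAL people will ever live. Each
              # predictor knows only their own birth rank r and, treating
              # themselves as a uniform random draw from all N_TOTAL people,
              # claims with 95% confidence that the total will not exceed
              # r / 0.05 = 20 * r (i.e. that they are not among the earliest
              # 5% of births).

              N_TOTAL = 1_000_000      # hypothetical number of people who will ever live
              CONFIDENCE = 0.95
              PREDICTORS = 100_000     # sampled predictors, to keep the loop quick

              correct = 0
              for _ in range(PREDICTORS):
                  r = random.randint(1, N_TOTAL)        # this predictor's birth rank
                  upper_bound = r / (1 - CONFIDENCE)    # their 95% upper bound on N_TOTAL
                  if N_TOTAL <= upper_bound:
                      correct += 1

              # Roughly 0.95: only the earliest ~5% of people end up wrong, which
              # is the "X percent of people lie within the X percent confidence
              # interval" point -- while any particular early person (the 100th
              # human in the example above) is systematically wrong.
              print(f"fraction of predictors whose bound held: {correct / PREDICTORS:.3f}")

          Run as written, this prints a fraction close to 0.95: the procedure is calibrated across all predictors taken together, even though, as noted above, it tells any particular early observer essentially nothing.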

          But I (somewhat) agree with your larger point about biased samples. I don't think that it's sensible to talk about a counterfactual "me" that exists in the year 40000 AD. That person would have different genetics, memories, etc -- as far as I'm concerned, they're not me at all. However, my current existence also shouldn't be contingent on the future, so in that sense I shouldn't be a biased estimator for predicting how many people will ever live. (Although, now that I think about it, the probability that I exist now is contingent on the fact that at least N people existed before me, so it is biased in that direction.)