31 votes

Sam Bankman-Fried is a feature [of effective altruism], not a bug

45 comments

  1. [2]
    lou
    (edited)

    There is nothing inherently wrong about trying to be rational about doing good. Maybe the problem arises when groups and individuals insulate themselves in the delusion that they are part of an intellectual elite whose reasoning is inherently superior to that of everyone else.

    47 votes
    1. Adverbly

      There is probably something to be said in favor of a quick sanity check about using ethical philosophies to prescribe actions.

      The sanity check would be to never take a philosophy to the extreme or some edge case.

      I think such a sanity check makes sense because if you just think about the history of ethical philosophy... how many times have we come up with an ethical philosophy and thought to ourselves, "okay, now this is the one," only to come up with another philosophy twenty or so years later? The absolute arrogance of looking at the span of history across thousands of years, in which we have been iterating continuously, and then devoting your entire life to something under the assumption that you have actually got things solved this time... It's just crazy. Especially when you're operating at the fringe of the philosophy's capability to prescribe actions.

      There's absolutely nothing wrong with using ethical philosophies as tools (if anything, we should be doing it far more than we presently are as a general population), but we need to be realistic about the fact that we haven't solved ethics. You should strive to be good, and you should use ethical philosophies to inform your decisions, but I don't think you should bet the farm on any particular one.

      16 votes
  2. [2]
    ignorabimus

    The last paragraph sums it up pretty well:

    Those who still defend EA as a philosophy believe that SBF was an accident, an outlier, an anomaly. This could not be further from the truth. SBF is, in fact, the perfect poster child for a philosophy of excess in the name of moral superiority, a living embodiment of the dangerous hubris that can arise when ethical boundaries are blurred in the pursuit of a greater good. His actions were not deviations from the EA script; instead, they were its logical extension. In effect, SBF is a feature of Effective Altruism, not a bug. In the unbridled quest to maximise impact, EA inadvertently laid the groundwork for a culture where the end justifies any means, where moral trade-offs are not just tolerated but often encouraged. SBF's saga is a stark reminder of the perils inherent in a philosophy that loses sight of the ethical compass in its relentless pursuit of utilitarian outcomes.

    21 votes
    1. V17

      I don't think this is a great summary. I'd say that at the least the author skips a few steps in their reasoning, because at this moment the article seems to me to say something to the tune of "SBF perfectly embodies the things that I decided to project onto Effective Altruism" more than what it claims to say.

      It's all based on relatively vague claims that are presented as facts ("unjustifiable in pursuit of an ill-defined 'greater good'", "Its message is unsettling: any means can be justified, so long as the end is noble enough", etc.), but they are in fact the author's opinions, which would be fine if they came with some evidence and reasoning to back them up, but they don't.

      It is possible that the claims are right and the categorization of SBF is also right, but I don't think this article gives that information. It could easily be rewritten to make the same claims about the culture of MIT without too many changes because of the vagueness.

      Imo the option that SBF is quite simply just a psychopath who used EA as cover because it has noble goals and already hosts some weirdos is just as likely and this article didn't exactly convince me otherwise.

      29 votes
  3. [26]
    Minori

    Effective altruism is sometimes used to justify abhorrent behavior, but some of its activities, like determining where money can do the most good, make sense to me. Sure, it's a cold utilitarian calculus, but "where should we donate our money to maximize the benefit?" is an entirely reasonable question. I could donate my $5 to the homeless guy on the street, or I could help buy malaria nets.

    18 votes
    1. [21]
      Grayscail

      I see the value of trying to figure out how to use resources effectively, but to use your analogy, donating $5 to buy malaria nets doesn't do anything to help the homeless guy, and it's not like the homeless guy's problems don't matter, even if they are probably less significant than the devastating effects of malaria as a whole. But if everyone decides to use their money on the most important thing, all the less-than-most-important things stop mattering to anyone.

      On an individual basis it might make sense to try and do that kind of analysis, but on the whole it would be good to spread resources across all types of efforts because in niche circumstances any option might end up being the best, or maybe even only, option.

      20 votes
      1. wervenyt
        (edited)

        This eventuality is primarily what's led to the secrecy and cultlike structures around EA, in my opinion. There's a bunch of smaller social politeness reasons too, but the crowd that "developed" the subculture is very rooted in the Silicon Valley mentality of being a bunch of hackers who know better than the fools handing them free money, and so this idea flourished behind curtains with the fertilizer of "we're doing something that is far from optimal when taken as virtue in general, but only people as smart as us would figure this out anyway." It exacerbates all the issues with backfed worldviews and overconfidence in predictions, too.

        The way that this crowd has been covered in popular channels feels slightly ridiculous, like they're a conspiracy trying to hijack humanity. That's understandable, but when they're mostly a bunch of guys who got into a filter bubble in 2013 and just keep hyping up their own ideas to each other, the tone comes across as overly smug, like their initial goal wasn't literally to do good using their unjustifiable salaries with science and reason. Not to defend them on any latitudes beyond those, it's just funny to me.

        21 votes
      2. V17

        The thing is, I don't think we're in any danger of everyone donating precisely in tune with effective altruism. Personally I don't have the money to spare, but if I did, I'd probably selfishly donate either to something that affects my local community and therefore positively affects my own life, or to research into my currently incurable chronic illness. But I still think that the effort to find out how effective it is to donate to various causes is incredibly useful and can help people at least decide not to donate to bullshit, of which there's a lot in my experience.

        9 votes
      3. [18]
        Minori

        Effective altruism usually focuses on how wealthy people can do the most good under a capitalist system that rewards greed. In some ways it's like a religious organization. Those with the least stand to benefit the most from those of us with means.

        I don't think any effective altruist would say we shouldn't help our neighbor. They'd just ask why you're prioritizing your neighbor over dozens of people on a Caribbean island that you've never met. Countries usually spend the vast majority of their resources at home, so in some ways effective altruists are actually targeting niche circumstances that don't get much focus.

        When was the last time you read a front page article about malaria nets in Africa or universal dengue fever vaccines? How much more often do you see articles about "devastating homelessness in our communities"?

        5 votes
        1. [8]
          DefinitelyNotAFae

          Ironically, the last malaria article I read was about how we've evolutionarily selected for mosquitos that bite during the day and are now seeing a rise in cases again. (I'm remaining hopeful that mRNA vaccines for malaria deliver the results they're hoping for.)

          The problem to me is that this would maybe make sense if all the money were in one big pot being distributed rationally. But it isn't; it's in millions of little pots, none of us are free from bias, and personally I don't trust people rich enough to finagle tax breaks out of everything.

          7 votes
          1. [5]
            skybrian

            I don't think you need "one big pot." Rather, the ideal case for effective altruism is when one charity or wealthy individual is figuring out how to spend a fixed budget for charitable giving, and they don't have other preferences about where the money goes, as long as it's not wasted. GiveWell has an easy answer: they can choose for you, or alternatively, you can give to the causes they recommend.

            If you're not controlling vast funds, you don't have to think about "what would happen if everyone did EA?" Everyone isn't going to do EA. Other charities will still get donations.

            You also don't have to stop donating to other causes. Just think of it as coming out of a different budget. It's okay to spend on things that aren't charity at all, so it should also be okay to have a budget for charities that maybe aren't the most effective, but you have a connection to them.

            The "earning to give" thing is a bit more problematic. I think for someone already settled on and happy with a high-earning career, it may make sense to keep doing what you're good at and donate a lot. But it's unclear to me that EA thinking makes sense as career advice.

            5 votes
            1. [4]
              DefinitelyNotAFae

              I'd argue that if you don't make large money, you as an individual don't have to think about EA at all.
              (And if you do make large money, apparently EA puts you on the track of AI and saving future lives, not current ones.)

              I guess all I'm saying is it makes sense to have a strategy with large amounts of charity money. I don't think EA is that philosophy, even if trying to make the most impact for your money makes sense.

              1 vote
              1. [3]
                skybrian

                I don’t know how much you mean by “large money,” but the way I think about it is that it’s sort of like getting into investing. Some people start investing pretty early with small amounts, but they can take it seriously. They can care about things like risk and how much they’ll make, much like larger investors do. It’s possible to over-analyze things, but you can also keep it simple and boring.

                Similarly, some people start getting into charitable giving earlier than I did. Of course it doesn’t make sense if you’re broke; you need to take care of yourself first.

                I learned about GiveWell when I started working at Google and I went to a tech talk they gave. I thought their framework for charitable giving made a lot of sense. Google was matching charitable donations and I figured I should take advantage of it.

                The animal welfare and existential risk stuff came later. Some people have funny beliefs, but I’m broadly accepting. People don’t have to agree on which charities they support. I don’t consider it to be a core part of EA, maybe because of my history with it.

                To run with the investing analogy a bit more, it’s like how there are investors in things I don’t invest in, but they’re still investors. People can disagree. You can ignore things you consider to be fads.

                I’m not sure there is a core philosophy other than “maybe we should try to systematically analyze which charities are most effective. How can we do that?” The analogy between investing and charitable giving seems pretty fundamental, too. The GiveWell folks came from a hedge fund where comparing different investments is a natural thing to do and many financial professionals are doing it, and they were surprised that nobody was doing it seriously for charities, or at least not in public. (Charitable foundations do evaluations, but they usually don’t publish what they learned.)

                Investing and charitable giving have different goals but they are both about funding other people’s organizations and hoping it will have good results. Charities are much harder to evaluate though, because there are philosophical issues: what should you measure and is that what really matters?

                4 votes
                1. DefinitelyNotAFae

                  I am thinking multimillionaires and billionaires when I say big money.

                  I've used Charity Navigator since 2001 (not sure what GiveWell does that they don't, but I haven't used GW) and have found it a helpful tool. I don't disagree with the general intent of making an effective impact with your donations. I stated in another comment that I have feelings about the association of EA with its current vocal supporters, because to me it feels like a removal of empathy and a complete ignorance of the underlying biases in the system. (How many possible future lives can even be saved if the present ones die?) And I loathe the insistence on pure rationality over empathy, "zero feelings, only facts"*. I also am not a fan of investment culture, so to speak, so the comparison isn't a positive one even if I intellectually understand your point.

                  *Facts we won't examine for bias.

                  I fully acknowledge that this is coming fifty percent from vibes and the other fifty probably from the "worst" actors in the system. I'm not saying having a philanthropic philosophy or trying to get the most bang for your buck is bad.

                  4 votes
                2. vektor

                  what should you measure and is that what really matters?

                  Also, another common issue is that you don't even have a way of measuring things. Like, if you decide to solve poverty for the world's poorest, and you have different projects attacking that problem: the costs involved are usually simple. But how do you measure how effective each project was after completion? Just measure the change in project recipients' financial status? How do you account for confounders? Do you intentionally deprive a control group of project resources, just to have a control? And keep in mind we're in the simple case, where the desired end goal can in principle be measured in terms of dollars. How do you measure population health? Or how do you assess the ecological condition of the planet?

                  Of course, that doesn't excuse the defeatist attitude of not even bothering. Do the math as best as you can, but be aware of the error bars you introduce.

                  4 votes
          2. [2]
            Minori

            I mean you don't need to be cheating the system to want to donate money. I live comfortably, and I want to help people with any excess money I have. No method of distributing resources is perfect; I just want to do good in the world.

            1 vote
            1. DefinitelyNotAFae

              No you don't! I donate money to a variety of things.

              But the idea of a perfectly rational, min-maxed charity system - which seems to be what Effective Altruism has turned into, rather than being a philosophy for individuals - is not the reality of it. And it is, IMO, likely more about effective tax breaks than actually doing good when it comes to billionaires and the like. And when there are millions of us donating bits and bobs here and there, I think it is equally reasonable to donate to local community charities, things you're passionate about, and to, like, NPR or a scholarship fund, and whatever.

              Maybe not, maybe I'm wrong. But I can only describe my distaste for the Effective Altruism in practice as reacting to the "tech bro min-maxed" stink on it.

              7 votes
        2. [8]
          PuddleOfKittens

          Those with the least stand to benefit the most from those of us with means.

          No, those that are easiest to help stand to benefit the most. If malaria nets cost $10k each then we wouldn't be talking about it.

          Or, to take it to an extreme...

          Suppose you're an effective altruist in the 1930s, and suppose you have a lucrative job at IBM; should you participate in well-paid contracts with the Nazi party? They're paying you enough for a lot of malaria nets, after all. And sure, they're killing some Jews here and there, but why should you spend a week or two saving one Jew when you could instead shut up and invest your wages into building up hospitals in developing countries?

          If everyone took this approach, the Holocaust would have run its course uninterrupted, because stopping it wasn't a low-hanging fruit. So EA doesn't pass a Kantian sniff test. Also, that's horrifying.

          Effective altruism also abstracts away the idea of contributing to society from work, which is really weird when you think about it - it encourages grift or lowering your moral standards in exchange for money, because making money and spending it on the right things is now moral. It discourages taking expensive stands on moral principle, such as resigning from a profitable job at a company that's starting to do shady things.

          5 votes
          1. [4]
            skybrian

            I find it hard to believe that anyone in the EA movement would argue in favor of working for Nazis. Do you have a source, or is it just something you made up?

            9 votes
            1. Macil
              (edited)

              Yeah, so much EA criticism along these lines in the thread and article itself are baffling. We have one high-profile example of a criminal donating to EA and now everyone fully assumes that's the typical EA participant and thought process.

              7 votes
            2. [2]
              PuddleOfKittens

              You misunderstand me. I'm not claiming "they would explicitly argue in favour of it"; I'm saying it's a logical extension of their underlying philosophy - given the right incentives, their ideology justifies working with Nazis. I'm just taking their existing framework and dropping in some novel inputs (or not-so-novel inputs, since I'm pretty sure I'm just poorly repeating an argument by PhilosophyTube).

              From experience, I know that in practice they'll ignore that implication (or rationalize an exception), because they're not completely bananas. I've discussed slavery with libertarians who are loose with "your body is your property" but also aren't completely bananas - this is the same conversation, and I know how it goes. They tend to make an obviously wrong argument against the conclusion, because the people who are absolutely against it just want to be agreed with, while the people who are far enough down the lunatic rabbithole will see through the BS argument and be prompted to see the conclusion as logically consistent with the premise. It's a great dogwhistle trick.

              This whole situation is dangerous because they'll reject the conclusion simply because they don't like it - ignoring the logic rather than disagreeing from an ideological foundation - which means they'll drop that objection the moment the feelings-based justification falters. Which tends to happen if the rabbithole gets deep enough; there are currently libertarians who embrace the conclusion and are actively okay with the idea of selling yourself into slavery.

              Ditto to /u/Macil, who's saying the same thing as you, AFAICT. I'm treating EA as a real, consistent code of ethics that can be analyzed or critiqued; if your response is that it's not a real code of ethics and that it's just a bunch of opinions they're making up as they go along, then okay, but wow, you are dunking on them harder than I ever could.

              2 votes
              1. skybrian

                I'm treating EA as a real, consistent code of ethics that can be analyzed or critiqued

                Yeah, no, it’s just not. It’s more like a broad framework for thinking about a problem.

                I think this is a common tactic used by critics of EA. First, make unwarranted assumptions about what the “core philosophy” of EA is, based on what someone said. This is taking the beliefs of a minority and attributing to the whole. Then claim that something terrible logically follows.

                From my reading, I don’t see that kind of agreement on a core philosophy. See my other post for my take on a rough framework that I think is core to the EA approach, but other people may disagree. You don’t even need to agree with the most famous people in the EA movement.

                The claim that EA people have a core philosophy is sort of like saying investors have a core philosophy. They (we) don’t necessarily agree on much other than taking evaluation and funding of charitable organizations seriously.

                Sure, there are fads, and maybe some of them might be harmful? Maybe “earning to give” is a way some young people fool themselves. But a lot of young people choose careers for reasons that have nothing to do with EA. It’s not clear that EA types do any better or worse?

                7 votes
          2. vektor

            Effective altruism also abstracts away the idea of contributing to society from work, which is really weird when you think about it

            I'd strongly disagree. Take a quick look at the org "80000 hours". They're effective altruist as fuck, they just don't do it by "giving well", but by "working well". And sure, the usual critiques about what exactly they prioritize are there (existential AI risks galore, groan), but it's clearly in line with making your altruism effective. Whether working amorally to earn and then giving most away, or working morally and having nothing left to give is more effective is, as far as I know, an unanswered question that is probably too multi-faceted to arrive at a simple answer. But that question too is ultimately accessible to rational inquiry and thus which path to choose - work well or give well - is "decidable" within EA.

            4 votes
          3. TemulentTeatotaler
            (edited)

            I think the key flaw of utilitarianism/EA is that it tricks you into believing you know something unknowable.

            The top experts in the world have routinely performed no better than 5 year olds or chimps in predictions of chaotic systems, but philosophy often skips these sorts of limitations to its application.

            From an (inaccessible) omniscient point of view you can evaluate policies as if they were choices in game theory, and you may come to some very unintuitive conclusions. That dissonance is something people have to resolve.

            For instance, you may have:

            • Policy A - resign immediately
            • Policy B - sabotage your work
            • Policy C - whistle blow
            • Policy D - climb the corporate ladder

            If you could know the actual or expected outcome of those policies you may evaluate based on the outcome:

            • Policy A - Your company immediately replaces you with someone equally talented. $0MM, 0 Jews saved.
            • Policy B - Depending on how aggressively you will be sacked. $1.2MM, 5 Jews saved
            • Policy C - No apparatus exists to successfully take down IBM. Depending on when you whistleblew you land between A/B in outcome.
            • Policy D - You learn the summoning ritual for Kissinger and have 4 sets of components to try to slay the fel beast. $50MM, 0 Jews saved, 10k Cambodians saved

            In this glib and ridiculously oversimplified/non-representative thought experiment: is it morally correct to quit your job at IBM? Why shouldn't you sandbag, if nothing else? You would be indirectly killing Jews if you didn't, but by staying you would be benefiting as part of an institution that was complicit in other deaths.

            It's a hard question. Some people don't care about the outcome, following something like deontological/virtue ethics approaches, and would resign even if they contributed very slightly to the death of a Jewish person while forgoing the chance to save tens of thousands of Africans, or future Cambodians from similarly heinous events, by growing their influence. Some make a distinction between taking an active role, à la flipping a switch on a trolley, instead of stating their preferred outcome between 5 dead and 1 dead.

            Personal feelings can be factored in as well. Peter Singer can say he would be an unstable/unproductive emotional wreck if he spent the money he used to take care of his aging mother on mosquito nets. EA doesn't have to advocate working 100% of your waking hours or not loving/prioritizing your children over strangers because it's impossible. You can consider the suffering or corrupting influence of working for IBM. It isn't a gotcha for a utilitarian to consider the human condition (not since JSM).

            But we aren't omniscient. An analytical consequentialist approach has value, but I think it obviously needs to be paired with heuristics for being a good person with imperfect information.

            Kantian sniff test

            Fun to see the categorical imperative mentioned instead of the typical objection to Kantian ethics, where he'd call for telling the truth to Nazis asking about Jews hiding in your basement. All my homies hate Kant

            3 votes
          4. Minori

            You're right that some people are easier to help than others. The part you quoted was referring to the fact that, in relative terms, an extremely poor person gains significantly more from $5 than a homeless person in many first-world countries does. Both deserve aid.

            The Kantian sniff test is fair, but I'm not proposing EA as some universal system of ethics or anything. I've only claimed that an individual determining how their money can do the most good and donating excess money to such causes is a just pursuit. Sure it won't fix everything, but it seems like a reasonable framework for anyone that has wealth and wants to improve the world. EA can be combined with advocating for systematic change as well.

            My original comment addresses the fact that I don't think Effective Altruism in any way justifies immoral behavior. I'm fully aware that some people use it as a cover for their ill-gotten gains and iffy income.

            1 vote
        3. vektor
          Link Parent
          I mean, this whole thing of every EA adherent focusing on the same thing is kind of solved for in good methodologies. As a sole actor with mortal-human amounts of disposable income you probably...

          I mean, this whole thing of every EA adherent focusing on the same thing is kind of solved for in good methodologies. As a sole actor with mortal-human amounts of disposable income, you probably can't justify analysing in depth how much money each potential project is already getting and how far it is into diminishing returns. But if you've got a lot of money, or are doing the cost/benefit research for lots of people, it's absolutely within the EA philosophy, and methodologically feasible, to ask "how much money is this project getting and how much does it need?" and then to recommend it or not on that basis. More properly, you get a curve where cost/benefit tapers off. EAs aren't blind to things like diminishing returns, and it's kind of silly to assume they are. If a project is already overfunded, or just hasn't had time to deploy the developed solution yet, it's no problem to work out a methodology that reflects that.

          If you always give to the most worthy cause according to whatever cost function you chose, as you solve the most dire problem, eventually what used to be the 2nd biggest problem will have its turn, then the 3rd biggest problem.
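          The "most worthy cause first" loop described above, with diminishing returns built in, can be sketched roughly like this (the cause names, benefit curves, and every number are hypothetical illustrations, not real cost-effectiveness data):

```python
import math

# Hypothetical diminishing-returns curves: benefit(f) = scale * ln(1 + f),
# so each extra dollar given to a cause is worth a bit less than the last.
causes = {"malaria nets": 10.0, "deworming": 8.0, "vitamin A": 6.0}
funding = {name: 0.0 for name in causes}

def marginal_benefit(name, step=1.0):
    """Extra benefit from giving `name` one more unit of funding."""
    f, scale = funding[name], causes[name]
    return scale * (math.log(1 + f + step) - math.log(1 + f))

budget, step = 100.0, 1.0
while budget > 0:
    best = max(causes, key=marginal_benefit)  # the most dire problem *right now*
    funding[best] += step
    budget -= step

print(funding)
```

          Because the curves taper, the top cause stops absorbing money once its marginal benefit drops below the runner-up's, so the 2nd and 3rd causes get their turn without anyone hard-coding a cap.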

          3 votes
    2. [4]
      moocow1452
      Link Parent
      That's when you start to go into longtermism, where there's what your money can do for the most good right now (malaria nets?) vs what your money can do for the most good in the future. (Invest in...

      That's when you start to go into longtermism, where there's what your money can do for the most good right now (malaria nets?) vs what your money can do for the most good in the future (invest in people working towards friendly AI?). The problem with that in the short term is that the people loudest about all the money going towards AI have a financial incentive in all the money going towards AI, so EA is a way to launder that bias, because it's actually good "fr for real" that they are being paid from the tithe, so to speak.

      9 votes
      1. [3]
        nukeman
        Link Parent
        I think many Effective Altruism (TM) folks gravitated toward AI stuff because they were in the computing space. They have just enough knowledge that they know more than the general public, but...

        I think many Effective Altruism (TM) folks gravitated toward AI stuff because they were in the computing space. They have just enough knowledge that they know more than the general public, but most weren’t full-time AI researchers. This led them to assume that:

        • A hostile AGI was just around the corner, rather than being potentially hundreds of years out, and:
        • Planetary colonization (again, likely in the next century or so) would result in significantly increased fertility rates; thus:
        • A hostile AGI could threaten to enslave tens to hundreds of billions of people within the next century or two.

        Inputs and assumptions matter a lot in calculations and modeling. If we rejigger those items above (e.g., hostile AGI won't be possible until the 23rd century, and fertility rates will stabilize around the replacement rate sometime in the mid-22nd century), that paints a very different picture in terms of quality-adjusted life years (QALYs) for bed nets versus AI research.
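        To make that sensitivity concrete, here's a toy expected-value calculation in the spirit of the paragraph above. Every number in it is a made-up assumption chosen for illustration, not a real estimate:

```python
def expected_qalys_per_dollar(p_problem, people_at_risk, qalys_each, risk_reduced_per_dollar):
    # Expected QALYs per dollar = chance the threat is real * people affected
    # * QALYs per person * fraction of the risk one dollar removes.
    return p_problem * people_at_risk * qalys_each * risk_reduced_per_dollar

# Bed nets: near-certain, near-term benefit (roughly "$40 buys one QALY").
bed_nets = expected_qalys_per_dollar(1.0, 1, 0.025, 1.0)

# "AGI soon, colonized planets, 100 billion people at risk" assumptions:
ai_optimistic_inputs = expected_qalys_per_dollar(0.1, 100e9, 50, 1e-12)

# "AGI centuries out, population stabilizes near 10 billion" assumptions:
ai_conservative_inputs = expected_qalys_per_dollar(0.001, 10e9, 50, 1e-12)

print(bed_nets, ai_optimistic_inputs, ai_conservative_inputs)
```

        Same formula, same charities; only the inputs change, and the ranking of bed nets versus AI research flips.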

        10 votes
        1. [2]
          Invisiblemann
          Link Parent
          In the same vein of "techbro EAs gravitate toward X because of their particular assumption matrix", I find it very disheartening that a lot of weight is given towards "billions of lives yet...

          In the same vein of "techbro EAs gravitate toward X because of their particular assumption matrix", I find it very disheartening that a lot of weight is given towards "billions of lives yet unalive" versus "millions of lives currently alive."

            E.g., that EA assumptions put equal or more weight on future lives, to the extreme detriment of those currently living today. Like they have written off life right now...

          5 votes
          1. wervenyt
            Link Parent
            The actual reasoning is a little less depressing. Basically, since the core idea is to do the most with the least, irrespective of intimacy (time or otherwise), then the easiest way to effect...

            The actual reasoning is a little less depressing. Basically, since the core idea is to do the most with the least, irrespective of intimacy (time or otherwise), then the easiest way to effect change will always appear to be to plan for millennia ahead. Grand Canyon, Colorado River, type stuff. If a butterfly flapped its wings a year ago and that led to a hurricane, just think of what a butterfly flapping its wings a thousand years ago did!

            Of course, like all their other absurd conclusions, it comes down to overconfidence in their models. If you didn't think you could have predicted that hurricane if you were just the butterfly, then you wouldn't think it was the actual impetus.

            4 votes
  4. [6]
    BitsMcBytes
    Link
    I can empathize with the emotional tie between Effective Altruism (EA) and SBF in this essay, yet I find the logical linkage lacking. Especially when it comes to the aspect of SBF allowing Alameda...

    I can empathize with the emotional tie between Effective Altruism (EA) and SBF in this essay, yet I find the logical linkage lacking, especially when it comes to treating SBF allowing Alameda to use FTX customer funds to offset their trading losses as a representation of EA. EA embodies a variety of goals, none of which entail covering an ex-gf's trading firm's $8B losses.

    One aspect of EA focuses on providing medical aid to impoverished individuals suffering from neglected tropical diseases, given the lack of support from markets and governments. This is exemplified by organizations like GiveWell (https://www.givewell.org/), which is a sentiment I resonate with.

    Conversely, another facet of EA aims at utilizing influence to persuade governments on the immediate enactment of AI alignment and regulation policies. This is a side of EA I find myself disagreeing with to a large extent, but I digress.

    Had we seen SBF engaging in fraudulent utilization of customer funds with an EA-driven motive, like the ones above, I think this essay would click a lot more for me. But AFAIK that wasn't really the case: the fraudulent aspect of his business operations (intentional or not) was mostly deploying customer funds just to keep Alameda alive, which doesn't seem EA-related. And we've seen in his DMs that he did not believe in much of the ethical agendas he talked about (likely including climate change and ESG as well); he was just saying "all the right shibboleths" to get people to like him, which feels more like a feature of sociopathy than a bug in EA to me.

    14 votes
    1. [5]
      lou
      (edited )
      Link Parent
      I believe the argument is worthy of a more charitable reading, just because there is clearly something there that is intuitively sound and compelling. The more compelling idea seems to be that...

      I believe the argument is worthy of a more charitable reading, just because there is clearly something there that is intuitively sound and compelling.

      The more compelling idea seems to be that Effective Altruism provided the intellectual grounds for a dissociation between ethics and empathy, which fueled, or at the very least gave a semblance of legitimacy to, cold-hearted behavior that goes against all that most righteous people consider good, virtuous, and lawful.

      The notion that it is inherently dangerous to dissociate ethics from empathy is, to me, a very strong one.

      10 votes
      1. [4]
        BitsMcBytes
        (edited )
        Link Parent
        I find myself interested in searching for intellectual honesty here. Were SBF's actions actually the embodiment of EA? Or, do we just find him to be a scapegoat at the intersection of a broad...

        I find myself interested in searching for intellectual honesty here. Were SBF's actions actually the embodiment of EA? Or, do we just find him to be a scapegoat at the intersection of a broad class of ideas that are... fun to dunk on.

        Don't get me wrong, I think EA has flaws (in my opinion, their step on the gas with AI regulation is a flaw of the movement). But I don't think EA is necessarily saying that if you steal money to do good, then it's worth it. (And if it is, I concede that it's a bad idea, I just couldn't pick that up from the article! (Though strangely, this is almost the tale of Robin Hood, except the caveat is that you're only supposed to steal from the rich.))

        9 votes
        1. [3]
          lou
          Link Parent
          I also do not believe EA literally tells people it's okay to steal money under any circumstances.

          I also do not believe EA literally tells people it's okay to steal money under any circumstances.

          6 votes
          1. [2]
            moocow1452
            Link Parent
            I'm not sure there is an EA pope who can make a judgement on that one way or another. Assuming the checks still clear, and all you have to do to stop the robots is fudge some math no one will ever...

            I'm not sure there is an EA pope who can make a judgement on that one way or another. Assuming the checks still clear, and all you have to do to stop the robots is fudge some math no one will ever have to know about, that's a net good, right?

            1 vote
            1. lou
              (edited )
              Link Parent
              It is true that there is not an EA pope checking anything. I'm not sure why you feel the need to clarify that, but I do agree it is true ¯\_(ツ)_/¯

              It is true that there is not an EA pope checking anything. I'm not sure why you feel the need to clarify that, but I do agree it is true ¯\_(ツ)_/¯

              2 votes
  5. Atvelonis
    Link
    I may have a different understanding of effective altruism than the author of this article. I hear and appreciate Westenberg's critique of unconstrained utilitarianism, which is not new. Bentham...

    I may have a different understanding of effective altruism than the author of this article. I hear and appreciate Westenberg's critique of unconstrained utilitarianism, which is not new. Bentham and Mill's contemporaries pointed out in various philosophical dilemmas the flaws of "sacrificing" virtuousness in a particular action for the sake of a virtuous consequence. But the existence of this extreme does not strike me as contradictory to most of the core principles of charitable or strategic giving in the effective altruist doctrine. For an equity-minded individual, it is worthwhile to think strategically about where to allocate your resources when solving a particular social problem, because some allocations have more impact than others. To me, that has always been the essential principle of the movement. As far as I'm concerned, this does not preclude recognizing when a solution/implementation is more or less ethical than another and using that to inform your decisions.

    I feel that the author does not define in clear terms what constitutes a failure to address "the undercurrents of systemic issues" and therefore does not convincingly argue that this entire branch of philosophy is inherently incapable of it. In my opinion the most "effective" way to realize social justice is by addressing systemic issues, and I think many people interested in altruistic lifestyles would agree with that. As one person, you cannot typically single-handedly reorder an entrenched system, but you can reallocate any inequitably distributed resources in your possession, which is why Singer and others support charitable giving. I do not see any reason why the position of "I would like to do good with the resources I can access" is incompatible with doing so in a way that addresses systemic issues. Here, I think Westenberg focuses too heavily on tangible resources like capital at the expense of comparatively intangible ones, like an individual's altruistic use of their perspective, subjective experiences, education, skills, and more—including their choice to work for an ethically responsible organization instead of an irresponsible one, to engage productively in community outreach and political reformation, or simply to be a role model. There are multiple interpretations of effective altruism, and the one that I am most keen on would prioritize a holistically socially beneficial lifestyle over the examples of "sidestepping moral quandaries" Westenberg provides, which she takes to be the entirety of the EA movement.

    With that said, the logical potential for compatibility between systemic activism and some form of EA does not automatically correspond to its realization. The EA movement as followed by people like Bankman-Fried clearly does not follow that paradigm. His belief, for example, that "regulators make everything worse" in regard to federal oversight on cryptocurrency or in general toward business taxation is a consequentially ridiculous claim which I believe speaks either to malicious intent or to an interior moral inconsistency rooted in narcissism. If this is the way the EA movement as currently framed is actually understood, then indeed its principles need to be re-evaluated. I would argue, for example, that it should be impossible for any individual to acquire so many resources that they can personally wield as much influence as Bankman-Fried. Likewise I would argue that a morally fraught action toward a positive, broad goal (systemic or otherwise) affects other action toward that goal; i.e. doing bad to do some human-subjective definition of ultimate good makes it harder for others to support that human-subjective definition of ultimate good, even by moral means; and so that initial action is inherently not effective in achieving good even if it is altruistic. While I don't believe these statements to be incompatible with EA necessarily, they are certainly not explicit in the philosophy. Westenberg states powerfully:

    SBF's saga is a stark reminder of the perils inherent in a philosophy that loses sight of the ethical compass in its relentless pursuit of utilitarian outcomes.

    When I was a college student, several friends (upon reflection, all STEM majors, mostly computer scientists) were involved with the "official" (?) EA organization, attending conferences and such. I thought it was a little ironic to fly across the ocean to attend a conference specifically advocating, among other things, personal initiative to address climate change. I also thought it was a little strange for one friend to work for a company that I would consider relatively socially harmful (he described his role as "morally neutral") exclusively in order to acquire money to donate charitably. I liked him very much and didn't really hold any of this against him—as we often joked, "go get that bread"—and he did make an effort to live in a generally thoughtful and sustainable way. He was a vegan (climate/ethical reasons), he was a wonderfully kind person, and he was vocal about social injustice. But it did strike me as odd where precisely he drew the line at individual responsibility: no animal products, but will develop software for a tax-dodging, economy-destroying, fraud-committing, and bribery-riddled corporation. I suppose the abstraction of the modern world allowed him to feel comfortable making those decisions.

    8 votes
  6. drannex
    Link
    I've worked on a project or two in the past with Joan; they have always had rather succinct insights and thoughts on this type of thing, and this is not an exception to that. It's a great short read.

    I've worked on a project or two in the past with Joan; they have always had rather succinct insights and thoughts on this type of thing, and this is not an exception to that. It's a great short read.

    4 votes
  7. [2]
    moocow1452
    Link
    I've heard it best described that if SBF was just greedy, he would have had a pretty good thing going, since FTX was liquid, well endorsed and it's hard to lose money if you run a bank. But he was also...

    I've heard it best described that if SBF was just greedy, he would have had a pretty good thing going, since FTX was liquid and well endorsed, and it's hard to lose money if you run a bank. But he was also a true believer who bet with company funds, and the surefire way that a rich person gets thrown into prison is messing with a richer guy's money.

    3 votes
    1. BitsMcBytes
      Link Parent
      Two alternate histories could have happened which would have left SBF comfy: If he didn't give Alameda unlimited access to FTX customer funds, and just ran an exchange which, tbf, was a much...

      Two alternate histories could have happened which would have left SBF comfy:

      1. If he didn't give Alameda unlimited access to FTX customer funds, and just ran an exchange, which, tbf, was much better in terms of fees and UX than any other exchange that existed at the time, and was quick to build new features (though we now know they were quick because they took major accounting shortcuts).
      2. (And this is the scary one) If Alameda was actually good at trading and operated at a profit instead of having a $8B loss on their balance sheet. Had Alameda been profitable and properly risk managed, this fraudulent use of customer funds could have gone on much longer with none the wiser... and had FTX actually grown to say... acquire Goldman Sachs or some other large bank... this part of their history might have never been known.

      6 votes
  8. skybrian
    Link
    If anyone is so vulnerable to a particular philosophy that they lose all concern or respect for people to the point of lying all the time and starting an enormous fraud, maybe it was really...

    If anyone is so vulnerable to a particular philosophy that they lose all concern or respect for people to the point of lying all the time and starting an enormous fraud, maybe it was really something wrong with them to begin with?

    Most EA types don't do things like SBF did, whatever philosophical ideas they may entertain. Most people don't behave as if the philosophical ideas they entertain are really true. Here's how David Hume put it once:

    Whether your scepticism be as absolute and sincere as you pretend, we shall learn by and by, when the company breaks up: we shall then see, whether you go out at the door or the window; and whether you really doubt if your body has gravity, or can be injured by its fall; according to popular opinion, derived from our fallacious senses, and more fallacious experience.

    On the other hand, I'm reminded of Bret Devereaux's take on universities:

    Universities are laboratories for the study of dangerous ideas, the same way a chemistry lab may study dangerous chemicals.

    This is why, for instance, universities can't be intellectual safe spaces - they are, by design, intellectual dangerous spaces.

    Just like chemical labs (should) have safety measures to handle dangerous chemicals, the university is designed to handle dangerous ideas.

    It emphasizes free speech and non-violence, for instance, in an educational setting cordoned off from real power.

    It seems like most of the time we don't act as if this were true? We don't treat the ideas we discuss as dangerous. It's kind of wild to think that a philosophy can actually be dangerous.

    But then again, it's pretty obvious that sometimes deeply-held religions can be dangerous, and that does seem to extend to ideologies. Perhaps the thing that keeps such ideas from being dangerous in a university setting is that people don't believe in them to that extent?

    3 votes
  9. [3]
    Eji1700
    Link
    As with other vague philosophy discussions, I think this is a mostly pointless way to engage with the topic. Every philosophical system produces bad actors to some extent, or even encourages them...

    As with other vague philosophy discussions, I think this is a mostly pointless way to engage with the topic. Every philosophical system produces bad actors to some extent, or even encourages them depending on how you engage with it.

    The fact that EA is Ayn Rand with slightly more sense behind it of course attracts people with undesirable traits, but that attraction doesn't really determine its usefulness. In fact, currently, many of the most successful philosophies in real world environments are the ones that encourage bad actors to still cause better outcomes, rather than hoping they just won't exist or will somehow be caught.

    That said, I think EA is mostly bs as well, but I feel that way about just about all philosophies? They are, at best, a framework for discussion that will never survive contact with reality.

    2 votes
    1. [2]
      skybrian
      Link Parent
      I don't think EA and Ayn Rand have anything in common other than appealing to college students.

      I don't think EA and Ayn Rand have anything in common other than appealing to college students.

      2 votes
      1. Eji1700
        Link Parent
        And the affluent, which was mostly my point. There's a lot of people who approach philosophy as "everything i've ever done is right, so what model can I bend to justify that", and both Rand and EA...

        And the affluent, which was mostly my point. There's a lot of people who approach philosophy as "everything I've ever done is right, so what model can I bend to justify that?", and both Rand and EA are popular with those kinds of people.

        For the record I believe SBF is exactly this kind of personality, and I personally feel that's exactly what Rand did as well, just making her own "philosophy" rather than trying to bend one to her desires.

        1 vote