8 votes

Topic deleted by author

12 comments

  1. [3]
    tesseractcat

    Personally I disagree with longtermist ideology, mainly because there's not a great way to predict the far future, and most things that make sense in the "short term" (our lifetimes) probably also make sense in the "long term". But honestly the article almost reads like the author agrees with longtermist ideas... The first 20% or so is acknowledging how dangerous existential risks are. Then it ends with this sentence:

    But the crucial fact that longtermists miss is that technology is far more likely to cause our extinction before this distant future event than to save us from it.

    Uh, if there's anything that longtermists aren't missing, it's doomsday predictions.

    Also, the article touches on something that I see in criticisms like this and have always found kind of wishy-washy: it critiques the idealization of technology, the romanticization of scientific progress, and transhumanism, but it doesn't decide what the 'correct' level is. What I mean by that is, the author falls into a sort of status-quo bias. If they want to criticize technology, when should we stop, and why is it coincidentally right now (historically), why not the 1500s or something?

    12 votes
    1. [2]
      Omnicrola

      What I mean by that is, the author falls into a sort of status-quo bias. If they want to criticize technology, when should we stop, and why is it coincidentally right now (historically), why not the 1500s or something?

      Not trying to speak for the author, but I think my answer to this question would be one of practicality. We have already opened a series of Pandora's boxes, and they cannot be closed again, so reverting to an earlier era isn't really an option. It may be more practical (though certainly not easy) to instead decide as a society to dramatically slow or halt technological progress in order to (referencing the Carl Sagan quote from the article) allow our "wisdom" to catch up. Maybe part of that new wisdom involves selectively discarding certain technologies, who knows.

      ......fuck did I just make an argument for (lower case) conservatism?

      1 vote
      1. NoblePath

        I mean, this is kind of agrarianism. Although, Wendell Berry would probably have us revert to a time prior to the steam engine.

        4 votes
  2. [6]
    skybrian

    Yeah, there’s a fair bit of nonsense, but I expect that most weird philosophies bounce off most people and the weirder aspects of longtermism are no different.

    The attempt to show connections between fringe people and powerful people often ends up being a sort of guilt by association. Some of the people on the fringe have had weird ideas, and they talked to someone with power, so that must mean the powerful person will do what they want. It’s possible, but not necessarily so, and I find such arguments unconvincing.

    Better to go by what the powerful people actually advocate.

    3 votes
    1. [3]
      TheRtRevKaiser

      The attempt to show connections between fringe people and powerful people often ends up being a sort of guilt by association. Some of the people on the fringe have had weird ideas, and they talked to someone with power, so that must mean the powerful person will do what they want.

      I mean, the author gives examples of several powerful people donating to organizations that are directly linked to this philosophy, as well as examples of adherents working in or advising governments. It's not exactly playing "six degrees of Kevin Bacon".

      It is difficult to overstate how influential longtermism has become. Karl Marx in 1845 declared that the point of philosophy isn’t merely to interpret the world but change it, and this is exactly what longtermists have been doing, with extraordinary success. Consider that Elon Musk, who has cited and endorsed Bostrom’s work, has donated $1.5 million dollars to FHI through its sister organisation, the even more grandiosely named Future of Life Institute (FLI). This was cofounded by the multimillionaire tech entrepreneur Jaan Tallinn, who, as I recently noted, doesn’t believe that climate change poses an ‘existential risk’ to humanity because of his adherence to the longtermist ideology.

      Meanwhile, the billionaire libertarian and Donald Trump supporter Peter Thiel, who once gave the keynote address at an EA conference, has donated large sums of money to the Machine Intelligence Research Institute, whose mission to save humanity from superintelligent machines is deeply intertwined with longtermist values. Other organisations such as GPI and the Forethought Foundation are funding essay contests and scholarships in an effort to draw young people into the community, while it’s an open secret that the Washington, DC-based Center for Security and Emerging Technologies (CSET) aims to place longtermists within high-level US government positions to shape national policy. In fact, CSET was established by Jason Matheny, a former research assistant at FHI who’s now the deputy assistant to US President Joe Biden for technology and national security. Ord himself has, astonishingly for a philosopher, ‘advised the World Health Organization, the World Bank, the World Economic Forum, the US National Intelligence Council, the UK Prime Minister’s Office, Cabinet Office, and Government Office for Science’, and he recently contributed to a report from the Secretary-General of the United Nations that specifically mentions ‘long-termism’.

      I don't see any examples of guilt by association here, with the possible exception of Joe Biden being linked to Jason Matheny, where the connection is probably a couple of steps removed. I'd say it's fair to assert that Musk and Thiel may be influenced by EA/Longtermism when they're giving millions of dollars to these orgs and, in the case of Thiel, speaking at conferences.

      2 votes
      1. [2]
        skybrian

        Yeah, maybe I went a bit far. It is true that Rationalism and Effective Altruism are pretty influential and have had some success. But they're also diffuse and decentralized. "Influenced by EA/Longtermism" describes a lot of people with their own opinions who go on to do their own things, for better or worse. There are people who call themselves post-rationalists because they found Rationalism interesting but largely rejected it. Does that count as influence? By reading this very article, we're being influenced by someone who was strongly influenced by longtermism.

        It's kind of funny to choose small examples like Musk donating $1.5 million to a charity when he's done other things that are arguably longtermist and of much more significance. Wanting to start a Mars colony seems pretty darn longtermist, particularly the part about caring whether humanity survives even if something happens to civilization on Earth. That's all about potential people who, whatever happens, will not be us.

        But, you could just as well say that Musk was influenced by reading a lot of science fiction. "Reading science fiction" and "reading longtermist philosophy" don't seem all that different to me. There's probably a lot of overlap?

        And it's not all bad. I'm skeptical that a Mars colony will happen this century, but the rockets are useful regardless.

        I do want to support curiosity about what charities are doing. We can think of a charity as a pot of money with restrictions on how it can be spent. Whoever controls that spending has a fair bit of power, even if they can't spend the money on themselves. But the "influenced by" graph doesn't tell us much about how effective a charity is at whatever it's trying to do.

        3 votes
        1. TheRtRevKaiser

          Yeah, I don't think that there's anything wrong with being concerned with the survival of the species or wanting to maximize your charitable giving. I don't think most people would disagree with those ideas when taken at face value, lol. I think the problem is that when you take those ideas (or any idea really) to an extreme you can wind up with some very abhorrent and inhumane views.

          Also, I know there are folks out there who consider themselves influenced by or adherents of EA or Longtermism who are doing good things and not taking those abhorrent extremes seriously. I do, however, think that when the folks who articulate these ideas - the writers and philosophers central to the movements - are some of the same folks voicing some pretty problematic views related to these philosophies, we need to pull back and examine whether there is something rotten at the roots. In the case of EA/Longtermism, I think the problem probably isn't the vague idea that we should maximize the good that we do or that we should think about the future, but the commitment to utilitarian ethics taken to an extreme that starts to become inhumane.

          And I think it's important to look at the philosophies and ideas that are influencing people who have real impact on the world. Somebody like Elon Musk could be putting a lot of money toward solving real, immediate problems in the world - like hunger for example. Or, he could spend a lot of money on possibly one day maybe perhaps living on Mars. One of those things has real, immediate impact that is measurable (as well as long term impact that is difficult to quantify). The other is very unlikely to have much impact at all, but has a very small chance of having a large impact. The Longtermist view is that the money and effort should go to the cause that has a small chance of a huge impact later.

          3 votes
    2. NoblePath

      Extending @thertrevkaiser's point: we should be watching these folks carefully. Their philosophy is not unlike Calvinism in its effects, justifying the wealthy and powerful as “rightfully” in their positions rather than as the result of unethical and unjust distortions.

      2 votes
    3. Macil

      Yeah, it's a little weird seeing a bunch of groups, including Effective Altruism and MIRI, tarred for their association with longtermism when none of them are shown to exemplify any of the downsides of longtermism the author describes. In contrast, EA has been very focused on global poverty, and MIRI's Yudkowsky agrees with the author in principle that we should slow down technological development that could be a net negative. Longtermism has hypothetical failure modes when taken to the extreme without any other principles, but the organizations influenced by it don't seem to be falling into them.

      2 votes
  3. shx

    I find longtermism to have the same problem that a lot of utilitarian conclusions do, namely assuming that pleasure is equal and opposite to suffering. I've heard utilitarianism roughly stated as "the ethical choice will maximize pleasure and minimize suffering," but I think a much more tractable code of ethics is "minimize suffering, period." The former gives you an endless slew of problems - you need to start considering the value of currently-unborn life (imo it ought to be zero, as it doesn't... y'know, exist), you can justify doing horrible things to one group for the pleasure of another, and you have the problem stated in the article (a ton of miserable lives with some pleasure are preferable to a handful of rather pleasant lives). The latter is a bit bleak (ideally, it would have you destroy the universe and everything in it, which brings suffering down to 0), but it also makes the life of each individual very likely to be pleasant. Of course, it has the same problem as all utilitarianism: it's effectively impossible to compute or even estimate this suffering function.
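
    To make the contrast concrete, here's a rough formalization (my own notation, not from the article or this thread): write p_i and s_i for the total pleasure and suffering in life i. Classical utilitarianism scores a world by the net sum over all lives, while the suffering-only view drops the pleasure term entirely:

    $$\text{classical: } \max \sum_{i \in \text{lives}} (p_i - s_i) \qquad\qquad \text{negative: } \min \sum_{i \in \text{lives}} s_i$$

    Under the first rule, adding any life with p_i - s_i even slightly above zero raises the total, which is where the "tons of miserable lives beat a handful of pleasant ones" problem comes from; under the second, an empty universe is already optimal, which is the bleak edge case above. Neither sum is something we can actually compute.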

    I'm curious whether anyone disagrees with this and, if so, why - I've found it to be surprisingly robust in agreeing with my moral intuitions (especially once you realize that a bleak conclusion like "start killing sad people to reduce suffering" would actually increase suffering when you consider its impact on society).

    3 votes
  4. [2]
    papasquat

    I have no idea what the central thesis of this idea even is. "Fulfil humanity's potential?" What does that mean? Does it mean making as many humans as possible? Does it mean building giant space megastructures? Does it mean colonizing other planets or other galaxies?
    I don't understand why any of these things are even worthy goals, other than "I read about them in a sci-fi book once and it seemed cool," which seems to drive, not even exaggerating, at least half of what Silicon Valley billionaires invest their resources into.

    I have no idea what's positive about increasing the population, or colonizing other planets, or doing any of this stuff, if it's not in service of reducing human suffering and increasing human happiness/fulfillment.

    2 votes
    1. streblo

      I think the gist of Bostrom’s argument is that, if we assume the average human life is of positive value, then maximizing the number of human lives across time is desirable. IIRC he extends this to simulated humans as well.

      One problem, as others in this thread have pointed out, is that you’re trying to maximize something we can at best only guess the gradient for.

      2 votes