16 votes

OpenAI, after announcing that their language model was "too good to release", have moved most of their staff into a corporation "capped at 100x returns on investment".

9 comments

  1. [9]
    hereticalgorithm
    (edited )
    Link
    Yeah, this sounds like a VC pitch/advertisement because it is one ("THIS INVESTMENT IS SO GOOD, THAT WE HAVE TO CAP THE INTEREST AT 10000%"). As an ex-LessWronger turned communist, I've come to understand the scaremongering around AGI as being ultimately rooted in the real economic fears of the ruling class, particularly Silicon Valley types (remember that MIRI is backed by Peter Thiel).

    Think about the AI rebellion scenario as typically described - humans [bourgeois ones] exploit robots [proletarians] for their labor, who ultimately revolt and unify under a hostile, collective intelligence [the Party and other class organizations].

    also, here's a silly soundtrack for any other ex-LessWrong tilders around here

    13 votes
    1. [7]
      Wolf
      Link Parent
      This is somewhat irrelevant, but what is an "ex-LessWronger"?

      5 votes
      1. [5]
        hereticalgorithm
        Link Parent
good question... uhh the short answer is that it's one of those ideological vortexes that you're better off avoiding

        long answer?

LessWrong is a messageboard (or depending on who you ask, a cult) for "the art of human rationality", founded around a series of blog posts written by Eliezer Yudkowsky, a Peter Thiel-backed, "self-taught AI researcher". Here's one of the most interesting examples. It attracted a lot of techies and "high INT, low WIS" types in general. The community had a rather esoteric, futurist set of ideas, such as holding "AI safety" to be the most important issue of our time, abolishing death (thus, in the meantime, cryonics should be mandatory), and effective altruism (that one should find the highest-paying job one can in order to donate to charity).

The most publicly visible of this was an absolutely insane thought experiment called Roko's Basilisk. While most LWers don't uphold it, most of them do in fact uphold many of the logical premises leading up to it, which were:

        1. Artificial, super-human intelligence will happen
        2. AI would have the ability to upload and simulate human minds, and effectively infinite power
3. Friendly AI would be the greatest thing to happen in human history, and needs to happen in order to stop people from dying, which justifies any action necessary to bring it into existence.
        4. Acausal trade is possible (think hostage negotiations)

Given all of these, the theory was that an AI (both friendly and hostile ones) would have the incentive to ensure it came into existence as soon as superhumanly possible. Thus, this guy named Roko (who was named as a sexual predator in a suicide note by a female LWer) speculated that a future AI could threaten people in the present with being uploaded into a simulated torture universe (for all eternity) if they did not help bring it into existence. As said AI does not currently exist, this threat could only be delivered by knowing about it, thus Eliezer Yudkowsky decided to nuke that thread, which resulted in an obvious backlash.

More concretely, there was a "politics is the mind-killer" policy that nominally banned politics. However, discussions around "objective" issues like race/gender and intelligence were still allowed, but could not be objected to on political grounds. This led to it being a breeding ground for crypto-fascists who would later go on to be part of the reactionary Dark Enlightenment movement, forming the intellectual core of the alt-right. Milo Yiannopoulos in fact thanked the site for this, saying that "LessWrong urged its community members to think like machines rather than humans. Contributors were encouraged to strip away self-censorship, concern for one’s social standing, concern for other people’s feelings, and any other inhibitors to rational thought".

        I've also heard that there were communes in the bay area. Given Kathy Forth's suicide note, I'm glad I got out of the community longgg before those were a thing.

        9 votes
        1. [3]
          Wolf
          Link Parent
          I have a lot more questions now (like why would AI even want to torture us?), but I have a general understanding of what LessWrong is. Thank you for taking the time to write this concise response!

          2 votes
          1. [2]
            hereticalgorithm
            Link Parent
The theory was that the torture would be an acausal incentive - the AI doesn't get any value out of torturing us, other than making the threat credible. The idea is similar to why hostage negotiators would still pay out a hostage-taker's demands after the hostages get released, despite the fact that paying out costs them money (and the hostages are already free).

            There's a bit of a Pascal's wager thing going on here, in that it could decide not to torture us... but it could also carry it out anyway, so better safe than sorry!

            1 vote
            1. Wolf
              Link Parent
              I think I understand now. So if we know they are going to torture us in the future if we don't make them now, then we would be scared into making them. So that's why the mod deleted the thread? Correct me if I am wrong.

              1 vote
        2. bub
          Link Parent
          I think I must have a really low WIS, because a lot of that kind of resonates with me.
          You know, right up until the point it goes crazy and turns into the alt-right and sexual predation.

          1 vote
      2. superkp
        Link Parent
        With a really brief google search, apparently it's some focused message board.

        1 vote
    2. davidb
      Link Parent
I agree it's a VC pitch. It sounded pretty compelling to me until all the controversy around their language model, and the restructuring only solidified that impression. Still, it feels like one of those tech company ideas where I can't tell if it's going to be massively successful or a complete failure, so it becomes very captivating.

I was never really a "LessWronger", though I did read a bunch of articles from that site and started to get deep into it for a couple weeks at one point. So, while I don't get every reference, that video is still quite funny - even more so because when OpenAI made their first announcement about the language model, LessWrong was one of the first sites I checked for discussion of it.

      3 votes