39 votes

The majority AI view

11 comments

  1. [2]
    Rudism

    Technologies like LLMs have utility, but the absurd way they've been over-hyped, the fact they're being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.

I'm among those who held that majority opinion for a long time. However, mine continues to slide further and further into "I'm so sick of all the bullshit, I wish I lived in a different reality where LLMs never existed" territory.

    46 votes
    1. Habituallytired

      I'm right there with you. It's exhausting to constantly be in this brain space and have tools that are useless to you forced upon you, with no recourse and no way to say no. And then you are also proselytized to over and over.

      19 votes
  2. [2]
    Greg

    [W]hat they all share is an extraordinary degree of consistency in their feelings about AI, which can be pretty succinctly summed up:

    Technologies like LLMs have utility, but the absurd way they've been over-hyped, the fact they're being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.

    If we were to simply listen to the smart voices of those who aren't lost in the hype cycle, we might see that it is not inevitable that AI systems use content without the consent of creators, and it is not impossible to build AI systems that respect commitments to environmental sustainability. We can build AI that isn't centralized under the control of a handful of giant companies. Or any other definition of "good AI" that people might aspire to. But instead, we end up with the worst, most anti-social approaches because the platforms that have introduced "AI" to the public imagination are run by authoritarian extremists with deeply destructive agendas.

It’s a breath of fresh air to see an article like this. It seems like everything I read is billionaire hype or (understandable, justified, but often technically misguided) backlash against the billionaire hype - and as someone who works in scientific ML, a field that’s firmly swept into the “AI” catch-all but has nothing to do with LLMs, it’s nice to hear a voice of moderation.

And the author’s absolutely right: if we treat it more like the “normal technology” that it is, we might just break the idea that it’s synonymous with Sam Altman and Mark Zuckerberg’s bullshit, defuse some of the backlash, and have an opportunity to make use of it in a positive way.

    35 votes
    1. dhcrazy333

      I'm so mad that they had to brand all this as AI. If it was just marketed as a more advanced LLM, we wouldn't have the insane overhype that we have now and probably would see them implemented in more thoughtful and useful/impactful ways.

      But of course marketing sells. I get why it was marketed that way, and I can't say I fault them for doing so, but it just annoys me beyond belief.

      5 votes
  3. [5]
    EgoEimi

    But instead, we end up with the worst, most anti-social approaches because the platforms that have introduced "AI" to the public imagination are run by authoritarian extremists with deeply destructive agendas.

    That's an unsubstantiated statement. Sam Altman, Dario Amodei, and Sundar Pichai are all liberal. Sam and Dario have publicly stated that there is/will be an AI inequality problem that needs to be solved through UBI and/or some socioeconomic reform; Sam himself spent $100m+ in cash on a UBI experiment. Only Elon Musk is an authoritarian extremist, but his Grok AI isn't a big player.

    I'm optimistic about AI. It's creating a lot of economic value. It's not a normal technology because it's targeting a human domain—information synthesis—that has been untouched by other technologies, which have targeted information retrieval or, in the past, manual labor automation/augmentation. In parallel, there are rapid advancements in robotics. I find the future quite bright — if we choose it to be.

    It's really on voters to choose leaders to harness this promethean fire for society's benefit. Unfortunately, the right lacks brains, while the left lacks imagination.

    We can build AI that isn't centralized under the control of a handful of giant companies.

    People are building that. Lots of folks and companies are working on alternative models and hardware for local or on-device compute.

    16 votes
    1. [4]
      mordae
      (edited)

      Liberals are not pro-anything. They are anti-unearned-privilege. They like UBI because it allows them to claim the playing field is more level, and thus that the outcomes are more merit-based and therefore more earned.

      We are never getting a UBI high enough to allow crowdfunded competition without violent revolution, simply because most liberals with actual power are liberal conservatives, not liberal socialists.

      They believe they are vastly better than the average person at deciding future investment, so they go and spend other people's lives on their bets and get angry when the people whose benefit they are supposedly working for tell them about their actual needs. And they keep sliding towards conservatism.

      19 votes
      1. [3]
        EgoEimi

        What would be unearned privilege, and what would be earned privilege?

        1. creesch

          You'd have to ask them, I suppose. If I had to guess, it is very close to the sort of privilege they enjoy as they firmly believe they “earned” it. Just a guess though, as I am not the person who replied to you. It also isn't really that relevant, as the way you are zooming into that feels a bit like a deflection from what that person tried to point your attention to.

          6 votes
        2. mordae

          Younger and older brother fight over a home computer.

          Older: "I am older, the computer was originally bought for me and I was in front of it first!"

          Younger: "Parents said I can use it too. And also that not doing chores means no computer time and you did not clean the table today, I did while you ran to the computer! Not fair!"
          ...
          They proceed to engage in tug of war with the mouse cable and snap it.
          ...
          Mother: "I don't care who started it, but your sister was supposed to write a report using that computer today. No allowance for you this week, I have to go buy a new mouse."

          Older brother is conservative, younger brother liberal and mother a socialist.

          4 votes
  4. [2]
    cdb

    It seems to me that part of the issue here is the inability to have a complex narrative in public discourse. Things are either all bad or all good. Having a "useful with flaws" kind of narrative usually just trends towards "bad."

    I see some parallels in a discussion about Pro Football Focus (PFF) ratings I saw on reddit today. PFF ratings are based on human scoring of videos of the game. The post had some data showing that PFF ratings are more predictive of future results than other stats/results-based metrics. There are a lot of people in the comments bashing the PFF ratings, saying that they're not representative of reality and pointing to disconnects between PFF ratings and the statistics. There are people defending the ratings, saying that although they may have flaws, they measure things in a way that raw stats based on results can't. Some of the responses reflect results-oriented thinking, or argue that there's no point if there are flaws. It seems like it's hard to get across that this stat might be interesting and useful in certain circumstances, but not the be-all and end-all. People seem to be looking for one stat that is the correct or best one, when the system is complex and such a stat doesn't exist. In a similar way, some of these AI models are far from perfect and far from universally applicable, but they are improvements in certain circumstances or in certain marginal ways.

    15 votes
    1. skybrian

      Yes, very true. I think this is related to confidence. A confident, bad take is more likely to go viral than a more reasonable one. People imitate what they see, so we get lots of exaggerated, overconfident takes.

      5 votes