16 votes

ChatGPT is cutting non-English languages out of the AI revolution

16 comments

  1. [2]
    Shmiggles
    Alternative interpretation: Non-English speaking cultures will be spared from the torrent of convincing-but-incorrect disinformation generated by large language models like the GPTs.

    21 votes
    1. EgoEimi
      Yeah, what a blessing. I may need to learn Icelandic, Tibetan, or Welsh in order to escape our imminent AI-infested information hellscape.

      3 votes
  2. [8]
    bioemerl
    Saying they are cutting anyone out isn't exactly accurate. That makes it sound like OpenAI is specifically seeking to exclude other languages.

    OpenAI is training on its most readily available data set and dealing with quite a lot of hostility from other nations that might want a (language here)-focused AI.

    The only people OpenAI wants to exclude are people who want open AI - who don't want to have to use a web endpoint to access the technology of the future - and those seeking to make competing technologies, whom they hamper with government regulation and copyright law roadblocks.

    8 votes
    1. [7]
      skybrian
      Yes, the framing this article uses annoys me too, but I think it's mostly the headline. There's an implied standard that out of fairness, products should be deployed with the same quality in every language in the first release. That would be ideal, but it's unrealistic and would result in only the largest companies having a chance of competing.

      It sounds like you might think that OpenAI wants to shut down open source competitors? People say that online, but it's not what OpenAI is saying:

      We think it’s important to allow companies and open-source projects to develop models below a significant capability threshold, without the kind of regulation we describe here (including burdensome mechanisms like licenses or audits).

      So it seems like it's only the big competitors that they're hoping to see regulated (as well as following the same regulations themselves).

      4 votes
      1. [6]
        bioemerl
        Very very important part of the quote there is:

        below a significant capability threshold,

        So basically they say "you can develop open source models but not if they compete with us."

        1 vote
        1. [5]
          skybrian
          (edited)
          That depends on what the threshold is, though. It’s also theoretical until a government actually makes some regulations. It doesn’t seem like they’re going to do anything any time soon?

          Edit: also they seem to say that the threshold they have in mind doesn’t apply to any system that currently exists, including their own:

          Today’s systems will create tremendous value in the world and, while they do have risks, the level of those risks feel commensurate with other Internet technologies and society’s likely approaches seem appropriate.

          By contrast, the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.

          1 vote
          1. [4]
            bioemerl
            I don't think any threshold is acceptable when it involves handing corporate control over what might be the most powerful creative and industrial tool known to humanity.

            It should either be a government-controlled and regulated deadly weapon, or it should be free. OpenAI wants a dystopia where they get all the control.

            I hope nothing ever comes of it, but the fact that OpenAI advocates for this at all should earn your spite.

            1. [3]
              skybrian
              Well, that’s the libertarian position and many people agree with you, most recently Marc Andreessen.

              I think that if AI really is dangerous (as in actually gets many people killed, like, say, cars or airplanes or some drugs) then there is a reasonable case for government regulation, and this doesn’t seem inherently dystopian, even though regulatory overreach is a real problem. And I don’t think spite makes for good online conversation.

              But the arguments for AI really being that dangerous are theoretical at this point, and I’m doubtful that generating text alone can do anything worse than people could already do themselves. It would need to be hooked up to something.

              1 vote
              1. [2]
                bioemerl
                That's a big part of why I mentioned that it should be regulated as dangerous technology - akin to nuclear power, where it is basically under complete government control and command.

                Or it should be public and accessible to all.

                Any in-between like what OpenAI is proposing, where they get to control the technology and they get to set a high barrier to entry, is the dystopian alternative.

                Spite towards OpenAI, I think, is fine. I'm not having a conversation with them. They're using oodles of money and power to have a conversation neither I nor you can participate in - lobbying Congress. Public backlash and spite is our voice in these matters. Make Sam Altman a bad word.

                I agree with you entirely, though, on whether AI is dangerous yet. There will be times that AI kills people - self-driving cars, for example - but not "dangerous" in the sense that it wants to kill people.

                Modern AI at least; we will see how it goes.

                1. skybrian
                  I mostly disagree and can explain at length, but before I do I want to say there’s no hard feelings, it’s just hitting on some things I’ve thought about before. Sometimes I like to write and disagreeing with someone online is my most common motivation for writing. So here goes:

                  You’re thinking of regulation as ideally all-or-nothing, either full government control or free for everyone. But it seems like cooperation between government and private industry is common and it doesn’t seem terrible? Even nuclear power isn’t completely government-run since public utilities are private but heavily regulated monopolies. Other examples are bank and auto industry regulation. It’s not hard to make a libertarian case that these industries have thickets of damaging regulations that drive up costs and keep us from having nice things. I’m fairly sympathetic to those arguments, but it also seems like government-regulated big businesses can work and provide important services to the public. Maybe a dystopia could evolve from that, but it seems fairly normal?

                  My concern about spite is less about what it does to famous people being called names (often, they are unlikely to know or care what happens in an obscure forum) and more about what it does to us. Engaging in a hatefest about some common enemy often works for getting upvotes or reshares. There are some topics that often result in a hatefest on Hacker News, and OpenAI is one of them.

                  I’d rather not see it happen here. We’ve had our own hot-button topics that eventually, we learned to avoid, and I’d rather not see any other topics become no-go zones. So, I’m a little wary of that kind of thing.

                  Also, one way to justify a hatefest is to say that it’s sort of like political advertising or a political rally. It’s political advocacy and it’s important because it will eventually affect an election or something. This is a utilitarian argument that I think is a bit grandiose. It’s playing to an imaginary large audience of voters and we don’t even live in the same countries.

                  So I think a better metaphor is to think of online discussion as sort of like education. A classroom discussion might have an indirect effect on politics eventually, but it comes from helping the students have good, in-depth discussions and learn to think for themselves. The long-term effects of that are unknowable, but hopefully positive.

                  So it seems like we should pay more attention to having positive effects in the here and now, on the people who are actually here. We can often assume that whatever we talk about is unaffected by us talking about it. And therefore, there’s little reason to treat online arguments as the means to an end that’s outside the forum they’re in. They’re just for us.

                  (We’re writing in public, though, so it’s good to be cautious about oversharing.)

                  Thank you for coming to my TED talk.

  3. click
    Can't wait for AI Translations of AI Articles, written by AI Authors with accompanying AI Imagery, being discussed by AI Comments, promoted to us via an AI

    4 votes
  4. SirDeviant
    The AI Revolution is barely getting started. Other languages will have their chance to be exploited in the spotlight.

    3 votes
  5. unkz
    No more than GitHub is cutting out non-English users from open source, or [insert any major tech company of note]. It seems weird to single out ChatGPT here.

    2 votes
  6. [2]
    lou
    My friends and I have been experimenting with ChatGPT in our native language. It works fine.

    1 vote
    1. gco
      I'm bilingual and have used it lightly for translating; so far it's worked out well.

      1 vote
  7. SteeeveTheSteve
    I'm not sure that framing makes sense; of course it'll first be in the language preferred by those developing it.

    I'm sure at some point datasets and algorithms will be created that make it easy for an AI to speak any language. However, that'll have to come after AI translators can do their jobs as well as human translators, and after such info is no longer kept secret.