27 votes

A developer built a 'propaganda machine' using OpenAI tech to highlight the dangers of mass-produced AI disinformation

9 comments

  1. [3]
    gowestyoungman
    Link

    I'm semi-aware of the power of AI and ChatGPT, as a relative uses it regularly to replace a good part of his workload, and we've discussed how its power makes his work much easier. But that video was my first 'oh sh*t, the genie really is already out of the bottle' moment. I have NO idea how we're going to inoculate the next generation against the power of this genie when it comes to manipulating public opinion.

    8 votes
    1. [2]
      Moonchild
      Link Parent

      This is not really so much the genie getting out of the bottle as 1) an incremental improvement over existing analogous tools that were already being used in analogous ways, and 2) an effective democratisation of the technology as some of it shifts out of government/private hands into public ones.

      7 votes
      1. gowestyoungman
        Link Parent

        "an effective democratisation of the technology as some of it shifts out of government/private hands into public ones."

        I would argue that is exactly what qualifies as the genie getting out of the bottle. Instead of a few high-powered entities controlling this power, now virtually anyone can wield it - including any madman with nefarious goals. How will we ever know who is behind ANY campaign from now on?

        14 votes
  2. [5]
    Grumble4681
    (edited)
    Link

    I find it interesting that this didn't garner much comment or attention. I feel like they missed an opportunity here by not actually using the things they built; instead they neutered their own creation and relegated it to something that no one ever actually saw. I suspect that if they had actually put it to use, there would be more comments and attention paid to this project, and it might have created more awareness about the problem. They set out to prove it's not just theoretical, that people could actually do these things, and to prove it they built the tools to do it - but then didn't actually do it. In the end, they didn't break down the theoretical wall; it's still just a theory that someone could do it. They didn't prove that people would be fooled by it, because they never attempted to fool people with it - no one even saw it.

    Imagine if you were the person fooled by it, and then came across the reveal months later. That has the potential to spark a realization that this video alone doesn't.

    7 votes
    1. [3]
      SteeeveTheSteve
      Link Parent

      There are moral issues with that; they may be worried about losing people's trust, as some won't be too happy about being tricked. Plus, the misinformation may be spread as truth for years afterward. I'm not sure there is a topic you could safely do that with and still get the reaction needed to make people see how dangerous it is. It might be easier to identify an existing disinformation campaign that people don't even realize is there.

      3 votes
      1. [2]
        Grumble4681
        Link Parent

        Yeah, there are moral issues with it, but there are moral issues in just about everything, at wildly different scales of course. Humans don't do anything that doesn't come at a cost - to the environment, to ourselves, to other species. In a way, it's the nature of life.

        There are also potential legal issues with it, so I can certainly understand that people might want to avoid any possible legal complications. I can only imagine if they had actually posted on Twitter and Elon Musk got the reveal and decided to sue them (like he's posturing with the ADL now). So I get that some people may not want to take the risk, but I'm just highlighting that I think the risk is what makes it relevant. Keeping it hypothetical is no different from people just talking about the problem, as they highlighted in their own video, and that doesn't hit the same. I saw this video a couple of days ago when it was posted but didn't comment, then saw it float back up today only to see it had generated next to no conversation. I suspect it would be different if they had actually taken the risks. They wouldn't have to choose something as political as they did, either, as that certainly is more legally fraught.

        But if what they're saying is real and not theoretical, and it can actually happen with little to no cost or resources, then presumably it's already happening - except no one knows about it and no one can prove it. If it's as easy as they make it seem, the harm they might cause would be a drop in the ocean compared to the benefit of giving people a real example and real experiences.

        1 vote
        1. SteeeveTheSteve
          Link Parent

          Oh, that's true. Most people will ignore it, at least until it affects too many people or affects them directly, while it alerts bad actors to new opportunities to scam or mess with people.

          It isn't theoretical either; people have already successfully used AI to spread disinformation. The "Fox8 botnet" is a great example, and it was only caught because it was sloppy while someone happened to be looking for it. That's just one - like ants, termites, and silverfish, where there's one you know there are more. AI research will probably advance faster and faster just to create AI to fight malicious AI. There's no stopping it now.

          1 vote
    2. elcuello
      Link Parent

      I agree with you that it seems like a missed opportunity, and I'm a little surprised this didn't get much attention here either. On the other hand, how could you ever forgive yourself if this spread like wildfire? Sure, you could make videos, write articles, issue press releases, etc. to debunk it and explain that it was all an experiment, but as we all know, once the cat is out of the bag a lot of people will never see the correction, or simply won't care because the original fits their world view and that's enough. This is so uncontrollable, and the outcome so impossible to predict, that it might be too unsafe to test out IRL.

      Edit: I just saw you discussed this in other comments, but I'll leave it anyway.