9 votes

The supply of disinformation will soon be infinite: Disinformation campaigns used to require a lot of human effort to be effective, but now artificial intelligence could take them to a whole new level

2 comments

  1. Kuromantis
    (edited)
    Meta

    EDIT: I took a 0 out of my link by accident, can someone please fix that for me?

    EDIT: Deimos did, so thanks for that.

    Someday soon, the reading public will miss the days when a bit of detective work could identify completely fictitious authors. Consider the case of “Alice Donovan.” In 2016, a freelance writer by that name emailed the editors of CounterPunch, a left-leaning independent media site, to pitch a story. Her Twitter profile identified her as a journalist. Over a period of 18 months, Donovan pitched CounterPunch regularly; the publication accepted a handful of her pieces, and a collection of left-leaning sites accepted others.

    Then, in 2018, the editor of CounterPunch received a phone call from The Washington Post. A reporter there had obtained an FBI report suggesting that Alice Donovan was a “persona account”—a fictitious figure—created by the Main Directorate, the Russian military-intelligence agency commonly known as the GRU. Skeptical of the Russia link, but concerned about having potentially published content from a fake person, the CounterPunch editors pored over Donovan’s oeuvre, which spanned topics as varied as Syria, Black Lives Matter, and Hillary Clinton’s emails. They found her to be not only suspicious, but also a plagiarist: Some of the articles bearing her byline appeared to have been written instead by another woman, Sophia Mangal, a journalist affiliated with something called the Inside Syria Media Center.

    The ideal scenario for the modern propagandist, of course, is to have convincing personas produce original content. Generative text is the next frontier. Released in a beta version in June by the artificial-intelligence research lab OpenAI, a tool called GPT-3 generates long-form articles as effortlessly as it composes tweets, and its output is often difficult to distinguish from the work of human beings. In fact, it wrote parts of this article. Tools like this won’t just supercharge global propaganda operations; they will force internet platforms and average users alike to find new ways of deciding what and whom to trust.
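
    As a rough illustration of how low the barrier is (this sketch is mine, not from the article): generating long-form text through the beta-era OpenAI Python client took a single API call. The prompt, engine choice, and sampling parameters below are assumptions made up for the example.

    ```python
    # Minimal sketch of long-form generation with the 2020 beta GPT-3 API,
    # using the OpenAI Python client of that era. Prompt and parameters
    # are illustrative assumptions, not the article's actual setup.
    import openai

    openai.api_key = "YOUR_API_KEY"  # keys were issued to beta participants

    response = openai.Completion.create(
        engine="davinci",       # the largest GPT-3 model exposed in the beta
        prompt="Write an opinion column about election security:\n\n",
        max_tokens=400,         # roughly a few paragraphs of output
        temperature=0.7,        # some variety without pure randomness
    )

    print(response.choices[0].text)
    ```

    Loop that over a list of prompts and personas and the marginal cost of an “Alice Donovan” article drops to nearly nothing, which is the supply problem the piece is describing.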

    When I prompted GPT-3 to opine on these issues, it captured the problem succinctly:

    For the moment, at least, it seems unlikely that generative media will be effective in the same way as traditional media at promoting political messages. However, that’s not to say that it couldn’t be. What it will do is muddy the waters, making it much harder to tell what’s real and what’s not.

    One 2018 “opinion-rigging” operation in South Korea, spearheaded by a popular blogger, used a combination of human commenters and an automated program to post and boost comments critical of a particular politician. Seoul police noted the volume of two days of activity: “They manipulated about 20,000 comments on 675 news articles, using 2,290 different IDs from January 17 to 18.” In the quaint early days of social-media manipulation, such efforts were limited by human constraints. That will soon no longer be the case.

    2 votes
  2. skybrian

    There are ways to connect online accounts to real-world identities in a mostly verifiable way. This can be done anonymously, provided that a trusted entity is willing to vouch for you. Sometimes journalists do this for their sources.

    I expect that at some point this will have to be done at scale. But who is going to do it?
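
    A minimal sketch of what that vouching could look like mechanically, assuming an attester with a well-known signing key and using Python's `cryptography` library; the names and message format here are invented for illustration, not any real platform's scheme:

    ```python
    # A trusted attester (say, a newsroom) vouches for a pseudonymous
    # account without revealing the real-world identity behind it.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # The attester holds a long-lived, publicly known signing key.
    attester_key = Ed25519PrivateKey.generate()
    attester_pub = attester_key.public_key()

    # The user generates a fresh pseudonymous keypair; the attester checks
    # their real identity out of band, then signs only the pseudonym.
    user_key = Ed25519PrivateKey.generate()
    pseudonym = user_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

    voucher = attester_key.sign(b"verified-person:" + pseudonym)

    # Anyone can check the voucher against the attester's public key,
    # learning only that *some* vetted person controls the pseudonym.
    attester_pub.verify(voucher, b"verified-person:" + pseudonym)  # raises if forged
    ```

    The cryptography is the easy part; the open question is the one above: which entity do we trust, at internet scale, to do the out-of-band identity check.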

    2 votes