18 votes

The editors protecting Wikipedia from AI hoaxes

3 comments

  1. lonk

    The article has interesting examples of the challenges AI slop poses. I feel like the solution would involve some hybrid of a reputation system and human review. That is, give reviewers tools to identify clusters of bot/AI activity.

    7 votes
  2. unkz

    I suspect a more scalable solution will be LLMs that read the cited sources and confirm the article contents. That way, injecting fake content would require fabricating fake off-site sources, which is at least more work.

    4 votes
    1. creesch

      That would help, but it would still require double-checking by humans. I have seen LLMs argue that something is mentioned in a text when the text is about something entirely different.

      It will likely have to be a combined approach, and it will still require a lot of manual work.

      5 votes
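The verification pipeline unkz describes (an LLM reads a cited source and checks whether it supports an article claim) could be sketched roughly as below. This is a minimal, hypothetical illustration, not any tool Wikipedia actually uses; `call_llm` stands in for a real LLM API and is stubbed here with a trivial substring check so the example runs offline. All function names are invented for this sketch.

```python
def call_llm(prompt: str) -> str:
    # Stub for a real LLM call. A production version would send the prompt
    # to an actual model; here we fake a verdict with a substring check so
    # the sketch is self-contained.
    claim = prompt.split("CLAIM: ")[1].split("\n")[0]
    source = prompt.split("SOURCE: ")[1]
    supported = claim.lower().rstrip(".?! ") in source.lower()
    return "SUPPORTED" if supported else "NOT SUPPORTED"


def verify_claim(claim: str, source_text: str) -> bool:
    # Ask the (stubbed) model whether the cited source backs the claim.
    prompt = (
        "Does the SOURCE support the CLAIM? "
        "Answer SUPPORTED or NOT SUPPORTED.\n"
        f"CLAIM: {claim}\n"
        f"SOURCE: {source_text}"
    )
    return call_llm(prompt).strip() == "SUPPORTED"


if __name__ == "__main__":
    source = "The bridge opened in 1932 after a decade of work."
    claims = ["The bridge opened in 1932.", "The bridge is 2 km long."]
    for claim in claims:
        flag = "" if verify_claim(claim, source) else "  <- flag for human review"
        print(f"{claim}{flag}")
```

In line with creesch's caveat, a model can claim support where none exists, so failed (and ideally a sample of passed) checks would still be routed to human reviewers rather than acted on automatically.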