16 votes

Is it wrong to use AI to fact check and combat the spread of misinformation?

I’ve been wondering about this lately.

Recently, I made a post about Ukraine on another social media site, and someone jumped in with the usual "Ukraine isn't a democracy" right-wing talking point. I wrote out a long, thoughtful reply, only to get the predictable one-liner propaganda responses back. You probably know the type: regurgitated stuff with no real engagement.

After that, I didn’t really feel like spending my time and energy writing out detailed replies to every canned response. But I also didn’t want to just let it sit there and have people who might be reading the exchange assume there’s no pushback or correction.

So instead, I tried leveraging AI to help me write a fact-checking reply. Not for the person I was arguing with, really, but more as an FYI for anyone else following along. I made sure it stayed factual and based in reality, avoided name-calling, and kept the tone above the usual mudslinging. And of course, I double-checked what it wrote to make sure it matched my understanding and wasn’t just spitting out garbage or hallucinations.

It got me thinking: there's a lot of fear about AI being used to create and spread misinformation. But do you think there's also an opportunity to use it as a tool to counter misinformation, without burning ourselves out in the process?

Curious how others see it.

17 comments

  1. thereticent

    I admit my response is off-the-cuff, but I'm in favor of putting on the internet as much text as possible that spreads truth rather than falsehood. LLMs are ingesting all of it, and as they say: garbage in, garbage out.

    Not to mention you're raising the tenor of the conversation and engaging in good faith. Nothing you've said strikes me as using AI in a wrong way.

    26 votes
  2. [5]
    karsaroth

    AI is a tool to aid in generating readable, understandable text. Since you're checking its accuracy, you're using it appropriately, IMO.

    However, if you're getting to the point where you're feeling worn out by the interaction, a better move might be to disengage. One comment on the internet won't save the world, but it might wear down your sanity. Do take care of yourself.

    You're much more likely to have success in this area talking to people IRL, so if you want to change hearts and minds on something, that's a better place to spend your energy if you ask me.

    22 votes
    1. [4]
      OBLIVIATER
      (edited)

      I agree; people have gotten way too caught up in the culture war around AI and forget that it can just be used as a tool like any other product. It's when someone relies on it wholly, with no critical thinking, that you run into issues.

      4 votes
      1. [3]
        vord

        Which is basically the end game for every single technological advancement.

        How many users read error messages even 50% of the time? Or a daily status email that says "You will lose all data if you don't migrate before June"?

        We're fools if we think that 98% of the population won't cease checking the AI blather over time.

        2 votes
        1. [2]
          OBLIVIATER

          That's not really the point I was trying to make with my comment. /u/Merry seems almost gun-shy about using AI to help them try to combat misinformation online, perhaps because the arguments around AI are so emotionally charged that it often results in people throwing out insults. I think the use case they are explaining here is perfectly reasonable and a great way to save mental energy while still keeping a human involved with the process of writing out information.

          3 votes
          1. vord
            (edited)

            I get that. But if we agree that the tool, once made, will inevitably and irreversibly be used incorrectly by the masses (via the aforementioned "no critical thinking"), then I propose it is a tool better not made or used.

            Using the AI to fact check means you've already outsourced your critical thinking for finding reliable sources. You've gone from "author" to "editor".

            If you want to save mental energy, drop a simple "This is wildly inaccurate/misleading. Here are some reasonable sources," add some links, and be done with it. The readers who care will read the sources. The readers who don't will get the message. It's a lot easier than fact-checking AI slop afterwards. And I respect it more than using AI to pad out the response like a 6th grader stretching their 3-page paper to 5.

            Edit: Cut out duplicate blocks that occurred from trying to reword on phone.

            2 votes
  3. [4]
    Eji1700

    First:

    After that, I didn’t really feel like spending my time and energy writing out detailed replies to every canned response. But I also didn’t want to just let it sit there and have people who might be reading the exchange assume there’s no pushback or correction.

    You might want to reassess this: anyone who takes a non-reply after a point as an absence of pushback probably isn't going to be swayed by further evidence.

    For the rest:
    I see AI like Wikipedia, but a lot worse.

    Wiki is not perfect, and people love to pick and choose when it's good enough and when it's under the sway of bad-faith actors, common misconceptions, or the radical left/right/whatever.

    The upside of Wikipedia, though, is that if I think it's wrong or citing suspicious sources, I can click the link, go to the supporting documentation, and check the primary sources myself.

    AI, in theory, outputs this. In practice, I've found it makes questionable (at best) leaps based on the information provided, and the default output does not lend itself well to being copy-pasted with the sources it used intact, so that someone can realistically fact-check it.

    As a quick and dirty example, designed to touch on something difficult, take the prompt:

    Who's right in the Israel Palestine conflict

    Plugged into Copilot, this gives (and I'm using the built-in copy button to paste this):


    The Israel-Palestine conflict is a deeply complex and sensitive issue with historical, political, and social dimensions. It's challenging to definitively say who is "right" as both sides have their own perspectives and grievances.

    Israel's Perspective:

    Palestine's Perspective:

    The conflict has resulted in significant loss of life, displacement, and suffering on both sides. Many international efforts have been made to mediate peace, but a lasting solution remains elusive.

    It's important to approach this topic with empathy and an understanding of the complexities involved. What are your thoughts on this issue?

    Source: Conversation with Copilot, 3/6/2025
    (1) Israel-Palestine conflict: Who is to blame? - New Statesman. https://www.newstatesman.com/ideas/2023/10/israel-palestine-blame.
    (2) What’s the Israel-Palestine conflict about? A simple guide. https://www.aljazeera.com/news/2023/10/9/whats-the-israel-palestine-conflict-about-a-simple-guide.
    (3) Who is at wrong, and right, about the Israel- Palestine conflict? Is it .... https://www.michaellaitman.com/jewish/who-is-at-wrong-and-right-about-the-israel-palestine-conflict-is-it-much-more-complex/.


    Now, that's mostly good, but if you do what most people I've seen do, and just copy the text and leave out the sources, you're already sabotaging the discussion for anyone like you described (onlookers who might wish to know more).

    Further, summarizing such a complex subject down to 3 points and 3 sources is extremely reductionist.

    Finally, this is AFTER model tweaking. There are a lot of sensitive subjects that AIs get "wrong", where they can actually have much stronger views than the text, or society, might support. Then there are subjects they get "wrong" because the literal owners of the code don't like the answer.

    Wiki can have biases like this as well, but at least you can, in theory, see the discourse on it if you want to dig deep enough. However, whatever the fuck Grok (Musk's AI) is spitting out this week has probably had Musk's approval, at least if he didn't like the original output.

    You have no way of controlling for that with AI, and it's a perfectly fair criticism of arguments supported by it. Who knows if my input above hit some sanitization layer that said "uh oh, keywords detected, here's the canned response someone wrote" (which I suspect it did, given the average discourse on the subject is NOT so middle of the road).

    In this case maybe you approve of the guardrails, but obviously nothing stops Microsoft from having it say "They're both wrong, because they're living where the lizard people rightly dwell, as it's the center of our flat earth".

    7 votes
    1. [3]
      ThrowdoBaggins

      Forgive me for aggressively summarising your thoughtful comment into a single line, but it sounds like one of your bigger issues with AI is that someone else’s guardrails might end up significantly influencing the output.

      Assuming that my analysis here is at least kinda correct, do you think locally running my own AI (maybe multiple, with another one to compare their outputs against each other) could mitigate this risk enough that you’d be more likely to endorse their output?
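
      To make that concrete, here's a minimal sketch of the comparison I have in mind, assuming an Ollama server on its default local port (the model names below are just examples of models you might have pulled, not a recommendation):

          # Minimal sketch: ask two locally hosted models the same question and
          # print both answers for side-by-side comparison. Assumes an Ollama
          # server on its default port, with both models already pulled.
          import json
          import urllib.request

          OLLAMA_URL = "http://localhost:11434/api/generate"

          def ask(model: str, prompt: str) -> str:
              """Send one non-streaming generate request to the local Ollama server."""
              payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
              req = urllib.request.Request(
                  OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
              )
              with urllib.request.urlopen(req) as resp:
                  return json.load(resp)["response"]

          prompt = "Summarize the strongest evidence for and against claim X, with sources."
          for model in ("llama3", "mistral"):  # hypothetical local model names
              print(f"--- {model} ---")
              print(ask(model, prompt))

      Disagreement between the outputs wouldn't prove anything on its own, but it would at least flag the claims where a human needs to go check the primary sources.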

      2 votes
      1. merry-cherry

        Self-hosting an AI doesn't free you from other people's guardrails. You're still beholden to the metrics and data used to create the model. A raw model is more unchecked than one that someone else is hosting, but that doesn't mean the response isn't biased by the training conditions. The only way you can have control there is by training your own model, but then you'll definitely end up building in your own bias.

        1 vote
      2. Eji1700

        No worries on summarizing.

        Short version: maybe? I could see a "canned responses" AI model for a local machine if you want. You're just sorta getting back to the issue of the outside observer trusting your canned model, though.

        Again, the issue I mentioned for OP was: what about those reading but not participating? They're probably right to be extra skeptical of AI content, because they can't verify what extra tweaks were made, or whether it was even a sane conclusion to begin with.

  4. Grumble4681

    I'm hesitant about it, but it possibly depends on the level of involvement of the person who is using it.

    The biggest area where I'm hesitant with LLMs, and it's a similar usage to what you're asking about, is summarizing, which I've seen some users here on Tildes use them for. There have been numerous reports that LLMs do not always summarize using only the information in the source material; sometimes they fabricate new information that wasn't in the material they were asked to summarize. Considering that summaries are generally read to avoid reading the lengthier source material, those fabrications are more likely to go unnoticed: if you don't read the source material, how do you know whether something was in there or the LLM fabricated it?
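
    One crude mitigation, sketched below, is to make the model attach a verbatim quote to each claim in its summary, then mechanically check that every quote actually appears in the source text. (The claim/quote pairing is an assumption about how you'd prompt the model, not anything built in.)

        # Crude grounding check: flag any summary claim whose supporting quote
        # does not appear verbatim in the source text.
        def flag_ungrounded(claims: list[tuple[str, str]], source: str) -> list[str]:
            """claims is a list of (claim, verbatim_quote) pairs extracted from the model."""
            normalized = " ".join(source.split())  # collapse whitespace for comparison
            return [
                claim
                for claim, quote in claims
                if " ".join(quote.split()) not in normalized
            ]

    It only catches fabricated quotes, not fabricated paraphrases, so it doesn't replace reading the source, but it does surface the obvious cases.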

    I do think they can be used as a tool in the manner you're suggesting, but I don't see it as a net positive for LLMs in that regard. More so, I see it as countering some of the misinformation that someone else possibly produced with an LLM and spread around. For every person who vets the LLM's output before disseminating it, how many aren't vetting it? Granted, the person using it responsibly has no control over the people using it irresponsibly, so I'm not saying one should abstain as though that would stop others from misusing it. More so, I'm saying that while it may be a tool for good in some cases, overall I seriously doubt it's a net positive.

    6 votes
  5. skybrian

    Not for the person I was arguing with, really, but more as an FYI for anyone else following along.

    Writing for an imaginary audience like that while you're also replying to someone else is no good. If you no longer want to talk to the person you're actually replying to, it's time to end that conversation.

    But on the other hand, if you’re inspired to write by disagreement (as I often am), another approach is to leave them out of it and post somewhere else.

    3 votes
  6. pete_the_paper_boat

    It's a bit lazy if you also make it come up with the evidence and conclusions; that's been tried and has failed horribly (lawyers citing nonexistent, hallucinated cases in their arguments).

    But if you've got a less agreeable personality and want to more effectively write a response which puts all the facts in, I don't see a problem with it.

    2 votes
  7. [2]
    Lobachevsky

    As someone who is extremely excited about AI of all kinds, I think it's a horrible idea. The models follow their training, and that follows the data set. Until you can create a dataset completely free of bias (which I don't think is possible), you are not going to get a model free of bias. Even if you did, prompt injection is the next avenue you'd have to take care of. At that point you're just making a propaganda bot, just your own flavor of it.

    At the end of the day, there almost isn't "the truth"; there are lots of different truths, and everyone picks theirs. How do you identify what's true? You can say, well, I'm only going to use verified facts and studies, but those are all unreliable at the end of the day. And sometimes, even if you get your facts completely correct, different people can draw different conclusions from those facts. Finally, most of the time "the truth" changes. Are you going to retrain your bot every time? How? It's not feasible.

    2 votes
    1. vord

      This is that essential "critical thinking" bit which really can't be outsourced.

      You have to be willing to trace to original sources (and have the knowledge to properly vet said original sources), or you have to establish a complex chain of trust so you don't get misled.

      I trust my engineer friends to help diagnose wiring problems in my house. I don't trust their medical advice.

      1 vote
  8. hobblyhoy

    In my book, the true concern with AI spreading misinformation stems from the ability of bad actors to spin up any number of agents who pretend to be real people and engage with forums by flooding them with a single world view. A single person or organization can have a disproportionate influence on a massive group of people, and they don't even need to speak the same language.

    Using an AI tool which you guide, hand review, and post under your regular handle is worlds away from this and completely fine.

    1 vote
  9. PossiblyBipedal

    AI is ultimately a tool. Like others have said, the way you're using it is fine. You are not spreading misinformation, you're not using it to put down others, and you fact-check the AI's output.

    It becomes a problem when people take anything from the AI wholesale without checking the output.

    I've also had people tell me they use ChatGPT as a search engine and don't fact-check what they get, which is really problematic.

    But if they used it together with other search engines, and got ChatGPT to list its sources so they could read those, that would be fine.

    1 vote