13 votes

YouTube's "fact-checking AI" intended to counter misinformation was attaching info about the 9/11 attacks to Notre Dame fire videos

7 comments

  1. [6]
    Deimos
    Link

    We've discussed some of YouTube's fact-checking efforts previously, but this is an embarrassing and high-profile failure of it. Even for companies like Google that have access to a ridiculous amount of resources and data, AI isn't working well for tasks like this, and it's concerning that companies like Google, Facebook, and Twitter all seem to be relying on it as their solution for misinformation, hate speech, and more.

    13 votes
    1. [5]
      alyaza
      Link Parent

      i think at this point the question is less how they should do this and more whether there's even an objectively good way to do it at that scale in the first place (and if there's not, what the least bad way is). AI can fail pretty badly, like it did here, but the major alternative is human moderation, which facebook still uses a lot, and that has a habit of really fucking people up: it's labor intensive while still lacking clear guidelines, it's almost necessarily traumatic because people post all kinds of awful things, it's usually radicalizing because it exposes you to the genuinely crazy people of the world, and to top it all off it pays like shit because you can just get another stack of warm bodies to do it (or automate it, whichever's easier).

      7 votes
      1. [2]
        TheInvaderZim
        Link Parent

        The big point for human moderation, to me, is that the companies need to actually care about it. Google is insanely wealthy, and could literally employ a moderately-sized army of moderators to fact check such things, support them adequately while doing it, and ensure the health of their staff as they do so. But there is absolutely no incentive for them to make anything more than the token effort they've made already, because our current copyright law doesn't hold YouTube accountable for what's on its own platform - only the advertisers do.

        If there is a solution to be found in AI, it would seem we're still a long way from reaching that point. In the interim, copyright law is what needs to change, to force Google to start accounting for what's present on its platform.

        5 votes
        1. Pilgrim
          Link Parent

          Google is insanely wealthy, and could literally employ a moderately-sized army of moderators to fact check such things,

          FB already does this to moderate violent/disturbing content, so yes, a real person actually looks at many of the things people "report." Working one of these jobs is a harrowing look at the dark soul of humanity: https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona

          I'm going to assume that the rapid adoption of AI for these tasks is to spare actual humans from seeing the absolute filth that is posted daily.

          I realize that you're talking about fact checking and copyright violations, and yes, the incentive might not be there currently, but the human moderation described in the article seems psychologically damaging, and I welcome AI that can handle the task.

          2 votes
      2. Amarok
        Link Parent

        I honestly don't think that paid moderation will really work. That makes moderation into 'just a job' or, put another way, it's a burger-flipping shit-task that pays poorly and brings a host of trouble. What's worse, it's hard to pay someone to really care. Money does not buy passion. The best moderators are the ones who get into it because they are trying to preserve and contribute to their communities, and that's not a financial motive.

        If we're ever going to tackle this problem, I think the solution is combining humans with the AI. Let the AI systems work like lasers scanning through all the data for things to bring to a human's attention, rather than making decisions themselves. If the tools are built to amplify human labor, rather than replacing it, that multiplicative effect might make a real difference. Using modern adversarial neural networks, it might even be possible to improve these AI systems slowly over time using feedback from the moderators so that they become more effective at finding problems and executing batch-like tasks on large swaths of content.
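
        A minimal sketch of that flag-and-review loop, in Python, just to make the shape of it concrete (the scoring function, threshold, and class names here are all hypothetical stand-ins, not anything these platforms actually run):

        ```python
        from dataclasses import dataclass, field
        from typing import List, Tuple


        @dataclass
        class Item:
            content: str
            score: float = 0.0  # model's estimated probability that the item is problematic


        def fake_model_score(text: str) -> float:
            # Stand-in heuristic so the sketch runs; a real system would call a trained classifier.
            return 0.9 if "conspiracy" in text.lower() else 0.1


        @dataclass
        class ReviewQueue:
            threshold: float = 0.8  # only surface high-confidence flags to a human
            pending: List[Item] = field(default_factory=list)
            labels: List[Tuple[str, bool]] = field(default_factory=list)

            def triage(self, items: List[Item]) -> None:
                """AI pass: score everything, but only enqueue items for human review."""
                for item in items:
                    item.score = fake_model_score(item.content)
                    if item.score >= self.threshold:
                        self.pending.append(item)

            def moderator_decision(self, item: Item, remove: bool) -> None:
                """Human pass: the moderator makes the actual call, and the decision is
                kept as a labelled example for later retraining of the model."""
                self.labels.append((item.content, remove))


        if __name__ == "__main__":
            queue = ReviewQueue()
            queue.triage([Item("cute cat video"), Item("9/11 conspiracy megathread")])
            for flagged in queue.pending:
                queue.moderator_decision(flagged, remove=True)  # one click from a human
            print(f"{len(queue.pending)} flagged, {len(queue.labels)} training label(s) collected")
        ```

        The point of the design is that the model never removes anything on its own; it only narrows the firehose down to a queue, and every human decision doubles as training data.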

        We need one click from a moderator to take dozens, hundreds, or even hundreds of thousands of clicks of effort from the asshats to overcome. When the effort scale is shifted that severely against the hostile community actors, they'll give up. That's the only way to win. We're nowhere near this right now; if anything, the scale is shifted thousands of clicks in favor of the hostile actors, because our moderation tools haven't traditionally been designed this way.

        Oh, and banning the asshats works too. There are plenty of communities out there that have survived since the mid-80s or even longer, and all of them do it by restricting or charging for membership somehow. That's also why most of them aren't big/popular.

        4 votes
      3. 45930
        Link Parent

        It seems like, in the long term, the realistic ability to moderate content will run up against people's desire to interact with strangers online. If the content available doesn't meet people's desires, they will just join a smaller platform or abandon the idea of social media.

        I, for one, would not stop using YouTube just because there is 9/11 conspiracy content available. I'm sure that feature is aimed at salvaging ad dollars, not me. I guess YouTube is on track to shutter eventually from the supply side if they can't get this right. Basically, today's social media isn't sustainable. We're still in the "fake it" stage. Everyone is losing money or users. No one has the secret sauce yet. Maybe we are back on the path to more niche forums with community modding and community-funded hosting.

        2 votes
  2. hamstergeddon
    Link

    Could it be that the system was picking up on Islamophobic fearmongering in stream comments, like you might see on 9/11-related videos? The alt-right was pretty quick to blame Muslims, immigrants, etc., so I wouldn't be surprised if the kinds of things being said were very similar.

    2 votes