14 votes

Topic deleted by author

9 comments

  1. rkcr
    Apparently already unbanned, according to the updated article.

    13 votes
  2. [8]
    knocklessmonster
    Thumbing through their channel, I think I know what the issue is. Jeff Holiday, a skeptic I follow, has had similar issues with his videos, IIRC, because he includes clips of the content he's criticizing, but YouTube doesn't have a way to tell criticism from original content when it runs these checks for problematic videos. He posts videos about really left-field, and unfortunately increasingly mainstream, conspiracies and pseudoscience, and has been flagged because of some of the content he's criticized.

    From their most recent uploads, the way they run their channel seems to be an annotation paired with the unmodified segment in the video. The Algorithm doesn't have a way to go "Hey, this flagged content is here for education," so it nuked a channel rife with it.

    On Reddit, people were screaming about a right-wing conspiracy and YouTube protecting its bottom line, but I'd wager, given how Right Wing Watch's videos are formatted, that the same thing could happen again; at least now they'll have a fast track to resolution.

    11 votes
    1. [3]
      wycy
      After a couple of false-positive flags on a given channel, YouTube needs a way to tag that channel as already vetted and protect it from automatic takedowns.
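
      Roughly the kind of gate I mean, sketched in Python; `is_vetted` and `successful_appeals` are made-up names, not anything YouTube actually has:

      ```python
      # Sketch of the idea (field names invented): flags on vetted channels,
      # or channels with a history of winning appeals, go to a human reviewer
      # instead of triggering an automatic takedown.
      from dataclasses import dataclass

      @dataclass
      class Channel:
          name: str
          is_vetted: bool          # marked after prior flags were overturned
          successful_appeals: int

      def handle_flag(channel: Channel, reason: str) -> str:
          if channel.is_vetted or channel.successful_appeals >= 2:
              return f"human review: {channel.name} ({reason})"
          return f"automatic takedown: {channel.name} ({reason})"
      ```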

      8 votes
      1. Bullmaestro
        I think for partnered channels above a certain threshold, YouTube should switch to a manual verification process and actually employ people to investigate and make decisions.

        Even large creators are fucked over by the algorithm.

        6 votes
      2. knocklessmonster
        I think they already do this for high-profile channels, at least when something is obviously wrong, but it should definitely become standard once you've successfully appealed a couple of times.

        3 votes
    2. [2]
      RNG
      This makes sense, since ban evasion often works by repeatedly reuploading content to other accounts. I'd imagine consistently having clips of banned content tripped something somewhere.

      I doubt a human was in the loop from beginning to end. You'd kinda hope that channels over a certain number of subscribers would at least get a human rubber-stamping the action prior to a permanent ban...

      5 votes
      1. knocklessmonster
        I don't think a human needs to be involved at every step, but they should be involved early, and we should have easy access to one. Use the AI to narrow your moderation list to known offenders, but have a system in place to moderate the AI decisions. And help those moderators, because that job will suck.
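
        Something like this, very roughly, in Python; every name and the threshold here is made up:

        ```python
        # Rough sketch of "AI narrows the list, humans moderate the AI":
        # the model only queues videos for review, a person makes the call,
        # and every decision is logged so the model's hit rate can be audited.
        review_queue = []
        audit_log = []

        def triage(video_id: str, model_score: float) -> None:
            # model_score: the classifier's confidence the video violates policy
            if model_score >= 0.5:
                review_queue.append((video_id, model_score))

        def record_decision(video_id: str, model_score: float, removed: bool) -> None:
            # Keep the human's call next to the model's score, so disagreements
            # become audit data rather than silent overrides.
            audit_log.append({"video": video_id, "score": model_score, "removed": removed})
        ```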

        3 votes
    3. [2]
      Bullmaestro
      It's the pain of automation. Machine learning is in its infancy and no algorithm using present technology can distinguish criticism from problematic content, as you mentioned.

      Expect this problem to get worse until either the technology improves enough that fair decisions can be made, or the law and/or the market catches up with what YouTube has been doing.

      4 votes
      1. knocklessmonster
        > Machine learning is in its infancy and no algorithm using present technology can distinguish criticism from problematic content, as you mentioned.

        That's pretty much it.

        It is very much an issue of "who watches the watchmen?" We can't trust these systems to work well on their own, and they should operate in conjunction with human moderators by default. Hopefully the law can be changed to reflect this, but that would require legislators who understand these systems and their implications.

        3 votes