10 votes

Researchers who built a similar system explain why Apple's CSAM scanning system is dangerous

3 comments

  1. [3]
    MetArtScroll
    Link

    [The] system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.

    Also, the research shows that, in addition to the high number of false positives the system would generate (which some may say is justified since CSAM is unacceptable), it is extremely easy to slightly alter a genuinely infringing image or video and thus produce a false negative.
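    As a rough sketch of both points: in a perceptual-hash pipeline, the target database is just a list of hashes that could contain anything, and matching only tolerates small changes. The toy average hash below is a stand-in for Apple's NeuralHash (which works very differently); the names and thresholds are made up for illustration.

    ```python
    import random

    def average_hash(pixels):
        """Toy 64-bit perceptual hash of an 8x8 grayscale image: bit i is 1 if pixel i > mean."""
        flat = [p for row in pixels for p in row]
        mean = sum(flat) / len(flat)
        return sum(1 << i for i, p in enumerate(flat) if p > mean)

    def hamming(a, b):
        return bin(a ^ b).count("1")

    def matches(pixels, hash_database, max_distance=4):
        """True if the image's hash is within max_distance bits of any database entry."""
        return any(hamming(average_hash(pixels), target) <= max_distance
                   for target in hash_database)

    # The database is just data: nothing in the pipeline restricts it to CSAM,
    # which is the repurposing concern quoted above.
    random.seed(0)
    original = [[random.randint(0, 255) for _ in range(8)] for _ in range(8)]
    hash_database = {average_hash(original)}

    # A trivial alteration (slight brightening) usually still matches...
    brightened = [[min(255, p + 3) for p in row] for row in original]
    print(matches(brightened, hash_database))   # likely True

    # ...while a deliberate perturbation pushes the hash past the threshold:
    # a false negative for anyone actively evading the scan.
    inverted = [[255 - p for p in row] for row in original]
    print(matches(inverted, hash_database))     # False
    ```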

    4 votes
    1. vord
      Link Parent

      Effective against the masses, trivially bypassed by anyone who researches it for five minutes. I guess they could try to force takedowns of that information via DMCA claims for violating Apple's copyright protections.

      This seems more and more like yet another surveillance/censorship tool deployed on the premise of 'protecting the children'.

      Protecting the children is a social issue, not a technical one. I doubt a technical system will ever exist that can stop children from being groomed into accepting abuse later in life.

      3 votes
    2. NaraVara
      Link Parent

      Also, the research shows that in addition to the high number of false positives the system would generate

      According to Apple's white paper, the odds of a false positive match are around 1 in 1 trillion. And that's only to hit the threshold of (I believe) 30 matches before it gets passed to a human reviewer.

      it is extremely easy to slightly alter a really infringing image or video thus generating a false negative.

      Not really. There are fudge factors built in to counter the most accessible forms of this.

      Also, they're public about which databases they're using, and those databases are audited by organizations with the authority to validate what they're for. And on top of that, even after an account is escalated, the human reviewer would still need to agree that the content is infringing.
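      For a sense of how the match threshold drives the per-account number down, here is a back-of-the-envelope binomial tail calculation. The per-image false-positive rate and library size below are made-up inputs, not Apple's published figures; the "1 in 1 trillion" is Apple's claimed per-account result.

      ```python
      from math import comb

      def prob_at_least(threshold, n_images, p):
          """P(at least `threshold` independent false matches among n_images images).

          Sums the leading terms of the binomial upper tail; later terms are
          negligible for the tiny per-image rates considered here.
          """
          upper = min(n_images, threshold + 100)
          return sum(
              comb(n_images, k) * p**k * (1 - p)**(n_images - k)
              for k in range(threshold, upper + 1)
          )

      # Hypothetical inputs: a 10,000-photo library, a one-in-a-million
      # per-image false-positive rate, and a 30-match threshold before review.
      print(prob_at_least(1, 10_000, 1e-6))    # ~1%: single false matches happen
      print(prob_at_least(30, 10_000, 1e-6))   # ~4e-93: thirty essentially never
      ```

      The sketch only shows why a per-account probability falls off combinatorially once a threshold is required; the disagreement above is over whether the per-image rate and the hash's robustness assumptions hold in practice.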

      3 votes