14 votes

A dad took photos of his naked toddler for the doctor. Google flagged him as a criminal.

8 comments

  1. pallas
    • Exemplary

    Arguably, this story has little to do with child abuse imagery at all. The more general type of story here, which seems increasingly common with Google, and the real damage being done, seems to come from the conflict between Google positioning itself as integral and indispensable to daily life and to document storage that can and should be relied upon, while also having a policy of proactively and unilaterally banning individuals outright for often opaque reasons, often seeming to rate its own judgement, and its own sense of truth, over those of state bodies. These stories pop up from time to time, with various reasons for the bans: sometimes there isn't even any known reason.

    Google heavily pushes itself as a wide-ranging, indispensable, and reliable suite of services. It encourages users to keep huge amounts of priceless personal data only on Google servers, and makes regularly backing that data up elsewhere difficult for average users. It encourages users to have a Google-controlled online identity, not just through email addresses that are ultimately Google-controlled, but through security practices like OAuth that, whatever the ostensible intentions, in practice usually end up causing third-party, unrelated services to require using Google's services (e.g., Tailscale's only recently corrected horrible use of it), and through MFA methods that end up relying on continued Google account access, like email-based MFA and MFA credentials backed up to Google servers. Through account-requiring sharing services, it encourages users to compel others to use Google services as well, with things like shared calendars, documents, files, and photographs.

    It also ties basic phone features, even on third parties' hardware, to its services: as far as I can tell, if you have an Android phone other than a Samsung, using it for contactless payments is only possible via Google, because of agreements between Google and banks; and through SafetyNet, accessing basic banking services requires using Google-blessed operating systems and allowing a certain level of access to Google. At a broader level, by encouraging the use of reCAPTCHA, even accessing basic public services in many countries practically requires agreeing to Google's terms of use, especially as COVID seems to have provided an excuse for what appears to be becoming the often-permanent suspension or limitation of many non-online, in-person access methods. As an example, in order to pay my California property taxes online, on a state website, I must agree to Google's terms of service, and Google is, in its sole discretion, able to decide whether I should be allowed to pay online or not.

    But then, at the same time, Google seems to assert the right to unilaterally and immediately stop doing all business with an individual, for any reason, and with no real recourse. This might make sense for a normal business, but it comes into conflict with an entire business model built around the suggestion that this isn't a possibility, and that using Google services is both safe and necessary for comfortable everyday life. Thanks to the poor individual protections and lopsided legal system in the US, it appears that Google is able to position its largely algorithmic, business-protecting view of incidents as being above actual state views. Here, Google's comments to the NY Times appear to be saying that they believe these individuals are guilty of child sexual abuse, despite state investigations deciding otherwise. This would appear to be clearly defamatory, and yet there is no consequence. Similarly, they are able to have policies that essentially cut off all access to personal data, with no consequence. Legal protections in the EU would presumably prevent the latter through the GDPR's right of access, and I think would also go a reasonable way in creating consequences for the former, through defamation law, the GDPR, and the right to human review of automated decisions; but in the US, it appears that Google is ultimately the arbiter of these matters. And Google, like other large online services, seems able to pass off as "human review" workers paid simply to approve algorithmic decisions.

    Frustratingly, in these stories, all too often the user is vaguely blamed for having done something not quite right, or people are vaguely advised to avoid activities that might trip up Google's algorithmic judgements. Here, in order to avoid triggering Google's algorithms and NCMEC's involvement, it appears that some organizations make statements advising parents not to follow the advice of their pediatricians: even if the pediatricians' advice might be problematic, and it would be safer to build more secure, dedicated systems for these sorts of medical communications, it seems dangerous to suggest that users should prioritize Google's algorithmic review over their children's doctor, and such advice ignores the fact that in-person access to medical services is being made increasingly difficult. And there seems to be no advice at all about recovering from the situation of having Google decide, contrary to the state's decision, that you are a criminal, and be confident enough to stand behind this classification publicly in the New York Times.

    Alternatively, these stories are given as examples of why people should avoid using large internet services, but this often overlooks just how difficult and limiting that can be. The article gives examples of consequences that can largely be seen as preventable: had the user kept their data backed up elsewhere, for example, or had they not made Google's email an important part of their identity. But not all consequences of being banned from Google services, or of declining to use them, are so avoidable. In academia, many universities throughout the world have Google or Microsoft services as integral parts of their IT systems, and I expect many businesses do as well. Many organizations also use Google services informally, for example, for collaborating on documents, or for scheduling through Google Calendar. Socially, many organizations rely exclusively on large internet companies' services to communicate, and many people only use walled-garden communication systems, effectively limiting the ability of people who don't use them to form connections except within certain tech-and-privacy-inclined cultures.

    So, if you're a business that uses Google Calendar to schedule shifts, and one of your employees is suddenly unable to use Google Calendar, isn't it easier to fire them? If you have an employee who is no longer allowed to have a Google or Microsoft account, is it worth changing your entire email, document storage, and meeting system, or building accommodations for them, just to keep them? If you're a company that contributes heavily to open source projects, and Microsoft's algorithms decide to ban one of your programmers from GitHub, how are they going to continue to be a productive employee for you? And if an employee won't use these services, why would you hire them? If you're a bank, why deal with customers who don't have Google- or Apple-approved smartphones, when you can rely on those companies for security at the cost of excluding some edge cases?

    It's a real concern, and likely does need some regulation: if your business model is built around trying to make use of your services practically necessary to be part of society, then you should not be able to turn around and arbitrarily ban people from using those services, regardless of whatever fine print you might write to absolve yourself of responsibility. And if you decide that you, not the state, should be able to pass judgement on people, then should you not also be subject to the sorts of regulations a state making such decisions would be?

    16 votes
  2. [6]
    kwyjibo

    Kind of strange reading this, as I had a moment of hesitancy yesterday when I was scanning some old family photos and came across the photo of the first bath my mother and I gave my brother. I ended up scanning and uploading the photo to iCloud, but the fact that I even thought about this highly unlikely event that might label me as something I'm not and lock me out of my digital life, which is largely on Apple's platform, was bothersome. Apple has yet to scan photos for CSAM, but I have no doubt that it will be implemented once the backlash from last year dies down.

    I already back my important stuff up to other media (following the 3-2-1 rule: three copies, on two different kinds of media, with one copy offsite), but in a situation like this, I'd guess getting locked out of your accounts or even having your data permanently deleted would be the least of your problems.
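
    As a minimal sketch of what the 3-2-1 rule implies in practice, here is roughly what verifying one of the extra copies can look like; the paths are hypothetical placeholders, not a recommendation of any particular tool or layout:

    ```python
    # Verify a 3-2-1-style backup: compare the primary photo library
    # against a second copy on different media, file by file, by hash.
    # (The third, offsite copy would be checked the same way.)
    import hashlib
    from pathlib import Path

    PRIMARY = Path("/photos")              # hypothetical primary copy
    SECOND_COPY = Path("/mnt/nas/photos")  # hypothetical second medium

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    for src in PRIMARY.rglob("*"):
        if src.is_file():
            dst = SECOND_COPY / src.relative_to(PRIMARY)
            if not dst.exists():
                print(f"missing from backup: {dst}")
            elif sha256(src) != sha256(dst):
                print(f"contents differ: {dst}")
    ```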

    I'm of the opinion that scanning photos for CSAM is a mistake, but Alex Stamos, whose opinion I value highly, has been arguing the contrary with people on his Twitter account, and having worked as the CSO of Facebook, he has insight into the scale involved that I appreciate. (His long back-and-forth with Matthew Green features Kashmir Hill, the writer of the article, as well.)

    6 votes
    1. [5]
      skybrian

      Without getting into CSAM specifics, I think scanning for bad stuff is often necessary but canceling the account is too harsh, given the possibility of false positives. The way I think about it is that if a bank decides they don’t want you anymore as a customer, you will (I presume) still get your money back. There should be something similar for digital files.

      2 votes
      1. [4]
        stu2b50

        I'm not sure that analogy entirely works, though. If the bank suspects you are a perpetrator of fraud, or a money launderer, or any other manner of financial criminal, then it absolutely will NOT give your money back, and will freeze your account pending investigation by law enforcement. That even happens to entire nation states, like one very prominent one right now, which have their assets frozen or even confiscated.

        5 votes
        1. [3]
          skybrian

          Yes, I was thinking of situations where it’s not a law enforcement matter but for whatever reason they don’t want the risk.

          Still, this article is about situations where law enforcement cleared the customer. So it's not a law enforcement matter anymore, but they still don't get their files back. I think in a similar situation, a bank account would be unfrozen?

          4 votes
          1. pallas

            What's bizarre about this story is that it appears law enforcement cleared the customer, but Google appears to disagree with law enforcement. From the article:

            As for Mark, Ms. Lilley, at Google, said that reviewers had not detected a rash or redness in the photos he took and that the subsequent review of his account turned up a video from six months earlier that Google also considered problematic, of a young child lying in bed with an unclothed woman.

            A Google spokeswoman said the company stands by its decisions, even though law enforcement cleared the two men.

            Somewhat similarly, it appears that the NCMEC's only comment on the topic is that detecting the images is “an example of the system working as it should.”

            On the one hand, from a data perspective, yes, it would seem that if the US had something like the GDPR, these men would at least have access to their data. But on the other, it seems astonishing that Google is willing to all but state that it considers these men to be guilty of sharing child abuse images, despite the state disagreeing.

            4 votes
          2. stu2b50

            That is true. It's an area where laws and conventions could change in the future: we treat money as a very "sacred" thing. Barring special cases like bankruptcy or illegal activity, it's unacceptable for holders of money not to return it. Perhaps in the future the same could be true of personal data.

            2 votes
  3. teaearlgraycold

    I can understand flagging situations where you have a pixel-by-pixel copy (or a cropped version, etc.) of a known image of child abuse. But relying on computer-vision classification of novel images seems like an approach where situations like this should have come up during the design phase.
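
    As a minimal sketch of that distinction (the hash values, distance threshold, and classifier score here are made-up assumptions, not any provider's actual pipeline):

    ```python
    # Known-image matching: a 64-bit perceptual hash of the photo is
    # compared against hashes of already-verified abuse imagery, so only
    # near-copies (crops, re-encodes) of a *known* image can match.
    KNOWN_HASHES = {0x9F3AD2C4E1B07785, 0x1C2B3A4D5E6F7081}  # made-up values

    def hamming(a: int, b: int) -> int:
        # Number of differing bits between two 64-bit perceptual hashes.
        return bin(a ^ b).count("1")

    def matches_known_image(image_hash: int, max_distance: int = 4) -> bool:
        # An unrelated novel photo essentially never lands within a few
        # bits of a known hash, so false positives are rare by design.
        return any(hamming(image_hash, h) <= max_distance for h in KNOWN_HASHES)

    def classifier_flags(score: float, threshold: float = 0.98) -> bool:
        # 'score' would come from a vision model judging a *novel* image.
        # However high the threshold, some legitimate photos (e.g. one
        # taken for a doctor) will score above it: false positives are
        # inherent to this approach, not an edge case of it.
        return score >= threshold
    ```

    The first check only fires on near-copies of known material; the second is a probabilistic judgement about brand-new images, which is exactly where cases like this one come from.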

    1 vote