66 votes

Google decides to pull up the ladder on the open internet, pushes for unconstitutional regulatory proposals

12 comments

  1. [2]
    bugsmith
    Link

    Summary of the article:

    • Google recently announced a new policy framework around protecting children and teens online, pushing for legislative models like California's Age Appropriate Design Code.

    • The California law was recently found to be unconstitutional, yet Google is still advocating for a similar approach through legislation.

    • Google's proposed model includes requirements like age assurance, greater parental surveillance, and impact assessments - things that are manageable for large companies but create compliance nightmares for smaller competitors.

    • While presented as an alternative to banning teens from social media, Google is actually pushing for regulations like age verification that will make it difficult for all but the largest tech giants to allow teens online.

    • Google benefited from an open internet to become successful but is now undermining it through supporting restrictive regulatory approaches.

    • Google appears to be "pulling up the ladder" behind itself knowing it can comply with new rules while disadvantaging startups and smaller competitors.

    • Similar moves by companies like Facebook previously backfired and undermined internet openness.

    • These types of laws will inevitably be expanded and weaponized against the companies themselves.

    • Google had previously fought more for internet openness but is now clearly showing it has no intention of being a "friend" to an open internet.

    49 votes
    1. raze2012
      Link Parent

      While presented as an alternative to banning teens from social media

      Yes, ban teens from X (not the website). That's been a 100% foolproof plan since the dawn of time, and teens have never found ways to get around it, ever.

      You can try to wall off minors, but when is society going to take responsibility, sit down, and communicate the allure and dangers of the stuff teens aren't fully ready for? "Adolescent" comes from the Latin for "coming to maturity," so adolescence should be exactly the time to prepare them for what they'll have access to at maturity, not the time to lock it in some vault that never opens until they hit that magic number.


      Slight tangent aside: if I had a nickel every time we tried to close something down with "for the safety of children" as a scapegoat, I could probably lobby myself to keep things open. Honestly, I'm most surprised that California's Age Appropriate Design Code went through to begin with (it's now blocked, nine months before it was to take effect).

      27 votes
  2. [8]
    skybrian
    Link

    The article is highly editorialized. Here’s Google’s proposal. An excerpt:

    Where required, age assurance – which can range from declaration to inference and verification – should be risk-based, preserving users’ access to information and services, and respecting their privacy. Where legislation mandates age assurance, it should do so through a workable, interoperable standard that preserves the potential for anonymous or pseudonymous experiences. It should avoid requiring collection or processing of additional personal information, treating all users like children, or impinging on the ability of adults to access information. More data-intrusive methods (such as verification with “hard identifiers” like government IDs) should be limited to high-risk services (e.g., alcohol, gambling, or pornography) or age correction. Moreover, age assurance requirements should permit online services to explore and adapt to improved technological approaches. In particular, requirements should enable new, privacy-protective ways to ensure users are at least the required age before engaging in certain activities. Finally, because age assurance technologies are novel, imperfect, and evolving, requirements should provide reasonable protection from liability for good-faith efforts to develop and implement improved solutions in this space.

    It strikes me as advice to legislators not to do anything dumb. Maybe they won’t listen, but I think the fault would be with the legislature?

    I don’t see any particular reason why smaller Internet services that need it couldn’t outsource age “assurance,” much like happens with captchas.
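
    To illustrate, here's a minimal sketch of what that outsourcing could look like. Everything in it is hypothetical -- the provider "agecheck.example", its endpoint, and the response field are invented stand-ins, patterned on how captcha verification is already outsourced:

        import requests

        AGECHECK_SECRET = "provider-issued-secret"  # hypothetical credential

        def user_is_of_age(assurance_token: str) -> bool:
            # The user's browser obtains a token from the third-party provider;
            # the relying site only asks the provider whether the check passed.
            # It never sees which signals (documents, inference, etc.) were used.
            resp = requests.post(
                "https://agecheck.example/v1/verify",  # hypothetical endpoint
                data={"secret": AGECHECK_SECRET, "token": assurance_token},
                timeout=5,
            )
            resp.raise_for_status()
            return resp.json().get("of_age", False)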

    8 votes
    1. [6]
      sparksbet
      Link Parent

      I personally would be quite suspicious of what "inference and verification" entails -- it's all well and good to claim it should respect users' privacy, but is that really possible? Even if they throw away whatever data they use for "inference" immediately without using it for anything else, I'd be skeptical of what data they're gathering to make such inferences in the first place. Especially since they're suggesting the use of "hard identifiers" like government IDs for age correction, which is presumably for adults who are incorrectly classified as children by this "inference and verification" process. I'm not sure it's reasonable to insist that the people for whom this process is most flawed (almost always already-oppressed minorities) submit their government ID to either the site they're using or whatever company is providing them with "age assurance" in order to use normal social media like other adults.

      Note that Google also advocates protecting those developing and implementing "age assurance" technology from liability -- as someone who works in machine learning (though not in this domain), this is by far the most worrisome part of the statement for me. We've seen in the past how dangerous and racially biased facial recognition technology is, and many of those working in this space will be developing or implementing variations of that same technology for this task. I don't think it's safe or ethical to give such companies protection from liability just because these technologies are "novel, imperfect, and evolving". That's honestly pretty dangerous advice to give legislators imo.

      15 votes
      1. [5]
        skybrian
        Link Parent

        It depends on what you mean by privacy. In the case of Google's and Cloudflare's captchas, they don't share whatever data they used, just the outcome. So, the relying website gets very little information.
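
        Concretely, the relying site's entire view of a captcha transaction is the verdict. A minimal sketch against reCAPTCHA's server-side siteverify endpoint (the secret and client token values are placeholders):

            import requests

            def captcha_passed(client_token: str, secret: str) -> bool:
                # Google returns a pass/fail verdict (plus a little metadata),
                # not the behavioral or browser signals it was based on.
                resp = requests.post(
                    "https://www.google.com/recaptcha/api/siteverify",
                    data={"secret": secret, "response": client_token},
                    timeout=5,
                )
                return resp.json().get("success", False)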

        How dangerous facial recognition is depends on what it's used for. Apple's Face ID might not always work, but it doesn't seem dangerous.

        I think the reason they call it "age assurance" is that it's not meant to be foolproof. Some people will get in who shouldn't, and often that's okay if it's not too many. It's not hard security.

        1 vote
        1. [4]
          raze2012
          Link Parent
          when "inference" is involved, that's honestly more dangerous. Google can simply guess or decide that some face or other aggregate data doesn't "look 18" and have oodles of false negatives. And...

          In the case of Google's and Cloudflare's captchas, they don't share whatever data they used, just the outcome.

          when "inference" is involved, that's honestly more dangerous. Google can simply guess or decide that some face or other aggregate data doesn't "look 18" and have oodles of false negatives. And vice versa.

          How dangerous facial recognition is depends on what it's used for.

          I'd argue it's more dangerous the farther away from you it is. Facial scans stored in your phone's cache aren't so bad, because a stolen phone already has dozens of privacy ramifications.

          My face stored in the cloud is a data breach waiting to happen, and one you can't just change your "password" for.

          I think the reason they call it "age assurance" is that it's not meant to be foolproof. Some people will get in who shouldn't, and often that's okay if it's not too many. It's not hard security.

          If it's not foolproof and not that big a deal, why make a whole government bill around it? Who is punished and who is protected in this case, and for what? I feel the proposals, even read from Google's own statement, fail to address these core points.

          It claims that:

          It should avoid requiring collection or processing of additional personal information, treating all users like children, or impinging on the ability of adults to access information

          but then also says

          Moreover, age assurance requirements should permit online services to explore and adapt to improved technological approaches. In particular, requirements should enable new, privacy-protective ways to ensure users are at least the required age before engaging in certain activities.

          giving the more obvious "certain activities" as examples, but we don't know how far this could spread. The edge cases are always the most important and least explored factors for someone whose goal is to get a bill or proposal passed. And being "exploratory" runs counter to the idea that it's important not to "impinge on the ability of adults to access information."

          What's the fallback for those inevitable false negatives? Give up their hard identification, like Meta has been doing for years?
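
          To put those false negatives in concrete terms, here's a toy sketch with entirely invented numbers (not from any real system): a single confidence threshold on a noisy "looks 18+" score inevitably flags some adults as minors, and flags more of them in any group the model scores less accurately:

              # Toy illustration only; every number below is made up.
              THRESHOLD = 0.80  # hypothetical "confident the user is 18+" cutoff

              # (is_actually_adult, model_score) pairs for two hypothetical groups
              scores = {
                  "group_a": [(True, 0.95), (True, 0.88), (True, 0.83), (True, 0.72)],
                  "group_b": [(True, 0.85), (True, 0.74), (True, 0.61), (True, 0.55)],
              }

              for group, users in scores.items():
                  adults = [s for is_adult, s in users if is_adult]
                  locked_out = sum(1 for s in adults if s < THRESHOLD)
                  print(f"{group}: {locked_out}/{len(adults)} adults flagged as minors")
              # group_a: 1/4 adults flagged as minors
              # group_b: 3/4 adults flagged as minors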

          1 vote
          1. [3]
            skybrian
            Link Parent

            I don't know what they're going to do. My guess is that the idea is to err on the side of letting people in.

            The US already has pretty strict laws applying to services meant for children (COPPA), so the question is whether new laws would be better or worse.

            One way this comes up is when uploading a video to YouTube. If you check the box saying your video is "meant for children" then YouTube disables leaving comments on that video.

            2 votes
            1. raze2012
              Link Parent

              Yeah, that COPPA stuff on YouTube is exactly what makes me scrutinize such "inference" from Google. The number of South Park or Family Guy clips automatically marked as "For Kids" should be enough to show how well these inferences work. And those are the two most famous "not for kids" pieces of animation, the least edge of edge cases (they can certainly identify, or have creators self-report, the work being used, so it's not an issue of "oh, the algorithms don't know it's South Park").

              Of course, that's not my primary issue with new laws (not being able to comment on random FG clips); it's just a particular pet peeve you reminded me of.

              3 votes
            2. sparksbet
              Link Parent

              One way this comes up is when uploading a video to YouTube. If you check the box saying your video is "meant for children" then YouTube disables leaving comments on that video.

              ...after Google got in trouble for deliberately advertising to children without regard for the law.

              I don't think it's fair to assess the scenario based on an assumption that they'll err on the side of letting people in vs. keeping people out when we don't have any clear evidence either way (the only existing solution I'm aware of admits it doesn't work consistently for those under 21, iirc, but I'll admit to not really researching the options in depth there). Google's advice is not flexible in this regard and doesn't advocate erring on the side of caution in either direction. Plus, it advocates shielding both those who develop such software and those who implement it from liability, which does not seem like a move that incentivizes being careful about not causing harm.

              1 vote
    2. HeroesJourneyMadness
      Link Parent

      That was my thought too. It could be more fear-mongering and pandering to lazy, bad parents and the religious right. It could also be that Google is willing to put forward some simple free toolkit that moves things forward nicely… or backdoors a new revenue stream. Who knows. It’s always a giant pain in the butt to try to get down to some semblance of confidence about what the “right” answer is on these things. I’ll wait for more sources, and for some media names I’ve heard of, to weigh in.

      2 votes
  3. [2]
    BusAlderaan
    Link

    The author of this article is making the claim that Google is "pulling the ladder up behind them" without providing any actual evidence that it is. The only evidence for this narrative is that the author believes it; that's it.

    Look, I'm as wary of the tech giants' abuses as any informed internet user. But arguing against age verification on the grounds that Google is just trying to make things harder for some Joe Schmoe "Doogle" creator by requiring that they integrate age verification code feels like a truly terrible take.

    Personally, I'm for a robust discussion about how we regulate the internet to make it safer, and I'm not above discussing legislation that delineates between our physical freedoms and our digital freedoms. People pretend that age verification is a moral evil and refuse to even consider it, but I personally think it's just as toxic to refuse to acknowledge that some drunk white dudes wrote the Constitution centuries before the internet or cars or automatic weapons. Hell, they wrote it before women or minorities had rights. Maybe, just maybe, we should be having an open discussion about changing how we manage a world that looks nothing like it did back then.

    4 votes
    1. HeroesJourneyMadness
      Link Parent

      Crud. I’m going to have to look into this more. I initially read the article in reader view, which obscured the author; for whatever reason I didn’t see it was Techdirt.

      Mike Masnick is a stellar old-guard tech reporter and open-standards advocate. If HE is saying it, then there’s more than likely something there. Crap crap crap.

      1 vote