I have a gut feeling that age verification has more to do with de-anonymizing the internet than it does anything else. It's just a first step towards forcing real names and unique IDs into every online service.
This will probably create a barrier of entry for new companies and communities to form too. This will build a moat around established companies and protect their profits from having to deal with pesky competition.
I assumed it was because of all the bots. A bunch of people flooded the entire internet with bot armies and ruined it, so now we can't have nice things like accounts with no identification.
There are much easier and less invasive ways of dealing with bots
What would you suggest?
It really depends on the use case, but as an alternative to age verification I assume we are talking about services that require accounts?
Email or SMS verification?
Requiring a small one time payment per account?
Blocking disposable or low reputation email providers?
Delayed or multiple step signups?
Invite only accounts?
CAPTCHA?
Honeypots?
I think a lot of services that are infested with bots aren’t trying very hard to stop it because it benefits some metrics.
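Two of the cheaper options in that list, a honeypot field and a disposable-email blocklist, are a few lines each. A rough sketch (the field name and domain list are made up for illustration):

```python
# Honeypot: a form field hidden with CSS that humans never see,
# but naive bots fill in every input they find.
# Blocklist: reject signups from known disposable-email domains.
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.example"}  # illustrative list

def looks_like_bot(form: dict) -> bool:
    # Real browsers submit the hidden "website" field empty.
    if form.get("website", ""):
        return True
    # Check the email's domain against the blocklist.
    domain = form.get("email", "").rsplit("@", 1)[-1].lower()
    return domain in DISPOSABLE_DOMAINS
```

Neither check stops a determined attacker, but stacked together with rate limits and delayed signups they filter out a lot of low-effort automation at near-zero cost.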
Digg.com is shutting down (for a while), supposedly because of bots.
Maybe it's harder than you think to solve the problem, or maybe they aren't telling the truth about why they are shutting down.
Here is a Tildes post about it. Quoting Digg's announcement:

"We faced an unprecedented bot problem. When the Digg beta launched, we immediately noticed posts from SEO spammers noting that Digg still carried meaningful Google link authority. Within hours, we got a taste of what we'd only heard rumors about. The internet is now populated, in meaningful part, by sophisticated AI agents and automated accounts. We knew bots were part of the landscape, but we didn't appreciate the scale, sophistication, or speed at which they'd find us. We banned tens of thousands of accounts. We deployed internal tooling and industry-standard external vendors. None of it was enough. When you can't trust that the votes, the comments, and the engagement you're seeing are real, you've lost the foundation a community platform is built on. This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product."
I don't think they started having a significant bot problem until they dropped the invite only requirement. I suspect they dropped that because they "needed more growth". They didn't really provide any details or post mortem on what they actually tried to stop the bots.
I've implemented bot protections at my work that have resulted in an 80%+ reduction in bot traffic/interactions.
These laws are being penned by internet giants for sure, but I think most of it is about removing the current liability they have for children on their platforms.
They can afford to cover the liability. These companies are richer than God.
It's about control, imo
It can be both.
Which they're more concerned about probably varies by company sure.
Do they actually have liability? Virtually every successful fine or lawsuit related to children on these platforms is due to data collection. They could just... not collect the data from everyone?
Yes? In the UK, for example, they are liable for massive fines and even being blocked from operating in the UK under the OSA: https://www.gov.uk/government/publications/online-safety-act-explainer/online-safety-act-explainer
So I just did some admittedly very light searching around. It doesn't appear that a single company that has been fined has actually paid the fine or implemented significant changes as a result? Maybe it's "to come", but it seems pretty toothless so far?
For example they fined 4chan. I don't think 4chan has paid or changed ANYTHING as a result.
There is a chicken and egg question there though: who/what was the driving force that pushed through unpopular and extremely technically flawed legislation like the OSA to create that liability in the first place?
I don’t think the OSA is particularly unpopular. Unpopular amongst people that spend time on the internet, but not amongst the general public.
It’s seen as regulation taking power away from US, i.e. foreign, corporate entities.
It’s a confusing one, because the same people simultaneously say they support age checks, say they don’t think they’ll work, say they’ll lead to data breaches, say it’ll lead to government censorship, and say they’re not willing to go through the age verification process themselves: https://www.ipsos.com/en-uk/britons-back-online-safety-acts-age-checks-are-sceptical-effectiveness-and-unwilling-share-id
The vast majority of the conversation I’ve seen around it is on the age verification side, too, not really on curtailing the power of foreign big tech in the same way that, say, GDPR is seen. Maybe that’s partly my bubble, but the polling does seem to focus that way too.
Perhaps calling it unpopular was oversimplifying, but I do still think the broader question stands: people are at best confused, and more broadly somewhere between sceptical and hostile to what they understand of the OSA - so what’s driving the spike in very similar laws in multiple countries right now when that confusion and/or hostility suggests it’s not a matter of the people demanding it?
If we are talking about protecting children and making sure they are not exposed to harmful content, my naive approach would be an expanded social safety net, increased funding for education, and reform of the education system as a start.
Less overworked and hopeless parents are better able to engage with their kids. Someone in the habit of critical thought is better able to decide for themselves if something would be harmful to them. I am not a politician though.
I saw a comment about this topic on reddit that was something like "If we're so worried about protecting children then we should first arrest the known predators that were exposed in the Epstein scandal".
I've been following this a lot since I have a young kid (not old enough to have a device or be online yet) and I think communication and education is the best path forward for the end-user. I have huge doubts this will come to pass though because there is no money in this path so there is no way to lobby for that position. At least that's how it feels here in Ontario/Canada.
I do, however, believe the platforms need to be responsible for the content users create and distribute on them. I think you can do this without a huge surveillance stack and de-anonymization tools aimed at knowing me and my kid and everything we do online. Again, this is profit-prohibitive for them since it requires having real moderation teams and processes that are taken seriously.
Age verification is being done badly because the Internet standards don't exist to do it well. It should be trivial for parents to give their kids devices with child locks turned on. The browsers on these devices could tell websites that they have a child lock on, and websites could act accordingly. This of course requires cooperation, but cooperation should be available - most porn sites probably don't want kids to be there? It solves 90% of the problem without invading anyone's privacy.
This should just be a config parameter. When setting up a website, you should be able to configure what it should do when getting requests from child-locked devices. Outsourcing to sketchy third-party firms to do age verification shouldn't be necessary.
For non-cooperating websites, child-locked devices can use whitelists or blacklists to handle the rest, like they often do now. Also, governments can pass laws saying what businesses should do and hold the bigger businesses accountable for configuring their website correctly.
Then the question is what to do about devices without child locks turned on getting into the hands of kids, but this should be a lot easier for parents to deal with. It's not going to be done perfectly - sometimes kids will have a non-child-locked device stashed somewhere. But perfect enforcement isn't necessary to change community norms.
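The cooperative signal could be as simple as one request header plus one server-side setting. A sketch, where the header name and the policy values are invented for illustration (no such standard exists today):

```python
# Hypothetical protocol: a child-locked browser sends "Child-Lock: 1"
# on every request, and the site owner picks one policy in its config.
SITE_POLICY = "block"  # one of: "allow", "filter", "block"

def respond(headers: dict) -> str:
    child_locked = headers.get("Child-Lock") == "1"
    if not child_locked or SITE_POLICY == "allow":
        return "full content"
    if SITE_POLICY == "filter":
        return "filtered content"  # e.g. hide adult sections
    return "403: not available on child-locked devices"
```

The point is that the site never learns who the visitor is, only that the device is flagged, and the whole decision is one config parameter rather than an outsourced identity check.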
I think we are in dire need of a system that:
Can prove you're an actual citizen of a country (to prevent botting and disinformation)
Can prove you're above some age
Doesn't provide any other information
I don't think that's a particularly hard problem to solve technically if you have a trustworthy government that isn't corrupt to the core by the oligarchy. It should be clear to us that the issue is the corrupt government and not having some safety on the internet in principle. Just lobbying against safety without that context seems a bit dangerous to me.
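For the age-proof part at least, the cryptographic building blocks have existed for decades. A toy sketch using textbook RSA blind signatures (parameters far too small for real use, illustration only): the government signs an "over 18" token without being able to link the signed token back to the citizen, and any website can verify it with the public key.

```python
from math import gcd

# Toy RSA parameters for the government signer (insecure, illustrative).
p, q = 10007, 10009
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # signer's private exponent

# The token being attested: "over 18", and nothing else.
m = int.from_bytes(b"18+", "big")

# 1. Citizen blinds the token with a random factor r.
r = 12345
assert gcd(r, n) == 1
blinded = (m * pow(r, e, n)) % n

# 2. Government signs the blinded token after verifying age in person;
#    it never sees the token it is actually signing.
blinded_sig = pow(blinded, d, n)

# 3. Citizen unblinds, yielding a valid signature on the original token.
sig = (blinded_sig * pow(r, -1, n)) % n

# 4. Any website verifies with only the public key (n, e).
assert pow(sig, e, n) == m
```

Real deployments would need proper key sizes, padding, and replay protection, but the core property, a verifiable attribute with no linkable identity, is exactly what the three requirements above ask for.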