I'm definitely not doing this. And it sounds like avoiding it isn't going to be too painful for my personal needs?
Users who aren’t verified as adults will not be able to access age-restricted servers and channels, won’t be able to speak in Discord’s livestream-like “stage” channels, and will see content filters for any content Discord detects as graphic or sensitive. They will also get warning prompts for friend requests from potentially unfamiliar users, and DMs from unfamiliar users will be automatically filtered into a separate inbox.
There is no scenario where any of the Discords I'm in or would ever be in are worth sending my ID or face scan to a 3rd party company for.
I'm concerned it won't stop there. I wonder who is responsible for determining when a server is 'adult' oriented and what the criteria are? What are the consequences for mislabeling a server as not adult oriented if Discord later determines it is? Point being, I could see this becoming similar to Reddit or any other hierarchical situation where the people at the very top dictate the terms and everyone underneath them has to carry them out, even if the way of carrying them out is nonsense. It could be that every Discord server owner just labels their server as 'adult' to avoid onerous rules or consequences. Or perhaps Discord will only retag a server as adult oriented when it determines an untagged one should be, so no one is pressured to switch their server over, because the worst that can happen is what they would have had to do anyhow. But I suspect Discord may have to do more than that, because otherwise it's an easy way to bombard their system: spin up servers not tagged as 'adult' content, so anyone can view them, and it's on Discord to tag them all.
When will the criteria for something being 'teen' friendly become a server with no profanity or certain levels of violent content, etc.? Think about movie or video game ratings, where someone decides that if you're 16 you can't play GTA or watch a certain movie. What is going to make Discord immune from the pressure to apply some arbitrary bullshit rules once they have an identity verification system?
I don't expect any servers that I'm part of would qualify under 'adult' content as they've laid out in this post, so I'll possibly keep using Discord, but I surely hope there's a decent alternative because I don't have any intention of letting them store my facial characteristics in their database somewhere or my ID where the information can get leaked out and be used for any other purposes.
As these things usually are, it's vague. (Link | Discord) (Link | BBC)
I wonder who is responsible for determining when a server is 'adult' oriented and what the criteria are?
Servers must be classified as age-restricted if the community is organized around age-restricted themes or if the majority of the server’s content is focused on 18+ content.
How does the age inference model work?
We leverage an advanced machine learning model developed at Discord to predict whether a user falls into a particular age group based on patterns of user behavior and several other signals associated with their account on Discord. We only use these signals to assign users to an age group when our confidence level is high; when it isn't, users go through our standard age assurance flow to confirm their age. We do not use your message content in the age estimation model.
"For most adults, age verification won't be required, as Discord's age inference model uses account information such as account tenure, device and activity data, and aggregated, high-level patterns across Discord communities," said Badalich.
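The "only assign an age group when confidence is high" flow Discord describes can be sketched roughly like this. Everything here is invented for illustration: the signal names, weights, and the 0.9 cutoff are assumptions, not anything Discord has published, and a real system would use a trained classifier rather than a hand-weighted score.

```python
# Toy sketch of a confidence-gated age inference flow (all values assumed):
# score the "adult" hypothesis from behavioral signals, auto-assign only
# when the score is decisively high or low, otherwise fall back to the
# manual age-assurance flow.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff, not a published value

@dataclass
class Signals:
    account_tenure_years: float
    adult_community_ratio: float  # share of joined servers flagged 18+

def infer_age_group(signals: Signals) -> str:
    # Toy scoring: long tenure and adult-community membership push the
    # "adult" score up. A real model would be a trained classifier.
    adult_score = min(1.0, 0.1 * signals.account_tenure_years
                      + 0.5 * signals.adult_community_ratio)
    if adult_score >= CONFIDENCE_THRESHOLD:
        return "adult"
    if adult_score <= 1.0 - CONFIDENCE_THRESHOLD:
        return "teen"
    return "manual_verification"  # confidence too low either way

print(infer_age_group(Signals(account_tenure_years=12, adult_community_ratio=0.4)))
```

The point of the sketch is the middle branch: any account whose score lands between the two cutoffs gets pushed into the verification flow, which is exactly where the ID/face-scan concerns in this thread come in.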
Discord faced criticism in October after official ID photos of around 70,000 users were potentially leaked after a firm which helped it verify ages was hacked.
Personally, I know that both of the Cigar Club groups I'm in, and my OSINT and SOCMINT groups, will very likely be hit with this. The use of AI to determine user age based on device behavior feels counterintuitive. So let's say I look up a Minecraft building tutorial: do I suddenly need to worry about my status as an adult on Discord?
So let's say I look up a Minecraft building tutorial
Hah, that would be funny. My Minecraft account is 15 years old, meaning that unless I started playing when I was under three years old it is very unlikely that it belongs to someone under age.
But yeah, it is extremely vague and far from foolproof.
Three sounds incredibly young to do anything of substance but I believe it. Four/five is what I'd imagine some kids might be exposed to minecraft. We could probably ping in some experts from the tildes minecraft server and ask them when their kids started playing as well :D
I don't think he was getting much of substance done tbf, but I also often don't when I play Minecraft so I can hardly criticize. Scared to even try to figure out how old that kid is now.
Honestly it's pretty irritating to me how many technology, open source, and security focused communities have embraced Discord with open arms.
The platform's model flies in the face of everything those communities stand for, and hopefully this is a wake-up call that we should have been advocating for and investing in open source, standards-based protocols instead.
Ditto. The thing I'm taking issue with here though is just how much this highlights that much of my social life is tied up in and highly dependent on a single company whose policy has long since stopped being something I find tolerable. Discord has made it clear to me that it is eager to step down the path of enshittification, yet many communities and friends of mine use it as their primary social space.
I don't want any single company having that much control over my social life, much less one so eager to trample on people's privacy or whose future looks so grim. I don't know what I'm going to do about it just yet, but it's abundantly clear that this is not sustainable.
Unfortunately they already rolled this out in some places and I was affected. There were a few image channels that were not necessarily for adults (image channels with memes) that I wasn't able to access anymore. The worst part is that most people in them went ahead with the verification, so my only options were to not see the content anymore or to verify myself, which I refuse to do.
I have no problems with the ID verification, it's the AI scraping my server's chat logs that scares me. They're intentionally profiling my friends and family and it's absolutely unacceptable.
The sort of funny thing about this is, there's no proof that it's still you using the account. Granted, there's not a lot of incentive for people to transfer free accounts to other people: if your little brother wants a Discord account and you stopped using Discord, you don't give him your account, he just makes his own. But if there's age gating, now maybe there will be incentives to transfer or sell accounts.
That does make me wonder how this IDing will go down if you change the email on your account, presumably you'd have to verify your age again. Otherwise someone could possibly just make a bunch of discord accounts and verify them with the facial age software and then change the email to some 15 year old's email address.
Edit: One other thing about the direction of age verification. Even if you assume the software is perfect, assume the hardware has something built in to authenticate that video streams are coming from the device's front-facing camera so people can't substitute alternative video streams, and even assume that this camera is incapable of being fooled by videos of other people, there's still nothing technically stopping anyone from loaning out their face to age-verify other people's accounts. Granted, I recognize that in reality it's so highly impractical that it wouldn't happen on a widespread level in all likelihood. But the primary way to defeat this would be to build a face database: even if you don't store actual images or videos of people, if you break someone's face down into data points like facial recognition does, then you can tell whether one person is using their face to age-verify multiple accounts.
I mention this last part because I've seen that question asked in this thread a few times: what incentive would they have to keep the information or create data points of your face, etc.? That's one example, albeit with a highly impractical premise in that particular case. But I'm also not an oracle, and there could be many other scenarios I can't think of that would prompt the same motivation to save data points of your face from the face scanning process.
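The "data points of someone's face" check described above amounts to comparing a new scan's embedding against embeddings retained from past verifications. A minimal sketch of that idea, with entirely made-up embeddings and a made-up distance cutoff (real systems use trained face-recognition models and carefully tuned thresholds):

```python
# Sketch of face-reuse detection via embedding distance (all values assumed):
# flag a new scan if its embedding is close to any previously stored one.
import math

REUSE_DISTANCE = 0.3  # assumed: embeddings closer than this = same face

def distance(a: list[float], b: list[float]) -> float:
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_reused_face(new_embedding: list[float],
                   stored_embeddings: list[list[float]]) -> bool:
    # Flags the scan if it matches any previously verified face.
    return any(distance(new_embedding, e) < REUSE_DISTANCE
               for e in stored_embeddings)

stored = [[0.1, 0.9, 0.3], [0.7, 0.2, 0.5]]
print(is_reused_face([0.12, 0.88, 0.31], stored))  # near the first entry
```

Which is exactly the concern: making this check possible requires retaining some derived representation of every verified face, even if no raw images or videos are kept.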
I hadn't even considered the buying/selling of accounts by normal people, but yeah, that's a potentially easy bypass. Sadly, with the way this is going, let's hope your online identity isn't legally the same as your IRL identity. As in, let's say you unlock an account for a kid who ends up committing a crime online. The way it's going, you might be charged for the crime because that account is using your ID. Let's hope that's never the case, because it'd open a huge can of worms….
But at the end of the day this whole thing is spying on users with child safety as a cover, so they'll open the can of worms for the possibility of more data.
In 7 years, they might have incrementally moved to just outright asking to know your ID and keep it on file with the way some of these regulations are trending. :(
Yep. My friend group has been on Discord since 2016, and I believe that after I made the memes channel not 18+ we should be fine. I canceled my Nitro because I don't support the path Discord is headed down, and I'll be looking for alternatives as time goes on, but there's seemingly no rush because it won't affect me much. I quit paying for Nitro yearly last year when they switched CEOs and the IPO rumors started. If something gets in my way, I am definitely going to try Garry's Mod or VRChat as the face bypass, and I bet you could easily AI-generate a driver's license that the software would believe.
I read an interesting book on internet pornography and how addictive it can be for people, especially teens/young adults whose brains are actively developing. The book features a number of testimonials from people struggling with pornography addiction, but also included research that was current at the time. I didn't process that 2014 was over a decade ago until I started writing this comment, so it isn't up-to-date research being cited. The book is titled Your Brain on Porn.
I haven't looked into subsequent research, but I did find it an interesting read in framing how different internet pornography is from previous forms of pornography. It also got me thinking about how addictive social media platforms and the internet can be. While they're not preying on the "reproduce" portions of the brain, they still offer that low-effort novelty/dopamine hit that makes it so easy to just keep engaging with the platforms.
I'm not for Discord requiring IDs, since I don't trust them or other platforms with that information, and I'd prefer a privacy-first age attestation standard be adopted before platforms attempt to put any sort of adult content behind an age gate. But governments are definitely pushing for things like this, from what I've seen of the social media bans and adult content age enforcement laws going into effect. That being said, I thought I'd share this book, as there has been some research done on the negatives of internet pornography for people.
It's not just pornography that they're worried about. It's child predators. Social platforms are racing now to get ahead of the looming threat of an outright ban from governments worried about child predation. The public outrage when pedophiles groom children over the internet has reached a point that it is impossible to ignore. Something must be done; at least, that's the increasing sentiment.
That just sounds like right-wing propaganda used to justify ID/face capturing. How is age verification actually going to combat child predators? Adults can still communicate with children in either environment.
I work in the industry and this shit affects my livelihood. I almost lost my job because of this fear. I'm just describing the reality of the work that my colleagues have been putting in.
Eventually, only children and trusted adults (parents, relatives, etc) will be able to talk with children. Random adult strangers will not be able to. This will be common across social platforms. This is my belief, anyway.
Eventually, only children and trusted adults (parents, relatives, etc) will be able to talk with children.
Ok so the only way to "protect the children" is to literally verify everyone, children included, and keep them in their own pen.
What Discord is doing right now is technically useless and won't protect anyone. It looks like they're just slowly turning up the water heat so that we don't feel once it's boiling and we must all submit our ID.
Must we, though? Isn't there another way? Another service? There is and always will be. There are open source options like Matrix or the older XMPP. Some even run IRC to this day. Some people run their own servers and may be able to host their own community with their own rules.
There is no actual need for big things like Discord. Its only pro is that almost everyone uses it. Without users it's just another such service.
Yes, there can be obstacles on the way, sometimes not easy ones. But we certainly don't have to hand our IDs to anyone.
I'm a software engineer working on a social platform that is not Discord, but their engineers and our engineers have crossed paths before. There is some overlap in thought, is my guess, as I'm not in leadership and just drawing my own conclusions.
I am a progressive leftist and also a father, but my child is too young for that to be a concern.
In the past I helped a (now) former partner who had a 12-year-old daughter. Her daughter was seduced by a man in his 40s from another state via Facebook. She sent him naked pictures. Probably videos as well.
When we discovered what was happening, her daughter was about to go to a bus terminal to get on an interstate bus and meet the man almost 1200 kilometers (or 746 miles) away. A 16 hour trip. We're in Brazil, so our police forces are much smaller and less efficient than in the US. At that point, she would be out of our reach and completely at the mercy of her abuser.
That is, of course, just one case. But it left a lasting impression on me. Enough to not be so critical of attempts to protect minors from online exposure. Even if a policy like that is only effective 5% of the time, 5% is probably a lot already.
The problem is that the stricter such measures become in their attempt to protect kids from things like grooming, the more other effects they have that can harm people. There are, of course, plenty of legitimate situations where kids being able to talk privately with people who aren't their parents is not negative (I and a lot of my friends were only able to safely talk to even peers about being queer online when I was a teen), but we also need to weigh the efficacy of these policies against their privacy implications for both kids and adults. It doesn't make sense to say "well, if it's even a little bit effective, it's worth it", because sometimes what is required to make it catch even a fraction of a percent of cases involves incredibly disproportionate invasions of privacy.
Discord's current ID implementation as described doesn't actually seem to block pretty much any of the routes through which someone would get groomed on Discord. Like absolutely not in any way. It might prevent your kid from accidentally seeing porn on Discord, but that's a completely different thing with very little connection to how at risk of grooming they are. So we're not actually making anyone safer here, and it's in exchange for a massive violation of all users' privacy. The idea that any harm could ever come to a child doesn't automatically tip the scales against the verifiable harm caused by the measures purportedly meant to protect that child.
Not being friended by strangers or able to receive messages from people outside approved channels would pretty much take away opportunities for private, side channel conversations between adults and kids.
This doesn't actually do any of that, though. From Discord's own press release:
Content Filters: Discord users will need to be age-assured as adults in order to unblur sensitive content or turn off the setting.
Age-gated Spaces: Only users who are age-assured as adults will be able to access age-restricted channels, servers, and app commands.
Message Request Inbox: Direct messages from people a user may not know are routed to a separate inbox by default, and access to modify this setting is limited to age-assured adult users.
Friend Request Alerts: People will receive warning prompts for friend requests from users they may not know.
Stage Restrictions: Only age-assured adults may speak on stage in servers.
Age-gated spaces just means Discord's existing flagging of certain channels, threads, and servers as nsfw. Adults and children can still communicate in other channels perfectly fine, and it being in public doesn't necessarily prevent grooming from at least starting. And the restrictions on Friend and Message requests don't prevent a kid from accepting a friend request from another user whatsoever, merely displaying a warning for Friend Requests from people they may not know and shuffling the Message Request to a separate inbox that they can still access to accept the message. I have the Message Request feature they describe turned on for my account already (it's an optional setting for now) because it's helpful to protect against spammers and scammers, but it's not going to prevent a kid from getting DM-ed by an adult they already met and started to befriend in a server they're both in -- which is how grooming actually happens on Discord.
These measures simply don't actually block the ways grooming tends to happen on chat apps like these -- and it would be difficult to do so without making Discord extremely non-functional for non-verified users. I don't think Discord should take stricter measures here, because I ideologically disagree with totally restricting teens from the ability to talk to strangers online due to risk of grooming, as I think it's counterproductive. But if you are looking to stop grooming by restricting teens' access to strangers, these measures do almost nothing to even address that problem, much less to address it effectively.
It's not "not being friended by strangers", it's "strangers' friend requests go into another folder", which they will be accustomed to checking, since their friends' initial friend requests will land there too.
I don't have an issue with protecting children (and agree with you, in fact. They are very vulnerable and easy to manipulate), but I strongly disagree with basically every method that governments...
I don't have an issue with protecting children (and agree with you, in fact. They are very vulnerable and easy to manipulate), but I strongly disagree with basically every method that governments are implementing. There are much more secure ways to do it and we should focus on those, in my opinion.
I would disagree that it’s only right-wing propaganda justifying this. These policies appear to be supported throughout the political spectrum. Although, I would agree it’s companies pushing this
It seems to me that possibly the greatest predators of children are billionaires and their toys. I can only imagine the scale of harm already done, and still to be done, by LLMs alone, to say nothing of social media and the various content algorithms, even ignoring unrestrained communication between prototypical child sexual predators and children. With societies and cultures growing more distant and ever more dependent on tech billionaires' platforms, I can think of no greater threat to children and adults alike.
Are you saying that LLMs are grooming children? Billionaires are bad, but there are only 3,279 billionaires globally which is not nearly enough people to explain the tens of thousands of minors abused every year.
I'm saying that LLMs are warping their minds and causing serious harm, up to and including death. Notably when I said billionaires and their toys being the greatest predators to children, I did not say sexual predators.
Just as an example and starting point of what I mean, and I think it's going to get much worse. And those are just the cases known well enough to put on Wikipedia. There are obviously going to be more than just the known cases. Then expand that to social media platforms, and you can't easily quantify the harm caused, because often the harm caused through social media has another person on the other end. You can say it's not the platform, it's the person, but the platforms are carefully constructed to create certain forms of contact between people for engagement, so I won't give them that leeway.
That's not to discount the harm caused by prototypical child sexual predators, but the scale at which that harm is happening is completely dwarfed by the scale at which billionaires and their reach extends. Hundreds of millions, possibly billions of people, are impacted by those 3,279 billionaires. Of course I don't actually exclude hundred-millionaires or such either, and some billionaires like Bezos's ex-wife aren't the prototypical psychopath billionaire either, so I don't really care about the exact specific number of people; billionaire is more of a stand-in for extremely rich assholes who use their wealth to exert power over others.
I'm not in disagreement with you, but whenever news of a child predator chatting with a child on a messaging/social platform hits, the general reaction is "how dare the platform not have caught this". The conversation isn't about the relatively low incident rate. It's not about educating parents to look after their children more. It's about blaming the platform. And again, I'm not saying platforms can't be more responsible. It's just that if the platform is confronted by this, the logical solution from their POV is to build age controls into their platforms.
I don't believe the main issue is pornography. Since Discord is a chat program, I would be way more concerned with grooming and sex crimes targeting minors.
Discord press release link for reference.
Key privacy protections of Discord’s age-assurance approach include:
On-device processing: Video selfies for facial age estimation never leave a user’s device.
Quick deletion: Identity documents submitted to our vendor partners are deleted quickly, in most cases immediately after age confirmation.
Straightforward verification: In most cases, users complete the process once and their Discord experience adapts to their verified age group. Users may be asked to use multiple methods only when more information is needed to assign an age group.
Private status: A user’s age verification status cannot be seen by other users.
Personally I have no interest in doing that myself regardless of what reassurances they provide. At present I don't think there's anything I'd be missing out on if I didn't, but I imagine people will find workarounds to using their own face before long.
Even so, Discord was the one to hire them and make the initial demands for the IDs. It doesn't matter that it was a third party that got breached. That third party only had that information because Discord hired them in the first place.
They say in the article they stopped using that vendor for age verification, but the damage is already done. Discord has already proven to have had incredibly faulty judgment once, and with something vital. A government-issued ID being leaked is so much worse than a credit card. Given Discord's size and popularity, and the sensitivity of the information, they should have vetted the vendor's security and data storage practices much more thoroughly.
It's like a museum hiring a third party security company to provide guards and security equipment for some special temporary exhibit, who then fail to detect and stop a guy breaking into the exhibit before running off with priceless art and artifacts. The owners of the stolen pieces don't say, "oh, that museum did nothing wrong, it was all the security company." The museum holds equal responsibility because ultimately, it was THEIR responsibility to keep the exhibit safe. THEY made that promise to the owners, not the security company.
Likewise, Discord promised its users their data would be safe and deleted right away. Who got hacked is irrelevant, because the fact the breach happened at all shows that Discord failed to ensure the data would be deleted like they promised.
So ultimately: yeah, people are going to blame Discord because they did fail at some stage.
That’s a separate thing. What they promised would automatically be deleted is the automated age verification, which as of now has never been hacked. Not that it’s existed for very long, but nonetheless, that commitment was kept.
Essentially, if you get denied by the automatic system, you can make a support ticket and say, hey, the machine was wrong, let me prove that to a human. And then the human will ask you to send over a picture of your ID.
No one ever promised that this would delete your ID. It’s a bespoke service, not an official pathway.
Not to mention it was Zendesk that was hacked. Using Zendesk isn’t a sign of poor judgement, it’s by far the most common IT ticketing software. If you’re going to avoid any company that uses Zendesk you may as well just avoid ever emailing customer support.
Do you have a source on Zendesk being the one breached? I'm not asking to be snarky or combative, I'm genuinely asking because I'm finding sources (namely Discord itself) claiming it's a company called 5CA that was breached. I initially only found articles referencing 5CA (if they named the vendor at all), and after I saw your comments mentioning Zendesk I had to dig a bit to find some.
Based on this site, the leak was first attributed to Zendesk and then Discord publicly named 5CA as the vendor. Both 5CA and Zendesk naturally denied being the ones hacked. And... That's essentially all I can really confirm.
Honestly, it's a bit confusing trying to sort out where, exactly, the breach happened. A lot of the articles are from right when the breach was announced, or after Discord named 5CA. And with today's announcement, there are even more articles to sift through. This is one of the few more in-depth writeups I can find from after October, with a focus on 5CA, but I have no clue how accurate or reputable that particular site (or writer?) is. There was a failure at some point, and the fact it's not clear where that failure point was doesn't really sit well with me given how major of a breach this is. If you have a source that goes more in-depth, I'd really appreciate it.
That aside, to address your first point, I'll just quote the relevant bit from the Discord blog post I shared:
Privacy-protecting process. Discord and k-ID do not permanently store personal identity documents or users’ video selfies. Images of a user's identity documents and ID match selfies are deleted directly after their age group is confirmed, and the video selfie used for facial age estimation never leaves their device.
Succinct and direct promise to users that they will delete images of identity documents and ID match selfies. I'll grant you that it specifies "Discord and k-ID", which does leave some leeway to argue they never promised the same of third party vendors. They also don't use the word "immediately" or any language to specify when it would be deleted. Which is arguably just as bad, because that bullet point in their statement implies an explicit promise that they'll secure our privacy while leaving Discord leeway to avoid or minimize accountability precisely in cases like this.
Just... Discord really hasn't done anything to convince me (and others) that they've taken measures to ensure this never happens again. After that breach, I and many others just don't want to risk giving our government-issued IDs to any private companies over the internet. It really doesn't matter who was hacked, just that it happened to Discord users once and we don't want it to happen to us.
Privacy-protecting process. Discord and k-ID do not permanently store personal identity documents or users’ video selfies. Images of a user's identity documents and ID match selfies are deleted directly after their age group is confirmed, and the video selfie used for facial age estimation never leaves their device.
I just wanted to reinforce your point and weigh in on this one. I’m in Australia and I guess my Discord account got flagged as “potentially not an adult” because I was asked to verify my age a few months ago.
I’ve already had my ID picked up in Optus’ horrendous data breach a few years back, so I’m careful about not giving my ID to companies, even ones I should ostensibly be able to trust, via any digital means. If any phone company needs my ID ever again, I’m happy to physically walk into a store and have it verified manually, but I’m not comfortable with it being digitally stored given how clearly that leads to breaches.
So when Discord popped up offering these two options — give them my ID or try the AI video-selfie feature — I decided to test out the claims of “no video leaves your device” and “on-device processing”. To do this, I used my phone, and navigated through the prompts until it asked for camera access. Turned on airplane mode and accepted — oops something went wrong. Followed the prompts multiple times, turning on airplane mode at various different stages, including waiting until the next page had loaded, but ultimately there was no point at which I could cut the internet connection and still have the system verify me.
This tells me with confidence that a live internet connection is required for the age recognition feature, which means the claim of “on-device processing” is dubious at best, but likely an outright lie. It also means it’s impossible for anyone to verify the claim that the video selfie “doesn’t leave the device”.
It’s so disappointing because I genuinely don’t think it would be particularly difficult to design a system which did the same thing without having data leave your device, but they’re not even bothering to pretend that’s what happens.
Edited my second paragraph to be a bit less needlessly combative.
Thanks for testing airplane mode. That's great.
I suppose to be completely aware of what leaves your device you'd have to install a custom root certificate and use a MITM proxy, and hope they don't pin certificates, or it gets a lot harder.
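For anyone curious what that looks like in practice, mitmproxy lets you do it with a small addon script. A minimal sketch that flags any request carrying image/video payloads, assuming mitmproxy is installed, the phone trusts mitmproxy's CA cert, and the app isn't pinning certificates (the file name and helper function here are my own, not anything official):

```python
# log_uploads.py -- hypothetical mitmproxy addon, run with: mitmproxy -s log_uploads.py
# (if the app pins certificates, its traffic won't show up here at all)

def is_media_upload(content_type: str) -> bool:
    """Heuristic: does this request body look like an image/video upload?"""
    return any(t in content_type for t in ("image/", "video/", "multipart/form-data"))

def request(flow) -> None:
    # mitmproxy calls this hook for every intercepted HTTP request
    ctype = flow.request.headers.get("content-type", "")
    if is_media_upload(ctype):
        print(f"upload: {flow.request.method} {flow.request.pretty_url} ({ctype})")
```

Run the verification flow with the phone proxied through your machine and watch what gets flagged; anything substantial showing up there would contradict the "never leaves your device" claim directly.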
someone shared this on a dev server i'm on https://age-verifier.kibty.town/ (i'd never run that in a million years, so not sure if i should even be linking it here lmao)
but they seem to have reversed the current process and yeah, it's basically sending metadata about your face to their servers for them to do some integrity checks on
e: seems like its already patched but i assume theyre doing the same steps but in a more tamper proof way
Succinct and direct promise to users that they will delete images of identity documents and ID match selfies. I'll grant you that it specifies "Discord and k-ID", which does leave some leeway to argue they never promised the same of third party vendors. They also don't use the word "immediately" or any language to specify when it would be deleted. Which is arguably just as bad, because that bullet point in their statement implies an explicit promise that they'll secure our privacy while leaving Discord leeway to avoid or minimize accountability precisely in cases like this.
Notably, that wording also doesn't specify that no data at all about the 'video selfies' is transmitted, just that the video itself doesn't leave the device. This leaves room for them to create data points about your face from the video, send those data points, and then reconstruct your face from them. I have seen wording in other articles that supposedly is more specific that nothing will be transmitted other than the approximate age the on-device detection comes up with, but verifying that requires more time and expertise to dive into all the various policies involved and determine what exactly is covered and what is left to interpretation.
Even then, policies are one thing, actualities and consequences are another thing. They can say they won't transmit anything but that doesn't mean it's actually true, and as we've become incredibly familiar with in the past 10 years or so, the consequences for lying and harming others are basically so little as to be none at all.
This leaves room for them to create data points about your face from the video, send those data points, and then reconstruct your face from those data points.
Can you explain what this looks like? Any sufficiently detailed description sounds like a compressed photo to me, and they're explicitly not sending those.
Fully collecting everyone's personal data isn't something every company does. Discord isn't Facebook. What are the incentives to lie and deal with storing this data?
What are the incentives to lie and deal with storing this data?
Monetisation. AI training. Which is maybe the same thing. The fact that they could do anything and have it be easier to ask forgiveness than permission. Also history. How many times do we have to be lied to? I'm sure they'll be on the level /this/ time. As opposed to all the other times.
More significantly, a great many people actually just post selfies and pictures of their daily lives to Discord, where it will then live on Discord’s servers. The selfie scan is probably more secure than the normal activities people are doing on the platform.
I think the main problem is going to be for people who are kind of borderline, so anyone in that transition period from about 16 to 22 where face scans won’t be able to confirm your age for sure, your linked accounts probably won’t be old enough to make a definitive call one way or another, and your posting behavior probably isn’t that distinguishable from an underage user’s. But, like they said, this is maybe 20% of their users at most. And most of them likely wouldn’t notice the restriction unless they’re trying to access an NSFW channel, and those NSFW distinctions are placed by server admins rather than Discord itself.
Basically I think people are catastrophizing a lot here, and the actual additional risk exposure the vast majority of users are being exposed to is basically negligible. That said, it probably is healthy for more self-hosted and open alternatives to Discord to exist so I don’t really feel like pushing back on people actively seeking those out. I think it was actually a bad fit for a lot of what people were trying to do with it and it’ll be good if there is some branching out to more purpose-built spaces to set up communities for different things. I am glad the energy seems to be directed towards self-hosted and open platforms now rather than trying to decamp to Reddit or Digg or whatever.
Can you explain what this looks like? Any sufficiently detailed description sounds like a compressed photo to me, and they're explicitly not sending those.
In the quoted part of the parent comment I was responding to, there is no explicit mention that compressed photos aren't being sent.
Privacy-protecting process. Discord and k-ID do not permanently store personal identity documents or users’ video selfies.
It says does not permanently store video selfies first. So if you were to stop here, then it's only ruling out the original video taken. Now let's keep going to see what else is ruled out.
Images of a user's identity documents and ID match selfies are deleted directly after their age group is confirmed
This could be read as "Images of [...] ID match selfies are deleted directly after", which would include images of the selfie process. Or it could be read as "ID match selfies are deleted directly after", which could simply be reinforcing the first sentence that video selfies won't be kept, and says nothing about any still images of the selfie process. The "images" part of this statement is only explicitly clear that it's referring to identity documents, not the selfie.
and the video selfie used for facial age estimation never leaves their device
Again, only reinforces that the video itself never leaves the device.
Furthermore, even if you completely set aside the idea that they may take still frames from the video, what's left looks a lot like how LLMs reproduce text they were trained on while supposedly holding no copies of the original information. How can an LLM produce images of Will Smith or anyone else without holding any actual copyrighted data? How can any facial matching system match your face to other images of you? Because they can break your face down into data points and then compare the data points. Or think of the technology some companies were using to assist law enforcement by generating an image of what a potential suspect or missing person might look like from genetic material and other factors. I wouldn't say many of those were accurate, and they were working from assumptions and less specific material, but the point is that companies have already attempted to reconstruct someone's face with technology based on various data points.
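To make the "data points" idea concrete: face recognition pipelines typically boil a face down to an embedding vector and compare vectors rather than images. A toy sketch with made-up numbers (real systems derive the vectors from a neural network; the 128 dimensions and noise levels here are arbitrary illustrations, not anything from Discord's actual system):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """1.0 means the vectors point the same way; near 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                           # "embedding" stored server-side
same_face = enrolled + rng.normal(scale=0.05, size=128)   # new scan of the same person
other_face = rng.normal(size=128)                         # a different person

print(cosine_similarity(enrolled, same_face))    # close to 1.0
print(cosine_similarity(enrolled, other_face))   # near 0.0
```

The point being that a vector of 128 numbers is tiny compared to a video, yet it's enough to match you across images, which is exactly why "the video never leaves your device" isn't the same promise as "no biometric data leaves your device."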
Fully collecting everyone's personal data isn't something every company does. Discord isn't Facebook. What are the incentives to lie and deal with storing this data?
Discord may not have any incentive, but even if you assume they don't, that doesn't account for the other parties they're contracting. And even if you assume none of them have any incentive, mistakes happen and malicious actors exist. If Discord wants to personally back the whole process, they can guarantee me a couple-million-dollar payout, and everyone else their own million-dollar payouts, prioritized above all other debt or payments Discord owes to other entities, if it doesn't work the way they say it works. I'm sure their investors, their insurance company, etc. would lose their minds over that type of guarantee, because they'd be all but assured to lose money. That's how you know it's bullshit: if all the people behind it had to put their money where their mouths are, they'd go running for the hills.
Does anyone have good alternatives? I am very happy with the discord I've built for my friends, but this is ridiculous.
Also sorry, I've commented a bit but I don't know my way around tagging a post.
A lot of FOSS communities have switched to Matrix protocol chats - I host an unfederated server that also allows calls via a LiveKit service, and it works quite well. The mobile clients leave something to be desired though.
It depends on what features you'd want in an alternative. I've heard self hosting stoatchat (formerly Revolt, GitHub) is the "next closest thing". Never set it up/tried it out myself, though, can't verify firsthand.
I like IRC, but unless it changed since I last used it years ago, the way it works is too different from Discord to function as a viable replacement for most people. Messages are tied to your session: you can't see message history from before you joined, logging off will typically wipe the conversation you were present for, and people can't send you a message while you're offline. Some IRC clients can store messages, but that still limits what you can see to things you were present for.
It's useful for conversation in real time, but many people use Discord to share information and updates. I'm on many fandom servers where people consult older conversations for writing reference, some servers for news on manga translations, a server with my high school friends to coordinate hangouts or share updates (one person's phone just never got texts from mine for some reason), and a couple servers used to coordinate workers on large-scale creative projects. IRC just doesn't work for those cases.
IRC v3 addresses this, the name of the capability is "chathistory". The big caveat is that most of the big networks (libera.chat etc) don't support this yet. A bouncer like Soju can cover the gap.
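For the curious, the wire exchange looks roughly like this (a sketch based on the IRCv3 draft/chathistory spec; the channel name, limit, and batch label are made up, and exact server output varies):

```
CAP REQ :draft/chathistory server-time
CHATHISTORY LATEST #mychannel * 50
:irc.example BATCH +hist chathistory #mychannel
@batch=hist;time=2026-01-10T12:00:00.000Z :alice!u@host PRIVMSG #mychannel :hi all
:irc.example BATCH -hist
```

The client requests the capability, asks for the latest 50 messages in a channel, and the server plays them back inside a batch with original timestamps, which is how a modern client reconstructs scrollback you weren't online for.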
For setting up your own server Ergo is a great option that gives you chathistory and many other goodies that go a long way towards turning IRC into a first class modern chat experience. Always-on, multi-client and push notifications in particular.
On the client side Halloy (GUI), Senpai (terminal), Goguma (mobile) and Gamja (web) are some modern clients that can take advantage of chathistory and other features.
I moved my private friends/family network to Ergo over a year ago and it has worked out great. I am pretty much the only one who knows anything about IRC, for everyone else it's just another chat app.
The big caveat is that most of the big networks (libera.chat etc) don't support this yet.
IRCv3 has been in development for well over a decade and has barely seen adoption beyond the most basic things. Chat history is, I think, half a decade old, which says a lot about the rate of adoption. I remember IRCCloud being one of the bigger drivers behind v3, but their blog has been quiet since 2022.
Having said that, if you want to try and win people over to the IRC side of things I honestly wouldn't recommend any of the software in your list. I'd go for a service like IRCCloud or a self-hosted web client like The Lounge, which does chat history as well.
But even then I'd still not recommend that people switch back to IRC. Don't get me wrong, I like idling on #tildes with the folks there. And I really used to be a die hard IRC user to the point of having written this thing just so I didn't need to switch to discord.
But, IRC simply isn't an ecosystem that fits with what people expect these days. Heck, even over a decade ago it already didn't fit what most people wanted out of a chat service. I remember trying to get a channel going for various communities and subreddits over the years and never really succeeding. As soon as we put up a Discord link in the subreddit, people started flooding in simply because of how much lower the barrier to entry was compared to IRC, and it still very much is.
Yes, it is fine for technical people if they sit down for a moment to figure it all out. But that in itself already says something about how accessible IRC is compared to something like Discord. And the comment you replied to was specifically talking about most people, not just (slightly) technical people. Which is extremely easy to forget, because once you have figured out IRC it doesn't seem all that difficult, causing a lot of IRC users to suffer from the curse of knowledge in discussions like these.
My subreddit had an IRC channel -- dead by the end -- through like the late 2010s.
I can't remember why, but I have definitely been on an IRC server post-pandemic. Even if just briefly.
There's Stoat (formerly called Revolt).
It's way less mature than Discord and not very popular, but it's the closest to Discord that I know of. As far as I know, it's pretty secure and trusted and is open source.
It literally just got hit with some flack for the maintainer dealing with AI commits (that have since been reverted). IDK if I'd consider that a full dealbreaker, but it's something to keep in mind for those who care.
I've been trying to get my own instance of Stoat up for hours today and haven't been able to get through it. I wish there was a self-hosted alternative that has a decent feature set, mobile apps, and isn't 14 different containers.
Yeah, sadly there aren't any good alternatives if you're looking for a discord-like experience.
Honestly, I wish a company like Proton would release a Discord alternative. I would happily pay for a privacy focused option.
My question is why does anyone feel like they need a “discord-like experience?” Discord does a couple of things well and a lot of different things kind of poorly. Its main perk is the network effect of lots of people already having accounts, a big bot ecosystem to extend functionality, and a bunch of integrations to link other accounts. You’ll never get any of that off Discord because it depends on the network effect. The closest way to replicate it is something that can be accessed via browser. But voice calls are doable with other apps, and video chats and streams are too. You can do a big chat room with a whole bunch of tools about as well as Discord allows, and Discord’s implementations of threading and forums are pretty bad.
It would behoove people to think through what they actually want to do with their “Discord alternative” and try to find a service suited to that rather than just replacing Discord. Discord was kind of cobbled together on top of a Teamspeak + groupchat base. I think the ease of setting it up made a lot of people form their communities around Discord’s UI and feature-set rather than thinking through what kind of tool they actually wanted to be their community hub or groupchat.
I mean, functionally, Discord is a really good piece of software. It has tons of features that work well and make the experience of using it quite nice. Everything with voice, video, screen streaming, and text chat is pretty good, in my opinion, and I don't know of any other app that does them all as well.
The "forums" implementation is terrible, I agree. But as a communication app for a small group of friends, it's top-notch. I want all the text, voice, streaming, and video in a single app.
Discord isn't perfect, but it is good chat software that offers a lot of things that various groups need. I remember when Discord was the new player and people were switching to it from things like Skype groups, and even then when it had fewer features, it was a huge improvement over the competition. While the network effect is obviously in place now, I think it's foolish to dismiss the actual utility of the software, which remains pretty good at what it was designed to do and has pretty much no competition with all the same major features done well enough to merit switching. And I'm in plenty of groups small enough that the network effect isn't a factor -- just how much effort it would require to set something new up. Nothing else is remotely as appealing compared to Discord as Discord was compared to Skype.
My question is why does anyone feel like they need a “discord-like experience?”
Because Discord is very convenient. It has private chat, group chat, voice chat, video chat, streaming all in one program working on multiple platforms.
For me personally, one of the big issues really is just moving the communities that have been built exclusively on Discord. People will be more likely to move to a similar platform. So if Discord disappears, I have no idea where we'd all go.
At bare minimum, I think most communities would want: fairly easy installation and setup; real-time text-based chat; the ability to DM people; message history that isn't session-specific; and to be able to send images.
Yeah... The options for alternatives with even just those features are surprisingly limited. It gets harder when you want more of Discord's features like voice chat or screen sharing. We can at least break those up a bit among various other programs, but... It's going to be a very annoying transition for people who use all those features on Discord.
Matrix actually has everything on your list. An idle, always-on style voice chat room is available as well, though the UX isn't nearly as polished as Discord's (they've spent years making theirs better).
Many servers have open sign-ups as well, meaning you can try a non-overloaded server before you decide if you even want to try hosting your own.
I've been hosting my own for years now, and found it fairly straightforward / set'n'forget on Debian. Feel free to send me a DM if you'd like a registration code (as I do not have open registration) to play around with it too.
Every time I'm forced to use Matrix for anything I come away sour and salty. Lost history, encryption-key validation between apps that never works. Slow. Bad searching. Desynced chats. If the only choices were Matrix or IRC over RFC 2549, I'd be choosing RFC 2549.
You might have to elaborate on streaming, but it has both Zoom-style (actively invite to, and join, a call) and Discord-style (the room is always there, you can drop in and out as you please) voice and video calls / rooms.
I don't like some aspects of it, and it isn't at proper feature parity yet (for example, I don't see the list of active people in a voice-room, assuming that feature even exists), but the speed of improvement is better than some might have you believe
Just did a brief check, and yes, seems you can share your screen alongside a video call. "High quality" can be a difficult endorsement to make though, with some caveats being we were both (including the server) basically on LAN, we only played with it briefly, we weren't stressing any desktop or server, and my definition of high quality may not meet yours (though it certainly seemed crisp enough)
If you're leery of pushing people without being able to effectively try it yourself first, you can join one of the open servers and we can arrange a test call (Matrix handle in bio) at some point on the 17th/18th. If you're going to go through YouTube or similar for reviews, make sure they're not too stale - the platform's been maturing at a fair clip, and anything beyond about a year generally wouldn't make my list :)
I may test it out in the future, but I'm not quite ready to schedule a test call yet (though thanks for the offer!) And yeah, the last time I heard Matrix suggested as an alternative it didn't have voice chat at all, so it clearly has matured in the meantime.
You mentioned hosting, which unfortunately might be a dealbreaker. I think I might be the most technically competent person of my friends groups, but self hosting or home server stuff is currently beyond my ability, and will likely be beyond my available time and energy to learn for the foreseeable future.
When you say “join another server” could my friends and I create our own small instance within someone else’s server, preferably private and invite-only with my friends? Or would it just be finding a space within a larger combined server where anyone can drop in or drop out? I know different services use different language to mean different things, so I’m trying to understand this Matrix platform and how much or how little it can translate from my existing experiences with Discord.
Self-hosting isn't a hard requirement, and there seem to be a reasonable number of public servers with fairly open registrations [0]. It's not the infinite choice one finds with Mastodon / Fediverse servers, but decision paralysis isn't always a selling point.
You could probably create a couple of rooms in whichever server you join, mark them invite-only (and optionally "from this server"-only), then create what Matrix calls a space (i.e., a collection of rooms grouped together) with all your invite-only rooms joined together. That way you can send a single invite code to your friends for the space, and it should allow your friends to join every room in the space and have them grouped neatly together. You should be able to adjust power / permission levels from there
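If it helps to see what's under the hood, the setup above maps onto the Matrix client-server spec fairly directly. Here's a rough sketch of the JSON request bodies involved - the server name and room names are placeholders, and this only builds the payloads rather than talking to a real homeserver:

```python
# Rough sketch of the Matrix client-server API payloads for a private space
# with invite-only rooms. Server name and room names below are placeholders.

def create_space_payload(name: str) -> dict:
    """Body for POST /_matrix/client/v3/createRoom that creates a space."""
    return {
        "name": name,
        "preset": "private_chat",                 # invite-only
        "visibility": "private",                  # not listed in the directory
        "creation_content": {"type": "m.space"},  # marks the room as a space
    }

def create_room_payload(name: str) -> dict:
    """Body for an ordinary invite-only room to group under the space."""
    return {
        "name": name,
        "preset": "private_chat",
        "visibility": "private",
    }

def space_child_content(via_server: str) -> dict:
    """Content of the m.space.child state event that links a room into the
    space (PUT to /rooms/{space_id}/state/m.space.child/{child_room_id})."""
    return {"via": [via_server], "suggested": True}

# One space, two rooms grouped under it:
space = create_space_payload("Friends")
rooms = [create_room_payload(n) for n in ("general", "voice-planning")]
child = space_child_content("example.org")
```

In practice a client like Element does all of this for you through the UI; the sketch is just to show there's no magic involved.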
And yes, Discord calling random groups a "server" has grated on me for years, and does not make jumping into (or out of) Discord easier as people attempt to navigate the language used
That way you can send a single invite code to your friends for the space, and it should allow your friends to join every room in the space, and have them grouped neatly together. You should be able to adjust power / permissions levels from there
Okay excellent, glad to hear there’s enough similarity in features for the ability to whip up an isolated group of channels etc.
And yes, Discord calling random groups a "server" has grated on me for years
I’m no network wizard but the concept of a “server” these days — for every service that uses the word! — seems so far removed from what a server actually is, especially with all these cloud services and mirrored local/nearby versions for lower latency. (is this called “edge hosting” or something like that?)
Self hosting is very much optional for Matrix, and if you really want your own optionally siloed instance, there are providers who offer paid hosting. The network is federated, so any account on a homeserver with federation enabled can communicate with users on any other server and join any public rooms there.
Most homeservers with public registration (like matrix.org) allow users to start private rooms that are invite-only, and rooms can be aliased across servers. There is a currently-beta feature called Spaces that basically brings the community management features of Discord servers, as well.
However, some users may not have to go through either form of age verification. Discord is also rolling out an age inference model that analyzes metadata like the types of games a user plays, their activity on Discord, and behavioral signals like signs of working hours or the amount of time they spend on Discord.
my guess is that a good deal of us will be verified without identification.
I’d expect the opposite honestly - I think that’s more a fig leaf to try to reduce negative feedback, like when reddit promised to explore custom css on new reddit. Especially given that I expect people who do not choose to share their activity with discord are overrepresented here.
There’s no need to speculate, they’ve already implemented this in the UK and I think basically every grown adult from there I know has said they never had to submit anything.
It might not change anything yet. But since Discord is showing no restraint on automatically adding account restrictions to existing accounts, they could easily do the same with servers. They could just make a sweeping change for any non-community server to be considered 18+, because those have less moderation.
#meta #offtopic
Also sorry, I've commented a bit but I don't know my way around tagging a post.
Not a huge deal, we all just do our best, and @mycketforvirrad the silent hero will come and add/remove tags. Take a look at the topic log in the sidebar of this thread (or any thread) to get a feel for how the tags are commonly used and organized.
Probably the most important tags are those to do with things like US politics, so that people who don't want to have them in their feed can filter them out by tag.
The natural conclusion of the internet being transformed into a digital shopping mall and DMV.
Can't wait for all the phishers to get copies of everybody's faces and IDs. Maybe I should register discird.com.....
I recently looked into TeamSpeak and oh my gosh they’ve come such a long way! TS6 even looks like it’s 90% of the way to being feature matched for Discord, or at least considering the features that I personally care about. It’s probably the one that I’m going to try to push for among my friends, if there’s ever a preference for leaving discord
I'm wondering how adept these age verification tools are, and if AI is actually a good use-case for bypassing them without scanning your own face. Has anybody experimented with using AI generated faces to trick these tools?
I read that, in the past, some age verification tools have been tricked by things as simple as screenshots of posed faces in Garry's Mod. I assume that those simple tricks have been patched out, but I imagine AI generated faces would be much more difficult to detect.
The primary check is behavioral analysis, so I suspect if you’ve ever talked about doing taxes or having a job with normal working hours it’ll pass you by default. It also checks against linked accounts, so if you’ve linked it to Steam, PS+, Spotify, or anything like that with an account that’s more than around 14 years old you probably also just pass by default.
That wasn't specified before they updated their press release. It's a bit better than a face/ID scan, but I'm still not a fan.
Though, I imagine they've already been doing this behind the scenes for years anyway.
I used Stoat for awhile back when it was still Revolt.
Nutshell: Overall, it is pretty good, and very close to Discord in most ways.
Caveats and negatives: It is being developed and improved at a very slow pace ... it is often a little bit buggy ... it does occasionally go down, and/or get bogged down from user load (and I expect that issue will become much worse after Discord implements this) ... and finally, the various ways in which it is not quite as feature-rich as Discord are minor annoyances, but over time, they can become frustrating.
Technically, I still have a small "friends-and-family" server on the platform, but we all moved to a self-hosted instance of Matrix/Element over a year ago. Matrix is less like Discord and takes more of an adjustment, but overall, we are happier on it now than we were on Stoat.
So real question, what's stopping an AI generated avatar from fooling this system somehow?
Also are they going to be cross referencing ID information with the federal government? How are they going to verify IDs? That sounds very expensive to have to cross reference and verify ID information with federal databases for millions of users accurately and quickly, so it's probably just going to check your name and birthdate and try to match whatever photo to the user and leave it at that.
Because to me it sounds like if you want to remain anonymous you can just photoshop a fake ID or use an AI generated avatar. And yes, I know the legality of photoshopping and submitting a false and fabricated ID, but A) How will Discord know it's false? B) How will they be able to connect the account to you to know that it's a fake ID? C) Is Discord really going to expend the resources reporting it, as opposed to simply shutting down the account? And D) is the government realistically going to expend the resources going after people photoshopping IDs and submitting them to Discord, of all places?
I feel like it's going to be a trivial task for Gen Z coders to craft up some way of fooling a system like this with a high degree of success with relatively low risk.
And at that point it makes it all just seem like a dazzling waste of money and resources.
This was a good reminder to cancel my Nitro subscription.
Not exactly sure where the communities I am in will migrate to, but it's definitely top-of-mind for everyone now. There's a fairly strong sentiment of not giving in to the requirements, so I don't necessarily see us sticking around on the platform.
I do think the dystopia will follow us eventually wherever we go, for the most part, but who knows
Yeah and often dystopia is bipartisan so I have zero hopes about it. Especially where I'm at (the US). The entire structure is a flawed-from-the-beginning dumpster fire built on a crumbling foundation of atrocity. With basically zero politicians that even remotely represent me or my interests. The main parties are both complicit in the horrors, historically and presently, internally and internationally, and the entire tech industry is ready to grovel at their feet at a moment's notice to do their bidding. I'm not saying electoral politics doesn't matter, but I am saying that it's completely f'd. Gonna take a lot more than a ballot box to fix this nation
If your server or channels in your server are marked as nsfw, you will not be able to view/enter them.
If any media is sent that discord's automated tools detect as NSFW, it will be filtered out (discord displays a warning about it).
Recently discovered https://spacebar.chat - a community re-building of discord, with the aim to make it backwards compatible with the discord ecosystem. They have a lot of ground to cover, but it's probably the most promising open source community alternative I've seen so far.
This sounds interesting. At least one of my friend groups is looking to move, so I'm looking at alternatives. How does it work? Is it a sort of private server, or another chat app that can just connect to Discord servers?
I'll admit I haven't thought about this much, or researched it, but we are definitely going to need some kind of age verification eventually to do certain things on the internet. In the US, it seems like a state government could provide a trusted API for this, since they know if you have a REAL ID. But I wouldn't give that to Discord anyway, because I don't need them to know who I am and "teen appropriate" would be fine for me. Assuming they are actually able to properly police any adult content.
It's certainly needed on some level, but it seems we always come up with the absolute dumbest possible solutions, at least in the US anyhow, though some headlines about how other countries handle similar things make me think it's not exclusive to the US.
I don't even necessarily understand how we arrive at the dumbest possible solutions. On the one hand, I want to keep it somewhat simple and say it's because voters are tech illiterate, the voting system sucks, we elect old tech-illiterate people, and voters vote based on their gut and emotion rather than logic, evidence, or expert advice. From that basis it's easy to draw a straight line: tech-illiterate voters support policies that sound good to them and mandate that people hand over their government-issued ID over the internet to various entities to prove their age, because it resembles how using your ID to verify your age works in person, and that satisfies their 'go with my gut' feelings.
However I don't know if I really believe that. I somehow feel that wealthy people and corporations actually are backing this, as most of what goes through the government is based on money, and corporations and rich people have all the money, so then I think there must be some incentive for them to push for the dumbest solutions possible, yet I can't quite piece together what it is.
The reality is that no one needs to know our identity, they only need to know our age. So why is it that we're coming up with solutions that constantly have us forking over our identity? Now, there are some solutions that could create problems where people attempt to use the 'age credentials' of someone else to verify themselves, and that's where identity verification may come back around, but I'm not simply going to accept solutions that start with identity verification until we've established that there's no way to do age verification while leaving identity out of it.
Just to imagine a solution to this, separate from any justified cynicism about why many solutions have nefarious intent:
The state knows who I am and my age because I pay taxes and/or drive a vehicle
I have an online account with the state for taxes and/or other state id purposes
in that account, there could be a page that generates a one time key that is unique to me (but doesn’t externally identify me) and indicates that I am above a certain age
I could provide that key to the 3rd party system and it could use it to verify my age without knowing anything else about me.
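A toy sketch of that flow, with a big caveat: this simulates the state's signature with a shared HMAC secret for brevity, whereas a real scheme would need asymmetric or blind signatures so verifiers can't mint tokens themselves, plus replay and TTL protections:

```python
import hashlib
import hmac
import json
import secrets
import time

STATE_SECRET = secrets.token_bytes(32)  # held only by the state's ID system

def issue_age_token(over_18: bool, ttl_seconds: int = 300) -> str:
    """State side: mint a one-time token asserting an age bracket, no identity."""
    payload = json.dumps({
        "over_18": over_18,
        "nonce": secrets.token_hex(16),          # makes the token one-time
        "expires": int(time.time()) + ttl_seconds,
    })
    sig = hmac.new(STATE_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

_used_nonces = set()  # state-side replay protection

def verify_age_token(token: str) -> bool:
    """Verifier side: check signature, expiry, and single use."""
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(STATE_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                      # forged or tampered token
    data = json.loads(payload)
    if data["expires"] < time.time() or data["nonce"] in _used_nonces:
        return False                      # expired or already spent
    _used_nonces.add(data["nonce"])
    return bool(data["over_18"])

token = issue_age_token(over_18=True)
assert verify_age_token(token)       # first use succeeds
assert not verify_age_token(token)   # replaying the same key is rejected
```

Even this toy version shows why the design gets hard: the nonce set and short expiry exist precisely to make resold keys worthless quickly.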
This doesn't work if the key can't be tied back to your identity because then the generated keys could be sold. If the keys can be tied to your identity, then a government actor could subpoena corporations and get an idea of what services you're using; not something people here would like, I imagine. To mitigate the black market of keys, you need to tie it to hardware attestation with in-person verification and short TTL for keys.
Apple/Google need to build out support for this to have a chance at privacy-preserving vouching.
I don't like any solution that requires me to send any personal information. It's just setting up the perfect system for tons of powerful actors to misuse your data (whether that be corporations selling, leaking, and analyzing it, or the government creating profiles on us with nefarious purposes).
Outside of the big social media platforms, I'm not sure I agree that there is much benefit to age verification anyway.
The only age gating I would be comfortable with is some sort of standardized router-level age-gating, where incoming connections include a flag indicating a minimum age and the verification is handled locally. And for cellular devices, it could be handled on the device as well.
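To illustrate the idea (with a made-up header name - nothing like this is standardized today): the service declares a minimum age, and the router checks it against a locally configured device table, so no identity or document ever leaves the household.

```python
# Toy sketch: the remote service tags its connection with a minimum-age flag
# (the "X-Minimum-Age" header here is invented for illustration) and the
# router enforces it against a locally configured table of device ages.

LOCAL_DEVICE_AGES = {"kids-tablet": 9, "parent-laptop": 41}  # set by the parent

def allow_connection(device: str, headers: dict) -> bool:
    """Router-side check: is the requesting device old enough for this service?"""
    min_age = int(headers.get("X-Minimum-Age", 0))  # hypothetical flag
    return LOCAL_DEVICE_AGES.get(device, 0) >= min_age

assert allow_connection("parent-laptop", {"X-Minimum-Age": "18"})
assert not allow_connection("kids-tablet", {"X-Minimum-Age": "18"})
```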
I don't see the point. The benefits are minimal and there is no solution that is effective and doesn't harm personal freedoms and right to privacy. Simple parental supervision is the only logical way to protect children from harm on the Internet but the people advocating for age verification aren't willing to do that...
https://xcancel.com/mitsufoppie/status/2020957345728110668 Lolz and I managed to verify age using https://github.com/xyzeva/k-id-age-verifier so I’ll leave that here.
I don’t see the “despite leaking 70,000” ids in the original title. If it was an editorial choice, ultimately I don’t think that was their problem. The software they used for support tickets had a security breach because of poor credentialing hygiene.
It’s a must that automated ID checking can fall back to customer support. But falling back to customer support means customer support needs the documents, and that is an opportunity for data to leak, since humans are infamously bad at security.
But it’s much better than to not have any recourse in case the systems perform poorly.
I don’t see the “despite leaking 70,000” ids in the original title. If it was an editorial choice, ultimately I don’t think that was their problem. The software they used for support tickets had a security breach because of poor credentialing hygiene.
If as a business you choose to partner with an organisation with leaky data practices, it's absolutely your responsibility. You don't get to wash your hands of your poor choices and throw your carefully selected partner under the bus, in an attempt to deflect your responsibility for handling your customer's data.
It was a zendesk issue. Zendesk is big enough that I don’t really see it as an issue, or a moral failing, to use them for support tickets. Sometimes stuff happens. Even AWS has its outages.
Essentially, I don’t see a reason why that would not be a one-off. It’s not like they used shady software or had poor data practices. They used probably the biggest support-ticketing/IT software suite, which had an exploit at that time.
Phew, that's okay then. Who knows what is in use at the new place, and the new policy announced has weasel words in it too.
We'll delete your data, except where we won't. Kind of thing. Great.
Presumably they’re still using zendesk. But that’s different from what they’re talking about in the post.
The first step is that they use auth0 or whatever to verify your identity with automated systems. This is the step that the promise of immediate data deletion applies to.
If you get rejected, then you can file a support ticket, and as part of that you’d have to send a picture of your ID. No guarantee there, I mean if nothing else the bitmap will have to land on the support operator’s computer so they can look at it to begin with.
If you don’t trust zendesk, then if you never file a ticket it’ll never get to that step to begin with.
Essentially, I don’t see a reason why that would not be a one-off.
Didn't you just say the reason in the prior comment?
But falling back to customer support means customer support needs the documents, and that is an opportunity for data to leak, since humans are infamously bad at security.
So if humans are infamously bad at security, and they still have the same system that puts the infamously bad humans in a place to fuck up their security, then why wouldn't it happen again?
Two reasons:
One, you have to take proactive effort to get on this pipeline. By default, you won’t be in customer support. You have to be the one to initiate that.
Two, yeah, humans are leaky. I don’t particularly assume any privacy with any support tickets I file. This is what it is. It could be age privacy, could be a payment issue. If I’m concerned about the information being out there, I wouldn’t involve humans in IT unless the need was great.
One, you have to take proactive effort to get on this pipeline. By default, you won’t be in customer support. You have to be the one to initiate that.
I find the distinction to be somewhat meaningless. Whether your data gets leaked in that pipeline because you proactively engaged with it or not, Discord is barring you from using their platform unless you risk your identity being leaked by them or their contracted partners. Even if I agree with the free market notion that they can do what they want and it's their right to bar whoever they want for whatever reason, it doesn't make them immune from criticism. Even if only a small portion of users end up in that scenario, all users who care about their privacy and identity are well within reason to criticize the process even if they're fortunate enough to not be subjected to it.
Why is it meaningless? If the point is to try and predict whether or not it'll happen again, as well as evaluate Discord's "promise" of deleting data, it makes the situation completely different.
The automated system has not been hacked, did not have a data breach, and there is yet no indication that the commitment to delete the data afterwards was false.
it doesn't make them immune from criticism
It does not. I am not saying it does. I am criticizing the criticism, which is also something that people making criticism are not immune to.
Even if I agree with the free market notion that they can do what they want and it's their right to bar whoever they want for whatever reason
I feel like there's a misconception. Discord does not want to do this. They are complying with legal regulations. Discord, as a capitalistic money-making entity, would love nothing more than if more children went on their platform and spent money with their parent's credit card while gooning together. There is zero benefit to them to do any of this.
Any alternatives people find - if they get large enough, they'll have to do the same thing. It only changes at the ballot box.
I think they are responsible for figuring out how to make sure it doesn’t happen again, which may or may not mean switching vendors, depending on what ZenDesk does to make sure it never happens again.
They’re also responsible for implementing this new age verification system, including vetting any vendors they use. How much does the previous incident really reflect on that if they’re using different vendors?
I do think some skepticism about how a new, complicated system is implemented is warranted.
I'm definitely not doing this. And it sounds like avoiding it isn't going to be too painful for my personal needs?
There is no scenario where any of the Discords I'm in or would ever be in are worth sending my ID or face scan to a 3rd party company for.
I'm concerned it won't stop there. I wonder who is responsible for determining when a server is 'adult' oriented and what the criteria are? What are the consequences for mislabeling a server as not adult oriented if Discord later determines it is? Point being that I could see this being similar to reddit or any other hierarchy situation where people at the very top dictate the terms and everyone underneath them has to carry them out even if the way of carrying them out is nonsense. It could be every discord server owner just labels their server as 'adult' to avoid onerous rules or consequences, or perhaps Discord will only just tag a server as adult oriented if it finds them not to be, so then no one is pressured to switch their server over because the worst that can happen is what they would have had to do anyhow. But I suspect that Discord may have to do more than that, because then it's an easy way to bombard their system by spinning up servers not tagged as for 'adult' content and so anyone can view them and it's on Discord to tag them all.
How long until the criteria for something being 'teen' friendly become a server without profanity or certain levels of violent content, etc.? Think about movie or video game ratings, where someone decides that if you're 16 you can't play GTA or watch a certain movie. What is going to make Discord immune from the pressure to apply some arbitrary bullshit rules once they have an identity verification system?
I don't expect any servers that I'm part of would qualify under 'adult' content as they've laid out in this post, so I'll possibly keep using Discord, but I surely hope there's a decent alternative because I don't have any intention of letting them store my facial characteristics in their database somewhere or my ID where the information can get leaked out and be used for any other purposes.
As these things usually are, it's vague.
(Link | Discord)
(Link | BBC)
Personally, I know that both of the Cigar Club groups I'm in, and my OSINT and SOCMINT groups, will very likely be hit with this. The use of AI to determine user age based on device behavior feels counterintuitive. So let's say I look up a Minecraft building tutorial - do I suddenly need to worry about my status as an adult on Discord?
Hah, that would be funny. My Minecraft account is 15 years old, meaning that unless I started playing when I was under three years old it is very unlikely that it belongs to someone under age.
But yeah, it is extremely vague and far from foolproof.
to be fair, I babysat a three or four year old who played Minecraft 😄
Three sounds incredibly young to do anything of substance but I believe it. Four/five is what I'd imagine some kids might be exposed to minecraft. We could probably ping in some experts from the tildes minecraft server and ask them when their kids started playing as well :D
I don't think he was getting much of substance done tbf, but I also often don't when I play Minecraft so I can hardly criticize. Scared to even try to figure out how old that kid is now.
Honestly it's pretty irritating to me how many technology-, open source-, and security-focused communities have embraced Discord with open arms.
The platform's model flies in the face of everything those communities stand for, and hopefully this is a wakeup call that we should have been advocating for and investing in open source, standard protocols instead.
Ditto. The thing I'm taking issue with here though is just how much this highlights that much of my social life is tied up in and highly dependent on a single company whose policy has long since stopped being something I find tolerable. Discord has made it clear to me that it is eager to step down the path of enshittification, yet many communities and friends of mine use it as their primary social space.
I don't want any single company having that much control over my social life, much less one so eager to trample on people's privacy or whose future looks so grim. I don't know what I'm going to do about it just yet, but it's abundantly clear that this is not sustainable.
Unfortunately they already rolled this out in some places and I was affected. There were a few image channels that were not necessarily for adults (image channels with memes) that I wasn't able to access anymore. The worst part is that most people in them went ahead with the verification, so my only options were to stop seeing the content or verify myself, and I refuse to do so.
I would actually love it if a bunch of Discord's annoying "features" were disabled for me...
I have no problems with the ID verification, it's the AI scraping my server's chat logs that scares me. They're intentionally profiling my friends and family and it's absolutely unacceptable.
My account is like 10-11 years old at this point. In 7 years, would I still need to provide proof, or is the fact that my account is old enough to vote enough??
The sort of funny thing about this is, there's no proof that it's still you using the account. Granted there's not a lot of incentive for people to transfer over free accounts to other people, like if your little brother wants a discord account and you stopped using discord, you don't give your brother your account, he just makes his own. But if there's age gating, now maybe there will be incentives to transfer accounts or sell accounts.
That does make me wonder how this IDing will go down if you change the email on your account, presumably you'd have to verify your age again. Otherwise someone could possibly just make a bunch of discord accounts and verify them with the facial age software and then change the email to some 15 year old's email address.
Edit: One other thing about the direction of age verification. Assume the software is perfect. Assume the hardware authenticates that video streams come from the device's front-facing camera, so people can't substitute alternative video streams. Assume the camera is incapable of being fooled by videos of other people. Even then, there's nothing technically stopping anyone from loaning out their face to age-verify other people's accounts. I recognize that's so highly impractical it wouldn't happen on a widespread level in all likelihood, but the primary way to defeat it would be to build a face database: even if you don't store actual images or videos of people, if you break a face down into data points the way facial recognition does, you can tell when one person is using their face to age-verify multiple accounts.
I mention this last part because I've seen the question asked in this thread a few times: what incentive would they have to keep the information or create data points of your face? That's one example, albeit with a highly impractical premise in that particular case. But I'm not an oracle, and there could be many other scenarios I can't think of that would prompt the same motivation to save data points of your face from the face scanning process.
I hadn't even considered the buying/selling of accounts by normal people, but yeah, that's a potentially easy bypass. Sadly, the way this is going, let's hope your online identity isn't legally the same as your IRL identity. As in, say you unlock an account for a kid who ends up committing a crime online. The way it's going, you might be charged for the crime because that account is using your ID. Let's hope that's never the case, because it'd open a huge can of worms....
But at the end of the day this whole thing is spying on users with child safety as a cover, so they'll open the can of worms for the possibility of more data.
In 7 years, they might have incrementally moved to just outright asking to know your ID and keep it on file with the way some of these regulations are trending. :(
Yep. My friend group has been on Discord since 2016, and I believe after I made the memes channel not 18+ we should be fine. I canceled my Nitro because I don't support the path Discord is headed down, and I'll be looking for alternatives as time goes on, but there's seemingly no rush because it won't affect me much. I quit paying for Nitro yearly last year when they switched CEOs and the IPO rumors started. If something gets in my way, I am definitely going to try Garry's Mod or VRChat as the face bypass, and I bet you could easily AI-generate a driver's license that the software would believe.
I don’t understand why seeing pornography is so harmful that we need our privacy invaded like this.
I read an interesting book on internet pornography and how addictive it can be for people, especially teens/young adults whose brains are actively developing. The book features a number of testimonials from people struggling with pornography addiction, but also included research that was current at the time. I didn't process that 2014 was over a decade ago until I started writing this comment, so it isn't up-to-date research being cited. The book is titled Your Brain on Porn.
I haven't looked into subsequent research, but I did find it an interesting read in framing how different internet pornography is from previous forms of pornography. It also got me thinking about how addictive social media platforms and the internet can be. While they're not preying on the "reproduce" portions of the brain, they still offer that low-effort novelty/dopamine that makes it so easy to just keep engaging with the platforms.
I'm not for Discord requiring IDs, since I don't trust them or other platforms with that information and would prefer a privacy-first age attestation standard be adopted before platforms attempt to gate off any sort of adult content behind an age gate. But governments are definitely pushing for things like this, from what I've seen of the social media bans and adult content age enforcement laws going into effect. That being said, I thought I'd share this book, as there has been some research done on the negatives of internet pornography for people.
It's not just pornography that they're worried about. It's child predators. Social platforms are racing now to get ahead of the looming threat of an outright ban from governments worried about child predation. The public outrage when pedophiles groom children over the internet has reached a point that it is impossible to ignore. Something must be done; at least, that's the increasing sentiment.
That just sounds like right-wing propaganda used to justify ID/face capturing. How is age verification actually going to combat child predators? Adults can still communicate with children in either environment.
I work in the industry and this shit affects my livelihood. I almost lost my job because of this fear. I'm just describing the reality of the work that my colleagues have been putting in.
Eventually, only children and trusted adults (parents, relatives, etc) will be able to talk with children. Random adult strangers will not be able to. This will be common across social platforms. This is my belief, anyway.
Ok so the only way to "protect the children" is to literally verify everyone, children included, and keep them in their own pen.
What Discord is doing right now is technically useless and won't protect anyone. It looks like they're just slowly turning up the water heat so that we don't feel once it's boiling and we must all submit our ID.
Must we, though? Isn't there another way? Another service? There is and always will be. There are open source options like Matrix or the older XMPP. Some people even run IRC to this day. Some people run their own servers and may be able to host their own community with their own rules. There is no actual need for big things like Discord. Its only pro is that almost everyone uses it. Without users it's just another such service.
Yes, there can be obstacles on the way, sometimes not easy ones. But we certainly don't have to hand our IDs to anyone.
EDIT: One minor but important edit - It's -> Its
EDIT2: IRC
Can I assume that by "IIRC" you mean IRC?
And yes, people do still use it, even here on tildes. Feel free to come say hi to us on libera.chat/#tildes!
Of course! Sorry for misspelling, I will fix it.
Pardon, which industry is it you're a part of? (I can't figure it out from this comment chain)
I'm a software engineer working on a social platform that is not Discord, but their engineers and our engineers have crossed paths before. There is some overlap in thought, is my guess, as I'm not in leadership and just drawing my own conclusions.
I am a progressive leftist and also a father, but my child is too young for that to be a concern.
In the past I helped a (now) former partner who had a 12-year-old daughter. Her daughter was seduced by a man in his 40s from another state via Facebook. She sent him naked pictures. Probably videos as well.
When we discovered what was happening, her daughter was about to go to a bus terminal to get on an interstate bus and meet the man almost 1200 kilometers (or 746 miles) away. A 16 hour trip. We're in Brazil, so our police forces are much smaller and less efficient than in the US. At that point, she would be out of our reach and completely at the mercy of her abuser.
That is, of course, just one case. But it left a lasting impression on me. Enough to not be so critical of attempts to protect minors from online exposure. Even if a policy like that is only effective 5% of the time, 5% is probably a lot already.
The problem is that the stricter measures like these become in attempting to protect kids from things like grooming, the more side effects they have that can harm people. There are, of course, plenty of legitimate situations where kids being able to talk privately with people who aren't their parents is not negative (I and a lot of my friends were only able to safely talk even to peers about being queer online when I was a teen), but we also need to weigh the efficacy of these policies against their privacy implications for both kids and adults. It doesn't make sense to say "well, if it's even a little bit effective, it's worth it," because sometimes catching even a fraction of a percent of cases requires incredibly disproportionate invasions of privacy.
Discord's current ID implementation as described doesn't actually seem to block pretty much any of the routes through which someone would get groomed on Discord. Like absolutely not in any way. It might prevent your kid from accidentally seeing porn on Discord, but that's a completely different thing with very little connection to how at risk of grooming they are. So we're not actually making anyone safer here, and it's in exchange for a massive violation of all users' privacy. The idea that any harm could ever come to a child doesn't automatically tip the scales against the verifiable harm caused by the measures purportedly protecting that child.
Not being friended by strangers or able to receive messages from people outside approved channels would pretty much take away opportunities for private, side channel conversations between adults and kids.
This doesn't actually do any of that, though. From Discord's own press release:
Age-gated spaces just means Discord's existing flagging of certain channels, threads, and servers as nsfw. Adults and children can still communicate in other channels perfectly fine, and it being in public doesn't necessarily prevent grooming from at least starting. And the restrictions on Friend and Message requests don't prevent a kid from accepting a friend request from another user whatsoever, merely displaying a warning for Friend Requests from people they may not know and shuffling the Message Request to a separate inbox that they can still access to accept the message. I have the Message Request feature they describe turned on for my account already (it's an optional setting for now) because it's helpful to protect against spammers and scammers, but it's not going to prevent a kid from getting DM-ed by an adult they already met and started to befriend in a server they're both in -- which is how grooming actually happens on Discord.
These measures simply don't actually block the ways grooming tends to happen on chat apps like these -- and it would be difficult to do so without making Discord extremely non-functional for non-verified users. I don't think Discord should take stricter measures here, because I ideologically disagree with totally restricting teens from the ability to talk to strangers online due to risk of grooming, as I think it's counterproductive. But if you are looking to stop grooming by restricting teens' access to strangers, these measures do almost nothing to even address that problem, much less to address it effectively.
It's not "not being friended by strangers," it's "strangers' friend requests go into another folder," one they will become accustomed to checking, since the initial friend requests from their actual friends will land there too.
I don't have an issue with protecting children (and agree with you, in fact. They are very vulnerable and easy to manipulate), but I strongly disagree with basically every method that governments are implementing. There are much more secure ways to do it and we should focus on those, in my opinion.
I would disagree that it's only right-wing propaganda justifying this. These policies appear to be supported throughout the political spectrum. Although, I would agree it's companies pushing this.
That's because it is. But when that right wing propaganda leads to legislation being passed, companies do sort of have to comply.
It seems to me that possibly the greatest predators of children are billionaires and their toys. I can only imagine the scale of harm already done, and still to be done, by LLMs alone, to say nothing of social media and various content algorithms, even setting aside unrestrained communication with prototypical child sexual predators. With societies and cultures growing more distant and ever more dependent on tech billionaires' platforms, I can think of no greater threat to children and adults alike.
Are you saying that LLMs are grooming children? Billionaires are bad, but there are only 3,279 billionaires globally which is not nearly enough people to explain the tens of thousands of minors abused every year.
I'm saying that LLMs are warping their minds and causing serious harm, up to and including death. Notably when I said billionaires and their toys being the greatest predators to children, I did not say sexual predators.
https://en.wikipedia.org/wiki/Deaths_linked_to_chatbots
Just as an example and starting point of what I mean, and I think it's going to get much worse. And those are just the cases known well enough to put on Wikipedia; there are obviously going to be more. Then expand that to social media platforms, and you can't easily quantify the harm caused, because the harm done through social media often has another person on the other end. You could say it's not the platform, it's the person, but the platforms are carefully constructed to create certain forms of contact between people for engagement, so I won't give them that leeway.
That's not to discount the harm caused by prototypical child sexual predators, but the scale at which that harm is happening is completely dwarfed by the scale of billionaires' reach. Hundreds of millions, possibly billions, of people are impacted by those 3,279 billionaires. Of course I don't actually exclude hundred-millionaires either, and some billionaires, like Bezos's ex-wife, aren't the prototypical psychopath billionaire, so I don't really care about the exact specific number. "Billionaire" is more of a stand-in for extremely rich assholes who use their wealth to exert power over others.
I'm not in disagreement with you, but whenever news of a child predator chatting with a child on a messaging/social platform hits, the general reaction is "how dare the platform not have caught this". The conversation isn't about the relatively low incident rate. It's not about educating parents to look after their children more. It's about blaming the platform. And again, I'm not saying platforms can't be more responsible. It's just that if the platform is confronted by this, the logical solution from their POV is to build age controls into their platforms.
I don't believe the main issue is pornography. Since Discord is a chat program, I would be way more concerned with grooming and sex crimes targeting minors.
Discord press release link for reference.
Personally I have no interest in doing that myself regardless of what reassurances they provide. At present I don't think there's anything I'd be missing out on if I didn't, but I imagine people will find workarounds to using their own face before long.
All of those protections are horse shit. This is the same company that got hacked in October.
The customer support agency that was receiving the emails of ID document scans got hacked, Discord itself wasn't. I imagine they've been fired.
Even so, Discord was the one to hire them and make the initial demands for the IDs. It doesn't matter that it was a third party that got breached. That third party only had that information because Discord hired them in the first place.
They say in the article they stopped using that vendor for age verification, but the damage is already done. Discord has already proven to have had incredibly faulty judgment once, and with something vital. A government-issued ID being leaked is so much worse than a credit card. Given Discord's size and popularity, and the sensitivity of the information, they should have vetted the vendor's security and data storage practices much more thoroughly.
People currently have little reason to trust Discord's promise that any photos, whether of our faces or IDs, will be deleted. They promised that it wouldn't be stored last time, but obviously that was false.
It's like a museum hiring a third party security company to provide guards and security equipment for some special temporary exhibit, who then fail to detect and stop a guy breaking into the exhibit before running off with priceless art and artifacts. The owners of the stolen pieces don't say, "oh, that museum did nothing wrong, it was all the security company." The museum holds equal responsibility because ultimately, it was THEIR responsibility to keep the exhibit safe. THEY made that promise to the owners, not the security company.
Likewise, Discord promised its users their data would be safe and deleted right away. Who got hacked is irrelevant, because the fact the breach happened at all shows that Discord failed to ensure the data would be deleted like they promised.
So ultimately: yeah, people are going to blame Discord because they did fail at some stage.
That’s a separate thing. What they promised would automatically be deleted is the automated age verification, which as of now has never been hacked. Not that it’s existed for very long, but nonetheless, that commitment was kept.
Essentially, if you get denied by the automatic system, you can make a support ticket and say, hey, the machine was wrong, let me prove that to a human. And then the human will ask you to send over a picture of your ID.
No one ever promised that this would delete your ID. It's a bespoke service, not an official pathway.
Not to mention it was Zendesk that was hacked. Using Zendesk isn’t a sign of poor judgement, it’s by far the most common IT ticketing software. If you’re going to avoid any company that uses Zendesk you may as well just avoid ever emailing customer support.
Do you have a source on Zendesk being the one breached? I'm not asking to be snarky or combative, I'm genuinely asking because I'm finding sources (namely Discord itself) claiming it's a company called 5CA that was breached. I initially only found articles referencing 5CA (if they named the vendor at all), and after I saw your comments mentioning Zendesk I had to dig a bit to find some.
Based on this site, the leak was first attributed to Zendesk and then Discord publicly named 5CA as the vendor. Both 5CA and Zendesk naturally denied being the ones hacked. And... That's essentially all I can really confirm.
Honestly, it's a bit confusing trying to sort out where, exactly, the breach happened. A lot of the articles are from right when the breach was announced, or after Discord named 5CA. And with today's announcement, there's even more articles to sift through. This is one of the few more in-depth writeups I can find from after October, with a focus on 5CA, but I have no clue how accurate or reputable that particular site (or writer?) is. There was a failure at some point, and the fact that it's not clear where that failure point was doesn't really sit well with me given how major a breach this is. If you have a source that goes more in-depth, I'd really appreciate it.
That aside, to address your first point, I'll just quote the relevant bit from the Discord blog post I shared:
Succinct and direct promise to users that they will delete images of identity documents and ID match selfies. I'll grant you that it specifies "Discord and k-ID", which does leave some leeway to argue they never promised the same of third party vendors. They also don't use the word "immediately" or any language to specify when it would be deleted. Which is arguably just as bad, because that bullet point in their statement implies an explicit promise that they'll secure our privacy while leaving Discord leeway to avoid or minimize accountability precisely in cases like this.
Just... Discord really hasn't done anything to convince me (and others) that they've taken measures to ensure this never happens again. After that breach, I and many others just don't want to risk giving our government-issued IDs to any private companies over the internet. It really doesn't matter who was hacked, just that it happened to Discord users once and we don't want it to happen to us.
I just wanted to reinforce your point and weigh in on this one. I’m in Australia and I guess my Discord account got flagged as “potentially not an adult” because I was asked to verify my age a few months ago.
I’ve already had my ID picked up in Optus’ horrendous data breach a few years back, so I’m careful about not giving my ID to companies, even ones I should ostensibly be able to trust, via any digital means. If any phone company needs my ID ever again, I’m happy to physically walk into a store and have it verified manually, but I’m not comfortable with it being digitally stored given how clearly that leads to breaches.
So when Discord popped up offering these two options — give them my ID or try the AI video-selfie feature — I decided to test out the claims of “no video leaves your device” and “on-device processing”. To do this, I used my phone, and navigated through the prompts until it asked for camera access. Turned on airplane mode and accepted — oops something went wrong. Followed the prompts multiple times, turning on airplane mode at various different stages, including waiting until the next page had loaded, but ultimately there was no point at which I could cut the internet connection and still have the system verify me.
This tells me with confidence that a live internet connection is required for the age recognition feature, which means the claim of “on-device processing” is dubious at best, but likely an outright lie. It also means it’s impossible for anyone to verify the claim that the video selfie “doesn’t leave the device”.
It’s so disappointing because I genuinely don’t think it would be particularly difficult to design a system which did the same thing without having data leave your device, but they’re not even bothering to pretend that’s what happens.
Edited my second paragraph to be a bit less needlessly combative.
Thanks for testing airplane mode. That's great.
I suppose to be completely aware of what leaves your device you'd have to install a custom root certificate and use a MITM proxy, and hope they don't pin certificates, or it gets a lot harder.
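For what it's worth, a crude way to act on a MITM capture, sketched in Python: tally request-body bytes per host during the age-check flow. The hostnames and sizes here are entirely made up for illustration; the idea is just that a selfie video is megabytes, so any host receiving that much during a supposedly "on-device" check would falsify the claim.

```python
# Hypothetical sketch: given (host, request_body_bytes) pairs captured by a
# MITM proxy during the age-check flow, total the upload volume per host.
from collections import defaultdict


def upload_totals(captured_requests):
    """captured_requests: iterable of (host, body_size_in_bytes) pairs."""
    totals = defaultdict(int)
    for host, size in captured_requests:
        totals[host] += size
    return dict(totals)


def suspicious_hosts(captured_requests, threshold=1_000_000):
    """Hosts that received more than ~1 MB of request bodies -- enough to
    plausibly carry video frames rather than mere telemetry."""
    return {h for h, n in upload_totals(captured_requests).items() if n > threshold}
```

Of course, this only catches the blunt case; compact face "data points" could slip well under any byte threshold, which is exactly the worry raised elsewhere in this thread.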
someone shared this on a dev server i'm on
https://age-verifier.kibty.town/
(i'd never run that in a million years, so not sure if i should even be linking it here lmao)
but they seem to have reversed the current process and yeah, it's basically sending metadata about your face to their servers for them to do some integrity checks on
e: seems like it's already patched but i assume they're doing the same steps but in a more tamper-proof way
Notably, that wording also doesn't specify that no data at all about the 'video selfies' is transmitted, just that the video itself doesn't leave the device. This leaves room for them to create data points about your face from the video, send those data points, and then reconstruct your face from them. I have seen wording in other articles that is supposedly more specific, saying nothing will be transmitted other than the approximate age the on-device detection comes up with, but it takes real time and expertise to dive into all the various policies involved and determine what exactly is covered and what is left to interpretation.
Even then, policies are one thing, actualities and consequences are another thing. They can say they won't transmit anything but that doesn't mean it's actually true, and as we've become incredibly familiar with in the past 10 years or so, the consequences for lying and harming others are basically so little as to be none at all.
Can you explain what this looks like? Any sufficiently detailed description sounds like a compressed photo to me, and they're explicitly not sending those.
Fully collecting everyone's personal data isn't something every company does. Discord isn't Facebook. What are the incentives to lie and deal with storing this data?
Monetisation. AI training. Which is maybe the same thing. The fact that they could do anything and have it be easier to ask forgiveness than permission. Also history. How many times do we have to be lied to? I'm sure they'll be on the level /this/ time. As opposed to all the other times.
More significantly, a great many people actually just post selfies and pictures of their daily lives to Discord, where it will then live on Discord’s servers. The selfie scan is probably more secure than the normal activities people are doing on the platform.
I think the main problem is going to be for people who are borderline: anyone in that transition period from about 16 to 22, where face scans won't be able to confirm your age for sure, your linked accounts probably won't be old enough to make a definitive call one way or another, and your posting behavior probably isn't that distinguishable from someone underage's. But, like they said, this is maybe 20% of their users at most. And most of them likely wouldn't notice the restriction unless they're trying to access an NSFW channel, and those NSFW distinctions are placed by server admins rather than Discord itself.
Basically I think people are catastrophizing a lot here, and the actual additional risk exposure the vast majority of users are being exposed to is basically negligible. That said, it probably is healthy for more self-hosted and open alternatives to Discord to exist so I don’t really feel like pushing back on people actively seeking those out. I think it was actually a bad fit for a lot of what people were trying to do with it and it’ll be good if there is some branching out to more purpose-built spaces to set up communities for different things. I am glad the energy seems to be directed towards self-hosted and open platforms now rather than trying to decamp to Reddit or Digg or whatever.
In the quoted part of the parent comment I was responding to, there is no explicit mention that compressed photos aren't being sent.
It says "does not permanently store video selfies" first. So if you were to stop there, then it's only ruling out the original video taken. Now let's keep going to see what else is ruled out.
This could be read as "Images of ... ID match selfies are deleted directly after," which would include images of the selfie process, or it could be read as "ID match selfies are deleted directly after," which could simply reinforce the first sentence that video selfies won't be kept while saying nothing about still images of the selfie process. The "images" part of this statement is only explicitly clear that it's referring to identity documents, not the selfie.
Again, only reinforces that the video itself never leaves the device.
Furthermore, even if you completely set aside the idea that they may take still frames from the video, consider how LLMs reproduce text they were trained on while supposedly storing no copies of the original information. How can an LLM produce images of Will Smith or anyone else without holding any actual copyrighted data? How can any facial matching system match your face to other images of you? Because they can break your face down into data points and then compare the data points. Or think of the companies that tried to assist law enforcement by using genetic material and other factors to generate an image of what a potential suspect or missing person might look like. I wouldn't say many of those were accurate, and they were working from many assumptions and less specific material, but the point is that companies have already attempted to reconstruct someone's face and what they look like from various data points.
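To make the "data points" idea concrete, here's an illustrative sketch (not Discord's actual pipeline, and the 0.9 threshold is a made-up number): face-matching systems typically reduce a face image to a fixed-length embedding vector and compare vectors by cosine similarity. Two scans of the same face land near each other, so the raw image never needs to be stored to link them.

```python
# Illustrative sketch of embedding comparison: real systems derive the
# vectors from a neural network; here we just compare arbitrary vectors.
import math


def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def same_face(emb_a, emb_b, threshold=0.9):
    """Hypothetical decision rule; real systems tune the threshold on
    labeled data rather than picking 0.9 out of thin air."""
    return cosine_similarity(emb_a, emb_b) >= threshold
```

The privacy point is that a policy promising "the video never leaves the device" is fully compatible with transmitting vectors like these, which are exactly what you'd need to recognize the same face again later.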
Even if you assume Discord has no incentive, that doesn't account for the other parties they're contracting. Even if you assume none of them have any incentive, mistakes happen and malicious actors exist. If Discord wants to personally back the whole process, they can guarantee me a couple-million-dollar payout, and everyone else their own million-dollar payouts, prioritized above all other debt or payments Discord owes to other entities, if it doesn't work the way they say it works. I'm sure their investors, their insurance company, etc. would lose their minds over that type of guarantee, because they would be all but assured to lose money. That's how you know it's bullshit: if all the people behind it had to put their money where their mouths are, they'd all go running for the hills.
Does anyone have good alternatives? I am very happy with the discord I've built for my friends, but this is ridiculous.
Also sorry, I've commented a bit but I don't know my way around tagging a post.
A lot of FOSS communities have switched to Matrix protocol chats - I host an unfederated server that also allows calls via a LiveKit service, and it works quite well. The mobile clients leave something to be desired though.
It depends on what features you'd want in an alternative. I've heard self hosting stoatchat (formerly Revolt, GitHub) is the "next closest thing". Never set it up/tried it out myself, though, can't verify firsthand.
Internet Relay Chat. It's been around for decades and will outlive Discord.
I like IRC, but unless it changed since I last used it years ago, the way it works is too different from Discord to function as a viable replacement for most people. The messages are tied to the sessions. You can't see previous message history from before your session, logging off will typically wipe all the conversation you were present for, and people can't send you a message when you're offline. Some IRC clients can store the messages, but it still limits what you can see to things you were present for.
It's useful for conversation in real time, but many people use Discord to share information and updates. I'm on many fandom servers where people consult older conversations for writing reference, some servers for news on manga translations, a server with my high school friends to coordinate hangouts or share updates (one person's phone just never got texts from mine for some reason), and a couple servers used to coordinate workers on large-scale creative projects. IRC just doesn't work for those cases.
IRC v3 addresses this, the name of the capability is "chathistory". The big caveat is that most of the big networks (libera.chat etc) don't support this yet. A bouncer like Soju can cover the gap.
For setting up your own server Ergo is a great option that gives you chathistory and many other goodies that go a long way towards turning IRC into a first class modern chat experience. Always-on, multi-client and push notifications in particular.
On the client side Halloy (GUI), Senpai (terminal), Goguma (mobile) and Gamja (web) are some modern clients that can take advantage of chathistory and other features.
I moved my private friends/family network to Ergo over a year ago and it has worked out great. I am pretty much the only one who knows anything about IRC, for everyone else it's just another chat app.
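For anyone curious what the chathistory capability actually looks like on the wire, here's a small Python sketch that builds the raw `CHATHISTORY` commands a client sends (subcommand syntax per the IRCv3 draft spec; the channel name and limits below are just examples):

```python
# Build raw IRCv3 CHATHISTORY command lines (draft/chathistory capability).
# A client that has negotiated the capability sends these to the server,
# which replies with a batch of the requested messages.


def chathistory_latest(target, limit):
    """Request the most recent `limit` messages in `target`.
    '*' means 'no lower bound' (i.e. just the latest messages)."""
    return f"CHATHISTORY LATEST {target} * {limit}"


def chathistory_before(target, timestamp, limit):
    """Request up to `limit` messages sent before a server timestamp."""
    return f"CHATHISTORY BEFORE {target} timestamp={timestamp} {limit}"
```

For example, `chathistory_latest("#tildes", 50)` yields the line a client like Goguma sends on join to backfill the channel. The server (Ergo, or a bouncer like Soju fronting a network without the capability) does the storage; the protocol itself stays plain text.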
IRCv3 has been in development for well over a decade and has barely seen adoption beyond the most basic things. Chat history is, I think, half a decade old, which says a lot about the rate of adoption. I remember IRCCloud being one of the bigger drivers behind v3, but their blog has been quiet since 2022.
Having said that, if you want to try and win people over to the IRC side of things, I honestly wouldn't recommend any of the software in your list. I'd go for a service like IRCCloud, or a self-hosted web client like The Lounge, which does chat history as well.
But even then I'd still not recommend that people switch back to IRC. Don't get me wrong, I like idling on #tildes with the folks there. And I really used to be a die hard IRC user to the point of having written this thing just so I didn't need to switch to discord.
But IRC simply isn't an ecosystem that fits what people expect these days. Heck, even over a decade ago it already didn't fit what most people wanted out of a chat service. I remember trying to get a channel going for various communities and subreddits over the years and never really succeeding. As soon as we put up a Discord link in the subreddit, people started flooding in, simply because of how much lower the barrier to entry was compared to IRC, and still very much is.
Yes, it is fine for technical people if they sit down for a moment to figure it all out. But that in itself already says something about how accessible IRC is compared to something like Discord. And the comment you replied to was specifically talking about most people, not just (slightly) technical people. Which is extremely easy to forget, because once you have figured out IRC it doesn't seem all that difficult, which causes a lot of IRC users to suffer from the curse of knowledge in discussions like these.
Shout out to #Tildes on Libera! We're up to 12 users! 13 if you include ChanServ!
IRC still exists? Christ, that's a trip.
My ISP does customer support on it.
What ISP, out of curiosity?
Ah I'd rather not say. It's kind of a niche ISP.
Why? A business using IRC for tech support is fascinating.
It’d be close to doxxing myself.
My subreddit had an IRC channel -- dead by the end -- through like the late 2010s.
I can't remember why, but I have definitely been on an IRC server post-pandemic. Even if just briefly.
With web-based clients that the server can host as well, it's pretty easy to use too.
There's Stoat (formerly called Revolt)
It's way less mature than Discord and not very popular, but it's the closest to Discord that I know of. As far as I know, it's pretty secure and trusted and is open source.
It literally just got hit with some flak over the maintainer dealing with AI commits (which have since been reverted). I don't know if I'd consider that a full dealbreaker, but it's something to keep in mind for those who care.
I don't like that, but at least it's open source so people can see exactly what's being committed.
I've been trying to get my own instance of Stoat up for hours today and haven't been able to get through it. I wish there was a self-hosted alternative that has a decent feature set, mobile apps, and isn't 14 different containers.
Yeah, sadly there aren't any good alternatives if you're looking for a discord-like experience.
Honestly, I wish a company like Proton would release a Discord alternative. I would happily pay for a privacy focused option.
My question is why does anyone feel like they need a “discord-like experience?” Discord does a couple of things well and a lot of different things kind of poorly. It’s main perk is the network effect of lots of people already having accounts, a big bot ecosystem to extend functionalities, and there being a bunch of integrations to link other accounts. You’ll never get any of that off Discord because it depends on the network effect. The closest way to replicate it is something that can be accessed via browser. But voice calls are doable with other apps, video chats and streams are too. You can do a big chat room with a whole bunch of tools about as well as Discord allows and Discord’s implementations of threading and forums are pretty bad.
It would behoove people to think through what they actually want to do with their “Discord alternative” and try to find a service suited to that rather than just replacing Discord. Discord was kind of cobbled together on top of a Teamspeak + groupchat base. I think the ease of setting it up made a lot of people form their communities around Discord’s UI and feature-set rather than thinking through what kind of tool they actually wanted to be their community hub or groupchat.
I mean, functionally, Discord is a really good piece of software. It has tons of features that work well and make the experience of using it quite nice. Everything with voice, video, screen streaming, and text chat is pretty good, in my opinion, and I don't know of any other app that does them all as well.
The "forums" implementation is terrible, I agree. But as a communication app for a small group of friends, it's top-notch. I want all the text, voice, streaming, and video in a single app.
Discord isn't perfect, but it is good chat software that offers a lot of things that various groups need. I remember when Discord was the new player and people were switching to it from things like Skype groups, and even then when it had fewer features, it was a huge improvement over the competition. While the network effect is obviously in place now, I think it's foolish to dismiss the actual utility of the software, which remains pretty good at what it was designed to do and has pretty much no competition with all the same major features done well enough to merit switching. And I'm in plenty of groups small enough that the network effect isn't a factor -- just how much effort it would require to set something new up. Nothing else is remotely as appealing compared to Discord as Discord was compared to Skype.
Because Discord is very convenient. It has private chat, group chat, voice chat, video chat, streaming all in one program working on multiple platforms.
For me personally, one of the big issues really is just moving the communities that have been built exclusively on Discord. People will be more likely to move to a similar platform. So if Discord disappears, I have no idea where we'd all go.
At bare minimum, I think most communities would want: fairly easy installation and setup; real-time text-based chat; the ability to DM people; message history that isn't session-specific; and to be able to send images.
Yeah... Looking for alternatives with even just those features is surprisingly limited. It gets harder when you want more of Discord's features like voice chat or screen sharing. We can at least break those up a bit among various other programs, but... It's going to be a very annoying transition for people who use all those features on Discord.
Matrix actually has everything on your list. An idle, always-on style voice-chat room is available as well, though the UX isn't nearly as polished as Discord's (they've spent years making theirs better).
Many servers have open sign-ups as well, meaning you can try a non-overloaded server before you decide if you even want to try hosting your own:
https://servers.joinmatrix.org/
I've been hosting my own for years now, and found it fairly straightforward / set'n'forget on Debian. Feel free to send me a DM if you'd like a registration code (as I do not have open registration) to play around with it too.
Every time I'm forced to use Matrix for anything I come away sour and salty. Lost history, encryption key validation between apps that never works, slowness, bad searching, desynced chats. If the only choices were Matrix or IRC over RFC 2549, I'd be choosing RFC 2549.
Does Matrix have video calls or streaming?
You might have to elaborate on streaming, but it has both Zoom-style (actively invite to, and join, a call) and Discord-style (the room is always there, you can drop in and out as you please) voice and video calls / rooms.
I don't like some aspects of it, and it isn't at proper feature parity yet (for example, I don't see the list of active people in a voice room, assuming that feature even exists), but the speed of improvement is better than some might have you believe.
By streaming, I mean high-quality screensharing with audio alongside an audio or video call.
Just did a brief check, and yes, it seems you can share your screen alongside a video call. "High quality" can be a difficult endorsement to make, though, with some caveats: we were both (including the server) basically on LAN, we only played with it briefly, we weren't stressing any desktop or server, and my definition of high quality may not meet yours (though it certainly seemed crisp enough).
If you're leery of pushing people without being able to effectively try it yourself first, you can join one of the open servers and we can arrange a test call (Matrix handle in bio) at some point on the 17th/18th. If you're going to go through YouTube or similar for reviews, make sure they're not too stale - the platform's been maturing at a fair clip, and anything beyond about a year generally wouldn't make my list :)
I may test it out in the future, but I'm not quite ready to schedule a test call yet (though thanks for the offer!) And yeah, the last time I heard Matrix suggested as an alternative it didn't have voice chat at all, so it clearly has matured in the meantime.
You mentioned hosting, which unfortunately might be a dealbreaker. I think I might be the most technically competent person in my friend groups, but self-hosting or home server stuff is currently beyond my ability, and will likely be beyond my available time and energy to learn for the foreseeable future.
When you say “join another server” could my friends and I create our own small instance within someone else’s server, preferably private and invite-only with my friends? Or would it just be finding a space within a larger combined server where anyone can drop in or drop out? I know different services use different language to mean different things, so I’m trying to understand this Matrix platform and how much or how little it can translate from my existing experiences with Discord.
Self-hosting isn't a hard requirement, and there seem to be a reasonable number of public servers with fairly open registrations [0]. It's not the infinite choice one finds with Mastodon / Fediverse servers, but decision paralysis isn't always a selling point.
You could probably create a couple of rooms in whichever server you join, mark them invite-only (and optionally "from this server"-only), then create what Matrix calls a space (i.e., a collection of rooms grouped together) with all your invite-only rooms joined together. That way you can send a single invite code to your friends for the space, which should let them join every room in the space and have them grouped neatly together. You should be able to adjust power/permission levels from there.
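Under the hood this maps onto a handful of Matrix client-server API calls, which your client makes for you. A rough sketch of the JSON payloads (user IDs and server names here are placeholders; the endpoint is `POST /_matrix/client/v3/createRoom`):

```python
# Sketch of the Matrix createRoom payloads for an invite-only room,
# a space, and attaching a room to a space. Placeholders throughout.

def private_room(name: str, invitees: list) -> dict:
    # "private_chat" preset means invite-only, history visible to members.
    return {"preset": "private_chat", "name": name, "invite": invitees}

def space(name: str, invitees: list) -> dict:
    # A space is just a room with a special type in its creation content.
    payload = private_room(name, invitees)
    payload["creation_content"] = {"type": "m.space"}
    return payload

def space_child(room_id: str, via_server: str) -> dict:
    # Rooms are attached to a space with m.space.child state events,
    # sent into the space room with the child room's ID as the state key.
    return {"type": "m.space.child",
            "state_key": room_id,
            "content": {"via": [via_server]}}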
And yes, Discord calling random groups a "server" has grated on me for years, and does not make jumping into (or out of) Discord easier as people attempt to navigate the language used.
[0] https://servers.joinmatrix.org/
Okay, excellent. Glad to hear there's enough similarity in features to whip up an isolated group of channels, etc.
I’m no network wizard but the concept of a “server” these days — for every service that uses the word! — seems so far removed from what a server actually is, especially with all these cloud services and mirrored local/nearby versions for lower latency. (is this called “edge hosting” or something like that?)
Self hosting is very much optional for Matrix, and if you really want your own optionally siloed instance, there are providers who offer paid hosting. The network is federated, so any account on a host who has that turned on can communicate with any other server's users, connect to any public rooms there.
Most homeservers with public registration (like matrix.org) allow users to start private rooms that are invite-only, and rooms can be aliased across servers. There is a currently-beta feature called Spaces that basically brings the community-management features of Discord servers as well.
my guess is that a good deal of us will be verified without identification.
I'd expect the opposite, honestly. I think that's more a fig leaf to try to reduce negative feedback, like when Reddit promised to explore custom CSS on new Reddit. Especially given that I expect people who do not choose to share their activity with Discord are overrepresented here.
we’ll see how it rolls out. IRC servers wouldn’t ever do this. :)
There's no need to speculate; they've already implemented this in the UK, and I think basically every grown adult from there I know has said they never had to submit anything.
Unless that discord server is marked as adult content (and why would you self report in that case), why would this change anything?
It might not change anything yet. But since Discord is showing no restraint on automatically adding account restrictions to existing accounts, they could easily do the same with servers. They could just make a sweeping change for any non-community server to be considered 18+, because those have less moderation.
#meta #offtopic
Not a huge deal, we all just do our best, and @mycketforvirrad the silent hero will come and add/remove tags. Take a look at the topic log in the sidebar of this thread (or any thread) to get a feel for how the tags are commonly used and organized.
Probably the most important tags are those to do with things like US politics, so that people who don't want to have them in their feed can filter them out by tag.
The natural conclusion of the internet being transformed into a digital shopping mall and DMV.
Can't wait for all the phishers to get copies of everybody's faces and IDs. Maybe I should register discird.com.....
Time to go back to old-school Ventrilo or TeamSpeak or something ancient and self-hosted.
I recently looked into TeamSpeak and oh my gosh they’ve come such a long way! TS6 even looks like it’s 90% of the way to being feature matched for Discord, or at least considering the features that I personally care about. It’s probably the one that I’m going to try to push for among my friends, if there’s ever a preference for leaving discord
Paywall bypass: https://archive.is/PqusV
I'm wondering how adept these age verification tools are, and whether AI is actually a good tool for bypassing them without scanning your own face. Has anybody experimented with using AI-generated faces to trick these tools?
I read that, in the past, some age verification tools have been tricked by things as simple as screenshots of posed faces in Garry's Mod. I assume that those simple tricks have been patched out, but I imagine AI generated faces would be much more difficult to detect.
If a human can't tell a difference, I'm not sure how any model would be able to either. We run into the same situation with LLM text.
The primary check is behavioral analysis, so I suspect if you've ever talked about doing taxes or having a job with normal working hours it'll pass you by default. It also checks against linked accounts, so if you've linked it to Steam, PS+, Spotify, or anything like that with an account that's more than around 14 years old, you probably also just pass by default.
That wasn't specified before they updated their press release. It's a bit better than a face/ID scan, but I'm still not a fan.
Though, I imagine they've already been doing this behind the scenes for years anyway.
I used Stoat for a while, back when it was still Revolt.
Nutshell: Overall, it is pretty good, and very close to Discord in most ways.
Caveats and negatives: It is being developed and improved at a very slow pace ... it is often a little bit buggy ... it does occasionally go down, and/or get bogged down from user load (and I expect that issue will become much worse after Discord implements this) ... and finally, the various ways in which it is not quite as feature-rich as Discord are minor annoyances, but over time, they can become frustrating.
Technically, I still have a small "friends-and-family" server on the platform, but we all moved to a self-hosted instance of Matrix/Element over a year ago. Matrix is less like Discord and takes more of an adjustment, but overall we are happier on it now than we were on Stoat.
So, real question: what's stopping an AI-generated avatar from fooling this system somehow?
Also, are they going to be cross-referencing ID information with the federal government? How are they going to verify IDs? Cross-referencing and verifying ID information against federal databases for millions of users, accurately and quickly, sounds very expensive, so it's probably just going to check your name and birthdate, try to match whatever photo to the user, and leave it at that.
Because to me it sounds like, if you want to remain anonymous, you can just photoshop a fake ID or use an AI-generated avatar. And yes, I know the legality of photoshopping and submitting a false and fabricated ID, but A) how will Discord know it's false? B) how will they be able to connect the account to you to know that it's a fake ID? C) is Discord really going to expend the resources reporting it, as opposed to simply shutting down the account? And D) is the government realistically going to expend the resources going after people photoshopping IDs and submitting them to Discord, of all places?
I feel like it's going to be a trivial task for Gen Z coders to craft up some way of fooling a system like this with a high degree of success with relatively low risk.
And at that point it makes it all just seem like a dazzling waste of money and resources.
I heard rumors that opening up GMod and taking a picture of one of the HL2 characters works. Can't verify it myself, though.
This was a good reminder to cancel my Nitro subscription.
Not exactly sure where the communities I am in will migrate to, but it's definitely top-of-mind for everyone now. There's a fairly strong sentiment of not giving in to the requirements, so I don't necessarily see us sticking around on the platform.
I do think the dystopia will follow us eventually wherever we go, for the most part, but who knows
In this case the dystopia is mostly because of legal regulations, so they'll only stop when they change at the ballot box.
Yeah and often dystopia is bipartisan so I have zero hopes about it. Especially where I'm at (the US). The entire structure is a flawed-from-the-beginning dumpster fire built on a crumbling foundation of atrocity. With basically zero politicians that even remotely represent me or my interests. The main parties are both complicit in the horrors, historically and presently, internally and internationally, and the entire tech industry is ready to grovel at their feet at a moment's notice to do their bidding. I'm not saying electoral politics doesn't matter, but I am saying that it's completely f'd. Gonna take a lot more than a ballot box to fix this nation
Yeah. Maybe this will be the event that launches some competition? I can only hope.
As the owner of a medium-sized gaming server... if I choose not to age verify like this, will I be locked out of necessary features?
@asinine
If your server, or channels in your server, are marked as NSFW, you will not be able to view/enter them.
If any media is sent that Discord's automated tools detect as NSFW, it will be filtered out (Discord displays a warning about it).
You also can't speak in stage channels.
That's about it.
Ok, thanks. Then it shouldn't impact me.
"Hobo farming" doesn't exactly sound workplace appropriate. 🤨
Am I a hobo who farms or a farmer of hobos? It's like a rorschach test.
And I'm a psychopath apparently.
Recently discovered https://spacebar.chat - a community rebuilding of Discord, with the aim of making it backwards compatible with the Discord ecosystem. They have a lot of ground to cover, but it's probably the most promising open source community alternative I've seen so far.
This sounds interesting. At least one of my friend groups is looking to move, so I'm looking at alternatives. How's this? Is it a sort of private server, or another chat app that can just connect to Discord servers?
Proprietary, for-profit software FTW yet again.
I'll admit I haven't thought about this much, or researched it, but we are definitely going to need some kind of age verification eventually to do certain things on the internet. In the US, seems like a state government could provide a trusted API for this since they know if you have a REAL ID. But I wouldn't give that to Discord anyway because I don't need them to know who I am and "teen appropriate" would be fine for me. Assuming they are actually able to properly police any adult content.
It's certainly needed on some level, but it seems we always come up with the absolute dumbest possible solutions, at least in the US anyhow. Though some headlines about how other countries handle similar things make me think it's not exclusive to the US.
I don't even necessarily understand how we arrive at the dumbest possible solutions. On the one hand, I want to keep it somewhat simple and say it's because voters are tech illiterate, the voting system sucks, we elect old tech-illiterate people, and voters vote based on their gut and emotion rather than logic, evidence, or expert advice. From that basis it's easy to draw a straight line: tech-illiterate voters support policies that sound good to them, like mandating that people hand over their government-issued ID over the internet to various entities to prove their age, because it resembles how verifying your age with an ID works in person, and that satisfies their 'go with my gut' feelings.
However, I don't know if I really believe that. I somehow feel that wealthy people and corporations are actually backing this. Most of what goes through the government is based on money, and corporations and rich people have all the money, so I think there must be some incentive for them to push for the dumbest solutions possible, yet I can't quite piece together what it is.
The reality is that no one needs to know our identity; they only need to know our age. So why is it that we keep coming up with solutions that have us forking over our identity? Now, there are some solutions that could create problems where people attempt to use someone else's 'age credentials' to verify themselves, and that's where identity verification may come back around. But I'm not simply going to accept solutions that start with identity verification until we've exhausted every option for doing age verification while leaving identity out of it.
Just to imagine a solution to this, separate from any justified cynicism about why many solutions have nefarious intent:
This doesn't work if the key can't be tied back to your identity because then the generated keys could be sold. If the keys can be tied to your identity, then a government actor could subpoena corporations and get an idea of what services you're using; not something people here would like, I imagine. To mitigate the black market of keys, you need to tie it to hardware attestation with in-person verification and short TTL for keys.
Apple/Google need to build out support for this to have a chance at privacy-preserving vouching.
The key could expire in a few minutes like a two factor code
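To make that concrete: the kind of expiring key being floated here could be built the same way 2FA codes are, as an HMAC over the current time window (the TOTP construction). A toy sketch; everything here is hypothetical, not any real scheme, and `secret` would have to come from a one-time age check by some issuer:

```python
import hashlib
import hmac
import struct
import time

WINDOW = 300  # seconds a code stays valid

def age_code(secret: bytes, now=None) -> str:
    # Derive a 6-digit code from the current time window, TOTP-style
    # (RFC 6238 with SHA-256). The code expires on its own when the
    # window rolls over.
    counter = int((now if now is not None else time.time()) // WINDOW)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha256).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, as in HOTP
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

def verify(secret: bytes, code: str, now=None) -> bool:
    # Constant-time comparison against the code for the current window.
    return hmac.compare_digest(age_code(secret, now), code)
```

Which illustrates exactly the weakness the reply above points out: nothing stops a service from holding the secret and minting fresh codes on demand, so short TTLs alone don't prevent resale.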
And you would just sell a service that automatically generates a key code right before use.
I don't like any solution that requires me to send any personal information. It's just setting up the perfect system for tons of powerful actors to misuse your data (whether that be corporations selling, leaking, and analyzing it, or the government creating profiles on us with nefarious purposes).
Outside of the big social media platforms, I'm not sure I agree that there is much benefit to age verification anyway.
The only age gating I would be comfortable with is some sort of standardized, router-level age gating, where incoming connections include a flag indicating a minimum age and the verification is handled locally. And for cellular devices, it could be handled on the device as well.
I don't see the point. The benefits are minimal and there is no solution that is effective and doesn't harm personal freedoms and right to privacy. Simple parental supervision is the only logical way to protect children from harm on the Internet but the people advocating for age verification aren't willing to do that...
https://xcancel.com/mitsufoppie/status/2020957345728110668
Lolz and I managed to verify age using https://github.com/xyzeva/k-id-age-verifier so I’ll leave that here.
I don't see the "despite leaking 70,000 IDs" bit in the original title. If it was an editorial choice, ultimately I don't think that was their problem. The software they used for support tickets had a security breach because of poor credentialing hygiene.
It's a must that automated ID checking can fall back to customer support. But falling back to customer support means customer support needs the documents, and that is an opportunity for data to leak, since humans are infamously bad at security.
But it's much better than not having any recourse in case the systems perform poorly.
If as a business you choose to partner with an organisation with leaky data practices, it's absolutely your responsibility. You don't get to wash your hands of your poor choices and throw your carefully selected partner under the bus in an attempt to deflect your responsibility for handling your customers' data.
It was a Zendesk issue. Zendesk is big enough that I don't really see it as an issue, or a moral failing, to use them for support tickets. Sometimes stuff happens. Even AWS has its outages.
Essentially, I don’t see a reason why that would not be a one-off. It’s not like they used shady software or had poor data practices. They used probably the biggest support tickets/IT software suite which had an exploit at that time.
Phew, that's okay then. Who knows what is in use at the new place, and the new policy announced has weasel words in it too.
We'll delete your data, except where we won't. Kind of thing. Great.
Presumably they're still using Zendesk. But that's different from what they're talking about in the post.
The first step is that they use auth0 or whatever to verify your identity with automated systems. This is where the premise that your data is immediately deleted is.
If you get rejected, then you can file a support ticket, and as part of that you’d have to send a picture of your ID. No guarantee there, I mean if nothing else the bitmap will have to land on the support operator’s computer so they can look at it to begin with.
If you don't trust Zendesk, then as long as you never file a ticket, it'll never get to that step to begin with.
Didn't you just say the reason in the prior comment?
So if humans are infamously bad at security, and they still have the same system that puts the infamously bad humans in a place to fuck up their security, then why wouldn't it happen again?
Two reasons:
One, you have to take proactive effort to get on this pipeline. By default, you won’t be in customer support. You have to be the one to initiate that.
Two, yeah, humans are leaky. I don’t particularly assume any privacy with any support tickets I file. This is what it is. It could be age privacy, could be a payment issue. If I’m concerned about the information being out there, I wouldn’t involve humans in IT unless the need was great.
I find the distinction to be somewhat meaningless. Whether your data gets leaked in that pipeline because you proactively engaged with it or not, Discord is barring you from using their platform unless you risk your identity being leaked by them or their contracted partners. Even if I agree with the free market notion that they can do what they want and it's their right to bar whoever they want for whatever reason, it doesn't make them immune from criticism. Even if only a small portion of users end up in that scenario, all users who care about their privacy and identity are well within reason to criticize the process even if they're fortunate enough to not be subjected to it.
Why is it meaningless? If the point is to try and predict whether or not it'll happen again, as well as evaluate Discord's "promise" of deleting data, it makes the situation completely different.
The automated system has not been hacked, did not have a data breach, and there is yet no indication that the commitment to delete the data afterwards was false.
It does not. I am not saying it does. I am criticizing the criticism, which is also something that people making criticism are not immune to.
I feel like there's a misconception. Discord does not want to do this. They are complying with legal regulations. Discord, as a capitalistic money-making entity, would love nothing more than if more children went on their platform and spent money with their parent's credit card while gooning together. There is zero benefit to them to do any of this.
Any alternatives people find - if they get large enough, they'll have to do the same thing. It only changes at the ballot box.
I think they are responsible for figuring out how to make sure it doesn’t happen again, which may or may not mean switching vendors, depending on what ZenDesk does to make sure it never happens again.
They’re also responsible for implementing this new age verification system, including vetting any vendors they use. How much does the previous incident really reflect on that if they’re using different vendors?
I do think some skepticism about how a new, complicated system is implemented is warranted.