I’ve had this conversation too many times online and I’m exhausted from playing the devil’s, er, pedophile’s advocate (seriously, it’s not a camp I enjoy aligning myself with)… but I still think it’s worth asking whether fictional CSAM of non-existent “minors” should really be treated by law the same way as real CSAM that only exists by actually victimizing real people.
Think about sociopaths. In any population some n% of people will have that condition. I think most of us who have looked into sociopathy are aware of this, and try not to think about it too much, because we can’t just “purge” all sociopaths from the world, though we might fantasize about that. This is their world too, they have all the same human rights as anyone else. Better to find ways to coexist and help them identify their pathology and manage it, minimizing the damage they can do to others.
I think pedophilia’s probably not that different from sociopathy. These people are here, they live among us, and likely they don’t have a lot of control over the urges they feel. I’d rather extend some empathy about that, and help them channel those impulses in ways that don’t hurt anyone else, than shun and criminalize them for having a mental illness.
This isn’t really a statement about xAI or Grok, I’m just commenting on the assumptions people tend to make about how certain unsavory segments of the populace ought to be treated. I’ve said things to this effect before (maybe not on Tildes, I can’t remember) and got piled on for being, apparently, pro-pedophile. I just feel this is one of those ethical/social issues that people 100 years in the future are going to look back on us as barbarians for not reckoning with sooner. Personally, if generative AI can (even partially) satisfy the demand for content that currently requires a whole absolutely horrific underground economy of child trafficking and abuse… well we should maybe do a more thorough cost/benefit analysis before writing it off entirely.
If you can provide evidence of your idea, which I wouldn't say you've done in this comment, I don't think your viewpoint should be shunned. However, I think this is an inappropriate topic to bring this rant to. The article involves Grok being used to create non-consensual porn of real people. And if it's willing to create images of minors, it's certainly able to do that to real children as easily as it does to adults. No child deserves to have this happen to them, but it is becoming a real issue with the rapid advancement and minimal regulation of this new technology.
Thanks for the clarification, I missed the point in the article where these are using real people’s likenesses. That’s not victimless.
Well I’d like to say this is the first time I’ve climbed up on my soapbox at an inappropriate time, but that wouldn’t be accurate either, lol
Hey @balooga, I respect that you're bringing an empathetic perspective to this - I also want to say I do not think you are pro-pedophile, and I'm sorry you've been labeled as such. I appreciate that you're showing empathy for a group of people that it's not easy to have empathy for. I also agree we need to give this more attention societally to really address the issue at hand.
I do however need to push back against this:
if generative AI can (even partially) satisfy the demand for content that currently requires a whole absolutely horrific underground economy of child trafficking and abuse… well we should maybe do a more thorough cost/benefit analysis before writing it off entirely.
There is simply no empirical evidence that consumption of AI/synthetic CSAM reduces consumption, production, or escalation of urges by would-be predators.
Unfortunately, the evidence suggests the opposite.
I found this open letter from ECPAT to the EU. In it, they make a few claims with citations to the relevant research; one of the most important, imo:
AI-CSAM, is proven to often increase CSAM addiction and even fuel existing fantasies of in-person child sexual abuse.
This claim by them is cited from this Stanford study.
I also wanted to reference this article by The Wilson Center. It's less directly related to what you're writing about, but I think it's worth including for other readers here who might be interested; basically, we also need to consider second-order effects:
CSAM-prevention law enforcement is already stretched really thin.
AI-generated CSAM makes it harder to detect real CSAM. If we allow one but not the other in society, we need to be able to consistently and accurately distinguish between them.
There are already issues involving sextortion and blackmail, both of minors and adults, and allowing AI-CSAM is going to make it much worse.
FYI, ECLAG citing the Stanford study might not be the best support of their argument. There's a game of telephone where ECLAG's position paper has (their boldface):
Similar to pornography, the stimulation arising from watching CSAM, including AI-CSAM, is proven to often increase CSAM addiction and even fuel existing fantasies of in-person child sexual abuse. [7]
…with footnote 7 citing the Stanford paper, which says something similar but in much milder words. The most relevant part I could find:
However, neither the viability nor efficacy of such a practice has been sufficiently studied and many warn that, for some, this material could have an adverse effect—lowering barriers of inhibition or contributing to existing fantasies of real-world abuse. [16]
…which is much less confident wording. Insufficient study? And who's the "many"? Grumble grumble avoid weasel words next time, but I presume they mean the studies cited by the summary paper that is footnote 16 (my cherrypicking from the relevant section):
The material has been argued to potentially serve as a gateway to contact offending (Maras and Shapiro 2017), as the offender may become desensitized to passive viewing, finding it to be insufficient over time (Schell et al. 2007).
While engaging with abusive material does not inevitably result in contact offending (Henshaw et al. 2015), there are effects to the exposure of such.
Given VCSAM is related in content to CSAM, the ongoing effects of exposure to VCSAM is an important avenue for future research.
So there are real concerns, founded in real pathways by which an increase in AI imagery might increase physical harm to children. But there's also a lot of carefully hedged language (and the cliché of researchers saying more research is needed). ECLAG's wording of "is proven" feels like a stretch when describing the current research, even assuming their conclusion is correct.
I didn't dig any further. But I will say that, after that deep dive, I'm more concerned than I was beforehand re: AI imagery. Most of my increased concern is around the second-order effects, though, and Pandora's box is already open to varying degrees on that front. (And unfortunately that's a larger issue than just CSAM cases—regardless of the crime, seeing is no longer believing when it comes to evaluating evidence.)
Thanks for this. You make some really solid counterpoints that were worth taking the time to articulate. I appreciate it!
If an AI can effectively generate lifelike images of children, WHAT was it trained on?
In the abstract it knows what a child looks like with clothes on, and it knows what a naked person looks like. It doesn't have to be trained specifically on CSAM to be able to generate it. You might as well say "If an AI can effectively generate lifelike images of aliens, WHAT was it trained on?" It can generate images of things outside of the specific images in its dataset because it has some ability to generalize, and both naked adults and clothed children are common in the images.
CSAM was actually found in training data. Stanford Internet Observatory researchers found over 3,000 suspected instances of CSAM in LAION-5B, the dataset used to train Stable Diffusion, with 1,008 externally validated. Their report literally states that “having possession of a LAION‐5B dataset populated even in late 2023 implies the possession of thousands of illegal images.”
Source: https://purl.stanford.edu/kh752sm9123
The problem I see with this argument is that it assumes that an abundant supply of fake CSAM will lead to lower demand for real CSAM. I think this is unfounded; it may actually act as a gateway drug and increase it.
Do you think that movies showing fake murders lead to an increased demand for movies showing real murders? Do "incest" themed adult movies lead to more actual incest?
I know you've already gotten several replies about this by now, but there's one specific thing I haven't seen mentioned. Suppose AI-generated CSAM is legalized. We're moving into an era where high-end AI generation is becoming increasingly realistic. Normalizing the possession of a near-identical replication of an illegal thing makes it infinitely harder to prosecute individuals who have the authentically illegal thing.
Suppose a man is caught with a hard drive full of terabytes of genuine CSAM. He says "Officer, this is actually all AI generated, so everything here is legal."
Oh good. We've got AI now fully responsible for their own crimes, with no accountability chain.
Corporate CEOs need to be liable for any automated law-breaking. Which would put Elon in multiple crosshairs immediately.
Yeah, this stuff is so frustrating. It seems like a company's AI product can do literally anything, and the most we hear from the company is "oopsie. We made a booboo. This stuff is really hard!" and that's it.
If a random guy was posting CSAM on his Twitter account, he'd have the PD kicking his door down within a couple of hours and be branded as a predator for life.
When a multi-billionaire does it, he doesn't even have to give a serious comment when asked about it.
This trend of treating LLMs as wild uncontrollable animals with conscious thoughts has to stop. They're machines that act the way they do because of their programming. The people doing the programming have to be held responsible.
For all of it. From the copyright infringement to the deaths to the CSAM. All of it.
I think it's got to be on the person using the tool.
I can't sue Toyota if a drunk driver hurts me or a gun company if someone shoots me.
If we held tech companies responsible every time any of their products was used by a criminal, the government could shut down any company at any time.
Now move your argument into 2026.
If a Tesla mows down a child while driving autonomously?
If a police drone/spot-dog carrying a gun shoots someone accidentally?
If your social media algorithm/chatbot encourages/promotes suicide/self-harm?
If your misinformation campaign causes a full-on government revolt?
It's not about shutting down the government. It's about forcing corporations (which are legal persons) to face legal consequences for their actions.
I would treat that more like a faulty product.
It's the difference between someone using their car to hit me, or my own car malfunctioning and injuring me. One is the company's fault, and one is the fault of a 3rd party.
Ok, so now we're getting somewhere. Who is responsible when a faulty product starts producing child pornography? And none of this "the company". Who is responsible. Who is at fault, who suffers the consequences, and who prevents it in the future?
The answer is no one. I'll skip ahead for you. Companies are immortal and immune from consequences, because they socialize the risk to the shareholders instead of the executives.
The person requesting the 'faulty product' to make CSAM? These AIs don't just start producing images on their own; you have to describe in detail what you want the AI to produce.
Responsibility for accidents in large organizations is often complicated. I think there are some more useful questions to ask:
How fast do they fix the problem?
What are they doing to ensure that nothing similar ever happens again?
A responsible, safety-conscious organization will have processes to drive accident rates towards zero. This often has little to do with figuring out which employee is to blame. Sometimes someone needs to be fired due to malice, but that’s often not the case.
But I have no confidence that X is like that, due to its leadership.
Maybe that's true for some tools, but a social media website is more like a venue (like a concert hall or a zoo) than a tool. Anything that Twitter provides as a built-in feature should be (mostly) safe.
If people smuggle something in, that's different; then their responsibility is whether they are providing sufficient security.
Has a human made a statement? It's not clear what it means when "Grok" is posting vs. "xAI posts on Grok's account", or whether that's actually a human being in the second case.
Afaict the only statement the company has made is "Legacy Media Lies".
Just another example of users voluntarily enjoying AI chatbots I suppose.
I imagine it means that someone at the company posted it to the Grok account on Twitter, but it's weirdly phrased as if the bot did it.
Right, it seems like that, but it isn't clear, and if they are, it's weird that they didn't align their email response with their Twitter messaging, especially with something this severe.
I don't like Musk, and I've long argued that company owners and C-suite executives (depending on what exactly is being discussed) should be held more responsible for a company's output and its impact on society.
What worries me, though, is whether regulating what can and can't be done with AI in this manner is legally consistent. Several big AI models and tools are available for free, and anyone can run them on a decently powerful home setup (I've done this myself, though I don't currently have anything installed). When some idiot produces one of these images at home, who is responsible? A bunch of open source software maintainers who wrote some Python for free? I don't think so. Probably the legal responsibility belongs to the creator and/or distributor of the image, unless we were to decide that training a model by infringing on other people's copyright shouldn't be allowed in the first place, which isn't going to happen because laws don't apply to the billionaires with a vested interest in perpetuating these technologies, and everyone else is already addicted to AI.
But then said AI service owners, who can hire expensive lawyers, might argue that their products are general purpose products trained on everything, that anyone can use to produce anything, and that it's ultimately not technologically feasible to perfectly prevent unsavory output. In much the same way that we already have all kinds of neutral carrier protections for all manner of services and infrastructure, isn't it likely that we'll end up seeing the same sort of thing for AI services?
Worse, if the opposite happens - if this results in neutral carrier protections crumbling in other areas - is that something we want?
I think if you're running a public zoo, protecting the public is your problem. (Possibly with some shared blame if visitors are behaving like idiots.) If someone's pet pitbull is harassing strangers, it's their problem. Blaming the owners doesn't seem inconsistent to me.
I also think it's fair to judge organizations by how they respond to unexpected problems. Do they fix it quickly? Does it stay fixed? Also, compensation for the people affected might be in order.
Edit: not sure how open source software fits into that. I'm a bit more comfortable releasing libraries than apps, since the person releasing the app is more obviously the 'owner.'
Should my ISP be responsible for the actions of its customers (when using the internet)? Why/why not?
Privacy, mostly. When the traffic is encrypted (or should be), there's not a whole lot that they can do.
I do think they ought to be doing more about botnets and denial of service attacks, though. Cutting off access seems too harsh now that we depend on Internet service so much, but they could start up a firewall and maybe reduce speeds until you fix it.
there's not a whole lot that they can do
There are proposals like chatcontrol (all data is required to be sent to the government prior to encryption), which I'm sure you knew was where I was going, since they're my own primary concern. But even though nothing is more of a zoo than the internet as a whole, I can frame the same question in myriad ways that don't require accounting for lack of access to users' data, for example:
Should Google be held responsible/liable if some rando uses Maps to plan a terrorist attack, or let's say a jewel heist in a famous Parisian museum? Can they reliably tell something is off about the user's behavior patterns, if they put in enough effort?
Note that I accept that it's perfectly possible your answer(s) would be yes, and I'll respect that. Especially when it comes to CSAM, it's a strongly unifying issue - there's a reason why it's a go-to for lawmakers attempting to erode privacy protections. But there's a lot of good that comes out of having carrier neutrality too.
Yeah, I think society is going to have to decide about what's a tool and what's supposed to be a walled garden. I don't think making the entire Internet into a walled garden is practical or desirable, but there are things that could be done to make the Internet less dangerous.
An example of that: it should be trivial to make an adults-only website. Like, there's a software setting where you set adults_only=true, and that enables some kind of HTTP header, and all mainstream browsers know they need to check if the user is an adult or not. (And sure, with a bit of programming knowledge you could work around it, but it would help.)
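For concreteness, here is a minimal, purely hypothetical sketch of the server side of that idea; the "Adults-Only" header name, and the assumption that browsers would check it, are inventions for illustration rather than any existing standard:

```python
# Hypothetical sketch only: the "Adults-Only" header, and the idea that
# mainstream browsers would enforce it, are assumptions, not a real standard.
from http.server import BaseHTTPRequestHandler, HTTPServer

ADULTS_ONLY = True  # the imagined site-wide adults_only=true setting


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        if ADULTS_ONLY:
            # A compliant browser would see this header and run its own age
            # check before rendering; nothing is verified on the server side.
            self.send_header("Adults-Only", "1")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>site content</body></html>")


if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```

The point of a scheme like this is that the site only declares itself; enforcement, if any, happens entirely in the client.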
Maybe it's not quite that simple, but this probably could have been done decades ago if society and the browser vendors in particular made it a priority.
Yeah, I don't think we'll see that kind of solution because kids would break it pretty much immediately. Speaking as someone who started programming at 8, before the Internet was even a thing (no Stack Overflow, much less vibe coding!).
But I agree with you there. Increasingly, companies are just using invasive third-party age verification processors, and that's very much a worst-of-both-worlds scenario. I think ideally age verification should function like third-party authentication protocols, wherein the client, service provider, and authentication provider exchange short-lived tokens in a triangle, except the "age provider" should be the government. In the EU the government already knows who everyone is, so no additional trust is required. Let's see how the solution they are purportedly working on ends up functioning.
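As a rough sketch of the triangle being described: every name, key, and token format below is invented for illustration, and a real scheme would use government-issued asymmetric signatures and a standard token format rather than this toy shared-secret HMAC:

```python
# Toy illustration of a short-lived "age token" flow. All names are made up;
# a shared HMAC secret is used only to keep the sketch self-contained.
import base64
import hashlib
import hmac
import json
import time

GOV_KEY = b"demo-only-secret"  # stand-in for the age provider's signing key


def issue_age_token(user_is_adult: bool, ttl_seconds: int = 300) -> str:
    """Age provider (the government, in the proposal above) issues a short-lived
    token that asserts only 'adult: yes/no', not who the user is."""
    claims = {"adult": user_is_adult, "exp": int(time.time()) + ttl_seconds}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(GOV_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig


def service_accepts(token: str) -> bool:
    """Service provider checks the signature and expiry; it never sees a name,
    ID number, or anything else about the client."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(GOV_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(payload))
    return claims["adult"] and claims["exp"] > time.time()


# Client fetches a token from the age provider, then presents it to the site.
token = issue_age_token(user_is_adult=True)
print(service_accepts(token))  # True until the token expires
```

At least in this toy form, the service only ever learns "adult, not expired," and the age provider never learns which site the token was presented to.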
I do have programming knowledge, and I'm not sure I could break my simple scheme using only an iPad with parental controls on. If there's a reliable way for the device to figure out which websites are adult-only, and you don't have programming tools on the device, the most obvious solutions are to install a VPN (which maybe the parental controls don't let you do) or use another website to forward the web pages.
Building a website isn't so hard, but you need somewhere to build it. And if those Internet services are mostly adult-only themselves, it might be hard to find.
If you have outside help, it's a lot easier, though.
Yes, that's what I meant. Each kid doesn't have to figure it out individually, but kids collectively are absolutely capable of figuring it out and helping each other. And many kids love to be in adult spaces just because they're not supposed to.
I still think getting to the point where locked-down devices are mostly harmless would be a step up. Also, cooperating to beat the challenge is a lesson in itself :-)
From the article:
Elon Musk’s chatbot Grok posted on Friday that lapses in safeguards had led it to generate “images depicting minors in minimal clothing” on social media platform X. The chatbot, a product of Musk’s company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts.
Screenshots shared by users on X showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.
[...]
Many users on X have prompted Grok to generate sexualized, nonconsensual AI-altered versions of images in recent days, in some cases removing people’s clothing without their consent. Musk on Thursday reposted an AI photo of himself in a bikini, captioned with cry-laughing emojis, in a nod to the trend.
Grok’s generation of sexualized images appeared to lack safety guardrails, allowing for minors to be featured in its posts of people, usually women, wearing little clothing, according to posts from the chatbot. In a reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring although it said “no system is 100% foolproof,” adding that xAI was prioritising improvements and reviewing details shared by users.
I wish we had an administration with some teeth. I’ll disagree with many commenters here and say that I do not believe corporate executives should be held criminally liable for AI behavior. I think the separation of liability between owner and business is fairly important for modern-day commerce. This kind of error can happen from employee sabotage or negligence, and the owners shouldn’t be taking on liability for their entire staff.
Still, I would consider this pretty blatant criminal behavior enabled by a corporation. There’s precedent for punishment of corporate criminal behavior: reparative and punitive fines and seizure of all assets used to conduct criminal activity. For something as egregious as generation of CSAM, a strong federal government would be seizing the AI model. This shit is not that hard to prevent (only Grok has this problem and I’m sure bad actors have tested every model for vulnerabilities) and this just shows that xAI is, at best, grossly incompetent, and at worst, choosing to facilitate the production of CSAM for business gain. They clearly cannot be trusted with their technology; police officers will seize delivery vehicles used to deliver illegal drugs. Why not AI models used to produce illegal content?
It has teeth. It just leaves them on the nightstand at times.
And what teeth it has aren't pointed at actual pedophiles. Because they'd be pointing at themselves.
The world is absurd like that.
What amazes me is that people elect this guy who became famous by being an ass on television. Then they act surprised when it turns out this guy is in fact an ass.
Or when he was caught on camera saying he grabs women by the pussy and was known to be a creep while owning child and teen beauty pageants. And then again people are surprised when he turns out to be a pedo.
Or that a guy known for tax evasion and shady business practices that only enrich himself might not be the guy you want to "drain the swamp". He loves the swamp so much he could tell Shrek to get out of HIS swamp.
Or that a guy who went bankrupt not once, not twice, but on six different occasions might not have the best idea of good economic policy.
Grok Is Being Used to Depict Horrific Violence Against Real Women
In addition to the sexual imagery of underage girls, the women depicted in Grok-generated nonconsensual porn range from some who appear to be private citizens to a slew of celebrities, from famous actresses to the First Lady of the United States. And somehow, that was only the tip of the iceberg.
When we dug through this content, we noticed another stomach-churning variation of the trend: Grok, at the request of users, altering images to depict real women being sexually abused, humiliated, hurt, and even killed.
Much of this material was directed at online models and sex workers […]
X blames users for Grok-generated CSAM; no fixes announced
On Saturday, X Safety finally posted an official response after nearly a week of backlash over Grok outputs that sexualized real people without consent. Offering no apology for Grok’s functionality, X Safety blamed users for prompting Grok to produce CSAM while reminding them that such prompts can trigger account suspensions and possible legal consequences.
“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety said. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
[...]
X did not immediately respond to Ars’ request to clarify if any updates were made to Grok following the CSAM controversy. Many media outlets weirdly took Grok at its word when the chatbot responded to prompts demanding an apology by claiming that X would be improving its safeguards. But X Safety’s response now seems to contradict the chatbot, which, as Ars noted last week, should never be considered reliable as a spokesperson.
[...]
While some users are focused on how X can hold users responsible for Grok’s outputs when X is the one training the model, others are questioning how exactly X plans to moderate illegal content that Grok seems capable of generating.
To shift away from the moral bankruptcy needed to make a machine that does this, and the consequences of it existing: I'm curious about the How of this particular situation.
If this is an "emergent" behaviour, the timing is very convenient. Because at the start of December there was a new light/mid-weight model framework called something like Zee Image. It's one that currently focuses more on realistic people and artificial "photography". And naturally, it immediately started to be trained for deepfakes. There are already uncensored versions of it along with a full library of LoRAs for celebrities, models, athletes and, best of all, politicians. (Sidebar: is there anyone I can report these types of things to? I know the bigger model hosting sites draw a line at real people, but it's not that hard to find this sort of crap and it's got to be illegal.)
Don't want to speak ill of the lovely AI specialists someone like Elon Musk would hire. But I would be surprised if all those AI researchers are just copying other people's homework. And in this case they pushed an uncensored version of an open source project. Can't imagine why they would even have an uncensored model in a build, but it seems like an innocent enough mistake.
I started developing this theory when Sora released 6 months after public releases of the Chinese Wan Video Model. Around when people were getting past the model's tendency to favor Asian features and there was already a library of different styles and animations. It's not like these companies really care about property rights or intellectual integrity, and what's one more thing after strip-mining the entire internet.
Yeah, if it’s open source, maybe they used it. Who knows?
But there are plenty of other ways for ideas to spread in a fast-moving field.
For example, when AI researchers come up with a new technique, they will often write a paper about how they did it, because that’s how you make your reputation and it also helps with hiring. There is also sometimes open source code to go along with a paper.
If the technique isn’t published, I expect that the AI labs are watching each other and attempting to work out how other firms did it.
I wouldn’t expect all that much direct copying of proprietary code because they have different systems and rewriting it to work locally is part of understanding it.
6 months is a pretty long time in the ML world and probably long enough to implement another company's designs from scratch based on their published papers on the topic when you're as big as the companies involved here.
So we're not even that many steps away from calling being opposed to CSAM "woke," are we? They've already been attempting to shift the window on age of consent.
Merriam-Webster's word of the year 2026: Ephebophilia.