So we're not even that many steps away from calling being opposed to CSAM "woke," are we? They've already been attempting to shift the window on age of consent.
Oh good. We've got AI now fully responsible for its own crimes, with no accountability chain.
Corporations' CEOs need to be liable for any automated law-breaking. Which would put Elon in multiple crosshairs immediately.
Yeah, this stuff is so frustrating. It seems like a company's AI product can do literally anything, and the most we hear from the company is "oopsie. We made a booboo. This stuff is really hard!" and that's it.
If a random guy was posting CSAM on his Twitter account, he'd have the PD kicking his door down within a couple of hours and be branded as a predator for life.
When a multi-billionaire does it, he doesn't even have to give a serious comment when asked about it.
This trend of treating LLMs as wild uncontrollable animals with conscious thoughts has to stop. They're machines that act the way they do because of their programming. The people doing the programming have to be held responsible.
For all of it. From the copyright infringement to the deaths to the CSAM. All of it.
I think it's got to be on the person using the tool.
I can't sue Toyota if a drunk driver hurts me or a gun company if someone shoots me.
If we held tech companies responsible every time any of their products was used by a criminal, the government could shut down any company at any time.
I’ve had this conversation too many times online and I’m exhausted from playing the devil’s advocate, or rather the pedophile’s advocate (seriously, it’s not a camp I enjoy aligning myself with)… but I still think it’s worth asking whether fictional CSAM of non-existent “minors” should really be treated by law the same way as real CSAM that only exists by actually victimizing real people.
Think about sociopaths. In any population some n% of people will have that condition. I think most of us who have looked into sociopathy are aware of this, and try not to think about it too much, because we can’t just “purge” all sociopaths from the world, though we might fantasize about that. This is their world too; they have all the same human rights as anyone else. Better to find ways to coexist and help them identify their pathology and manage it, minimizing the damage they can do to others.
I think pedophilia’s probably not that different from sociopathy. These people are here, they live among us, and likely they don’t have a lot of control over the urges they feel. I’d rather extend some empathy about that, and help them channel those impulses in ways that don’t hurt anyone else, than shun and criminalize them for having a mental illness.
This isn’t really a statement about xAI or Grok; I’m just commenting on the assumptions people tend to make about how certain unsavory segments of the populace ought to be treated. I’ve said things to this effect before (maybe not on Tildes, I can’t remember) and got piled on for being, apparently, pro-pedophile. I just feel this is one of those ethical/social issues where people 100 years in the future are going to look back on us as barbarians for not reckoning with it sooner. Personally, if generative AI can (even partially) satisfy the demand for content that currently requires a whole absolutely horrific underground economy of child trafficking and abuse… well, we should maybe do a more thorough cost/benefit analysis before writing it off entirely.
If you can provide evidence of your idea, which I wouldn't say you've done in this comment, I don't think your viewpoint should be shunned. However, I think this is an inappropriate topic to bring this rant to. The article involves Grok being used to create non-consensual porn of real people. And if it's willing to create images of minors, it can certainly do that to real children as easily as it does to adults. No child deserves to have this happen to them, but it is becoming a real issue with the rapid advancement and minimal regulation of the new technology.
Thanks for the clarification, I missed the point in the article where these are using real people’s likenesses. That’s not victimless.
Well I’d like to say this is the first time I’ve climbed up on my soapbox at an inappropriate time, but that wouldn’t be accurate either, lol
Has a human made a statement? It's not clear what it means when "Grok" is posting versus when "xAI posts on Grok's account", i.e. whether the second case is actually a human being or not.
Afaict the only statement the company has made is "Legacy Media Lies".
Just another example of users voluntarily enjoying AI chatbots I suppose.
I imagine it means that someone at the company posted it to the Grok account on Twitter, but it's weirdly phrased as if the bot did it.
Right, it seems like that, but it isn't clear. And if they are, it's weird that they didn't align their email response with their Twitter messaging, especially with something this severe.
From the article:
Elon Musk’s chatbot Grok posted on Friday that lapses in safeguards had led it to generate “images depicting minors in minimal clothing” on social media platform X. The chatbot, a product of Musk’s company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts.
Screenshots shared by users on X showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.
[...]
Many users on X have prompted Grok to generate sexualized, nonconsensual AI-altered versions of images in recent days, in some cases removing people’s clothing without their consent. Musk on Thursday reposted an AI photo of himself in a bikini, captioned with cry-laughing emojis, in a nod to the trend.
Grok’s generation of sexualized images appeared to lack safety guardrails, allowing for minors to be featured in its posts of people, usually women, wearing little clothing, according to posts from the chatbot. In a reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring although it said “no system is 100% foolproof,” adding that xAI was prioritising improvements and reviewing details shared by users.
I don't like Musk, and I've long argued that company owners and C-suite executives (depending on what exactly is being discussed) should be held more responsible for a company's output and its impact on society.
What worries me, though, is whether regulating what can and can't be done with AI in this manner is legally consistent. Several big AI models and tools are available for free, and anyone can run them on a decently powerful home setup (I've done this myself, though I don't currently have anything installed). When some idiot produces one of these images at home, who is responsible? A bunch of open source maintainers who wrote some Python for free? I don't think so. Probably the legal responsibility belongs to the creator and/or distributor of the image, unless we were to decide that training a model by infringing on other people's copyright shouldn't be allowed in the first place. That isn't going to happen, because laws don't apply to the billionaires with a vested interest in perpetuating these technologies, and everyone else is already addicted to AI.
But then said AI service owners, who can hire expensive lawyers, might argue that their products are general-purpose tools trained on everything, that anyone can use to produce anything, and that it's ultimately not technologically feasible to perfectly prevent unsavory output. In much the same way that we already have all kinds of neutral carrier protections for all manner of services and infrastructure, isn't it likely that we'll end up seeing the same sort of thing for AI services?
Worse, if the opposite happens - if this results in neutral carrier protections crumbling in other areas - is that something we want?
I think if you're running a public zoo, protecting the public is your problem. (Possibly with some shared blame if park visitors are behaving like idiots.) If someone's pet pit bull is harassing strangers, it's their problem. Blaming the owners doesn't seem inconsistent to me.
I also think it's fair to judge organizations by how they respond to unexpected problems. Do they fix it quickly? Does it stay fixed? Also, compensation for the people affected might be in order.
Edit: not sure how open source software fits into that. I'm a bit more comfortable releasing libraries than apps, since the person releasing the app is more obviously the 'owner.'
Should my ISP be responsible for the actions of its customers (when using the internet)? Why/why not?
I wish we had an administration with some teeth. I’ll disagree with many commenters here and say that I do not believe corporate executives should be criminalized for AI behavior. I think the separation of liability between owner and business is fairly important for modern-day commerce. This kind of error can happen from employee sabotage or negligence and the owners shouldn’t be taking on liability for their entire staff.
Still, I would consider this pretty blatant criminal behavior enabled by a corporation. There’s precedent for punishment of corporate criminal behavior: reparative and punitive fines, and seizure of all assets used to conduct criminal activity. For something as egregious as generation of CSAM, a strong federal government would be seizing the AI model. This shit is not that hard to prevent (only Grok has this problem, and I’m sure bad actors have tested every model for vulnerabilities), and this just shows that xAI is, at best, grossly incompetent and, at worst, choosing to facilitate the production of CSAM for business gain. They clearly cannot be trusted with their technology; police will seize delivery vehicles used to transport illegal drugs. Why not AI models used to produce illegal content?