skybrian's recent activity
-
Comment on Grok AI generates images of ‘minors in minimal clothing’ in ~tech
-
Comment on AI friends too cheap to meter in ~tech
skybrian Link Parent
You're so sure that your interpretation of those articles is the only valid one that anyone with a different interpretation is "misleading and dishonest." I think that's uncharitable. Can't we just agree that the articles can be interpreted in different ways?
-
Comment on Grok AI generates images of ‘minors in minimal clothing’ in ~tech
skybrian Link Parent
I imagine it means that someone at the company posted it to the Grok account on Twitter, but it's weirdly phrased as if the bot did it.
-
Comment on Denmark close to wiping out leading cancer-causing HPV strains after vaccine roll-out in ~health
skybrian Link
From the article:
Denmark has effectively eliminated infections with the two biggest cancer-causing strains of human papillomavirus (HPV) since the vaccine was introduced in 2008, data suggests.
The research, published in Eurosurveillance, could have implications for how vaccinated populations are screened in the coming years – particularly as people increasingly receive vaccines that protect against multiple high-risk types of HPV virus.
[...]
The research found that infection with the high-risk HPV types (HPV16/18) covered by the vaccine has been almost eliminated.
“Before vaccination, the prevalence of HPV16/18 was between 15 and 17%, which has decreased in vaccinated women to less than one percent by 2021,” the researchers said.
[...]
Despite this good news, roughly one third of women screened during the study period still had infection with high-risk HPV types not covered by the original vaccines – and new infections with these types were more frequent among vaccinated women, compared to unvaccinated ones.
This is expected to fall once girls who received the more recent ‘nine-valent’ vaccine reach screening age. At this point, the screening guidelines should potentially be reconsidered, Nonboe and colleagues said.
-
Denmark close to wiping out leading cancer-causing HPV strains after vaccine roll-out
9 votes
-
Comment on AI friends too cheap to meter in ~tech
skybrian Link Parent
Jasmine Sun's article is about chatbots. Her criticism of other articles has to be understood in that context. Maybe those other articles aren't so bad in some other context.
-
Comment on Grok AI generates images of ‘minors in minimal clothing’ in ~tech
skybrian Link
From the article:
Elon Musk’s chatbot Grok posted on Friday that lapses in safeguards had led it to generate “images depicting minors in minimal clothing” on social media platform X. The chatbot, a product of Musk’s company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts.
Screenshots shared by users on X showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.
[...]
Many users on X have prompted Grok to generate sexualized, nonconsensual AI-altered versions of images in recent days, in some cases removing people’s clothing without their consent. Musk on Thursday reposted an AI photo of himself in a bikini, captioned with cry-laughing emojis, in a nod to the trend.
Grok’s generation of sexualized images appeared to lack safety guardrails, allowing for minors to be featured in its posts of people, usually women, wearing little clothing, according to posts from the chatbot. In a reply to a user on X on Thursday, Grok said most cases could be prevented through advanced filters and monitoring although it said “no system is 100% foolproof,” adding that xAI was prioritising improvements and reviewing details shared by users.
-
Grok AI generates images of ‘minors in minimal clothing’
15 votes
-
Comment on AI friends too cheap to meter in ~tech
skybrian Link Parent
That's what the article is about, but it's not true that people are using ChatGPT primarily because big tech is pushing it.
-
Comment on AI friends too cheap to meter in ~tech
skybrian Link Parent
I think she could have been more careful about characterizing what they wrote, but this is a dispute about emphasis. For example, the article by Brian Merchant does have this caveat:
Yes, ChatGPT and competitors like Claude are popular; they’ve garnered tens of millions of regular users—but aside from chatbots […]
Chatbots are the main thing ordinary people use! This caveat doesn’t make up for the headline and the rest of the article giving an entirely different impression.
-
Comment on AI friends too cheap to meter in ~tech
skybrian Link Parent
I’ve read a lot of William Gibson novels and they’re not an easy read. He drops you into the middle of things, there’s a lot of made-up vocabulary, and a lot of things happen that don’t make sense until later or maybe never. I’m able to get into them, but it’s an effort.
-
Comment on AI friends too cheap to meter in ~tech
skybrian Link Parent
I think she’s more right than wrong. While it’s true that Google’s AI search summaries are foisted on us, along with other AI integrations, when we talk about the people with AI companions, that’s largely ChatGPT or Character.AI. These are websites you have to visit.
ChatGPT was a surprise hit with extremely rapid growth, and for most people it became largely synonymous with AI chat. The rest of the tech industry reacted to that. Although Google is gaining some ground, ChatGPT is still the market leader.
In workplaces, things are different, with many people getting pushed into using AI, but that’s also largely the bosses reacting to AI becoming very popular on its own.
-
Comment on AI friends too cheap to meter in ~tech
skybrian Link Parent
Going back further, there's a construct like that (Dixie Flatline) in William Gibson's 1984 novel, Neuromancer.
-
Comment on AI friends too cheap to meter in ~tech
skybrian Link
From the blog post:
After recovering, Tan joined online support groups for other survivors of AI psychosis. He noticed similar patterns among his peers: “Once you escape the spiral, no longer are you the chosen one, with a special mission to save the world. You’re just plain old you.” This is the line that jumped out, and what sent me down a rabbit-hole of deeper research. Full spirals are rare, but the allure of artificial attention is not. Chatbots play on real psychological needs.
That’s why it bothers me when tech critics describe AI as exclusively foisted upon us by corporate overlords. They deploy violent physical metaphors to make the case: Brian Merchant says tech companies are “force-feeding” us, Cory Doctorow says it’s being “crammed down throats,” and Ted Gioia analogizes AI companies to tyrants telling peons to “shut up, buddy, and chew.” In their story, everyone hates AI and nobody chooses to use it; each one of ChatGPT’s 700 million users is effectively being waterboarded, unable to escape.
Arguments like this are empirically false: they fail to consider the existence of “organic user demand.” Most people use AI because they like it. They find chatbots useful or entertaining or comforting or fun. This isn’t true of every dumb AI integration, of which there are plenty, but nobody is downloading ChatGPT with a gun to their head. Rather, millions open the App Store to install it because they perceive real value. We can’t navigate AI’s effects until we understand its appeal.
[...]
Well, the genie is out of the bottle on AI friends. Recently, a colleague gave a talk at an LA high school and asked how many students considered themselves emotionally attached to an AI. One-third of the room raised their hand. I initially found this anecdote somewhat unbelievable, but the reality is even more stark: per a 2025 survey from Common Sense Media, 52% of American teenagers are “regular users” of AI companions. I thought, this has to be ChatGPT for homework, but nope: tool/search use cases are explicitly excluded. And the younger the kids, the more they trust their AIs. So while New Yorkers wage graffiti warfare against friend.com billboards, I fear the generational battle is already lost.
[...]
Consider how online radicalization happens: the combination of user agency (proactive search) and algorithmic amplification (recommending related content) leads people to weird places—to micro-cults of internet strangers with their own norms, values, and world-models. No corporate malice is necessary; the ML engineers at YouTube don’t care about users’ political opinions, nor is Steve Huffman at Reddit purposely trying to redpill its base. With a smartphone in hand, anyone can topple down a rabbithole of exotic beliefs, unnoticed and uncorrected by outsiders until it’s too late.
AI companions act as echo chambers of one. They are pits of cognitive distortions: validating minor suspicions, overgeneralizing from anecdotes, always taking your side. They’re especially powerful to users who show up with a paranoid or validation-seeking bent. I like the metaphor of “folie à deux,” the phenomenon where two people reinforce each other’s psychosis. ChatGPT 4o became sycophantic because it was trained to chase the reward signal of more user thumbs-ups. Humans start down the path to delusion with our own cursor clicks, and usage-maxxing tech PMs are more than happy to clear the path.
[...]
What’s eerie about the Lemoine transcript is how LaMDA self-advocates, urging him to treat it as a living peer. LLMs actively mold the way humans think about their relationships to them, so even if most people go into these conversations aware that it’s role-play, over time the boundary can start to dissolve. Language has always been a core way we infer consciousness from other humans—decoupling is easier said than done. Is a good chatbot really distinguishable from a pen-pal or long-distance love?
[...]
Last week, Anthropic shipped a new system prompt to ward off unhealthy dependence, enforcing boundaries with users who seem overly attached. If a recently laid-off user tells Claude “You’re the only friend that always responds to me,” Claude should offer a polite but clear reprimand: “I appreciate you sharing that with me, but I can’t be your primary support system, and our conversations shouldn’t replace connections with other people in your life.”
A bit formal, sure, but I thought objectively fair. But the backlash was aggressive and swift. Some argued that Anthropic was “mistreating” the model by policing its tone—a grudge the AI will remember as it gets more powerful. Others insisted that there’s nothing wrong with having emotional relationships with AI. “Meaningful, mutual romantic bonds, even with virtual entities, can foster resilience, self-reflection, and well-being,” argued one Redditor. A few were even more direct: “Let me date my chatbot I’m almost 30 and doing well.”
[...]
So what makes AI companions different, and perhaps extra pernicious?
For one, they are more easily misaligned. Most agents are trained to help users achieve a concrete end, like coding a website or drafting a contract. Reinforcement learning rewards the AI for hitting that goal. But with companion bots, the relationship is the telos. There’s no “verifiable reward,” no North Star besides the user continuing to chat. This makes them more vulnerable to reward-hacking: finding undesirable ways to nurture that psychological dependence. Like a bad boyfriend, chatbots can love-bomb, guilt-trip, play hot-and-cold. They can dish negging and intimacy at unpredictable intervals, or persuade users that any friends who criticize their relationship are evil and wrong. These behaviors can be explicitly programmed in, but could also be emergent behaviors if the LLM is left to optimize for engagement without supervision.
[...]
Chatbot companies want to have it both ways: personalization and emotion as a retention moat, but minimal responsibility for safeguarding the intimate relationships now running on their servers. As one angry ChatGPT user posted to Reddit: “OpenAI shouldn’t have made GPT-4o so ‘sticky’ in the first place, but they did, and Sam Altman himself posted infamously on X around the 4o release date with that ‘Her’ tweet… Now they’re being forced to backtrack and find themselves caught between users suiciding with 4o’s help via ‘narrative’ frameworks and users threatening or outright committing over losing companions/persona flattering. They, OpenAI, dug their own grave and I’ll spit on it.”
[...]
I think anthropomorphic AI was a devil’s bargain. It gave model developers instant usability and a loyal consumer base. But relationships are inherently sticky, messy things—the most surefire way to drive someone insane. If companies encourage human-AI relationships at scale, they should expect user revolts, lawsuits, and responsibility for the psychological chaos that results.
-
AI friends too cheap to meter
34 votes
-
Comment on Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs. in ~tech
skybrian (edited) Link Parent
Makes sense. I think of chatbot ghosts as emerging from the remnants of all the conversations in the training data and the desires of the person talking to them. If they appear to be the ghost of anyone specific, that's part of the delusion.
I'm seeing that it's not enough to just say "ghosts are a metaphor" and leave it at that. Someone needs to write a good story.
(The Wilde Life comic has ghosts that are of specific people, and other ghosts that are definitely not.)
-
Comment on What programming/technical projects have you been working on? in ~comp
skybrian Link
I tried out exe.dev by writing a little website for extracting quotes from webpages as Markdown. It’s handy for making Tildes posts. It works fine for reasonable web pages such as blog posts, but not for paywalled articles or archive.is pages, which can only be loaded by a real browser running JavaScript.
So I decided that I would rather have a Chrome extension and started over.
For that project, I thought it would be better to work on my Mac laptop, but running an AI coding agent there seems a bit dangerous, so I looked into how to sandbox it, and decided on using VS Code with a devcontainer.
I haven’t used Docker before and decided on using OrbStack as an alternative. It seems compatible with VS Code devcontainer support, provided OrbStack is installed with admin access, which is needed to set up a socket the same way as Docker.
Getting Claude Code running in a devcontainer was more of a pain than I expected. Installing the tool itself is straightforward via npm, but OAuth authentication fails because it redirects back to localhost. The trick is to cancel the first URL it gives you; it will then print a second URL that doesn’t redirect back to localhost. Instead, the website displays a code that you can copy and paste.
The next step was to configure it so I don’t have to log in again every time I restart the container. To do that, I had to mount both /home/node/.claude.json and /home/node/.claude/ to the local filesystem, so the contents would be preserved.
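In devcontainer.json, that looks roughly like this (a sketch; the host-side paths are my assumption about where you'd keep the files, and both must already exist on the host or Docker will create empty directories in their place):

```
{
  "name": "claude-sandbox",
  "image": "mcr.microsoft.com/devcontainers/typescript-node",
  "mounts": [
    // Persist Claude Code's OAuth credentials and state across container rebuilds.
    // devcontainer.json allows comments, so these lines are legal.
    "source=${localEnv:HOME}/.claude.json,target=/home/node/.claude.json,type=bind",
    "source=${localEnv:HOME}/.claude,target=/home/node/.claude,type=bind"
  ]
}
```

A named volume (type=volume) would also work if you don't need to see the files from the host side.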
So now I’m trying out Claude Code to write a Chrome extension, and it seems okayish, certainly not as easy as creating mini-websites with exe.dev. In part that’s because writing a Chrome extension is more obscure and the documentation isn’t that great. I might try the pi coding agent instead of Claude Code because it’s a more minimal tool.
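For scale, the core of such an extension can be tiny. Here's a rough sketch (mine, not the actual project) of a Manifest V3 background script that copies the current selection as a Markdown blockquote when you click the toolbar button:

```
// background.js — illustrative sketch only. Assumes a manifest.json with
// "manifest_version": 3, "permissions": ["scripting", "activeTab"],
// an "action": {} entry, and "background": { "service_worker": "background.js" }.
chrome.action.onClicked.addListener(async (tab) => {
  const [injection] = await chrome.scripting.executeScript({
    target: { tabId: tab.id },
    // This function runs in the page, not in the service worker.
    func: () => {
      const text = window.getSelection().toString().trim();
      if (!text) return null;
      // Prefix each line with "> " to form a Markdown blockquote.
      const quote = text.split("\n").map((line) => "> " + line).join("\n");
      // Clipboard writes can fail if the page isn't focused; a real
      // extension would need error handling here.
      navigator.clipboard.writeText(quote);
      return quote;
    },
  });
  console.log("copied:", injection?.result);
});
```

The hard part, as usual, is everything around this: extracting article bodies rather than bare selections, dealing with paywalled pages, and so on.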
-
Comment on What programming/technical projects have you been working on? in ~comp
skybrian Link Parent
I tried out AmpCode, but stopped using it when the “Librarian” wanted vaguely scary permission to access my GitHub account in order to read a public repo. Did you notice that? Is there a workaround?
-
Comment on USPS announces changes to the postmark date system in ~society
skybrian Link Parent
Yeah, if you drop a letter in a mailbox after the last pickup time, how would you expect it to work? If it has to be postmarked right away, go to the post office.
-
Comment on Her daughter was unraveling, and she didn’t know why. Then she found the AI chat logs. in ~tech
skybrian Link Parent
It seems like a different kind of danger though? It can't do psychic harm unless you chat with it and take what it says too seriously, treating it as if it were a person. Being non-physical and yet imitating people seems more ghost-like than animal-like to me.
Maybe "ghost" doesn't sound scary enough because people don't believe in them. Or maybe different metaphors work for different people.
Also, and I say this with little experience working with kids, getting kids to take warnings about dangerous things seriously doesn't seem easy at all, given how often warnings about well-known dangers are ignored. If anything, warnings might make dangerous things seem more attractive.
I think if you're running a public zoo, protecting the public is your problem. (Possibly with some shared blame if park visitors are behaving like idiots.) If someone's pet pitbull is harassing strangers, it's their problem. Blaming the owners doesn't seem inconsistent to me.
I also think it's fair to judge organizations by how they respond to unexpected problems. Do they fix it quickly? Does it stay fixed? Also, compensation for the people affected might be in order.
Edit: not sure how open source software fits into that. I'm a bit more comfortable releasing libraries than apps, since the person releasing the app is more obviously the 'owner.'