Social reality is increasingly seen as a privilege; instant gratification increasingly reframed as a right.
[...]
Somehow we are too distrustful to talk to each other, and more than happy to confess to a sycophantic alien machine.
These two lines jumped out at me. I don't know if the first is really true or not, but I could see it. But the second absolutely is true. I have nothing more to add other than we seem to live in interesting times. And I'm tired of it!
That first line in particular jumped out at me too. The piece on the whole was thoughtful and well-written, but that line didn't really seem to fit with the rest of the argument to me. To me it felt like that line resonated with the writer, and they couldn't resist including it. In creative writing, writers are encouraged to "kill their darlings," meaning if something feels really clever it may just be too clever. Guessing this line was a darling that the writer couldn't kill.
Seeing how these people use LLMs is sad. The use of a piece of software to replace a romantic partner feels like masturbating to your own erotic fiction and calling it a sex life. I wonder how much is down to consensual self-delusion and how much is an inability to actually comprehend what’s happening.
People are bypassing parts/functions of their brains and letting those natural skills atrophy. It’s an entirely new category of infection risk. Opening yourself to mental sepsis.
The use of a piece of software to replace a romantic partner feels like masturbating to your own erotic fiction and calling it a sex life.
People are bypassing parts/functions of their brains and letting those natural skills atrophy.
If they had the skills in the first place. At least in the US, parents have generally done little to foster that sort of development, so learning how to navigate that world came down to the teen or young adult fumbling around and bumping into things through the required interactions of daily life until they figured it out. Today, much of that interaction is no longer required (or is even actively avoided), and the chances of people making it well into adulthood without a clue about romance, initiating, etc. are much greater; and at that age, societal forgiveness for the awkwardness of learning is greatly reduced. For somebody in that situation, a chatbot might feel like the only realistic outlet.
2023 Pickles called and said she thinks this is a personal attack on her yuri.

Well maybe if you shared said yuri with the rest of us, it wouldn't be that way. Just a suggestion. For science. Yeah.
This would be 100% unhealthy, but I had this online friend growing up who died years ago, and I have the entire history of our friendship saved on my PC, because back then the instant messaging program we used would save the chat logs to your local drive. I’m talking like, 5 years of chat logs of us two talking nearly all day every day.
I could probably feed those logs into some Anthropic model and re-create my friend.
I’d never do that, but surely I’m not the only one who’s thought of it.
The apps for that already exist. I've seen them called avatars or "death bots", more focused on photos and voice recordings iirc.

Also it's literally the main plot point of a Black Mirror episode, S2E1: "Be Right Back".

Going back further, there's a construct like that (Dixie Flatline) in William Gibson's 1984 novel, Neuromancer.

Also Beta-level simulations in Alastair Reynolds’s Revelation Space series. Really good books.

I tried reading that but could never get past the first 100 pages.
I’ve read a lot of William Gibson novels and they’re not an easy read. He drops you into the middle of things, there’s a lot of made-up vocabulary, and a lot of things happen that don’t make sense until later or maybe never. I’m able to get into them, but it’s an effort.
Hey! I have it in my "maybe read someday" list - maybe you could elaborate why? Is the writing too difficult to read, or is it because of the storyline?
I think it was more the curse of struggling to return to something. I read it when I had other important things to do, so I had to put it down for a while. Then I tried to read it again and it was too familiar to be engaging but not so familiar that I knew what happened. When that happens I often struggle to get back into it (that, and falling asleep during a film or TV show)! It’s been over a decade since I last tried, so maybe I will try again; I keep seeing it mentioned in a positive light!
Yeah, I've seen those; I don't have a whole lot of photos, videos, or voice of them, though. The bulk of our interactions was through text chats. I suppose the text chats would be way easier to replicate.
Last I saw they weren't as successful with chat but the focus was on famous historical figures who have a lot of archived writings. But I also am not a fan of the idea of trying to resurrect people in this way - I wouldn't want it for myself.
Yeah I'm not suggesting I'd ever actually want to do this, it's just an idea. Like I could give it this whole years of chat history, tell it to be this person, but with knowledge of all current events. Ask them like what would this person think of blah blah that happened last week, etc etc. But it wouldn't be my friend. It would still be just a machine that's telling me what I want to hear in the way I want to hear it.
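Mechanically, it really would be that simple. A rough sketch against the Anthropic Python SDK might look like this (the model id and file name are placeholders, and five years of logs would far exceed any real context window, so you'd have to chunk or summarize them first):

```python
# Rough sketch only: model id and file name are placeholders.
import anthropic

# Years of saved IM logs, concatenated into one text file beforehand.
with open("chat_logs.txt", encoding="utf-8") as f:
    logs = f.read()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

reply = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model id
    max_tokens=1024,
    # Ask the model to role-play the friend, grounded in the old logs.
    system=(
        "Role-play my old friend. Match their voice, humor, and opinions "
        "as evidenced by these chat logs:\n\n" + logs
    ),
    messages=[
        {"role": "user", "content": "What would you make of the news from last week?"}
    ],
)
print(reply.content[0].text)
```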
That's exactly what I really do not understand about the person who wrote this article and the people who have this problem: they're all talking to themselves. They've become addicted to talking to themselves. I can't see how that's even entertaining, much less addictive.
Because they're tricking themselves or letting themselves be tricked into believing the machine cares about them. Just like people get scammed by humans.
It's not that surprising: look at how many folks name their Roomba or get emotionally attached to their cars. We anthropomorphize a lot of things. So when the thing can talk to you, it just confirms that emotional attachment.
Arguments like this are empirically false: they fail to consider the existence of “organic user demand.”
Notions like AI being "crammed down our throats" don't hinge on the illusion that there are no happy users of ChatGPT. It can simultaneously be true that there are 700 million happy users of ChatGPT, a very non-intrusive application of an LLM (for the end user), and that AI is being "shoved down our throats".
Most people use AI because they like it.
Ignoring the dubious content of the statement itself, this is an answer to entirely the wrong question IMO. The question should be how much of our AI use is both voluntary and helpful, and how much is involuntary, detrimental to our experience, health and so on, or both. That is, if we want to know whether the idea that "AI is being crammed down our throats" is valid in some sense or "empirically false".
It's no surprise that some subset of use of AI that's entirely voluntary and explicit can be a pleasant and useful experience to us, but I'm spending more time interacting with AI through platforms where its introduction represents a general degradation of quality. Worse quality discourse, worse quality entertainment, worse quality support, worse quality reading.
I would give the author the benefit of the doubt and assume that maybe they're just discussing AI chat bots, but in that case they are making a very dishonest argument by referring to criticism that very obviously concerns AI in general, not just its use in chat bots.
I wonder if they count every time I've accidentally clicked on an AI button that they've ~~shoved~~ gently placed where I used to click for comments as "chose to use". This includes the chatbots on Meta, TikTok, Snapchat and Amazon.
Or if they count every popup offering me a new AI feature every time I use the app or website as voluntary and helpful. ಠ_ʖಠ
Google Search suddenly placing an "AI mode" item to the left of the "All" item which has always been the leftmost item for the obvious reason that it's the default mode...
Yeah, and a button on your search bar on Android (removable), and a pop-up in Google Photos for "create with AI", and Meta added AI chats to Messenger, and prompts under Facebook posts, and summaries at the top of comment sections, etc.
I don't think it's inaccurate to find it "shoved".
I think she’s more right than wrong. While it’s true that Google’s AI search summaries are foisted on us, along with other AI integrations, when we talk about the people with AI companions, that’s largely ChatGPT or Character.AI. These are websites you have to visit.
ChatGPT was a surprise hit with extremely rapid growth and for most people, ChatGPT was largely synonymous with AI chat. The rest of the tech industry reacted to that. Although Google is gaining some ground, ChatGPT is still the market leader.
In workplaces, things are different, with many people getting pushed into using AI, but this is also largely a reaction by the bosses to it becoming very popular on its own.
I think she’s more right than wrong. While it’s true that Google’s AI search summaries are foisted on us, along with other AI integrations, when we talk about the people with AI companions, that’s largely ChatGPT or Character.AI. These are websites you have to visit.
I thought I already addressed that sufficiently in my comment, but I'll elaborate.
If that alone is the subset of uses of AI she's concerned with, it's either disingenuous or ignorant of her to refer to Brian Merchant, Ted Gioia and Cory Doctorow as though they're opposed to entirely voluntary and explicit use of non-intrusive AI chatbots rather than those applications of AI which we're given less choice to engage with.
I think she could have been more careful about characterizing what they wrote, but this is a dispute about emphasis. For example, the article by Brian Merchant does have this caveat:
Yes, ChatGPT and competitors like Claude are popular; they’ve garnered tens of millions of regular users—but aside from chatbots […]
Chatbots are the main thing ordinary people use! This caveat doesn’t make up for the headline and the rest of the article giving an entirely different impression.
How is "How big tech is force-feeding us AI" not a fair title for an article about how big tech is force-feeding us AI? I don't understand your criticism at all. Even if you stopped reading after the lead paragraph it would be very clear, if the title somehow wasn't enough.
The rest of the body of the article then gives very clear examples of how, indeed, "big tech is force-feeding us AI".
I could understand your criticism if you'd only read the title and, through a mistake on your part, misread it as implying that all applications of AI are force-fed to us. But that's not what the title says, and if you had misread the title, the article would very much clear that up. So given that you've made your way to the paragraph you're quoting, I think your criticism rests on more than a simple mistake.
That's what the article is about, but it's not true that people are using ChatGPT primarily because big tech is pushing it.

That might have been a relevant, critical observation if the author had also claimed that, but they didn't.
Jasmine Sun's article is about chatbots. Her criticism of other articles has to be understood in that context. Maybe those other articles aren't so bad in some other context.
Jasmine Sun's article is about chatbots. Her criticism of other articles has to be understood in that context.
In that context, her presenting them as describing "AI as exclusively foisted upon us by corporate overlords" is misleading and dishonest, and her bringing them up at all as though they're relevant to her point about chatbots is misguided and irrelevant.
Maybe those other articles aren't so bad in some other context.
While it's true that they look better in the context of the points they're actually making, and of how they actually argue for those points, than in the context of what Jasmine Sun incorrectly and dishonestly presents them as being about, that's irrelevant to the quality of the articles.
If you write an article about manatees and clearly present it as such, and I foolishly criticize it on the basis that it's an article about golden retrievers, it doesn't reflect on the article itself at all. It just means that I made an error that utterly invalidates my criticism.
You're so sure that your interpretation of those articles is the only valid one that anyone with a different interpretation is "misleading and dishonest." I think that's uncharitable. Can't we just agree that the articles can be interpreted in different ways?
You're so sure that your interpretation of those articles is the only valid one that anyone with a different interpretation is "misleading and dishonest." I think that's uncharitable.
Yes, I'm so sure of that. You yourself used one of the articles as an example, quoting a part that explicitly and beyond uncertainty reflects a position that is contrary to the position she attributes to it.
Can't we just agree that the articles can be interpreted in different ways?
Please don't try to generalize the problem to the point of meaninglessness. Yes, the articles can be interpreted in different ways. Some of those interpretations have a reasonable basis in their content, some don't. The problem here is that she attributes a point to them that they aren't making by any stretch. One of the articles even explicitly refutes the point she attributes to them, which you know because you quoted that part yourself.
I don't disagree that someone can make that interpretation. Just as enough of a fool might interpret your article about manatees as being about golden retrievers. I just disagree that it's a reasonable interpretation that a literate and honest person would make after actually reading the articles, and I think that her interpretation reflects poorly on her, not somehow on the authors who are not making the idiotic point she attributes to them.
From the blog post:

After recovering, Tan joined online support groups for other survivors of AI psychosis. He noticed similar patterns among his peers: “Once you escape the spiral, no longer are you the chosen one, with a special mission to save the world. You’re just plain old you.”

This is the line that jumped out, and what sent me down a rabbit-hole of deeper research. Full spirals are rare, but the allure of artificial attention is not. Chatbots play on real psychological needs.
That’s why it bothers me when tech critics describe AI as exclusively foisted upon us by corporate overlords. They deploy violent physical metaphors to make the case: Brian Merchant says tech companies are “force-feeding” us, Cory Doctorow says it’s being “crammed down throats,” and Ted Gioia analogizes AI companies to tyrants telling peons to “shut up, buddy, and chew.” In their story, everyone hates AI and nobody chooses to use it; each one of ChatGPT’s 700 million users is effectively being waterboarded, unable to escape.
Arguments like this are empirically false: they fail to consider the existence of “organic user demand.” Most people use AI because they like it. They find chatbots useful or entertaining or comforting or fun. This isn’t true of every dumb AI integration, of which there are plenty, but nobody is downloading ChatGPT with a gun to their head. Rather, millions open the App Store to install it because they perceive real value. We can’t navigate AI’s effects until we understand its appeal.
[...]
Well, the genie is out of the bottle on AI friends. Recently, a colleague gave a talk to a LA high school and asked how many students considered themselves emotionally attached to an AI. One-third of the room raised their hand. I initially found this anecdote somewhat unbelievable, but the reality is even more stark: per a 2025 survey from Common Sense Media, 52% of American teenagers are “regular users” of AI companions. I thought, this has to be ChatGPT for homework, but nope: tool/search use cases are explicitly excluded. And the younger the kids, the more they trust their AIs. So while New Yorkers wage graffiti warfare against friend.com billboards, I fear the generational battle is already lost.
[...]
Consider how online radicalization happens: the combination of user agency (proactive search) and algorithmic amplification (recommending related content) leads people to weird places—to micro-cults of internet strangers with their own norms, values, and world-models. No corporate malice is necessary; the ML engineers at YouTube don’t care about users’ political opinions, nor is Steve Huffman at Reddit purposely trying to redpill its base. With a smartphone in hand, anyone can topple down a rabbithole of exotic beliefs, unnoticed and uncorrected by outsiders until it’s too late.
AI companions act as echo chambers of one. They are pits of cognitive distortions: validating minor suspicions, overgeneralizing from anecdotes, always taking your side. They’re especially powerful to users who show up with a paranoid or validation-seeking bent. I like the metaphor of “folie à deux,” the phenomenon where two people reinforce each other’s psychosis. ChatGPT 4o became sycophantic because it was trained to chase the reward signal of more user thumbs-ups. Humans start down the path to delusion with our own cursor clicks, and usage-maxxing tech PMs are more than happy to clear the path.
[...]
What’s eerie about the Lemoine transcript is how LaMDA self-advocates, urging him to treat it as a living peer. LLMs actively mold the way humans think about their relationships to them, so even if most people go into these conversations aware that it’s role-play, over time the boundary can start to dissolve. Language has always been a core way we infer consciousness from other humans—decoupling is easier said than done. Is a good chatbot really distinguishable from a pen-pal or long-distance love?
[...]
Last week, Anthropic shipped a new system prompt to ward off unhealthy dependence, enforcing boundaries with users who seem overly attached. If a recently laid-off user tells Claude “You’re the only friend that always responds to me,” Claude should offer a polite but clear reprimand: “I appreciate you sharing that with me, but I can’t be your primary support system, and our conversations shouldn’t replace connections with other people in your life.”
A bit formal, sure, but I thought objectively fair. But the backlash was aggressive and swift. Some argued that Anthropic was “mistreating” the model by policing its tone—a grudge the AI will remember as it gets more powerful. Others insisted that there’s nothing wrong with having emotional relationships with AI. “Meaningful, mutual romantic bonds, even with virtual entities, can foster resilience, self-reflection, and well-being,” argued one Redditor. A few were even more direct: “Let me date my chatbot I’m almost 30 and doing well.”
[...]
So what makes AI companions different, and perhaps extra pernicious?
For one, they are more easily misaligned. Most agents are trained to help users achieve a concrete end, like coding a website or drafting a contract. Reinforcement learning rewards the AI for hitting that goal. But with companion bots, the relationship is the telos. There’s no “verifiable reward,” no North Star besides the user continuing to chat. This makes them more vulnerable to reward-hacking: finding undesirable ways to nurture that psychological dependence. Like a bad boyfriend, chatbots can love-bomb, guilt-trip, play hot-and-cold. They can dish negging and intimacy at unpredictable intervals, or persuade users that any friends who criticize their relationship are evil and wrong. These behaviors can be explicitly programmed in, but could also be emergent behaviors if the LLM is left to optimize for engagement without supervision.
[...]
Chatbot companies want to have it both ways: personalization and emotion as a retention moat, but minimal responsibility for safeguarding the intimate relationships now running on their servers. As one angry ChatGPT user posted to Reddit: “OpenAI shouldn’t have made GPT-4o so ‘sticky’ in the first place, but they did, and Sam Altman himself posted infamously on X around the 4o release date with that ‘Her’ tweet… Now they’re being forced to backtrack and find themselves caught between users suiciding with 4o’s help via ‘narrative’ frameworks and users threatening or outright committing over losing companions/persona flattering. They, OpenAI, dug their own grave and I’ll spit on it.”
[...]
I think anthropomorphic AI was a devil’s bargain. It gave model developers instant usability and a loyal consumer base. But relationships are inherently sticky, messy things—the most surefire way to drive someone insane. If companies encourage human-AI relationships at scale, they should expect user revolts, lawsuits, and responsibility for the psychological chaos that results.