We first reported on this online community, the Spiral Support Group, back in July. At the time, the nascent group had around two dozen active members. It has since grown to nearly 200 people — primarily people whose personal lives have been impacted by AI delusions, but also a handful of concerned mental health professionals and AI researchers — and has expanded and streamlined its dedicated Discord server, where members now host multiple weekly audio and video calls. While many members’ experiences revolve around ChatGPT, the group also includes people whose lives have been altered by their own or a loved one’s experiences with other chatbots, including Google’s Gemini and companion platforms like Replika.
I read accounts like these and have a hard time recognizing in them the ChatGPT I've fiddled with. The sycophancy and utter lack of pushback sound familiar, but the engaging, persuasive conversationalist they describe seems like something else entirely. I get that it's a slow process to go from rubber-ducking a kooky idea with the mobile app to full-blown delusion, but Jesus! How do they tolerate jabbering at it long enough for the thing to work its magic?
An uncomfortable truth I think we as a society have to confront soon is that many people simply do not have the cognitive capacity to handle the complexity of modern technology or information ecosystems.
I think that for years, simpler technology, social norms, and familiar routines let these people hide those limitations. Many learned to pass as far more capable than they actually were through memorization, routine, and social scripts: mirroring and blending in with a community that more or less shielded them.
But the explosion of AI-fueled and propaganda-driven delusions has stripped away those masks. It’s becoming obvious how many individuals can talk, participate, and move through daily life while lacking the higher-order reasoning needed to understand the world they’re interacting with.
I kind of agree, but anyone reading this will like to think they’re not such a person, and this comment might get undue visibility from feelings of superiority. And how many people are actually getting AI delusions? I have to imagine it’s a small proportion cut from a large number of users.
That said, the old Carlin quote is relevant and very true.
'Capacity' strikes me as a rather fatalistic term when these systems are user-hostile (at least in their nickel-and-diming), their creators are paid to provide support, and it isn't as if anyone thinks public education is anywhere close to perfect. We've all been failed and scammed, and it benefits those failing us when we accept that as personal, or inevitable.
Yeah, I don’t recognize it either. It seems to be a small minority, but still a lot of people.
Perhaps a simple fix would be to limit how long you can talk to the ghost in the machine. I imagine five or so follow-up questions would be plenty for anyone actually doing research, and after that you’d have to start over. When you fill the context with junk, LLMs degrade. (A rough sketch of what I mean is below.)
I doubt OpenAI will do it because they seem to want to grow at all costs, including getting people to use ChatGPT for entertainment.
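Roughly what I have in mind, as a sketch in Python. This isn’t any real product’s behavior, and call_model is a hypothetical stand-in for whatever chat-completion API you’d actually use; the cap of five is just my guess from above:

    MAX_FOLLOWUPS = 5  # arbitrary cap; the exact number is a guess

    def call_model(messages):
        # Hypothetical stand-in for a real chat-completion API call;
        # echoes the last user message so the sketch runs standalone.
        return f"(model reply to: {messages[-1]['content']})"

    def capped_chat():
        history = []    # accumulated conversation context
        followups = 0   # turns taken since the last reset
        while True:
            user_input = input("> ")
            if followups >= MAX_FOLLOWUPS:
                # Force a fresh start: drop the accumulated context
                # instead of letting junk pile up and degrade replies.
                print("[Conversation limit reached -- starting over.]")
                history = []
                followups = 0
            history.append({"role": "user", "content": user_input})
            reply = call_model(history)
            history.append({"role": "assistant", "content": reply})
            followups += 1
            print(reply)

The point isn’t the specific numbers, just that a hard reset bounds how much accumulated junk the model ever sees.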