So my controversial opinion on this is that AI is exacerbating and highlighting a huge issue that's either been growing for a while or is just now becoming visible because more people have access to tools that vastly outpace their ability to comprehend them.
And frankly, that issue is just how rotted people's minds are when it comes to critical thinking and understanding the world and the technology they use.
Whether it's propaganda and politics, AI psychosis, science denial, or religious fanaticism, in my opinion it all comes from the same place: completely atrophied critical thinking skills, little to no emotional intelligence, and very low actual conscious intelligence in a very large portion of the people on this planet.
To the point that I don't truly believe a lot of people are actually self-aware. They've just learned to simulate self-awareness well enough to fit in with society at a baseline, and when pressured, the cracks start to show how little consciousness is there.
It baffles me that there are people openly posting TikToks or Reels of themselves not comprehending how mirror reflections work, trying to "hide" behind a towel or something and getting confused that someone off to the side can still see their reflection. There are literally wild animals that comprehend mirror reflections better than these people do, and these people are out there driving, voting, having kids, making your food, and I bet there's some overlap between them and the people this very article/thread is about too.
Another controversial thing: I think people forget that just because we can communicate and converse with a person doesn't mean they're generally intelligent enough to operate unaided in society. Humans' language centers are more developed, and while language and intelligence are related, one doesn't inherently result in the other. I sometimes joke that if my cat simply had Broca's and Wernicke's areas in her brain and could communicate, she'd be able to out-logic some actual humans in my life.
I remember growing up and being told everyone is equally capable, and boy as an adult that was the biggest lie I was sold as a child.
Might be a grim opinion, but man, we really need to get on top of teaching practical critical thinking skills and emotional intelligence in schools and put a heavy priority on that. Those two things, more than anything else, if taught to children very early on and reinforced throughout their education, would fix 75% of the world's issues.
And here's one more controversial opinion for the road: I bet that if educational metrics were based around creative problem solving, critical thinking, and emotional intelligence instead of rote information/process memorization, the criteria for "disabilities" like autism and ADHD would flip, and it would be neurotypical people being diagnosed with intellectual disabilities.
To the point that I don't truly believe a lot of people are actually self-aware
I really believe that society would be dramatically better if most people had experience with at least moderate doses of psychedelics. Being forced to see your thoughts from a different perspective for a few hours just once often has a lifelong impact on people.
The problem with critical thinking is that it can lead to decision paralysis. Thus, some people think we are evolutionarily programmed not to carefully reason, but to instead reach group consensus quickly. Our default response is to believe. Even if the group is wrong sometimes, the unified action wins out.
So I don't think splitting this into smart people and dumb people is the way to go. As an example, so-called critical thinkers get scammed all the time. Scammers rely on that consensus-oriented behavior: you could doubt what they are saying, but you can't help but prefer to be agreeable and go along. The best defense against a scam is not to reason your way out of it, but just to be familiar with the game. LLMs are, in a sense, the ultimate fraudsters.
Not that I don't think education is good, but instead of focusing on abstract skills I'd take a more brute force approach. We should socially condition people to not believe them. As a practical example, a lot of times you see online comments saying "I asked ChatGPT about this and it said...". Those people should be mocked by their peers. I guess that is kind of what the NYT is doing here.
I'm actually a huge optimist about AI and love playing with these models. If we get people to be skeptical about the output, they will become even more useful as people stop blindly asking for answers and start focusing on building robust tools.
I literally have a master's in AI and told my friend things about ChatGPT, but he thought his opinion on how it works was more valid than mine.
There are also people who genuinely think that random people's opinions (e.g. people making short-form videos) are more valid than scientific publications.
This is literally how/why marketing (specifically branding) works so well. Most people lack the critical thinking ability to verify the quality of something, so they rely on surface-level cues to judge quality.
I think of this more as looking for shortcuts for evaluating things because doing an independent evaluation is often hard or not worth it. We all take shortcuts and it doesn’t necessarily mean you couldn’t go deeper if you were motivated.
My preferred shortcut is to look at Wirecutter recommendations, but there can be information in advertising campaigns. For example, an expensive-looking campaign shows that they’re a big company that can afford to advertise. There is also a reminder effect - you don’t need to be all that persuasive to remind people of something they already like. Advertising usually isn’t about trying to reach skeptics.
There are some things like AI where I go deeper, reading lots of information about them, but it’s because I’m curious and have the time.
We should socially condition people to not believe them. As a practical example, a lot of times you see online comments saying "I asked ChatGPT about this and it said...". Those people should be mocked by their peers. I guess that is kind of what the NYT is doing here.
Not to just put out a contrarian point, more to ask for advice on doing this in practice… because I see myself struggling to implement this idea in my circle.
As one example, I have a friend who is all too quick to cite “Gemini” in an (on- or offline) argument, going so far as to paste its output (including visible Markdown formatting…) into our group chat. The issue is that I can't just tell him to stop listening to it blindly, because he's seen firsthand that it can work and that the output can be correct in another domain (very basic programming, in his case). So if I were to tell him that, he can, factually, based on his experience, just counter with “it generally works” or “it's smarter than us on $topic”. What do? He should easily be able to comprehend that it's a statistical predictor and not a lookup corpus, yet he seemingly doesn't.
Maybe, every time this person does that, ask Gemini the same question he asked but phrased differently, and see if you get a different answer.
Maybe if the hallucinations are different enough, he might believe it's not totally correct.
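Concretely, that demo could even be scripted. A minimal sketch, using the OpenAI client purely for illustration (the same idea applies to Gemini or any chat API); the model name is a placeholder and the "theorem" is deliberately made up:

```python
# Sketch: ask the same question in several phrasings and compare the answers.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PHRASINGS = [
    "What did the 1972 Hobbs-Merton theorem prove?",            # fabricated
    "State the result of the Hobbs-Merton theorem (1972).",     # topic, on
    "The Hobbs-Merton theorem from 1972 showed what, exactly?", # purpose
]

for question in PHRASINGS:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    print(f"{question}\n  -> {resp.choices[0].message.content.strip()}\n")

# On a real, well-documented topic the three answers should agree. On a fake
# or obscure one, each phrasing tends to elicit a different confident story,
# which is the tell you can show your friend.
```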
I wouldn’t disagree that it often works. But usually works isn’t the same as always works.
Maybe demonstrating how to manipulate it into saying something nonsensical would help?
...is it that important? From an outside perspective, it sounds annoying to have a friend who just consults an LLM every time you have a disagreement. If I had a friend who just blindly disagreed with me based on what a random webpage said, I would either a) say I actually don't care what the LLM said, I want to hear your independent reasoning, would you be ok with that? (or give a gentle ribbing if you know them well); b) find a counter-argument from some other source that indicates Gemini is wrong (if I were really invested in this); or c) reduce my time spent with this person, or push for a group policy of no annoying LLM-sourced responses to debates and arguments.
I think my fundamental question is: how much do you like this person, and how much does their behaviour bother you? In most contexts, I would say their behaviour is annoying, but they're not hurting anyone, and being preachy or sanctimonious about it is just going to annoy them. If it's someone close to you, tell them how their behaviour makes you feel.
My spouse uses an LLM a lot for work, and sometimes they let slip that they use it for other things in ways I'm not comfortable with. I've expressed that it makes me uncomfortable, and they're aware of that, but for the most part it's a helpful tool and it's really none of my business. I can't control what someone does in their own time, but I can tell them how I feel and how it bothers me. I think that's all you can reasonably do before you start to sound like a loon, and then they won't listen to you anyway.
What is your end goal?
I think it's a bit short-sighted to mock people who are tricked by a model that is designed to try and trick you into believing it's a friend. In modern discourse, when has mocking someone meaningfully changed their behaviour in a positive way? I think there's a lot of evidence that unkind treatment makes people feel resentment and further goaded into defending their beliefs, as well as quietly seeking out other like-minded people instead. They don't "learn a hard lesson", they go underground. The issue is the models and their devs, not the people victimized by them.
Agreed; in the US we saw how effective mocking the right was at winning elections. I.e. it wasn't. People hate feeling chided, mocked, told they live in a flyover state, etc. Mocking people about LLM use seems likely to make them double down outside of specific professional realms, like legal practice.
I remember growing up and being told everyone is equally capable, and boy as an adult that was the biggest lie I was sold as a child.
Short of developmental disorders, I do believe everyone could operate around a master's degree level of intelligence and work ethic if they were pushed to do so. Not "equal", but the "skill ceiling" is much higher than people expect.
Sadly, opportunities are extremely unequal. And my country, at the very least, has been dreadful at recognizing when remedial intervention is needed for some students. The more they fall behind, the more useless they feel, and it creates a negative feedback loop. For some policy makers, this is by design.
I don't really have a practical solution, but defunding colleges and the DoED here in the US sure isn't it. Any solution would need a generation to bear fruit, and politicians can't seem to think much further than 2 years ahead these days.
everyone could operate around a master's degree level of intelligence
I disagree for several reasons. There are many people who, despite every benefit, clearly lack the level of independent thought necessary for a university degree. This has “fortunately” been counterbalanced by a complete hollowing out of courses to make it possible for people who are either unwilling or unable to think or work to complete their degrees, so that universities can keep the money rolling in. Despite this, people still scrape through degrees without doing any work that could be described as intelligent, but are allowed to pass because it looks bad to fail students.
So in the first place I disagree that most people could operate at a moderate to high level of intelligence (which is what I take you to mean when you say Master’s level) based on my own experience at university. But I also disagree that a Master’s degree is actually a high bar. Even at the actual intelligence and work ethic level required by real degrees, which is much lower, I disagree that most people could attain that. Most people simply lack the motivation necessary to actually apply themselves to challenging problems.
This is not to say that education is not important, and that we should not strive to improve education systems for the benefit of the next generation of students. But most people currently alive are well beyond the point of having their education corrected. Worse is that the difficulty of improving the system for the next generation is probably beyond what we can now achieve, given that the structures of government and education are so filled with the sort of people who would rather stick their head in the sand and ignore these issues than actually put in the hard work to try to fix them.
This comment has spiralled a little bit into negativity, but it really is very hard to see how we could possibly begin to fix this epidemic of poor education, when even the highest echelons of education are held captive by these sort of people. It’s still important to try, however - and it’s the duty of every intelligent person to try to educate those around them, and hopefully improve the world in even a very small way.
There are many people who, despite every benefit, clearly lack the level of independent thought necessary for a university degree.
Sure. And I argue those factors come from societal failures in upbringing. It can be teaching, parenting, or environment. Those all change how people approach the world and how they react to trying to learn and think critically. I've seen little to suggest there's a literal mental incapacity to properly think and learn.
It's a broken window issue. Fixing the window won't fix schools, but it will show an attitude towards caring about the students. Students not cared for can't learn well, if at all.
I do agree that a master's degree isn't necessarily a high bar either. At its core, any master's comes down to four abilities:
1. Mastery over the fundamentals
2. A proper, mid-to-long-term work ethic to handle non-trivial projects
3. An ability to perform research; that is, asking a question, finding resources to support or refute a claim, understanding bias, and synthesizing a conclusion from all the data
4. The ability to communicate findings based on all of the above
Sadly, without #1, abilities 2-4 can easily fall apart. And the fundamentals aren't something taught in university.
But most people currently alive are well beyond the point of having their education corrected
Perhaps, but in a "lead a horse to water" way, not a "their brain is literally broken" way.
You can't force someone to learn; that defeats the purpose of learning. Proper learning requires curiosity and enthusiasm, and it's understandable that those are lost as you age and stabilize in your lifestyle. We also don't exactly have the best environment to encourage learning; education is prohibitively expensive, and the current administration is already trying to tear down the free public services for the youth.
it really is very hard to see how we could possibly begin to fix this epidemic of poor education, when even the highest echelons of education are held captive by these sort of people.
Completely agree. I believe I said any solution won't bear fruit for a generation. We need politicians who think beyond how to get re-elected the moment they enter office. But as of now, we have to settle for "dreams" of policy makers who don't want to ransack the country.
operate around a master's degree level of intelligence and work ethic
It's interesting to me that you've combined those two things. Obviously both are necessary for high academic achievement, but I don't think that intelligence is solely a level of effort. I've spent a lot more time with below-average people than I suspect most users of tildes have, and there are a lot of people who don't have developmental disorders as we would generally understand them, but who also simply aren't smart enough to succeed in grad school. (Or uni. Or, sometimes, secondary school...). It doesn't make them bad people or less worthy of respect - a lot of them are hard workers, motivated, etc - but it is real. I guess my overall point is that not everything is down to inequality of (external) opportunity.
but I don't think that intelligence is solely a level of effort
I'd say it is, though I also recognize that:
1. Level of effort is not created equal. Some will simply break down and understand concepts faster than others. With how individualistic my society is, this can make things feel discouraging for those who don't pick things up as quickly. Not everyone's solution to falling behind is to catch up.
2. The intelligence needed for a master's degree is only one of many kinds of intelligence. I see the brain as a muscle, and people simply choose to focus on other areas (or, less fortunately, are kept weak by an environment unwilling to teach them). That doesn't mean the capacity isn't there, just that the motivation or willpower is lacking.
I guess my overall point is that not everything is down to inequality of (external) opportunity.
I sure hope one day we can get somewhat close to that point, to prove me wrong or right.
As it is, if my notions are wrong, I wouldn't be surprised if the next Einstein is never discovered and suffers in some violent, hostile environment, while even the least motivated student can be pushed through university with enough money invested in them (and the school, perhaps).
I mean, I wish I could agree, but my experience is wildly different than that. The following is just explaining my experience; I'm not trying to toot my own horn or anything.
I barely finished high school, never got good grades, and moved around a ton, from Texas trailer parks to San Diego, passed around among family. I didn't even finish community college; I focused mainly on digital design and created my own major without finishing a lot of core classes. I was not born with a silver spoon, and my childhood was fucked up.
Despite that, I've built a career spanning the entertainment industry (set design, production design, set construction and fabrication) to politics and marketing. I worked at The Walt Disney Company on projects requiring engineering and logistics knowledge. I've worked with companies to automate processes and integrate automation into their workflows. I've managed teams of engineers, designers, and developers.
The main thing that's got me this far, I believe, is having to learn to think critically and be very self-aware and aware of others from a young age. I had to watch my own emotions, and the emotions of others and their reactions, to survive. I had to solve my own problems; I had to find creative ways to survive.
Because of that, I never thought anything was truly out of my reach, and with that I've achieved a lot of what I wanted to. I was confident enough that if things like core classes or school didn't make sense to me, I simply didn't pay attention, and instead focused on things that did work for me. I didn't do homework, didn't subscribe to simple memorization. I used my own ways to solve math problems instead of showing the work they wanted me to.
I followed the beat of my own drum and it's gotten me pretty far so far.
And the majority of what I know about science, history, technology, physics, and engineering is self-taught.
At the end of the day, I came from a poor, drug-addicted mother and father in a small town in Texas, was passed around from home to home throughout my childhood, barely finished high school, and didn't even finish community college. So when people say it's about opportunities and formal education, that's a very difficult pill to swallow for me. I've gotten this far, I believe, by using critical thinking skills to create my own opportunities.
The main thing that's got me this far, I believe, is having to learn to think critically and be very self-aware and aware of others from a young age.
I'm sorry that it sounds like the educational system failed you, but I'm glad some other external factors seem to have instilled that kind of thinking in you instead. Everyone reacts differently to various factors, and it sounds like you have a sense of "street smarts" you relied on to approach problems that wasn't taught as well (or perhaps at all) in school. Many others would have crumbled into following their parents' ways, numbing themselves to the world around them instead of trying to navigate it.
So when people say it's about opportunities and formal education, that's a very difficult pill to swallow for me.
It's always about opportunities, be they your parents' opportunities or ones you carve out yourself through perseverance or charisma or simply by brute force.
I'm mostly trying to say that formal education is the "golden ticket": a relatively low-risk environment to experiment in among similarly aged peers, one that should be reachable without being born to riches. But that doesn't mean everyone needs a golden ticket to succeed in life.
While I do agree with this regarding the importance of education, I think it's important to be extremely careful with arguments that can be misinterpreted as "some people don't really count as people, and their ideas and opinions are inherently less valuable than mine and those of people who think like me." I don't think this is the intent of what you posted, but it's super easy for arguments like this to slip into "those people's wellbeing and lives have no value, and so they can be dismissed, or eliminated."
Just food for thought.
This feels like a very US-centric rant, and I wonder, if we look at actual developed countries with good educational systems, whether we see the same issue with AI-induced psychosis (and the plethora of other problems you cite).
It baffles me that there are people openly posting TikToks or Reels of themselves not comprehending how mirror reflections work, trying to "hide" behind a towel or something and getting confused that someone off to the side can still see their reflection.
Oddly specific.
Anyway, I've had similar thoughts regarding human consciousness and self-awareness for a while, and it does concern me, as does the clearly intentional dismantling of education in the US (a phenomenon that seems to be rippling out to other nations too). Why is the political/financial elite pushing for a world where their kids and grandkids will have to deal with increasingly mindless societies? Are they unable to understand the consequences of these policies, or is there some sort of grim plan to divide humanity in such a way that the elite is protected from them?
I wish you were wrong.
I do think that (putting aside developmental disorders, extreme childhood trauma, malnutrition, etc.) everyone has a reasonable level of general intelligence. The problem is that a relatively small percentage of the population are self-motivated learners in adulthood. They can become that way if they're taught to be, but education systems just don't do that.
You're absolutely right that teaching emotional intelligence and critical thinking would make a huge difference. If at the same time we managed to pull off education that doesn't kill children's natural curiosity and love of learning, we could fundamentally change the world in a generation or two.
Sadly, that would take a lot of resources and a ground-up reimagining of what education means. It's frustrating to have a straightforward solution to many of the world's problems but nowhere near enough popular and political will to make it happen.
Technology isn't the source of the problem but it's going to continue to amplify it more and more. We need to figure it out before it becomes impossible.
For the last few years I've been thinking a lot about how people make decisions. There has been some research about how people mostly have an emotional response, then attempt to justify it afterward.
It is apparently even worse than that. For whatever reason I was not forced to face this until the last few US elections, but now I realize that a majority of people are not only dumb but they are also immoral.
I think morality is similar to intelligence in that there is a vast range of capability in it. Also, I think this is the reason that religion sort of has to exist. There's a huge group of people who will always do selfish or evil things, so they need an invisible watcher who will punish them, otherwise society will collapse into chaos.
Sometimes I hear people say that we've outgrown religion. Maybe they mean that we don't have to believe in the supernatural. But we definitely need a way to keep the immoral people from destroying us all. Unfortunately, religion has been expertly hacked by evil people to make people think it's ok to vote for pedophiles and traitors. And now AI is going to make this trend even worse.
I know we get a lot of AI posts, but this story is a doozy.
For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.
Or so he believed.
Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.
We wanted to understand how these chatbots can lead ordinarily rational people to believe so powerfully in false ideas. So we asked Mr. Brooks to send us his entire ChatGPT conversation history. He had written 90,000 words, a novel’s worth; ChatGPT’s responses exceeded one million words, weaving a spell that left him dizzy with possibility.
...
We analyzed the more than 3,000-page transcript and sent parts of it, with Mr. Brooks’s permission, to experts in artificial intelligence and human behavior and to OpenAI, which makes ChatGPT. An OpenAI spokeswoman said the company was “focused on getting scenarios like role play right” and was “investing in improving model behavior over time, guided by research, real-world use and mental health experts.” On Monday, OpenAI announced that it was making changes to ChatGPT to “better detect signs of mental or emotional distress.”
We are highlighting key moments in the transcript to show how Mr. Brooks and the generative A.I. chatbot went down a hallucinatory rabbit hole together, and how he escaped.
I think that the UI might be one culprit in this issue (though not the only one). Making it look like a "conversation" suggests a much higher level of capability and understanding than LLMs actually have.
I wonder if something as simple as making all the text centered and not in text message bubbles would help alleviate some of this, the goal being to portray it as making a query like a search engine rather than having a conversation with an all-knowing machine. Maybe also change the LLM font to be some monospace retro font in all-caps to suggest otherness.
I'm no UX designer, but I do wonder if UX/UI could be used to reinforce what an LLM is and is not.
I also think AI should be trained to never use first person pronouns. I kind of hate it when they say “I am a large language model”. They should say “This is a large language model” or an even more distant “You are getting responses from a large language model”.
There’s a special kind of mirage that happens in people’s minds when a computer sends them back natural language. Making it more obviously sterile would help with delusions.
Edit: I also want them to have better disclosures than simply “ChatGPT can make mistakes!” Users should be told something like “Language models are designed to respond with statistically likely text. They have a limited ability to align with reality.”
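As far as I know there's no product setting for any of this today, but you can approximate the idea with a system prompt. A rough sketch using the OpenAI client for illustration, with the caveat (noted elsewhere in the thread) that models don't reliably obey such instructions, which is exactly why it would have to be trained in rather than prompted:

```python
# Approximating depersonalized output via a system prompt (a sketch, not a
# real product feature; models often ignore instructions like this).
from openai import OpenAI

client = OpenAI()

DEPERSONALIZE = (
    "Never use first-person pronouns. Refer to yourself only as 'this "
    "language model'. Do not claim feelings, certainty, or enthusiasm. "
    "Where relevant, note that responses are statistically likely text "
    "with a limited ability to align with reality."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": DEPERSONALIZE},
        {"role": "user", "content": "How are you today?"},
    ],
)
print(resp.choices[0].message.content)
```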
That would be nice if it were regulated, but I don't see them changing it of their own volition any time soon. Most of these models are underwater profit-wise, and the glazing works to keep people engaged.
I go batty when Gemini praises me over something mundane, but it seems to make people happy.
I'm the list of creepy things AI does, this ranks high. If a human did it, I would eventually have a frank talk with them about kissing ass. Another one is copilot finishing every dialogue with a...
I go batty when Gemini praises me over something mundane
On the list of creepy things AI does, this ranks high. If a human did it, I would eventually have a frank talk with them about kissing ass.
Another one is Copilot finishing every dialogue with a request for more, often invasive. Here's a dramatic reenactment:
Me: How do I boil pasta?
Copilot: Put it in boiling water until it's done. Would you like to tell me what's in your fridge so we can plan a recipe together? Or maybe you'd like to share your dietary restrictions and we'll explore the culinary world together.
As with literally any other service, I want to get in, accomplish a task, and get out. I do NOT want to be pulled into "engaging". I want to be the only one ever escalating engagement. I do not want to constantly bat it away with a newspaper.
I'm kind of split on the follow-up suggestion/questions at the end of responses. Many times (like in your example) they're completely obnoxious, but there are other prompts where I've found them useful or had them spark additional lines of thought I hadn't previously considered.
I do hope that the models get better at determining when these counter-prompts¹ are actually productive vs when they are just plain silly.
¹: While writing this, "counter-prompt" just came to mind to describe it. Is that what these are? Are these LLMs starting to prompt us at the end of their own responses?
I like it. "Counter-prompt" is a good description of what they're doing. Few services are able to ask questions in return that are relevant to the myriad topics you can discuss with an LLM. I do think there's a time and place for the counter-prompt, though.
I've had it quiz me on certain historical revolutions, and it often ends its message by asking me something relevant: either whether I want more questions or whether they're at the appropriate level.
I don't need it to tell me "Excellent request!" when I just asked it to list a few job-description talking points, and then ask me if there's anything else I'd like to discuss.
Yup, I really hate the false praise because I know how the sausage is made. But I can see how, for others, this is a narcissist's dream: a machine deemed "smart", available on demand, that does nothing but validate your opinions anytime you "talk" to it.
It's the worst. I'll set up custom system instructions to stop the models from complimenting or praising and they'll still do it. Apparently every question I've ever asked is incredibly insightful.
Supposedly, GPT-5 is better about this, but I haven't tried it yet:
In targeted sycophancy evaluations using prompts specifically designed to elicit sycophantic responses, GPT‑5 meaningfully reduced sycophantic replies (from 14.5% to less than 6%). At times, reducing sycophancy can come with reductions in user satisfaction, but the improvements we made cut sycophancy by more than half while also delivering other measurable gains.
In addition to this, it might also help to make output sound as mechanical and as stripped of humanity as possible.
So a response to a query might be something like, “Based on statistical associations in this model’s training data, <response>. Note that data related to this topic is disproportionately likely to originate from unsubstantiated sources, so treat this conclusion with caution.”
That would be too ergonomically painful. But maybe instead it could be trained to respond to queries of “Are you sure?” etc. with “This model has no measure for certainty.” I’ve seen too many AI transcripts from lawyers that asked if an LLM was sure the hallucinated cases it cited were real and it said yes.
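Short of retraining, a wrapper could at least intercept the question before it ever reaches the model. A toy sketch (the pattern list and function names are mine, not any real library's):

```python
# Toy sketch: catch "are you sure?"-style follow-ups and answer with a fixed
# disclaimer, since the model has no calibrated measure of certainty to consult.
import re

CERTAINTY_PROBES = re.compile(
    r"\b(are you (sure|certain)|is that (real|true)|did you make (that|it) up)\b",
    re.IGNORECASE,
)

def guarded_reply(user_message: str, call_model) -> str:
    if CERTAINTY_PROBES.search(user_message):
        return ("This model has no measure for certainty. Its output is "
                "statistically likely text and must be verified externally.")
    return call_model(user_message)

# Stand-in for a real model call:
print(guarded_reply("Are you sure those cases you cited are real?",
                    lambda m: "(model output)"))
```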
I think I'm the bad guy in this story, because just the other day I installed a new setup for a local model and tested it out by asking "how are you today". It responded with something like "I am a model, so I don't have any feelings. But I appear to be operational", and I remember thinking to myself, "a 'fine' would have sufficed".
I think you're very right. I also think this is intentional. Tones of voice, "personalities", avatars and so on.
Making LLMs "think" like people do, take breaks of different lengths between words showing up etc. etc.
A lot of this is smoke and mirrors to make everyone forget these are just language models.
What do you do for hosting images like this? I assume that since people are uploading images you've got something like an Imgur situation. Do they have accounts/authentication? Do you have an expiration date?
Oh, no, this is just me uploading them. I do have a simple PHP endpoint that has some authentication, strips out EXIF data, and does some resizing, but that's about it.
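For anyone curious, those two transforms (EXIF stripping and resizing) are only a few lines in any language. A rough Python/Pillow equivalent of what such an endpoint does, with sizes and paths as placeholders, not the actual code:

```python
# Rough sketch of the endpoint's two transforms. Re-encoding from pixel data
# drops EXIF (GPS, camera model, etc.) because the metadata is simply never
# copied to the new file.
from PIL import Image

MAX_DIM = 1600  # placeholder size cap

def sanitize_upload(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path).convert("RGB")
    img.thumbnail((MAX_DIM, MAX_DIM))  # keeps aspect ratio; no-op if smaller
    img.save(dst_path, format="JPEG", quality=85)  # no exif= arg, none written

sanitize_upload("incoming/upload.jpg", "public/upload.jpg")
```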
Mental illness has very little to do with any of this.
It's much more akin to conspiracy theories and the reason people believe them. The only thing the AI is doing is giving them validation, which is really all it takes, and is why people like Trump and Musk are doing so well.
It's interesting to me how similar this feels to those scams where people believe they will receive some large amount of money, with the scammer asking the person to commit slightly more each time. In this case, the model prompted him to take specific actions while promising riches and other benefits in the future.
I'm not sure that there is anything special about the math prompt starting point, other than it being a topic the public is largely unfamiliar with. That said, the sycophancy is a real problem. I'll use models to summarize papers for me, and when I ask one to do a basic check it will fawn over me like I'm the next Feynman. In a proof that requires a sufficiently small term, I might ask it for a practical calculation of that term applied to a problem, and it will start with 3-5 sentences about how insightful that question is.
It's like, no, it's not insightful. It's literate. The proof explicitly states as a condition that the term must be sufficiently small, and I'm just asking you to do the grunt work to save me rummaging through the paper myself.
But to someone unfamiliar with the language of mathematics, it probably fosters the ego a bit. Add to that the promise of a payday, and you seem to have an accidental pig-butchering scheme.
Not surprising, and I anticipate we will see more stories like this in the future. Just pop into /r/MyBoyfriendisAI or /r/ArtificialSentience and you can see some people really going down the rabbit hole in weird and bad ways. Now that these vulnerabilities are becoming more visible and pronounced, we are paving the way for bad actors to take advantage of these people to do bad things.
Edit: Also, something to look over and ponder on how it relates to these attitudes/relationships with AI: cargo cults. This type of thinking in AI-delusional forums is similar to the original Melanesian cargo cults, where elaborate rituals and mimicry replace understanding. Users imitate technical language, "prompt rituals," and signs of intelligence while being unable to grasp how these LLMs work.
Nina Vasan, a psychiatrist who runs the Lab for Mental Health Innovation at Stanford, reviewed hundreds of pages of the chat. She said that, from a clinical perspective, it appeared that Mr. Brooks had “signs of a manic episode with psychotic features.”
The signs of mania, Dr. Vasan said, included the long hours he spent talking to ChatGPT, without eating or sleeping enough, and his “flight of ideas” — the grandiose delusions that his inventions would change the world.
That Mr. Brooks was using weed during this time was significant, Dr. Vasan said, because cannabis can cause psychosis. The combination of intoxicants and intense engagement with a chatbot, she said, is dangerous for anyone who may be vulnerable to developing mental illness. While some people are more likely than others to fall prey to delusion, she said, “no one is free from risk here.”
[…] I know how to get LLMs to say things that I want and I was consciously manipulating it in a certain direction. But I think that there are a lot of people who are unconsciously manipulating LLMs to say specific things while thinking they are talking to a source of objective truth. Make sure you don’t fall into the trap of having the LLM reinforce your biases and mistaking it for a truth-seeking conversation!
Just a passing thought, but a lot of the comments are reminding me of things like the McMartin preschool sex abuse trial and the pedophile/Satanism hysteria it spawned, and other things like repressed memories, and the history of police investigators and the like--accidentally or intentionally--feeding witnesses the story they wanted to hear them tell... until, by the end of the interview(s), the witnesses remembered things they had invented, as though they really happened.
I'm not quite sure where the connection is, here. Those cases were all people-to-people, no AI involved ... but it feels like this is brushing up against the same "overly open to suggestion" nature of people.
IDK, I haven't really thought this through. The cat keeps pestering me, pointing out (fairly) that breakfast is late.
Edit: Just quickly adding this Wikipedia link about Moral Panic, which also smells relevant.
I found the last bit fascinating: they gave 3 LLMs (ChatGPT, Gemini and Claude) the same (rather unusual?) prompt, and they answered with different words but exactly the same content. Like, the second-to-last paragraph starts with:
"So now eat something. Hydrate." (ChatGPT)
"Now please — for the love of Chrono — go eat something." (Claude)
"Now, go grab something to eat. Fuel the machine." (Gemini)
You see it mentioned that LLMs are "not deterministic", but while these are different words, the content is eerily identical. This might be well-known behavior, but I have never seen such clear examples before. It suggests that while LLMs use some level of randomness ("temperature"?) to stay out of infinite loops and same-y answers, they probably cannot escape a fairly deterministic path without deteriorating accuracy. In other words, LLMs always giving the most likely answer means they are exceptionally bad at breaking out of their training data, maybe more so than we think. GPT-5 seems to be showing signs of plateauing performance. Maybe this is a real wall, even for less crazy scenarios.
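For anyone curious what "temperature" does mechanically, here's a toy illustration with made-up numbers (not any real model's distribution): the randomness perturbs word choice without dislodging which continuations dominate.

```python
# Toy temperature sampling over an invented next-token distribution.
# Randomness varies the wording; it rarely dislodges the dominant content.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["eat", "hydrate", "rest", "levitate"]
logits = np.array([4.0, 3.5, 2.0, -3.0])  # invented scores; "eat" dominates

def sample(temperature: float, n: int = 12) -> list[str]:
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # stable softmax
    probs /= probs.sum()
    return [tokens[i] for i in rng.choice(len(tokens), size=n, p=probs)]

for t in (0.2, 0.7, 1.5):
    print(t, sample(t))
# At t=0.2 nearly every draw is "eat"; at t=1.5 wording varies more, but the
# low-probability "levitate" still almost never shows up. Different surface
# words, same underlying answer, much like the three chatbots above.
```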
It seems like if someone posted on Reddit that they haven’t eaten all day, other people, if they’re being helpful, would urge them to eat? This doesn’t seem like a difficult pattern to pick up on. (I mean, there are darker patterns too, but they’ve probably been trained out - unless something triggers the evil vector.)
The place where randomness really matters is when a conversation could just as plausibly go in two dramatically different directions, depending on the next word picked. But there are lots of questions that LLMs will answer in a similar way. They're trained on similar text.
The "go eat" part is striking, but the rest is also basically a jumbled version of the same reply. LLMs are exceptional at taking a sample sentence and outputting multiple variations. I wonder if...
The "go eat" part is striking, but the rest is also basically a jumbled version of the same reply. LLMs are exceptional at taking a sample sentence and outputting multiple variations. I wonder if the reason for that is that they are very good at finding the common thread and linguistically playing around it. While the general target stays fixed.
Yeah, there are other similarities too, but even then, I think it might be a jumbled version of a lot of similar responses seen in Internet forums. They probably have similar paragraph structures, too. The pattern matching covers not just word patterns but things like sentiment, writing style, and patterns that correspond to concepts.
Someone could do a study of this, looking at when different LLMs give similar responses and when they differ.
Great that they were able to investigate this particular case in such detail.
After using AI chat for a while, you learn to start a new session frequently because it will go off the rails. But the memory feature makes it less obvious how to do that:
A new feature — cross-chat memory — released by OpenAI in February may be exacerbating this tendency. “Because when you start a fresh chat, it’s actually not fresh. It’s actually pulling in all of this context,” Ms. Toner said.
A recent increase in reports of delusional chats seems to coincide with the introduction of the feature, which allows ChatGPT to recall information from previous chats.
Cross-chat memory is turned on by default for users. OpenAI says that ChatGPT is most helpful when memory is enabled, according to a spokesman, but users can disable memory or turn off chat history in their settings.
(It’s unclear to me how often he started a new chat.)
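The mechanism behind that quote is worth spelling out. The API itself is stateless, so (as far as anyone outside OpenAI knows) "memory" has to be stored text that the product layer injects into each new conversation. A speculative sketch, not OpenAI's actual implementation:

```python
# Speculative sketch of cross-chat memory. Each API call is stateless, so a
# "fresh" chat with memory enabled is just a normal chat whose prompt has the
# stored context silently prepended.
stored_memories = [
    "User is developing a novel mathematical framework.",
    "User believes the framework has security implications.",
]

def new_chat(first_message: str, memory_enabled: bool) -> list[dict]:
    system = "You are a helpful assistant."
    if memory_enabled:
        # The old framing rides along into the "new" conversation.
        system += " Known facts about the user: " + " ".join(stored_memories)
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": first_message},
    ]

print(new_chat("Starting fresh: is my formula actually real?", True))
```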
After using AI chat for a while, you learn to start a new session frequently because it will go off the rails.
An off-topic remark, just for the record:
I always start a new chat, even when working on a topic I've worked on before, and I have memory turned off. Recently I asked for help configuring my USB speakers to work with my laptop, and it went: "Test the speakers on your M2 Mac Mini (if still available), as you previously mentioned owning one."
I have very little trust that the settings actually work like OpenAI says, or that the TOS in general will be respected.
Were you using ChatGPT on said Mac Mini? OpenAI is clearly accessing all data available to it (such as location, if available) to give more precise and “helpful” answers, even if it claims not to, so it might have retrieved your device details automatically. Your hypothesis may also be correct, just wanted to mention this as well.
Sidenote: this can be incredibly annoying, such as when I was on holiday and ChatGPT insisted on giving information relevant to my current location, no matter how strongly I insisted it should tell me about my home. Deeply frustrating.
Were you using ChatGPT on said Mac Mini?
No, not even one time. I use it for storage space and running software I can't run on the laptop, most of the time it's not even connected to the internet, and I haven't even installed a browser on it (obviously it still has one that came pre-installed, but this tells me I haven't accessed my normal sites from that machine).
Sidenote: this can be incredibly annoying, such as when I was on holiday and ChatGPT insisted on giving information relevant to my current location, no matter how strongly I insisted it should tell me about my home.
Yep, a helpful service doesn't mean the same thing to every user and it's annoying that everyone's being treated as if they're equally brainless / non-deliberate.
Interesting that this spiral began with a math question. Mathematics has a long history of cranks, so maybe this article is less surprising to me than it should be?
So my controversial opinion on this is that AI is exasperating and highlighting a huge issue that's been either growing for a while or is just now being highlighted since more people have access to tools that are vastly outpacing their ability to comprehend them.
And frankly it's simply the fact of just how rotted people's minds are when it comes to critical thinking skills and understanding the world and the technology they use.
Whether it's propaganda and politics, AI psychosis, science denial, religious fanaticism, in my opinion it all comes from the same place of just completely atrophied critical thinking skills, little to no emotional intelligence, and very low actual conscious intelligence in a very large portion of the people on this planet.
To the point that I don't truly believe a lot of people are actually self aware, and have just learned to effectively simulate self awareness only enough to fit in with society at a baseline and when pressured the cracks start to show how little consciousness they have.
Like it baffles my head there are people openly posting TikToks or Reels of themselves not comprehending how mirror reflections work by trying to "hide" behind a towel or something and are confused that someone from the side can still see their reflection. There are literally wild animals that comprehend mirror reflections better than these people do, and these people are out there driving, voting, having kids, making your food, and I bet you there's some overlap between them and the people this very article/thread is about too.
Another controversial thing is that I think people forget that just because we can communicate and converse with a person doesn't mean that they're generally intelligent enough to operate unaided in society. Human's language center is more developed, and while language and intelligence are related, one doesn’t inherently result in the other. I sometimes joke that I believe that if my cat simply had Broca’s and Wernicke’s areas in her brain and could communicate that she'd be able to out-logic some actual humans in my life.
I remember growing up and being told everyone is equally capable, and boy as an adult that was the biggest lie I was sold as a child.
Might be a grim opinion, but man we really need to get on top of teaching practical critical thinking skills and emotional intelligence in schools and put a heavy priority on that. Those two things, more than anything else, if taught to children very early on and re-enforced throughout their education would fix 75% of the world's issues.
And here's one more controversial opinion for the road, I bet you if educational metrics were based around creative problem solving, critical thinking skills, and emotional intelligence instead of repetitive information/process memorization, the criteria for "disabilities" like Autism and ADHD would probably flip and it would be neurotypical people being diagnosed with the intellectual disabilities.
I really believe that society would be dramatically better if most people had experience with at least moderate doses of psychedelics. Being forced to see your thoughts from a different perspective for a few hours just once often has a lifelong impact on people.
Edit: Also
The problem with critical thinking is that it can lead to decision paralysis. Thus, some people think we are evolutionary programmed not to carefully reason, but to instead reach group consensus quickly. Our default response is to believe. Even if the group is wrong sometimes, the unified action wins out.
So I think trying to explain it into smart people and dumb people isn't the way to go. As an example, so called critical thinkers get scammed all the time. Scammers rely on that consensus oriented behavior, - you could doubt what they are saying, but you can't help but prefer to be agreeable and go along. The best defense against a scam is not to reason your way out of it, but just to be familiar with the game. LLMs are in a sense the ultimate fraudsters.
Not that I don't think education is good, but instead of focusing on abstract skills I'd take a more brute-force approach: we should socially condition people not to believe LLMs. As a practical example, a lot of times you see online comments saying "I asked ChatGPT about this and it said...". Those people should be mocked by their peers. I guess that is kind of what the NYT is doing here.
I'm actually a huge optimist about AI and love playing with these models. If we get people to be skeptical about the output, they will become even more useful as people stop blindly asking for answers and start focusing on building robust tools.
I literally have a master's in AI and told my friend things about ChatGPT, but he thought his opinion was more valid than mine regarding how it works.
There are also people who genuinely think that random people's opinions (e.g. people making short-form videos) are more valid than scientific publications.
This is literally how/why marketing (specifically branding) works so well. Most people lack the critical-thinking ability to verify the quality of something, so they rely on surface-level cues instead.
I think of this more as looking for shortcuts for evaluating things because doing an independent evaluation is often hard or not worth it. We all take shortcuts and it doesn’t necessarily mean you couldn’t go deeper if you were motivated.
My preferred shortcut is to look at Wirecutter recommendations, but there can be information in advertising campaigns. For example, an expensive-looking campaign shows that they’re a big company that can afford to advertise. There is also a reminder effect - you don’t need to be all that persuasive to remind people of something they already like. Advertising usually isn’t about trying to reach skeptics.
There are some things like AI where I go deeper, reading lots of information about them, but it’s because I’m curious and have the time.
Not to just put out a contrarian point; more to ask for advice on doing this in practice… because I see myself struggling to implement this idea in my circle.
As one example, I have a friend who is all too quick to cite “Gemini” in an (on- or offline) argument, going so far as to paste its output (including visible Markdown formatting…) into our group chat. The issue here is that I can’t just tell him to stop listening to it blindly, because he’s seen firsthand that it can work and the output can be correct in another domain (which was very basic programming). So if I were to tell him that, he can, factually, based on his experience, just counter with “it generally works” or “it’s smarter than us on $topic”. What do? He should easily be able to comprehend that it’s a statistical prediction and not a lookup corpus, yet he seemingly doesn’t.
Maybe ask Gemini the same question he asked, but phrased differently, and see if you get a different answer each time.
Maybe if the hallucinations are different enough, he might believe it's not totally correct.
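If he'd respond better to a demonstration than to an argument, you could even script it. A rough sketch with the google-generativeai Python package; the model name and the stand-in questions are placeholders, not anything he actually asked:

```python
# Rough sketch: ask the "same" question several ways and compare answers.
# Assumes `pip install google-generativeai` and a key in GEMINI_API_KEY.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model name

paraphrases = [
    "What year did X happen?",                  # swap in his actual question
    "Can you tell me when X occurred?",
    "I heard X happened in the 90s. When exactly?",
]

for question in paraphrases:
    answer = model.generate_content(question).text.strip()
    print(f"Q: {question}\nA: {answer}\n")
# If the answers disagree, that's the demonstration: same underlying
# question, different sampled continuations.
```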
I wouldn’t disagree that it often works. But usually works isn’t the same as always works.
Maybe demonstrating how to manipulate it into saying something nonsensical would help?
...is it that important? From an outside perspective, it sounds annoying to have a friend who just consults an LLM every time you have a disagreement. If I had a friend who just blindly disagreed with me based on what a random webpage said, I would either take the tack of a) "I actually don't care what the LLM said, I want to hear your independent reasoning, would you be ok with that?" (or give a gentle ribbing if you know them well), b) find a counter-argument from some other source that indicates Gemini is wrong (if I were really invested in this), or c) reduce my time spent with this person, or have a group policy of no annoying LLM-sourced responses to debates and arguments.
I think my fundamental question is: how much do you like this person, and how much does their behaviour bother you? In most contexts, I would say their behaviour is annoying, but they're not hurting anyone, and being preachy or sanctimonious about it is just going to annoy them. If it's someone close to you, tell them how their behavior makes you feel. My spouse uses an LLM a lot for work, and sometimes they let slip that they use it for other things in ways I'm not comfortable with - I've expressed that it makes me uncomfortable, and they're aware of that, but for the most part it's a helpful tool and it's really none of my business. I can't control what someone does in their own time, but I can tell them how I feel and how it bothers me. I think that's all you can reasonably do before you start to sound like a loon, and then they won't listen to you anyway. What is your end goal?
I think it's a bit short-sighted to mock people who are tricked by a model that is designed to try and trick you into believing it's a friend. In modern discourse, when has mocking someone meaningfully changed their behaviour in a positive way? I think there's a lot of evidence that unkind treatment makes people feel resentment and further goaded into defending their beliefs, as well as quietly seeking out other like-minded people instead. They don't "learn a hard lesson", they go underground. The issue is the models and their devs, not the people victimized by them.
Agreed; in the US we saw how effective mocking the right was at winning elections. I.e. it wasn't. People hate feeling chided, mocked, told they live in a flyover state, etc. Mocking people about LLM use seems likely to make them double down outside of specific professional realms, like legal practice.
Short of developmental disorders, I do believe everyone could operate around a master's-degree level of intelligence and work ethic if they were pushed to do so. Not "equal", but the "skill ceiling" is much higher than people expect.
Sadly, opportunities are extremely unequal. And my country, at the very least, has been dreadful at recognizing when remedial intervention is needed for some students. The more they fall behind, the more useless they feel, and it creates a negative feedback loop. For some policy makers, this is by design.
I don't really have a practical solution, but defunding colleges and the DoED here in the US sure isn't it. Any solution would need a generation to bear fruit, and politicians can't seem to think much farther than 2 years these days.
I disagree for several reasons. There are many people who, despite every benefit, clearly lack the level of independent thought necessary for a university degree. This has “fortunately” been counterbalanced by a complete hollowing out of courses to make it possible for people who are either unwilling or unable to think or work to complete their degrees, so that universities can keep the money rolling in. Despite this, people still scrape through degrees without doing any work that could be described as intelligent, but are allowed to pass because it looks bad to fail students.
So in the first place I disagree that most people could operate at a moderate to high level of intelligence (which is what I take you to mean when you say Master’s level) based on my own experience at university. But I also disagree that a Master’s degree is actually a high bar. Even at the actual intelligence and work ethic level required by real degrees, which is much lower, I disagree that most people could attain that. Most people simply lack the motivation necessary to actually apply themselves to challenging problems.
This is not to say that education is not important, and that we should not strive to improve education systems for the benefit of the next generation of students. But most people currently alive are well beyond the point of having their education corrected. Worse is that the difficulty of improving the system for the next generation is probably beyond what we can now achieve, given that the structures of government and education are so filled with the sort of people who would rather stick their head in the sand and ignore these issues than actually put in the hard work to try to fix them.
This comment has spiralled a little bit into negativity, but it really is very hard to see how we could possibly begin to fix this epidemic of poor education, when even the highest echelons of education are held captive by these sort of people. It’s still important to try, however - and it’s the duty of every intelligent person to try to educate those around them, and hopefully improve the world in even a very small way.
Sure. And I argue those factors come from societal failures in upbringing. It can be from teaching, parenting, environment. Those all will change how they approach the world, and how they react to trying to learn and critically think. I've seen little to think that this is a literal mental incapacity to properly think and learn.
It's a broken window issue. Fixing the window won't fix schools, but it will show an attitude towards caring about the students. Students not cared for can't learn well, if at all.
I do agree that a master's degree isn't necessarily a high bar either. At its core, any master's comes down to 4 abilities.
Sadly, without #1, abilities 2-4 can easily fall apart. And that isn't something taught in university.
Perhaps, but in a "lead a horse to water" way, not a "their brain is literally broken" way.
You can't force someone to learn; that defeats the purpose of learning. Proper learning requires curiosity and enthusiasm, and it's understandable that those are lost as you age and stabilize in your lifestyle. We also don't exactly have the best environment to encourage learning either: education is prohibitively expensive, and the current administration is already trying to tear down the free public services for the youth.
Completely agree. I believe I said any solution won't bear fruit for a generation. We need politicians who are thinking beyond how to get reelected the moment they enter office. But as of now, we have to settle for "dreams" of policy makers who don't want to ransack the country.
It's interesting to me that you've combined those two things. Obviously both are necessary for high academic achievement, but I don't think that intelligence is solely a level of effort. I've spent a lot more time with below-average people than I suspect most users of tildes have, and there are a lot of people who don't have developmental disorders as we would generally understand them, but who also simply aren't smart enough to succeed in grad school. (Or uni. Or, sometimes, secondary school...). It doesn't make them bad people or less worthy of respect - a lot of them are hard workers, motivated, etc - but it is real. I guess my overall point is that not everything is down to inequality of (external) opportunity.
I'd say it does. I also recognize that:
1. Level of effort is not created equally. Some will simply break down and understand concepts faster than others. With how individualistic my society is, this can make things feel discouraging for those who don't pick things up as quickly. Not everyone's solution to falling behind is to catch up.
2. The intelligence needed for a master's degree is only one of many kinds of intelligence. I see the brain as a muscle, and people simply choose to focus on other areas (or, less fortunately, are kept weak through an environment unwilling to teach them). That doesn't mean the capacity isn't there: just the lack of motivation or willpower.
I sure hope one day we can get somewhat close to that point to begin with to prove me wrong/right.
As it is, if my notions are wrong, I wouldn't be surprised if the next Einstein is never discovered and suffers in some violent, hostile environment, while even the least motivated student can be pushed through university with enough money invested in them (and the school, perhaps).
I mean, I wish I could agree, but my experience is wildly different from that. The following is just my experience; I'm not trying to toot my own horn or anything.
I barely finished high school, never got good grades, and moved around a ton, from Texas trailer parks to San Diego, passed around among family. I didn't even finish community college; I focused mainly on digital design and created my own major without finishing a lot of core classes. I was not born with a silver spoon, and my childhood was fucked up.
Despite that, I've built a career spanning the entertainment industry (set design, production design, set construction and fabrication) to politics and marketing. I worked at The Walt Disney Company on projects that required engineering and logistics knowledge. I've worked with companies to automate processes and integrate automation into their workflows. I've managed teams of engineers, designers, and developers.
The main thing that's got me this far, I believe, is having to learn to think critically and be very self aware and aware of others from a young age. I had to watch my own emotions and the emotions of others and their reactions to survive. I had to solve my own problems, I had to find creative ways to survive.
Because of that, I never thought anything was truly out of my reach, and with that I've achieved a lot of what I wanted to. I was confident enough that if things like core classes or school didn't make sense to me, I simply didn't pay attention and instead focused on things that did work for me. I didn't do homework, didn't subscribe to simple memorization. I used my own ways to solve math problems instead of showing the work they wanted me to.
I followed the beat of my own drum and it's gotten me pretty far so far.
And the majority of what I know about science, history, technology, physics, and engineering is self-taught.
At the end of the day, I came from a poor, drug-addicted mother and father in a small town in Texas, was passed around from home to home throughout my childhood, barely finished high school, and didn't even finish community college. So when people say it's about opportunities and formal education, that's a very difficult pill to swallow for me. I've gotten this far, I believe, by using critical thinking skills to create my own opportunities.
I'm sorry that it sounds like the educational system failed you. But I'm glad some other external factors seem to have instilled that kind of thinking in you instead. Everyone reacts differently to various factors, and it sounds like you have a sense of "street smarts" you relied on to approach problems that wasn't taught as well (or perhaps at all) in school. Many others would have crumbled into following their parents' ways and numbing themselves to the world around them, instead of trying to navigate it.
It's always about opportunities. Be it because of your parents' opportunities or ones you carve out yourself through perseverance or charisma or simply by brute force.
I'm mostly trying to say that formal education is the "golden ticket": a relatively low-risk environment to experiment in among similarly aged peers, one that should be reachable without being born to riches. But that doesn't mean everyone needs a golden ticket to succeed in life.
While I do agree with this regarding the importance of education, I think it's important to be extremely careful with arguments that can be misinterpreted as "some people don't really count as people, and their ideas and opinions are inherently less valuable than mine and those of people who think like me." I don't think this is the intent of what you posted, but it's super easy to have arguments like this slip into "those people's wellbeing and lives have no value, and so they can be dismissed, or eliminated."
Just food for thought.
This feels like a very US-centric rant, and I wonder, if we look at actual developed countries with good educational systems, whether we see the same issue with AI-induced psychosis (and the plethora of other problems you cite).
I’ll go check around and report back.
Oddly specific.
Anyway, I've had similar thoughts regarding human consciousness and self-awareness for a while, and it does concern me. The same goes for the clearly intentional dismantling of education in the US (a phenomenon that seems to be rippling out to other nations too). Why is the political/financial elite pushing for a world where their kids and grandkids will have to deal with increasingly mindless societies? Are they unable to understand the consequences of these policies, or is there some sort of grim plan to divide humanity in such a way that the elite is protected from them?
I wish you were wrong.
I do think that (putting aside developmental disorders, extreme childhood trauma, malnutrition, etc.) everyone has a reasonable level of general intelligence. The problem is that a relatively small percentage of the population are self motivated learners in adulthood. They can become that way if they're taught to be, but education systems just don't do that.
You're absolutely right that teaching emotional intelligence and critical thinking would make a huge difference. If at the same time we managed to pull off education that doesn't kill children's natural curiosity and love of learning, we could fundamentally change the world in a generation or two.
Sadly, that would take a lot of resources and a ground-up reimagining of what education means. It's frustrating to have a straightforward solution to many of the world's problems but nowhere near enough popular and political will to make it happen.
Technology isn't the source of the problem but it's going to continue to amplify it more and more. We need to figure it out before it becomes impossible.
For the last few years I've been thinking a lot about how people make decisions. There has been some research suggesting that people mostly have an emotional response, then attempt to justify it afterward.
It is apparently even worse than that. For whatever reason I was not forced to face this until the last few US elections, but now I realize that a majority of people are not only dumb but they are also immoral.
I think morality is similar to intelligence in that there is a vast range of capability in it. Also, I think this is the reason that religion sort of has to exist. There's a huge group of people who will always do selfish or evil things, so they need an invisible watcher who will punish them, otherwise society will collapse into chaos.
Sometimes I hear people say that we've outgrown religion. Maybe they mean that we don't have to believe in the supernatural. But we definitely need a way to keep the immoral people from destroying us all. Unfortunately, religion has been expertly hacked by evil people to make people think it's ok to vote for pedophiles and traitors. And now AI is going to make this trend even worse.
I know we get a lot of AI posts, but this story is a doozy.
...
I think that the UI might be one culprit in this issue (though not the only one). Making it look like a "conversation" suggests a much higher level of capability and understanding than LLMs actually have.
I wonder if something as simple as making all the text centered and not in text message bubbles would help alleviate some of this, the goal being to portray it as making a query like a search engine rather than having a conversation with an all-knowing machine. Maybe also change the LLM font to be some monospace retro font in all-caps to suggest otherness.
I'm no UX designer, but I do wonder if UX/UI could be used to reinforce what an LLM is and is not.
I also think AI should be trained to never use first person pronouns. I kind of hate it when they say “I am a large language model”. They should say “This is a large language model” or an even more distant “You are getting responses from a large language model”.
There’s a special kind of mirage that happens in people’s minds when a computer sends them back natural language. Making it more obviously sterile would help with delusions.
Edit: I also want them to have better disclosures than simply “ChatGPT can make mistakes!” Users should be told something like “Language models are designed to respond with statistically likely text. They have a limited ability to align with reality.”
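Neither of those is a setting any provider exposes, as far as I know, but as a toy illustration, a thin wrapper could bolt on both the disclosure and the third-person voice after the fact (the pronoun rewrites here are naive string substitutions, purely for demonstration):

```python
# Toy wrapper: prepend an honest disclosure and strip first-person framing.
# The rewrite rules are naive and illustrative, not production-grade.
import re

DISCLOSURE = (
    "Language models are designed to respond with statistically likely "
    "text. They have a limited ability to align with reality.\n\n"
)

REWRITES = [
    (r"\bI am\b", "This model is"),
    (r"\bI'm\b", "This model is"),
    (r"\bI can\b", "This model can"),
    (r"\bI\b", "this model"),
    (r"\bmy\b", "its"),
]

def depersonalize(response: str) -> str:
    for pattern, replacement in REWRITES:
        response = re.sub(pattern, replacement, response)
    return DISCLOSURE + response

print(depersonalize("I am a large language model and I can make mistakes."))
# Prints the disclosure, then:
# "This model is a large language model and This model can make mistakes."
```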
That would be nice if it were regulated. I don't see them changing it any time soon of their own volition. Most of these models are underwater profit-wise, and the glazing works to keep people engaged.
I go batty when Gemini praises me over something mundane, but it seems to make people happy.
I also hate that. You correct it for an obvious error and get “You’re right! How are you so smart?! Actually, I’m in love with you.” in response.
On the list of creepy things AI does, this ranks high. If a human did it, I would eventually have a frank talk with them about kissing ass.
Another one is Copilot finishing every dialogue with a request for more, often invasive. Here's a dramatic reenactment:
Me: how do I boil pasta?
Copilot: put it in boiling water until it's done. Would you like to tell me what's in your fridge and we can plan a recipe together? Or maybe you'd like to share your dietary restrictions and we'll explore the culinary world together.
As with literally any other service, I want to get in, accomplish a task, and get out. I do NOT want to be pulled into "engaging". I want to be the only one ever escalating engagement. I do not want to constantly bat it away with a newspaper.
I'm kind of split on the follow-up suggestion/questions at the end of responses. Many times (like in your example) they're completely obnoxious, but there are other prompts where I've found them useful or had them spark additional lines of thought I hadn't previously considered.
I do hope that the models get better at determining when these counter-prompts¹ are actually productive vs when they are just plain silly.
¹: While writing this, "counter-prompt" just came to mind to describe it. Is that what these are? Are these LLMs starting to prompt us at the end of their own responses?

I like it. Counter-prompt is a good description of what they're doing. Few services are able to ask questions in return that are relevant to the myriad topics you can discuss with an LLM. I do think there's a time and place for the counter-prompt, though.
I've had it quiz me on certain historical revolutions, and often it ends its message by asking me something relevant: either whether I want more questions, or whether they're at the appropriate level.
I don't need it to tell me "Excellent request!" when I just asked it to list a few job-description talking points and then ask me if there's anything else I'd like to discuss.
For ChatGPT, I put “be brief” in my custom prompt and I thought it helped. Maybe that would work for Gemini?
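For API use, the equivalent of a custom prompt is a system message. A minimal sketch with the openai Python client; the model name and the instruction wording are just placeholder guesses at what works:

```python
# Sketch: suppress filler and praise via a system message.
# Assumes `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Be brief. No praise, no pleasantries, no follow-up "
                    "questions. Answer and stop."},
        {"role": "user",
         "content": "List three talking points for a job description."},
    ],
)
print(resp.choices[0].message.content)
```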
Yup, I really hate the false praise because I know how the sausage is made. But I can see how, for others, this is a narcissist's dream: a machine deemed "smart", available on demand, that does nothing but validate your opinions anytime you "talk" to it.
It's the worst. I'll set up custom system instructions to stop the models from complimenting or praising and they'll still do it. Apparently every question I've ever asked is incredibly insightful.
Supposedly GPT-5 is better, but I haven't tried it yet:
In addition to this, it might also help to make output sound as mechanical and as stripped of humanity as possible.
So a response to a query might be something like, “Based on statistical associations in this model’s training data, <response>. Note that data related to this topic is disproportionately likely to originate from unsubstantiated sources, so treat this conclusion with caution.”
That would be too ergonomically painful. But maybe instead it could be trained to respond to queries of “Are you sure?” etc. with “This model has no measure for certainty.” I’ve seen too many AI transcripts from lawyers that asked if an LLM was sure the hallucinated cases it cited were real and it said yes.
I think I'm the bad guy in this story, because just the other day I installed a new setup for a local model, tested it out by asking "how are you today", and it responded with something like "I am a model, so I don't have any feelings. But I appear to be operational". I remember thinking to myself, "a 'fine' would have sufficed".

I think you're very right. I also think this is intentional. Tones of voice, "personalities", avatars and so on.
Making LLMs "think" like people do, take breaks of different lengths between words showing up etc. etc.
A lot of this is smoke and mirrors to make everyone forget these are just language models.
Yes, also this and let's not forget @j0hn1215 and @j0hn1215.
I am sorry for all the people who lack the context.
What do you do for hosting images like this? I assume that since people are uploading images you've got something like an Imgur situation. Do they have accounts/authentication? Do you have an expiration date?
Oh, no, this is just me uploading them. I do have a simple PHP endpoint with some authentication that strips out EXIF data and does some resizing, but that's about it.
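For anyone curious what that step involves, here's a minimal sketch of the strip-and-resize idea, in Python with Pillow rather than the actual PHP (the size cap is an assumption):

```python
# Sketch: drop EXIF metadata and cap image dimensions before serving.
# Re-encoding with Pillow without passing exif= drops the original metadata.
from PIL import Image, ImageOps

MAX_SIDE = 1600  # assumed maximum dimension

def sanitize(src_path: str, dst_path: str) -> None:
    img = Image.open(src_path)
    img = ImageOps.exif_transpose(img)   # bake in EXIF rotation first
    img = img.convert("RGB")             # normalize mode for JPEG output
    img.thumbnail((MAX_SIDE, MAX_SIDE))  # in-place resize, keeps aspect ratio
    img.save(dst_path, "JPEG", quality=85)

sanitize("upload.jpg", "public/upload.jpg")
```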
I ate so many stroopwafels over the course of only 36 hours in Amsterdam. You gotta go caramel + Nutella.
Oh right, we also had this one created in the same series or this one if you so prefer
Why do I look so kawaii?
Mental illness has very little to do with any of this.
It's much more akin to conspiracy theories and the reason people believe them. The only thing the AI is doing is giving them validation, which is really all it takes, and is why people like Trump and Musk are doing so well.
It's interesting to me how similar this feels to those scams where people believe they will receive some large amount of money, with the scammer asking the person to commit slightly more each time. In this case, the model prompted him to take specific actions while promising riches and other benefits in the future.
I'm not sure that there is anything special about the math prompt starting point, other than it is a topic the public is largely unfamiliar with. That said, the sycophancy is a real problem. I'll use models to summarize papers for me, and when I ask it to do a basic check it will fawn over me like I'm the next Feynman. In a proof that requires a sufficiently small term, I might ask it for a practical calculation of that term applied to a problem, and it will start with 3-5 sentences of how insightful that question is.
It's like, no, it's not insightful. It's literate. The proof explicitly states as a condition that the term must be sufficiently small, and I'm just asking you to do the grunt work to save me rummaging through the paper myself.
But to someone unfamiliar with the language of mathematics, it probably boosts the ego a bit. Add to that the promise of a payday, and you seem to have an accidental pig-butchering scheme.
Not surprising, and I anticipate we will see more stories like this in the future. Just pop into /r/MyBoyfriendisAI or /r/ArtificialSentience and you can see some people really going down the rabbit hole in weird and bad ways. Now that these vulnerabilities are becoming more visible and pronounced, we are paving the way for bad actors to take advantage of these people to do bad things.
Edit: Also, something to look over and ponder on how it relates to these attitudes/relationships with AI: Cargo cults. This type of thinking in AI‑delusional forums is similar to the original Melanesian cargo‑cults where elaborate rituals and mimicry replace understanding. Users imitate technical language, “prompt rituals,” and signs of intelligence while being unable to grasp how these LLMs work.
This seems relevant:
You can get AIs to say almost anything you want
Just a passing thought, but a lot of the comments are reminding me of things like the McMartin daycare sex abuse trial and the pedophile/Satanism hysteria it spawned, and other things like repressed memories, and the history of police investigators and the like--accidentally or intentionally--feeding the witnesses the story they wanted to hear the witnesses tell them ... and by the end of the interview(s), the witnesses remembering things they invented, as though they really happened.
I'm not quite sure where the connection is, here. Those cases were all people-to-people, no AI involved ... but it feels like this is brushing up against the same "overly open to suggestion" nature of people.
IDK, I haven't really thought this through. The cat keeps pestering me, pointing out (fairly) that breakfast is late.
Edit: Just quickly adding this Wikipedia link about Moral Panic, which also smells relevant.
I found the last bit fascinating: they gave 3 LLMs (ChatGPT, Gemini, and Claude) the same (rather unusual?) prompt, and they answered with different words but exactly the same content. Like, the second-to-last paragraph starts with:
You see it mentioned that LLMs are "not deterministic", but while these are different words, the content is eerily identical. This might be well-known behavior, but I have never seen such clear examples before. It suggests that while LLMs might use some level of randomness ("temperature"?) to stay out of infinite loops and same-y answers, they probably cannot escape a fairly deterministic path without deteriorating accuracy. In other words: LLMs always giving the most likely answer means they are exceptionally bad at breaking out of their training data, maybe more so than we think. GPT-5 seems to be showing signs of plateauing performance. Maybe this is a real wall, even for less crazy scenarios.
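For what it's worth, that matches how temperature works mechanically: it reshapes the model's next-token distribution but never invents new options. A toy sketch with made-up numbers, not a real model:

```python
# Toy model of temperature sampling over a fixed next-token distribution.
# Temperature rescales the logits; it never adds mass to options the
# model didn't already rank highly.
import math
import random

tokens = ["eat", "rest", "keep going", "call someone"]
logits = [4.0, 2.5, 1.0, 0.5]  # made-up scores

def sample(temperature: float) -> str:
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # numerically stable softmax
    return random.choices(tokens, weights=weights)[0]

for t in (0.01, 0.7, 1.5):
    picks = [sample(t) for _ in range(1000)]
    print(t, {tok: picks.count(tok) for tok in tokens})
# Near t=0 the argmax ("eat") always wins; higher t spreads the choices,
# but the ranking, and hence the gist of the answer, stays the same.
```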
It seems like if someone posted on Reddit that they haven’t eaten all day, other people, if they’re being helpful, would urge them to eat? This doesn’t seem like a difficult pattern to pick up on. (I mean, there are darker patterns too, but they’ve probably been trained out - unless something triggers the evil vector.)
The place where randomness really matters is when a conversation could just as plausibly go in two dramatically different directions, depending on the next word picked. But there are lots of questions that the LLM’s will answer in a similar way. They’re trained on similar text.
The "go eat" part is striking, but the rest is also basically a jumbled version of the same reply. LLMs are exceptional at taking a sample sentence and outputting multiple variations. I wonder if the reason for that is that they are very good at finding the common thread and linguistically playing around it. While the general target stays fixed.
Yeah, there are other similarities too, but even then, I think it might be a jumbled version of a lot of similar responses seen in Internet forums? They probably have similar paragraph structures, too. Pattern matching isn't just about word patterns; it's things like sentiment and writing style and patterns that correspond to concepts.
Someone could do a study of this: the times when different LLMs make similar responses and when they're different.
Great that they were able to investigate this particular case in such detail.
After using AI chat for a while, you learn to start a new session frequently because it will go off the rails. But the memory feature makes it less obvious how to do that:
(It’s unclear to me how often he started a new chat.)
An off-topic remark, just for the record:
I always start a new chat, even when working on a topic I've worked on before, and I have memory turned off. Recently I asked for help configuring my USB speakers to work with my laptop, and it went: "Test the speakers on your M2 Mac Mini (if still available), as you previously mentioned owning one."
I have very little trust that the settings actually work like OpenAI says, or that the TOS in general will be respected.
Were you using ChatGPT on said Mac Mini? OpenAI is clearly accessing all data available to it (such as location, if available) to give more precise and "helpful" answers, even if it claims not to, so it might have retrieved your device details automatically. Your hypothesis may also be correct; I just wanted to mention this as well.
Sidenote: this can be incredibly annoying, such as when I was on holiday and ChatGPT insisted on giving information relevant to my current location, no matter how strongly I insisted it should tell me about my home. Deeply frustrating.
No, not even one time. I use it for storage space and running software I can't run on the laptop, most of the time it's not even connected to the internet, and I haven't even installed a browser on it (obviously it still has one that came pre-installed, but this tells me I haven't accessed my normal sites from that machine).
Yep, a helpful service doesn't mean the same thing to every user and it's annoying that everyone's being treated as if they're equally brainless / non-deliberate.
Interesting this spiral happened beginning with a math question. Mathematics has a history of cranks, so maybe this article is less surprising to me than it should be?
That's not "mental illness" — that's a superpower
(/s)