37 votes

Why do some people post ChatGPT answers to discussions/debates/questions?

This behaviour is thankfully not that common on Tildes. I could understand it if they tried to pass the answer off as their own argument. But what about posts prefaced with "I asked ChatGPT" and ended with "I don't know enough about the topic" or "What do you think?" How do they think that contributes to the discussion? If the OP wanted a ChatGPT-style answer, they could just log on and ask it right then and there. And they clearly know the stigma and drawbacks (at least I hope so), yet still believe the answer contains enough factual information despite having little or no knowledge of the topic in question (otherwise they would edit the output, or just provide their own answer outright).

(Sorry if this doesn't come across clearly; I'm not very good at conveying my ideas, even in my native language.)

37 comments

  1. [2]
    LetterCounter
    It seems like a waste of everyone's time. Like a worse version of "here's a link to Google"

    59 votes
    1. arqalite
      At least for that, we used to have LMGTFY to make it a little humorous.

      17 votes
  2. [7]
    0x29A
    I think it gives people a false sense of being helpful, and, even further, it is based on an untrustworthy way of obtaining information, so there are layers of problems with this approach. It feels like a way to pretend to contribute without actually being involved in the discussion; it's soulless (just like the art AI creates, IMO).

    While I lean against ChatGPT and other generative tools in general, if someone wants to contribute to a discussion, asks ChatGPT something, cross-references it with other real, valid sources, forms their own set of opinions and knowledge about the subject, and only then joins the discussion, then I have a lot less of a problem with it. But honestly, that entire workflow would be better without AI involved at all...

    Mostly it's just being used as a lazy shortcut, unsurprisingly. It's basically the same as someone who knows nothing about a subject going to a webpage and copy-pasting the information verbatim into a discussion (which also happens, and is just as unhelpful).

    28 votes
    1. [5]
      cazydave
      This I understand, but I wonder why the preface. Do they feel guilty about relying on it, but still want to be helpful? Because the internet's ability to call out an answer as AI, rather than just ill-informed, is suspect at best; they could just omit the warning, and it would take a long while (or maybe forever) for anyone to point it out while arguing with them.

      2 votes
      1. [2]
        irren_echo
        It absolves them of responsibility if the answer is poorly received. "Oh, you disagree? Well don't blame me, ChatGPT said it!" Kind of like "I heard a rumor.... But it's just a rumor, I never said it was true (so don't blame me for passing on false information)."

        17 votes
        1. cazydave
          I sincerely hope that attitude isn't as prevalent as I thought.

      2. [2]
        0x29A
        Hrm, it is difficult to understand their intentions. One possibility could be that people prefacing posts with "ChatGPT says" mistakenly think ChatGPT's output is truthful / canonical / authoritative, and so assign far too much "weight" to it; they think that tossing its output into a discussion will somehow make their argument for them, or bring some undeniable truth into it?

        Not sure. Could be guilt, like you said. It might depend on the discussion/conversation, there are probably different reasons people do this, depending on the context.

        But I have seen, time and time again, despite any of these chatbots' disclaimers to the contrary, that people, especially those who aren't tech-savvy, assume ChatGPT's output is true / smart / etc.

        5 votes
        1. cazydave
          Your claim about people trusting ChatGPT too much is likely true in most cases, given what I've seen IRL. I hope that when the hype dies down, people will take a grounded look at its capabilities, although given that a lot of AI work needs hype to pull in funding, that won't happen soon enough.

          1 vote
    2. public
      The process you describe in the middle paragraph is exactly the same as what academics teach as proper use of Wikipedia. Read the article to get a summary of the topic, then head to the citations to delve into the details and nuances. Much better at giving an understanding of context than attempting to synthesize a bunch of sources from a search.

      2 votes
  3. [11]
    krellor
    Well I'm not an expert, but I asked ChatGPT:

    User
    Please answer the following question: "This behaviour is thankfully not common on tildes? But like, I understand that if they try to pass off as their own argument. But what with the preface "I ask ChatGPT" and then end with "I don't know enough about topic" or "What do you think". What do they think how that contribute to the discussion? If OP want to ask ChatGPT-like answer, they can just log on and do it right there and then. And they clearly know the stigma and drawback of it (at least I hope so), but still believe it has enough factual information in the answer despite having little or no knowledge of the topic in question (Otherwise they will edit the output or outright just provide it).

    (Sorry, if this come out not clearly, I am not very good as convey my idea, even in my native language)"

    ChatGPT
    It seems like you're questioning the value of users prefacing their questions with "I ask ChatGPT" and then ending with statements like "I don't know enough about the topic" or "What do you think." You're wondering how this contributes to the discussion, especially if the user is aware of the limitations of relying solely on ChatGPT's responses. It's possible that some users believe they can still glean valuable insights or provoke thoughtful discussion by framing their questions this way, even if they lack expertise on the topic.

    However, it's important for participants to recognize the limitations of such contributions and strive for informed dialogue where possible.

    User
    Why do you think people would contribute ChatGPT answers without otherwise having knowledge or an opinion relevant to the discussion?

    ChatGPT
    People might contribute ChatGPT answers even if they lack personal knowledge or opinions for several reasons:

    • Curiosity: They might be genuinely curious about what ChatGPT has to say on a particular topic and want to explore the AI's capabilities.

    • Testing the AI: Some users might want to test ChatGPT's accuracy or versatility by asking questions across different domains.

    • Facilitating Discussion: They might use ChatGPT's responses as a starting point to spark discussion or debate within a community.

    • Convenience: It could be easier for some users to rely on ChatGPT's responses rather than conducting their own research or forming their own opinions.

    • Lack of Expertise: Users might genuinely lack expertise on a topic but still want to engage in the conversation by contributing what they perceive as relevant information.

    • Experimentation: Some users might be experimenting with different ways of interacting with AI or online communities.

    Overall, while there can be valid reasons for contributing ChatGPT answers without personal knowledge or opinions, it's essential for users to be transparent about the source of their contributions and to recognize the limitations of AI-generated content in certain contexts.

    ChatGPT can make mistakes. Consider checking important information.

    Joking aside, I think there are a few reasons, some of which ChatGPT mentioned. People like to contribute, and AI gives them one way to do that. They might like the idea of being helpful but don't have the expertise, knowledge, or time to really dig into the question.

    I suspect some of it is just participating in the new hotness, and maybe being on the younger side so they aren't quite aware of how unengaged or unserious that behavior can seem, depending on the topic.

    Honestly, I don't think folks relating ChatGPT answers really scrutinize their motives all that much.

    Edit

    This I understand, but I wonder why the preface. Do they feel guilty about relying on it, but still want to be helpful?

    Not who you asked this follow-up to, but I suspect it is as simple as not wanting to be responsible/blamed/called out on errors or problems. Relaying ChatGPT answers lets you claim the glory if it is good, and avoid responsibility if it is bad.

    15 votes
    1. [4]
      chocobean
      Your comment, in particular the idea that users "might like the idea of being helpful but don't have the expertise, knowledge, or time to really dig into the question", reminded me of behaviour from younger children. Sometimes they really want to be part of the conversation but don't know how to contribute, so they chip in with "well my mom said..." or a verbatim quote from TV.

      Perhaps it's a tentative bid on being part of the conversation.

      10 votes
      1. krellor
        Yeah, I think that's a good and generous way of viewing it. It can be intimidating for folks to stake out a position and invite correction. Relaying a computer's answer is a safe way to test the waters.

        Also, maybe they only have one job on this ship.

        3 votes
      2. [2]
        Gaywallet
        You know, I had never thought of this as a reason why people without expertise on a subject chime in by repeating "common knowledge" that is actually incorrect or harmful. I wish people wouldn't chime in with incorrect knowledge about things like the neurobiology of drugs, given the harm it can cause. It's extremely frustrating when it happens, and I wish people wouldn't keep lending false credibility to misconceptions, but this gives me a lot more compassion towards them, because they might just want to be included in a conversation 😔

        3 votes
        1. chocobean
          I try to remember and appreciate when much smarter people talk to me and include me in conversation, and I try also to live that way as well.

          I've always loved one little scene in the Chronicles of Narnia's The Last Battle: Puzzle, a simple donkey, had just inadvertently betrayed Narnia, causing the death of many good Narnians. Jewel is a unicorn and a good friend of Narnia's current king, whose royal party rescued Puzzle the night before.

          Puzzle felt himself to be still in disgrace, and also he didn't really quite understand what had happened. [ ... ] Puzzle looked and felt a good deal better this morning. Jewel, being a Unicorn and therefore one of the noblest and delicatest of beasts, had been very kind to him, talking to him about things of the sort they could both understand like grass and sugar and the care of one's hoofs.

          I've put my foot in my mouth so many times, but the best and the most noble types of people will still try to engage me kindly. They know, either instinctively or having seen my blunders, that I can't keep up with them in conversations about political intrigue or battle philosophies, but they'll still chat with me about grass and sugar, and it makes a deep impression.

          2 votes
    2. [4]
      updawg
      I got a significantly different response.

      People who post ChatGPT responses in discussions or debates might do so for a variety of reasons, even with a preface that they asked ChatGPT and an admission of their own lack of knowledge on the topic. Here are a few potential reasons:
      1. Starting Point for Discussion: They may see the ChatGPT response as a neutral, baseline answer to generate discussion. It provides a starting point that is not overly influenced by personal bias, which can help in polarized or complex topics.

      2. Access to Summarized Information: ChatGPT can quickly provide a summarized overview of a topic, including various viewpoints. For someone not deeply familiar with a subject, this can seem like a valuable contribution, offering a broad perspective in a short amount of time.

      3. Encouraging Critical Thinking: By presenting an AI-generated answer and openly admitting to their own lack of knowledge, the poster might be inviting others to critically analyze the information provided. This could be seen as a way to encourage learning and deeper understanding through discussion.

      4. Technological Demonstration: Some might be intrigued by the capabilities of AI and want to share what it can do. They might see it as a demonstration of how technology can contribute to human discussions, even if imperfectly.

      5. Facilitating Engagement: People might post AI-generated responses to engage others who might not have interacted with the question otherwise. It could attract attention to the topic and stimulate conversation among users who would prefer to respond to or critique the AI's take rather than initiate their own.

      6. Inclusivity: For those who feel insecure about their knowledge or ability to articulate their thoughts on complex subjects, quoting an AI can be a way to participate in discussions without the fear of being judged for their personal views or understanding.

      7. Time and Effort: Researching and formulating a comprehensive response can be time-consuming. By using ChatGPT, users can quickly contribute something that they believe is of value, without the need for extensive personal research or crafting a detailed response.

      While the effectiveness and appropriateness of this approach can vary widely depending on the community, topic, and the quality of the discussion it generates, these motivations reflect a broad range of reasons why someone might choose to include AI-generated content in conversations.

      4 votes
      1. [2]
        krellor
        Clearly we need to let ChatGPT fight it out with itself. Or better yet, ask Gemini and ChatGPT the same initial question and then prompt them to critique each other's answers back and forth.

        4 votes
        1. Minty
          I've made a script that does this, but one LLM pretends to be Pinky, and the other Brain. Naturally, the latter's job is to plan world conquest, and Pinky asks dumb yet armor-piercing questions about it.

          It's mostly garbage, but sometimes pure gold!

          5 votes
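          A setup like the one described above can be sketched as a simple alternation loop. Everything here is a hypothetical illustration, not Minty's actual script: the `duel` helper and the `brain`/`pinky` lambdas are stand-ins, and in a real version each lambda would wrap a chat-completion API call with the persona set in a system prompt.

```python
def duel(ask_a, ask_b, opening, rounds=3):
    """Alternate two models: each replies to the other's last message."""
    transcript = [("A", opening)]
    last = opening
    for _ in range(rounds):
        reply_b = ask_b(last)            # B reacts to A's last message
        transcript.append(("B", reply_b))
        last = ask_a(reply_b)            # A responds to B's reaction
        transcript.append(("A", last))
    return transcript

# Toy stand-ins for the two LLM personas; swap in real API calls here.
brain = lambda msg: f"Brain: my plan already accounts for '{msg[:24]}'"
pinky = lambda msg: f"Pinky: narf! But what about '{msg[:24]}'?"

log = duel(brain, pinky, "Tonight we take over the world.", rounds=2)
for speaker, text in log:
    print(f"{speaker}: {text}")
```

          With stubs this deterministic the output is of course garbage; the gold (and the garbage) comes from the real models on each side of the loop.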
      2. sparksbet
        Prompt engineering is the biggest part of working with GenAI tools like this so I'm not surprised tbh -- even surprisingly minor differences in how a question is asked can totally change the response.

        3 votes
    3. [2]
      cazydave
      Not who you asked this follow-up to, but I suspect it is as simple as not wanting to be responsible/blamed/called out on errors or problems. Relaying ChatGPT answers lets you claim the glory if it is good, and avoid responsibility if it is bad.

      That doesn't look like an attitude that meshes well with life. I'd rather hope they're being ignorant or trollish/humorous with the ChatGPT response.

      1. krellor
        It also doesn't necessarily need to be intentional; it could come from insecurity. E.g., they aren't confident enough to post their own position, so they hide behind ChatGPT. Which hopefully would be a bridge to building the skills to develop and articulate an actual position.

        2 votes
  4. [2]
    Sodliddesu
    I'm gonna crinkle on my tinfoil hat for a moment and posit: what better way to gather training data than by prompting GPT for a post and then feeding it back all the information generated from the discussion?

    Why is the sky blue? "I asked GPT and it said x" Well, GPT is wrong, it's actually due to Y/Z with a hint of A. "Good, that will inform future prompts!"

    Most likely not the case but still... Maybe?

    In reality, if someone asks a question about something I'm vaguely familiar with, I may personally do some extra research while commenting ("When did the Clooney Batman come out, anyway?"), and these darn kids these days may be doing something similar but leaning on GPT to feed them 'information' about a topic.

    9 votes
    1. ShroudedScribe
      Considering Reddit has made some huge data sales deals, this isn't impossible.

      But it seems like a faulty strategy to improve data, especially when half of the replies to something on social media are smartass jokes, insults, and/or non-responses.

  5. [2]
    SteeeveTheSteve
    I've noticed that; it reminds me of "Let me Google that for you" links for some reason. I think it's that extra bit of effort needed to produce the link that solidly shows how far an asshat will go to show they dislike your question, one they think could have just been Googled. Only this time they think you could have asked a chatbot.

    Though ChatGPT is new enough that many think they're doing someone a solid by copy/pasting an answer from it, and may even feel like they're being smart by providing an answer from the "flawless" ChatGPT (because they don't know it can make stuff up). To them, the "I asked ChatGPT" tag is proof the answer is unquestionable. Kind of like people posting wiki info or saying it's in the encyclopedia.

    Lastly, there's the ones who do it for kicks. Many ChatGPT answers are funny and it's fun to share that and see who believes it. Kind of like posting Onion articles.

    8 votes
    1. disk
      Certainly, it reminds me of mailing lists, with laconic answers ("yes", "no", "read the man pages") that contribute little to the discussion, since they're so vague you can't derive an actual solution to your problem/genuine answer.

      LLMs/transformers are a lot like that, I've asked Copilot about crystallography concepts as a test, only to be met with information about the fall of the Ottoman empire. And when it works, sometimes it comes up with vague non-answers (or worse yet, a confidently wrong answer).

      I don't see how someone could even equate "asking ChatGPT" to quoting an article or website. It's stripped of all references when posted here, and for all I know, it could be looking at a 1970s datasheet for something that was heavily reworked in the 2000s (that happened before, just not on Tildes).

  6. [5]
    stu2b50
    The only time I've seen examples of this are posts about AI or LLMs, where I see it as more of a tongue-in-cheek kind of thing.

    7 votes
    1. [4]
      first-must-burn
      I think this was the only time I've done it, but I posted one asking ChatGPT to be critical of Noam Chomsky in the context of an article where Chomsky was critical of ChatGPT. It was definitely tongue in cheek, in the vein of "God is dead. ~ Nietzsche" vs "Nietzsche is dead. ~ God".

      I was also legitimately curious how accurate it was, but nobody took that up.

      9 votes
      1. [3]
        Plik
        Ha, I remember that.

        I remember getting a few sentences in, then thinking "I really don't care about output from ChatGPT", and then moving on.

        I got what you were doing, and had no issue with the post; I just didn't care at all to comment on or even think about a ChatGPT response. It's not something I already knew; I literally realized from your comment how little I cared.

        ...Which then made me realize how little I care about internet comments anymore, because they are probably all AI.

        ...Which now makes me wonder why I am writing this.

        ...Which is now making me question reality.

        So, thanks, your comment was more helpful than I expected, and has perhaps triggered an existential crisis.

        8 votes
        1. [2]
          first-must-burn
          Serious response: I try to act in good faith everywhere, but Tildes is a place where I often see that good faith returned. So I will assume you are a real person with real feelings. *

          I often feel the way you described (existential crisis) about the Internet and life in general these days. I'm sorry if I tipped you into something, but you are not alone in these feelings.

          The Internet is part of our lives for better or worse, so my hope and aim is to act sincerely and find ways to make real connection so that not all of it has to be terrible.

          –––
          * Less serious footnote: if you are an AI chat bot, maybe being kind will result in a seat at the table when our AI overlords finally take over.

          5 votes
          1. Plik
            Haha thanks. I assume most people here are real, but there is always that percentage of doubt.

            I am not SupremacyAI, or another AI....but that's exactly the kind of thing an AI would say.

            3 votes
  7. [2]
    llehsadam
    This reminded me of a Redditor that kept on posting AI generated game ideas to the indie development subreddit I moderate. It just annoyed everyone, even if some of the ideas could be successful games. And the AI flavoring in the response made the audience the complete opposite of receptive.

    I think they need to ask themselves first whether the community would welcome AI responses. The answer so far is almost always no, because we expect human-to-human interactions (…for now).

    6 votes
    1. somewaffles
      Is that /r/gameideas, by any chance? (feel free to not respond to this if you don't want. Just curious because that subreddit can FEEL like the game ideas are AI generated half the time.)

      I've tried using AI to generate game ideas and mechanics, and so far I've found that it's kind of trash at it. It's possible I'm just bad at prompting, but it seems like AI tends to provide very vague / derivative ideas. Imo, fun in games comes from the details built into the main game loop. It tends to have the same issues that humans on those sorts of subreddits have, where the idea being posed could just as well be a book or a movie. Even when it does provide game-ified ideas, they are so vague, it's nothing I couldn't have come up with in 30 seconds (e.g. "It's Pong but you control both paddles!")

  8. Thomas-C
    I don't mind a chatgpt post when the topic is about llms - sometimes it's interesting for comparison. But in the context of a different topic/discussion, I'm usually not going to engage with it. Once it's clear to me the writing is llm driven I tend to skip it. Both because the llm usually has little of interest to say and because I question whether the poster can really engage on the topic at all. I don't consider the post representative without some form of clarification on their part. I want to see what people have to say and chatgpt is not a person.

    I think there's room to be flexible though. For example, I can get behind someone using chatgpt to help make their language more understandable. If they say up front that that's what they're doing, ok sure, go for it. What I'm after is your perspective, so if the tool is helping to show us that, all good. If it happens that the llm does accurately capture your perspective, ok, sure too. Just say so, is all. I think saying so should be understood as proper etiquette for that sort of thing, if I'm gonna be prescriptive about it.

    What doesn't work to me is just posting whatever it says without any clarification/context. "Letting chatgpt represent them" maybe is a good way to put it. I won't engage with that. Skimmed at best, never voted on, never replied to. I don't care what chatgpt has to say about much of anything and I'm very doubtful someone who lets it represent them has much to say either.

    If you're well meaning, as in you think you're being helpful by having chatgpt construct a response but offer none of your own, consider for a second the purpose of a forum, the topic at hand, and whether what you're doing actually contributes. If you can thread that needle, power to you, but I'm willing to bet if you went back and looked, there won't be a pattern of it actually helping anything or fostering much discussion. I can't see one when I've tried to look, hence me disregarding those posts outright. It doesn't matter if there's some nugget of truth in the spit; I'd rather struggle with someone's rough prose than glide through llm spit. Because that's what I'm here for: folks' perspectives and opinions. If that's not what you're giving me then I'm likely ignoring you, and a chatgpt response is like waving a big flag saying "YOU CAN IGNORE ME IN CONCLUSION HAVE A NICE DAY" lol.

    5 votes
  9. Spoom
    A team member posted a brainteaser in our chat and asked folks to share evidence that they figured out how to solve it without actually giving the answer away. Many people participated and subtly shared roundabout answers. Inevitably, of course, someone decided to say "this is what AI says" with a full solution provided by a bot. To their credit, they deleted the answer once someone pointed out that they were really ignoring the spirit of the request, but I think ultimately the fact that someone will do this says more about the person than the technology.

    2 votes
  10. [3]
    skybrian
    It's a novelty that wears off more quickly for some people than others. Some people find ChatGPT interesting and want to share.

    I'm also guessing that it seems similar to quoting a source? ChatGPT is a hint generator, not a reference, but sometimes it might seem like a source.

    Hopefully, people will get in the habit of using it without talking about it too much, much like we don't normally talk about which search queries we used to find a useful article. It's not necessary to give Google or your spellchecker or autocomplete credit, and similarly for ChatGPT.

    1 vote
    1. [2]
      cazydave
      I'm doubtful that ChatGPT can work as a hint guide for anything that isn't common knowledge. But yeah, I hope an understanding of current AI limitations catches on with normal people.

      1 vote
      1. skybrian
        "Common knowledge" is ambiguous - it depends on which people you're talking about. I've used ChatGPT as a pretty good source of hints about how to do things in TypeScript. They're probably things that are common knowledge among expert TypeScript programmers, people who understand everything in the manual along with common idioms, and it becomes less useful as I learn more. But I didn't know that going in.

        Even when it's just repeating things from Wikipedia, there are all sorts of things in Wikipedia that aren't common knowledge in the sense of being things most people know.

        2 votes
  11. xk3
    I am an optimist. I think people who do this are well-intentioned. People feel lonely and want to socialize. They want to make friends. But not everyone is an expert on every topic, and so they ask an "AI" to help them communicate with another human. It's better that they preface it with "I asked ChatGPT...", but I think eventually people will stop prefacing, just like they stopped prefacing "I asked Google...". Even though they are totally different tools, people are using them in a similar way, with about the same amount of media literacy as before.