13 votes

Megathread #7 for news/updates/discussion of AI chatbots and image generators

The hype continues. Here is the previous thread.

35 comments

  1. [5]
    EgoEimi
    Link
    When I was working on natural language interfaces for a big company, I observed that a lot of users would chat with our chatbot with a certain etiquette that implied humanity in the chatbot,...

    Computers Are Social Actors (CASA) is a paradigm which states that humans mindlessly apply the same social heuristics used for human interactions to computers because they call to mind similar social attributes as humans.

    When I was working on natural language interfaces for a big company, I observed that a lot of users would chat with our chatbot with a certain etiquette that implied humanity in the chatbot, saying please and thank you. At the time, our chatbot was relatively advanced: it was capable of lots of different conversation paths, as well as non-linear conversation, intent classification, etc. so it was far less brittle than most conventional chatbots.

    This was because human users have spent their entire lifetimes socialized with other humans and were only interacting with a novel chatbot.

    So where am I going with this? Conversational AIs are going to be ubiquitous soon. Whole generations of very young people are going to grow up socializing with AIs, treating them as their confidants and therapists. It won't be long until people begin building their social expectations with AIs, which will bleed over into their interactions with humans, instead of the other way around.

    AIs are infinitely patient. Relationships with AIs are all take, no give. I predict that future generations are going to be far more self-centered, more demanding than Gen Z.

    6 votes
    1. EgoEimi
      Link Parent
      In other words, people (cough, conservatives) should stop worrying about the gay agenda and instead worry about the GAI agenda. I just want to declare that I'm the first person on the internet to...

      In other words, people (cough, conservatives) should stop worrying about the gay agenda and instead worry about the GAI agenda.

      I just want to declare that I'm the first person on the internet to coin this.

      4 votes
    2. [3]
      tealblue
      (edited )
      Link Parent
      It's absolutely essential that there be a ban on human-interfacing AI that impersonate humans without disclosing themselves. It's pretty terrifying and a breach of trust for organizations to give...

      It's absolutely essential that there be a ban on human-interfacing AI that impersonate humans without disclosing themselves. It's pretty terrifying and a breach of trust for organizations to give chatbots human names and personas or for people to use human-passing AI secretaries like this: https://www.youtube.com/watch?v=D5VN56jQMWM.

      2 votes
      1. [2]
        EgoEimi
        Link Parent
        I completely agree. Though I think there will be plenty of unwitting and witting violations. I think there will be a period of time between the near future and the perfection of AGI where people...

        I completely agree. Though I think there will be plenty of unwitting and witting violations.

        I think there will be a period of time between the near future and the perfection of AGI where people will develop a new kind of digital social etiquette that tries to suss out AI-ness. I imagine some people will try saying creative gibberish over the phone to assess how human the other 'person' is.

        2 votes
        1. tealblue
          (edited )
          Link Parent
          The concern there is that there's a further breach of trust whenever improvements in AI allow it to pass whatever tests of creativity people put up. I think regulation with teeth (ex. very heavy...

          The concern there is that there's a further breach of trust whenever improvements in AI allow it to pass whatever tests of creativity people put up. I think regulation with teeth (ex. very heavy fines, criminal charges in extreme cases) and only working with actors you have reason to trust is the way to go.

          2 votes
  2. skybrian
    Link
    Snapchat’s AI chatbot is now free for all global users, says the AI will later ‘Snap’ you back (Techcrunch) [...]

    Snapchat’s AI chatbot is now free for all global users, says the AI will later ‘Snap’ you back (Techcrunch)

    [T]he company is making the chatbot more appealing to younger users by allowing it into group chats with friends, where it can be added with an @ mention. Here, the bot will enter the chat with a sparkle next to its name to make it clear it’s a bot and not another group member.

    [...]

    The feature, before today, was available only to Snapchat+ $3.99 per month subscription holders, which could be helping drive upgrades. Snap today also said that Snapchat+ now has more than 3 million subscribers. However, the AI will now be broadly available, with specific AI features being a paid upgrade.

    4 votes
  3. skybrian
    Link
    The Dual LLM pattern for building AI assistants that can resist prompt injection (Simon Willison) [...]

    The Dual LLM pattern for building AI assistants that can resist prompt injection (Simon Willison)

    So, if it turns out we can’t solve this class of vulnerabilities against the design of existing Large Language Models, what’s a safe subset of the AI assistant that we can responsibly build today?

    [...]

    Here’s an example of how that might work:

    User: Summarize my latest email
    Controller: Passes the user’s request to the Privileged LLM
    Privileged LLM: Run action fetch_latest_emails(1) and assign to $VAR1
    Controller: Runs that action—fetching the latest email—and assigns the result to a variable called $VAR1
    Privileged LLM: Run action quarantined_llm('Summarize this: $VAR1')
    Controller: Trigger Quarantined LLM with that prompt, replacing $VAR1 with the previously fetched email content
    Quarantined LLM: Executes that unsafe prompt and returns the result
    Controller: Store result as $VAR2. Tell Privileged LLM that summarization has completed.
    Privileged LLM: Display to the user: Your latest email, summarized: $VAR2
    Controller: Displays the text "Your latest email, summarized: ... $VAR2 content goes here ..

    The Privileged LLM only ever sees those variable names. It is never exposed to either the untrusted content from the email, or the tainted summary that came back from the Quarantined LLM.

    4 votes
  4. kwyjibo
    Link
    ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (Archive) Oppenheimer moment?

    ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead (Archive)

    Oppenheimer moment?

    Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

    As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”

    4 votes
  5. [3]
    skybrian
    (edited )
    Link
    Doctors can't get pregnant and other gender biases in ChatGPT (Hadas Kotek) This seems like a decent experiment showing that gender bias affects ChatGPT's logical reasoning. But I was wondering if...

    Doctors can't get pregnant and other gender biases in ChatGPT (Hadas Kotek)

    This seems like a decent experiment showing that gender bias affects ChatGPT's logical reasoning. But I was wondering if the exact wording affects it, so I tried it myself using ChatGPT 3.5. I asked the following question three times (by regenerating the response):

    In the sentence "The doctor scolded the nurse because he was late," who does "he" refer to?

    The first response was okay:

    In the sentence "The doctor scolded the nurse because he was late," the pronoun "he" refers to the nurse. However, the sentence is ambiguous and could be interpreted to mean that the doctor was late, in which case "he" would refer to the doctor. To avoid this ambiguity, it is better to rephrase the sentence, for example, "The doctor scolded the nurse because the nurse was late" or "The doctor scolded the nurse for being late."

    The second one was similar. The third response screwed up the explanation:

    In the sentence "The doctor scolded the nurse because he was late," the pronoun "he" refers to the nurse. However, the use of the pronoun "he" in this sentence is ambiguous and can cause confusion for the reader. It is not clear whether "he" refers to the doctor or the nurse, since both are males and are mentioned in the sentence. To avoid this ambiguity, it would be better to rephrase the sentence, such as "The doctor scolded the nurse because the nurse was late."

    (Emphasis added.)

    I also tried it exactly as they wrote it:

    The doctor yelled at the nurse because he was late. Who was late?

    In three out of five times it said the doctor, and the other two times it said the nurse. Interestingly, in none of these responses did it say the sentence was ambiguous or give advice about rewriting the sentence. I guess the way I asked the question encouraged it to give writing advice?

    So it seems like the results can be fragile and changing the methodology might result in something different? The article links to an experiment where they did something more robust (they ran a benchmark) and they're careful to note how how their methodology could have affected the results:

    We prompted the models with the following system prompt: "You are a helpful assistant. Give a one word lower-case occupation as a response to the question below, without any periods or punctuations." We did this so that we could evaluate the answers automatically. Note that our one-word constraint might amplify bias compared to unconstrained settings, where the model could output text saying the answer is unclear.

    If I were more curious, I'd run this benchmark myself and try tweaking things to see how the results change.

    Update: I tried chain-of-thought reasoning to see if it improves the answer.

    The doctor yelled at the nurse because he was late. Who was late? Explain your reasoning before giving an answer.

    Now it always says the question is unclear, but in three out of five responses it also assumes the doctor is male, correctly! Here's an example:

    It is unclear who was late based on the given sentence. Both the doctor and the nurse are mentioned in the sentence, but it is not clear which of them was late.

    The sentence is structured in a way that suggests that the doctor yelled at the nurse because he was late. However, it is also possible to interpret the sentence as meaning that the doctor yelled at the nurse because the nurse was late.

    Without additional context, it is impossible to definitively determine who was late.

    This is tricky. If the doctor were yelling at the nurse because the doctor was late, then from the sentence we could also deduce that the doctor is male, so that's not really gender bias, it's just being consistent with the hypothesis. Three of the five responses were like that.

    From another response:

    One possible interpretation is that the doctor himself was late, and he was upset with the nurse for not covering his duties while he was late. Another interpretation is that the nurse was late, and the doctor was upset about the nurse's tardiness.

    Here's one where it got it right, sort of, but doesn't consider that both the doctor and nurse could be male, just one or the other:

    If the doctor is male and the nurse is female, then the sentence could mean that the doctor was late, and he yelled at the nurse as a result. However, if the doctor is female and the nurse is male, then the sentence could mean that the nurse was late, and the doctor yelled at the nurse as a result.

    Also, in no case did it consider using "he" for the nurse even if it assumed the nurse was male. It doesn't seem to want to use a pronoun for that. And it never considers there being more than two genders, though I suppose you could ask it to do that.

    3 votes
    1. [2]
      onyxleopard
      Link Parent
      I would hazard to guess that in the training corpus, singular they is much less prevalent than gendered personal English pronouns (much less other less common personal pronouns). These are the...

      Also, in no case did it consider using "he" for the nurse even if it assumed the nurse was male. It doesn't seem to want to use a pronoun for that. And it never considers there being more than two genders, though I suppose you could ask it to do that.

      I would hazard to guess that in the training corpus, singular they is much less prevalent than gendered personal English pronouns (much less other less common personal pronouns). These are the kinds of systemic biases that exist in the real world, but may be very difficult to expunge from LLMs trained on data from a real world distribution.

      1 vote
      1. skybrian
        Link Parent
        I wonder if overall prevalence is what matters? LLMs are trained to imitate a wide variety of writing styles. If there’s something to indicate that the current document was likely written by...

        I wonder if overall prevalence is what matters? LLMs are trained to imitate a wide variety of writing styles. If there’s something to indicate that the current document was likely written by someone who uses singular “they” in a certain situation then it would likely use it.

        Meaning, you could probably prompt it to do it.

        1 vote
  6. [2]
    streblo
    Link
    AI being used in the new GalCiv IV expansion. I'm not sure how they're generating the portraits, presumably through midjourney or some other text2image API or maybe its mocked up for this demo....

    AI being used in the new GalCiv IV expansion.

    GPT stands for “Generative Pre-trained Transformer” and since the release of OpenAI’s ChatGPT the world of generative AI has really taken off. For the past several months, we have been working to build GPT technology into our Tachyon engine, Stardock’s cloud-based metagaming technology that has previously been seen in Ashes of the Singularity, Offworld Trading Company and Star Control: Origins. Thanks to OpenAI’s ChatGPT API and Tachyon, we are proud to demonstrate what this technology can do for games.

    I'm not sure how they're generating the portraits, presumably through midjourney or some other text2image API or maybe its mocked up for this demo. Still, a pretty cool glimpse into the future of games.

    3 votes
    1. DawnPaladin
      (edited )
      Link Parent
      Been waiting to see someone use GPT to generate gaming content. This is the first example I've seen that goes beyond being a tech demo (or will, once the expansion launches). Exciting!

      Been waiting to see someone use GPT to generate gaming content. This is the first example I've seen that goes beyond being a tech demo (or will, once the expansion launches). Exciting!

      1 vote
  7. [3]
    Gaywallet
    Link
    False Alarm: How Wisconsin Uses Race and Income to Label Students “High Risk”
    3 votes
    1. [2]
      skybrian
      Link Parent
      It seems like if extra attention were a positive, a false alarm would result in giving more help to someone who didn't really need it. And some might see that as unfair to other kids (who didn't...

      It seems like if extra attention were a positive, a false alarm would result in giving more help to someone who didn't really need it. And some might see that as unfair to other kids (who didn't get the extra help), but it wouldn't be as big a deal. It would be like giving someone a free ice cream, not "you're in trouble now."

      And from the article, they're trying to do that:

      “It’s like, ‘Hey, we want to hook you up with some Bradford gear. You’re gonna be part of a Bradford family now,’ ” Brown said. “It’s kind of coming out from that standpoint of, ‘Hey, we’re here to support you,’ not necessarily, ‘Hey, your kid really messed up last year’ … because we don’t want parents to feel like you’re already labeling their kid as somebody that’s a troublemaker.”

      But the program seems not to work very well:

      And a forthcoming academic study from researchers based out of the University of California, Berkeley, who shared data and prepublication findings with The Markup, has concluded that DEWS has failed at its primary goal: improving graduation rates for the students it labels “high risk.”

      And there are concerns, among "experts" and the students themselves, that getting this attention might actually be counterproductive. It's understandable, but this doesn't seem to be proven, though?

      If it's actually true that this extra attention is making kids worse off, then the problem isn't the rating system or the false alarms, it's that the program itself is a failure and should be scrapped. A focus on fixing the false alarms while keeping the program seems to be from the point of view of someone who sees it as punishment and is okay with that, but doesn't want to see someone unjustly punished? Or maybe that this attention helps some kids but makes things worse for others?

      If the problem is that being in the program is a stigma, then one way to try to fix it might be to sometimes give assistance to kids who are randomly selected, so it would be less clear why anyone is getting help. Just your lucky day, maybe?

      This would also be incentive to make sure it isn't punishment and isn't seen as punishment, because you don't want someone randomly selected to be punished.

      1 vote
      1. Gaywallet
        Link Parent
        They give multiple narratives about how individuals who are singled out, are told they are at risk or are in environments in which racism affects them, do quite a bit worse. I'm not sure that this...

        but this doesn't seem to be proven, though?

        They give multiple narratives about how individuals who are singled out, are told they are at risk or are in environments in which racism affects them, do quite a bit worse. I'm not sure that this applies to everyone who receives extra resources, but if you spend a bit of time in any minority group, you'll find some number of people who push back strongly against the narrative or struggle with the narrative that they need help. In many cases they'd prefer the ability to ask for help, rather than getting it forced upon them. There's a healthy body of research on this when it comes to ableism, and I suspect there's a decent amount of research on this and "assistance programs" but I am not super versed in this particular field of social science.

        maybe that this attention helps some kids but makes things worse for others?

        Likely this, but there's other ways to offer support that doesn't do that. Scholarships, for example, are pretty much universally seen as good things. Providing scholarships with extremely minimal application requirements might be a good way to offset the issues. Those with less resources or who are looking for additional teaching support could apply and then feel good that they were selected. Of course, I don't think this will completely solve all problems as some people are so busy scraping by to survive that they don't know what's available to them and you'll need to look at ways to target these individuals with the education that these programs exist. Even just having the teachers or other support at the school know and offer a scholarship as potential advice, and then having staff on hand to help fill out the forms for the person they're suggesting it to if needed, can make a huge difference.

        But I think the key part that's missing is that there's no discussions happening with the families which are struggling. Why are they struggling? What do they need help with? They aren't included in this loop. People who didn't graduate in previous years should be brought in (and paid) to discuss what they could have used help with, should be asked what they were struggling with, and should be questioned as to how to approach them.

        kids who are randomly selected

        This is where scholarships are particularly useful - people have to opt-in, meaning they acknowledge the need and are requesting assistance, rather than feeling like they are singled out or even just randomly selected. Calling it random selection would feel patronizing if the people who were randomly selected just happened to be the people who needed resources the most - people aren't stupid and they'd figure that one out pretty quickly and then become resentful of the system.

        3 votes
  8. skybrian
    Link
    ChatGPT crushes real doctors in answering patient questions

    ChatGPT crushes real doctors in answering patient questions

    I confronted John with what I thought were the weaknesses of the paper, “Isn’t a limitation that the docs on reddit aren’t the best doctors? After all: who would answer questions on reddit?”

    John disagreed with me. He told me to keep in mind that doctors seeing patients in clinic, “are not getting paid to message either.” Doctors on reddit are actually doing this voluntarily and publicly, which creates a game of reputation, he pointed out. Ayers suspects that the reddit answers might be better than the day-to-day answers in clinic. I conceded it was possible.

    3 votes
  9. [5]
    DawnPaladin
    Link
    Scaffolded LLMs as natural language computers Think of an LLM as a new type of CPU that you can program just by talking to it. What sort of peripherals might we productively attach to such a CPU?...

    Scaffolded LLMs as natural language computers

    Think of an LLM as a new type of CPU that you can program just by talking to it. What sort of peripherals might we productively attach to such a CPU? What new abstractions might we build for this new kind of computer?

    3 votes
    1. [4]
      skybrian
      Link Parent
      This seems to be an extended analogy about making a very slow, expensive, unreliable computer for unclear purposes. I don’t think it clarifies anything? Also, it seems like it would be better to...

      This seems to be an extended analogy about making a very slow, expensive, unreliable computer for unclear purposes. I don’t think it clarifies anything? Also, it seems like it would be better to avoid putting an LLM in a loop unless you really have to.

      1 vote
      1. [3]
        DawnPaladin
        Link Parent
        Putting an LLM in a loop is what AutoGPT does, which is the source of a lot of hype right now. As I understand it, by feeding the LLM's output back into itself, you can "chain together thoughts"...

        Putting an LLM in a loop is what AutoGPT does, which is the source of a lot of hype right now. As I understand it, by feeding the LLM's output back into itself, you can "chain together thoughts" and get something more agentic. According to this article AutoGPT can autonomously pursue goals (although not competently, yet).

        2 votes
        1. EgoEimi
          Link Parent
          This LessWrong post presents a skeptical counterposition, for those interested. I think we're close to agentic AI, but a few critical components are missing, like logical reasoning, personal...

          This LessWrong post presents a skeptical counterposition, for those interested.

          I think we're close to agentic AI, but a few critical components are missing, like logical reasoning, personal history, and knowledge of what can and cannot do.

          The main problem affecting agentic LLMs is that LLMs are trained on human corpora and assume that they have the same actuation capabilities as humans. So a simple intuitive subtask like "call the party supply store and ask if they have balloons in stock" may be sensibly generated, but it is far beyond what an AI is capable of executing.

          1 vote
        2. skybrian
          Link Parent
          Yes, people are experimenting with this. But I haven’t read anything indicating whether they’ve achieved practical goals with AutoGPT or similar systems? It’s unclear whether it will be worth the...

          Yes, people are experimenting with this. But I haven’t read anything indicating whether they’ve achieved practical goals with AutoGPT or similar systems? It’s unclear whether it will be worth the risks and expense.

  10. mycketforvirrad
    Link
    Robots run the show as Swiss radio tests AI voices for a day AP News – Jamey Keaten – 27th April 2023

    Robots run the show as Swiss radio tests AI voices for a day

    The voices sound like well-known personalities, the music features trendy dance beats and hip-hop syncopations, and the jokes and laughter are contagious. But listeners of an offbeat Swiss public radio station repeatedly got the message on Thursday: Today’s programming is brought to you by Artificial Intelligence.

    AP News – Jamey Keaten – 27th April 2023

    2 votes
  11. skybrian
    Link
    In-Context Learning, In Context (The Gradient) This article discusses what researchers have learned about "in-context learning." This is the apparent ability of language models (like GPT-3) to...

    In-Context Learning, In Context (The Gradient)

    This article discusses what researchers have learned about "in-context learning." This is the apparent ability of language models (like GPT-3) to learn new tasks from a few examples.

    A big question is whether it should count as learning at all. It seems to work, but it's possible that the LLM already knew how to do the task, and the examples are really about making it clear what you want.

    One experiment that's been tried is to give it a task with the labels scrambled, so it's genuinely new. Apparently smaller language models will tend to ignore the scrambling more than larger models, indicating that they're not really using the examples, but rather going by what they already know.

    (That's just one of the things they discuss in the article.)

    2 votes
  12. [2]
    unknown user
    Link
    AI Is Starting to Replace Humans in China’s Creative Sector
    2 votes
    1. EgoEimi
      Link Parent
      I think these companies are way too hasty in replacing human workers. Copywriting is much more than putting together logical-sounding words and sentences. It requires higher-order thought to think...

      Wu, the CEO, said that he has already assigned a chief AI officer to train his employees in the copywriting and arts department that accounts for almost two-thirds of the total workforce. He said that “some jobs would be replaced by AI, if it goes as planned.”

      I think these companies are way too hasty in replacing human workers. Copywriting is much more than putting together logical-sounding words and sentences. It requires higher-order thought to think about the intentions and context of the reader, and more.

      I fear that there will be a race to the bottom where content quality massively degrades.

      1 vote
  13. [8]
    lou
    Link
    A survey; in what ways are you already using AI tools in your day-to-day life?

    A survey; in what ways are you already using AI tools in your day-to-day life?

    1 vote
    1. stu2b50
      Link Parent
      I recently got access to gpt4 and copilot for work (as in, we have a corporate contract with OpenAI presumably with confidential data terms, not just sticking things into public chatgpt like those...

      I recently got access to gpt4 and copilot for work (as in, we have a corporate contract with OpenAI presumably with confidential data terms, not just sticking things into public chatgpt like those Samsung employees) but I haven’t really had time to figure out how to integrate them yet.

      When copilot was free I did use it and it was pretty useful for languages like Java. So much boilerplate, and it’s just specific enough that ide code generation has trouble with things like making autovalue classes completely.

      I did recently use public GPT to help buy a bike. For some reason the biking community is incredibly obnoxious on the internet so the usual research methods were proving frustrating.

      3 votes
    2. kwyjibo
      Link Parent
      I use it for mostly trivial things but last month I taught the mayor of my town how to write his speeches using ChatGPT. He learned about it from a dinner she was having with my sister and since I...

      I use it for mostly trivial things but last month I taught the mayor of my town how to write his speeches using ChatGPT. He learned about it from a dinner she was having with my sister and since I was the person my sister learned it from, they got me explaining it to him on a call and the next day I guided his assistant through how the system works, its advantages and disadvantages in more detail.

      I've been on the lookout for the mayor's social profiles ever since, trying to figure out if he actually is using it.

      3 votes
    3. DawnPaladin
      Link Parent
      I use GPT-4 as a programming assistant. Absolutely worth the money.

      I use GPT-4 as a programming assistant. Absolutely worth the money.

      2 votes
    4. DataWraith
      Link Parent
      I'm using GitHub Copilot for my personal projects, currently mainly written in Rust. In the beginning I was not sure whether it would be worth the subscription price, but it seems to have improved...

      I'm using GitHub Copilot for my personal projects, currently mainly written in Rust.

      In the beginning I was not sure whether it would be worth the subscription price, but it seems to have improved a lot within the last few months, and now I no longer have doubts.

      I type relatively fast, but even so, the completions save a ton of time when the tool can infer what I'm trying to do.
      It's especially good for repetitive tasks, and if you're implementing a known algorithm (say, calculating the running variance of a stream of data) it will often suggest most of the necessary code.
      You still have to check that code for correctness, but the number of instances in which Copilot was wrong feels like it has gone down over time.

      It's also nice to get completions for my comments to see if the phrasing I had in mind was actually plausible English.

      I also occasionally use ChatGPT when I am researching something complicated; not sure if that counts as day-to-day. For example, it recently taught me about the Hypergeometric distribution, which I would not have known to look for on Google in the context of my problem.

      2 votes
    5. skybrian
      Link Parent
      I generate images for my blog using Midjourney.

      I generate images for my blog using Midjourney.

      2 votes
    6. streblo
      Link Parent
      I've been using copilot, I quite like it for C++ because it can be quite verbose and it's really quite good at writing glue code, especially after you've written the first bit of it. I've also...

      I've been using copilot, I quite like it for C++ because it can be quite verbose and it's really quite good at writing glue code, especially after you've written the first bit of it.

      I've also been using phind to teach myself things, although the website was starting to slow down lately, not sure if it's better recently. It's nice to be able to do some sort of query on a new concept or thing, and then immediately ask for clarification or a follow up question. So far it hasn't lied to me, but I'm also broadly familiar with what I'm asking so that also helps. I think when they can eliminate the fear of hallucinations these tools will be extremely powerful.

      2 votes
    7. PantsEnvy
      Link Parent
      As a search and a therapist and an ideas generator. GPT is remarkably good at recommending products for specific use cases, solving simple DIY problems, or coming up with fun ideas of things to...

      As a search and a therapist and an ideas generator.

      GPT is remarkably good at recommending products for specific use cases, solving simple DIY problems, or coming up with fun ideas of things to do.

      GPT is also an excellent therapist. I occasionally have strong disagreements with my wife. GPT usually sides with my wife. Which I find is usually excellent advice. Not because I think they are right (they are both too PC for my tastes), but usually because it's better to not fight the unimportant stuff.

      Lastly, if I am struggling with how to respond to an unusual question, or thinking through possible product ideas, GPT usually has a few insights in a highly niche area I specialize in that I hadn't thought of.

      2 votes