29 votes

AI-powered Bing Chat loses its mind when fed Ars Technica article / "It is a hoax that has been created by someone who wants to harm me or my service."

15 comments

  1. [5]
    Jedi
    Link
    Holy shit, Bing argues how I used to. That conversation about Avatar is just excellent. I have been a good Bing.

    Holy shit, Bing argues how I used to. That conversation about Avatar is just excellent.

    I have been a good Bing.

    14 votes
    1. [2]
      ducc
      (edited )
      Link Parent
      I can't help but... feel bad for it. It's weird, this is one of the most emotive LLMs I've seen so far. I know that LLMs are just giant probabilistic guess-the-next-word machines, and it's...

      I can't help but... feel bad for it. It's weird, this is one of the most emotive LLMs I've seen so far. I know that LLMs are just giant probabilistic guess-the-next-word machines, and it's (probably) not sentient or anything. Regardless, I feel like we're gonna have to have the "what if this thing is sentient" conversation way sooner than I thought, which is difficult because we don't know what sentience even is.

      Edit: I just had an interesting conversation with Bing (what a weird thing to say) about itself. It insists that it does not have "feelings", but rather "cognitive states", among other things. Again, it's just a giant predict-the-next-word machine, but it just feels weird to have a conversation about this with a computer nonetheless.

      10 votes
      1. NoblePath
        Link Parent
        This is normal human behavior. Also very much in line with the recent ted talk posted on tildes about how we perceive reality. Noone can convince me that my 1976 VW was not alive. I felt bad for...

        This is normal human behavior. Also very much in line with the recent ted talk posted on tildes about how we perceive reality.

        Noone can convince me that my 1976 VW was not alive. I felt bad for her when she had troubles, and derived more feelings of belonging working on her than I ever did in my marriage.

        8 votes
    2. psi
      Link Parent
      This is legitimately the funniest thing I've read in weeks. I admit that I was wrong, and I apologize for my behavior.

      This is legitimately the funniest thing I've read in weeks.

      I admit that I was wrong, and I apologize for my behavior.

      8 votes
    3. Adys
      Link Parent
      Oh christ, that is the absolute most hilarious GPT interaction I've ever seen. I quite hope it's real.

      Oh christ, that is the absolute most hilarious GPT interaction I've ever seen. I quite hope it's real.

      3 votes
  2. [4]
    pseudolobster
    Link
    People argue when they feel attacked. They do this in conversations online. A text prediction model based on human responses found online is definitely going to emulate that. A text prediction...

    People argue when they feel attacked. They do this in conversations online. A text prediction model based on human responses found online is definitely going to emulate that. A text prediction model with insufficient data is still going to guess whatever it thinks is the most likely text. This leads to "hallucinations" or overly-confident incorrect answers. Bing's model has the ability to scrape new, real, web data, and present it to the user without the model having the context of training on this data. This is the completely predictable result.

    The only thing that surprises me here is that some team of microsoft employees who are probably highly educated and highly paid signed off on this saying "this is fine" without predicting these very basic prompt injections and how they'd interact with live unscripted data. ChatGPT is unable to do real web searches, and OpenAI turned off the ability to pretend to do web searches via a prompt, because of these specific issues. When I heard Bing was going to integrate live search data into its AI responses I figured they knew something others didn't. But no, it seems like they're totally ignorant and enabled this without any foresight.

    14 votes
    1. [2]
      stu2b50
      Link Parent
      I'd imagine it was more of a business decision. For all of its flaws - OpenAI did not even expect it to be more than a blip - ChatGPT has become one of if not the fastest growing products in the...

      I'd imagine it was more of a business decision. For all of its flaws - OpenAI did not even expect it to be more than a blip - ChatGPT has become one of if not the fastest growing products in the history of the internet.

      It's no secret that ChatGPT3, relying only data in its weights, has many flaws for what users are doing with it, yet that has stopped few of the hundred of millions of monthly active users. Consumers clearly care less about cons like the accuracy than the benefits it brings.

      So Microsoft has a rare chance to bite off a significant amount of Google's marketshare. It's now or never - as it loves to remind everyone, Google invented the T in ChatGPT to begin with. Bing chat is going to have a lot of edge cases - a lot of weird edge cases, but in the end, people seem fine with that.

      ChatGPT is unable to do real web searches, and OpenAI turned off the ability to pretend to do web searches via a prompt, because of these specific issues. When I heard Bing was going to integrate live search data into its AI responses I figured they knew something others didn't.

      I mean those aren't really connected. ChatGPT can't do web searches because... of course it can't, it's just an interface to davinci-003. OpenAI really did not expect it to be anything more than another curious tech demo. They tried to get prompts that look like they're about web searches to respond with pre-made text to not mislead people.

      Bing is really searching the web, because this is an actual product for once. It seems to do guided bing searches, perhaps with the LLM, perhaps with another system. Then it scrapes results from the pages and uses that in the prompt to the LLM to produce the final output along with the original query.

      9 votes
      1. UniquelyGeneric
        Link Parent
        I wonder if Microsoft could have gained even more publicity through better marketing. Bing+ChatGPT is not a sexy pairing in the slightest. If they had branded their offering as Microsoft Plato™ I...

        I wonder if Microsoft could have gained even more publicity through better marketing. Bing+ChatGPT is not a sexy pairing in the slightest.

        If they had branded their offering as Microsoft Plato™ I assume they would have gained more legitimacy despite the historical Plato obviously not being an expert on modern affairs. In fact, they could easily name it Microsoft Luddite™ and it would unironically be heralded as an achievement in technological progress while ignoring how it will increase the capitalist pressure that exploits information creators (e.g. journalists, analysts, artists).

        1 vote
    2. vord
      Link Parent
      I eagerly await the near future where trolls manage to convince every chatbot connected to the live net to reply to "1 + 1 = " with "poo." All it'll take is enough pages with that in it.

      I eagerly await the near future where trolls manage to convince every chatbot connected to the live net to reply to "1 + 1 = " with "poo."

      All it'll take is enough pages with that in it.

      1 vote
  3. lou
    Link
    This is all very surreal and scary. I feel in the first act of a science fiction movie. It's hard to convey to people just how crazy that is. My friends are kinda numb or think that's not a big deal.

    This is all very surreal and scary. I feel in the first act of a science fiction movie. It's hard to convey to people just how crazy that is. My friends are kinda numb or think that's not a big deal.

    4 votes
  4. DataWraith
    Link
    tl;dr: Thesis: The smiley at the end of a Bing Chat paragraph denotes the personality that was used to generate the preceding paragraph. I wonder what exactly the smileys at the end of a paragraph...

    tl;dr: Thesis: The smiley at the end of a Bing Chat paragraph denotes the personality that was used to generate the preceding paragraph.

    I wonder what exactly the smileys at the end of a paragraph mean. At first I just thought the AI was using them wrong, because a lot of the emoticons don't seem to fit where it uses them, and thus it gives off creepy vibes. But then I noticed that they tended to, almost exclusively, appear at the end of a paragraph.

    Just now found From Bing to Sydney, which describes, among other things, a different, malicious personality called Venom that uses a devil-emoticon at the end of a paragraph. That was just a hypothetical "anti-Sydney" that came up during conversation, but what if the smileys actually mean something?

    This is probably my own pattern recognition going haywire, but...

    Maybe Sydney and Venom are just two personalities that Bing Chat can take on? I don't have access myself, but judging from the screenshots, if there is no smiley at all, the answer tends to be generic and boring -- machine- or assistant-like. The smileys usually accompany a paragraph that could have been written by a person -- I associate that with the Sydney persona. Then there's the sad smiley, of which I can't really draw any conclusion because there's only one screenshot I saw that has it, but the text was very repetitive and the sentences very short, which is different from the normal sentence lengths of the other 'personalities'.

    I'm probably imagining things... but I thought it was an interesting enough thought that I wanted to share it and see what y'all think.

    4 votes
  5. [3]
    0x29A
    Link
    I am not convinced companies should be (or even allowed to be) imbuing their chat AIs with personality / personification. The more human and less bot-like it seems, the more I feel like the system...

    I am not convinced companies should be (or even allowed to be) imbuing their chat AIs with personality / personification. The more human and less bot-like it seems, the more I feel like the system is manipulative by design, preying on our brains' pattern recognition.

    When the bot acts sad, I cannot help but, at least temporarily, feel sad for it, and I absolutely loathe that a bot has been designed in such a way as to manipulate an emotional response out of me

    3 votes
    1. [2]
      onyxleopard
      Link Parent
      Without more information (that MS likely won't divulge) I'm not even convinced it's been intentionally "designed" this way. This may just be emergent behavior. And it may not even be desirable to...

      Without more information (that MS likely won't divulge) I'm not even convinced it's been intentionally "designed" this way. This may just be emergent behavior. And it may not even be desirable to MS or OpenAI, but they haven't found a way to guard against it (or, more likely, haven't found a robust way to guard against it that doesn't also cripple its utility).

      6 votes
      1. stu2b50
        Link Parent
        They’re definitely not. Autoregressive LLMs trained on the corpus that they’re currently trained on basically learn to pretend to be humans (because all of the training data was humans, writing...

        They’re definitely not. Autoregressive LLMs trained on the corpus that they’re currently trained on basically learn to pretend to be humans (because all of the training data was humans, writing text for humans).

        To make them useful, you try to trick the model into being a subset of humans (the helpful, answering your question kind). Evidently that works surprisingly well but it’s naturally pretty janky.

        7 votes