11 votes

The prompt box is a minefield

54 comments

  1. Gaywallet
    Link

    I think the author outlines some fantastic potential problems with AI with sobering narratives such as the following:

    a chatbot trained on your profile and capable of addressing you directly while harnessing a far fuller range of the persuasive powers inherent in human language.

    I would speculate that weaponized chatbots deployed at scale could prove far more adept at radicalization of users than YouTube.

    These address some of the most salient worries I have around large language models. For AI in general, I'm worried about over-enthusiastic applications in which people aren't considering just what the model is trained on. As the author points out, current large language models often present compelling speech that is at times upsetting. This is a reflection of the internet and the language it's trained on. If we want it to represent society as it is/was at the time of capture/training for the model, it can do a fantastic job. Doing this on a structurally bigoted set of data only ends up reinforcing that bigotry. Models which show bias against minority individuals have been popping up in the news and literature almost constantly: GPT-3 is racist against Muslims; models designed to make care cheaper do a bad job at recognizing acuity in Black folks, because structures make it harder for them to access care (their needs are dismissed at higher rates); a model designed to target foster parents with additional resources, based on whether it thought the kids would be removed, ended up assigning much higher likelihood of removal to all minorities (a reflection of how the legal system currently treats these individuals); an analysis of IRS data revealed that Black men who use EITC tax credits are 20x more likely to be audited; and there are countless other examples of models trained on the current state of things being applied and unintentionally making things worse because they reinforce existing biases.

    Ultimately I think many individuals who don't understand the large-scale implications of the application of AI are far too much in control of AI roll-out. VCs who don't understand the tech see the profit margins and go all in because, like the sophists mentioned in this article, they do not see good or bad (in this case, only money). These companies are cropping up and selling products to individuals who are budget-strapped and looking for a way to do things more efficiently or effectively - of greatest concern to me here are social services, a section of government which provides an extremely important service yet is also chronically underfunded and understaffed. Researchers and those who operate in the space of ethical use of AI (including journalists such as the author of this article) are trying their best to warn the world, but sadly none of them are actually empowered to affect the systems developing or applying these AI models.

    8 votes
  2. skybrian
    (edited )
    Link

    Just as you need to be sophisticated about unreliable information sources to use the Internet, I think we need guides (such as teachers) pushing these chatbots to their limits to show how they break down. Fortunately, that started happening pretty rapidly. As soon as Bing released their agent, there were plenty of people talking about its flaws.

    A sophisticated person might still engage in suspension of disbelief though, much like we indulge in novels or movies or video games.

    I don't know what to tell you. People who drink or gamble usually know what they're getting into, and some people can enjoy these things without harmful effects, but they are often addictive.

    This might be a good time to watch Blade Runner 2049 or Her again.

    3 votes
  3. [52]
    Algernon_Asimov
    Link

    Why do people keep calling these chatbots "artificial intelligence"? There's nothing intelligent about them. They're just a souped-up version of an auto-complete algorithm, like the ones that can be found in many people's smartphones. They even make jokes about these on Facebook: "Type 'My favourite flavour is...' into your phone and post the results here for everyone to laugh at." We know the results will be humorous because we know the auto-complete algorithm in your phone isn't smart.

    No one thinks of their smartphone auto-complete program as intelligent, so why do they think these bigger auto-completers are intelligent?

    3 votes
    1. [9]
      Adys
      Link Parent

      This is a very tired and reductive take honestly. LLMs have displayed the ability to execute logic. You can get them to calculate base64 of random text for example; there’s no autocomplete in the world that can do that.
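
      For the curious: Base64 is a mechanical re-encoding of a string's bytes into a 64-character alphabet. The task itself is trivial for ordinary code (a quick Python sketch below), but an LLM has no encoder to call; it has to have picked the mapping up from its training text alone.

          import base64

          text = "Man"
          encoded = base64.b64encode(text.encode("utf-8")).decode("ascii")
          print(encoded)  # TWFu: every 3 bytes of input become 4 output characters
          print(base64.b64decode(encoded).decode("utf-8"))  # Man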

      We call them AI because they are AI. Crows are less intelligent than we are, and we still call them intelligent.

      7 votes
      1. [8]
        Algernon_Asimov
        Link Parent

        LLMs have displayed the ability to execute logic.

        A basic computer program can execute logic. "If A='Hello', then print 'Hello', else print 'Goodbye'." That's not intelligence.
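
        Spelled out as runnable code, that "logic" is nothing more than a branch; here is a minimal Python rendering of the pseudocode:

            A = "Hello"
            if A == "Hello":
                print("Hello")    # the branch is chosen by a plain comparison
            else:
                print("Goodbye")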

        You can get them to calculate base64 of random text for example;

        I don't even know what this means. Base64 is a mathematical concept. How does it apply to text?

        3 votes
        1. [7]
          Adys
          Link Parent

          I don't even know what this means. Base64 is a mathematical concept. How does it apply to text?

          I think you just answered your own question here. An LLM is shown to be applying mathematical rules, and executing on them, from natural language alone.

          Yes, a basic computer program can do it. But you just called it “an autocomplete”. You need to get your story straight: is it a dumb autocomplete based on word prediction, or is it a computer program that executes logic?

          The answer is it’s both: it’s logic, emergent from an ML model trained on human text.

          Whether you call it “intelligence” is semantics at that point, but it is able to be taught and execute what you teach it, even post training. You shouldn’t confuse consciousness with intelligence either: just because it’s not conscious doesn’t mean it’s not intelligent.

          6 votes
          1. [6]
            Algernon_Asimov
            Link Parent

            You need to get your story straight: is it a dumb autocomplete based on word prediction, or is it a computer program that executes logic?

            I didn't realise these were mutually exclusive.

            Computer programs run logical operations. I'm sure there's a lot of code behind ChatGPT that includes things like "if... then... else" and "AND/OR/NAND/NOR" statements.

            And that code, which includes logical operators, results in an algorithm which automatically completes text, based on certain inputs and based on a large corpus of existing text.

            but it is able to be taught and execute what you teach it, even post training.

            Taught? Or just provided with more data in order to change the output?

            2 votes
            1. [5]
              Adys
              Link Parent

              Or just provided with more data in order to change the output?

              What do you think "teaching" is?

              2 votes
              1. [4]
                Algernon_Asimov
                Link Parent

                See... you start your argument with the assumption that a chatbot is intelligent, so it can be taught. To me, it's not intelligent in the first place, so "teaching" is not a word that can be validly applied to it.

                Giving a chatbot more data is just putting more raw material into the same machine.

                2 votes
                1. [3]
                  Adys
                  Link Parent

                  Seems to me like you've made your mind up and are completely incapable of challenging your own assumptions, so arguing here has become pointless. Ironically, that shows less malleability than ChatGPT does.

                  (Frankly, it's exhausting: you have shown a lack of understanding of how LLMs work; you argue from flawed assumptions; you imply other users' posts are "empty" when you fail to understand them... Do you want to learn, or do you just want the world to be as you assume it already is?)

                  3 votes
                  1. [2]
                    Algernon_Asimov
                    Link Parent

                    you argue from flawed assumptions;

                    Funny. I think the same about you. From my point of view, you're assuming that this algorithm is intelligent without sufficient evidence of that.

                    The most charitable explanation I can come up with is that we are both operating on different definitions of "intelligent", which is what you hinted at in this other comment.

                    Your writing is a lot more original, creative, and unexpected than anything ChatGPT could possibly regurgitate. You can read my comments, understand them, and make counterpoints. There is no way ChatGPT could do that. It can only repeat what it's been told (even if in slightly different phrasing). Therefore, you are intelligent, and it is not.

                    2 votes
                    1. Adys
                      (edited )
                      Link Parent

                      When your main argument is a hyper-simplification of how you think LLMs work, and you refuse to understand that it’s wrong… it’s impossible to have a discussion in good faith. Again, you’re doing it here with “it can only repeat what it’s been told”: this is not how it works, period. Gaywallet already explained this, so I’m not going to repeat it.

                      And just because I’m more creative than GPT3 doesn’t mean I’m more intelligent. Creativity, intelligence, consciousness, all of these are separate concepts although we sometimes confuse them.

                      “It’s just trying to guess the next word” is a very common simplification of LLMs, but it’s akin to saying humans are “just trying to reproduce”. Sure, it’s true, but you can’t define the subject’s intelligence from that snippet, which applies to a cell as much as it does to a human — we are built on that simplicity. So when you start talking about your phone’s autocomplete, clearly you missed the point.

                      8 votes
    2. tesseractcat
      Link Parent

      I feel like intelligence/sentience isn't a binary thing, and making claims like this results in some of the confusing discussion I'm seeing elsewhere in this thread. When you say that auto-complete isn't intelligent, that's a strange claim to me. Obviously auto-complete is more intelligent than a rock, but less intelligent than a bug.

      Of course, I believe in panpsychism, or at least something similar to Integrated Information Theory (the idea that all physical systems have some level of consciousness). The reason I think this is that it's more in line with the analog nature of other physical/natural laws.

      5 votes
    3. [5]
      Moonchild
      Link Parent

      'Artificial intelligence' is a compound word and a term of art. It does not refer to intelligence which is artificial.

      4 votes
      1. [4]
        Algernon_Asimov
        Link Parent

        'Artificial intelligence' [...] does not refer to intelligence which is artificial.

        I have no idea what point you're trying to make, sorry.

        Unless there's a meta-message here, about how chatbots construct sentences which are grammatically sound but semantically empty?

        2 votes
        1. [3]
          Moonchild
          (edited )
          Link Parent

          Unless there's a meta-message here, about how chatbots construct sentences which are grammatically sound but semantically empty?

          That's rather uncharitable, and I take offense at it.

          We would not consider (face-to-face) conversation to fall under the umbrella of 'social media', even though it is undoubtedly both a medium and social. An 'English muffin' is not simply a muffin which comes from England, and neither is 'French toast' toast from France. These are all compound words, which, while etymologically related to their constituents, are not the sum of their parts. This is a simple fact about the way language works and how we communicate. The same is true of 'artificial intelligence': it does not simply mean intelligence which is artificial (and this is a good thing, in my opinion, as 'artificial' is somewhat nebulously defined, and 'intelligence' even more so, as the rest of this thread aptly demonstrates).

          3 votes
          1. [2]
            Algernon_Asimov
            Link Parent

            That's rather uncharitable, and I take offense at it.

            I was being charitable, by asking whether you were writing something that was too clever for me to understand. Because, on the surface, saying that "artificial intelligence does not refer to intelligence which is artificial" seems like nonsense to me. That's like saying "this red apple is not an apple and is not red".

            The same is true of 'artificial intelligence': it does not simply mean intelligence which is artificial

            Then what does it mean?

            2 votes
            1. Moonchild
              (edited )
              Link Parent

              Then what does it mean?

              It is a subfield of computer science. This, too, is somewhat nebulous, but I would say the primary distinguishing characteristic (from other subfields) is attempting to come up with solutions to problems where there is a lack of hard notions of correctness.

              3 votes
    4. [14]
      DataWraith
      Link Parent

      No one thinks of their smartphone auto-complete program as intelligent, so why do they think these bigger auto-completers are intelligent?

      I feel reminded of what Kasparov's advisor said about Deep Blue when the chess computer beat the world chess champion:

      As it goes deeper and deeper, it displays elements of strategic understanding. Somewhere out there, mere tactics are translating into strategy. This is the closest thing I've seen to computer intelligence. It's a weird form of intelligence, the beginning of intelligence. But you can feel it. You can smell it.

      In other words, as you scale up a simple process like auto-completion or principal variation search, it may start to give you results that exceed the sum of its parts (tactics turn into strategy), and compound into intelligent behavior.

      I don't consider the small-scale auto-complete on my phone to be intelligent, but I consider the larger-scale ChatGPT to have the quality of intelligence, in the original sense of the term "Artificial Intelligence" -- i.e. that it can do things that would require intelligence if a human were to do them.

      3 votes
      1. [13]
        Algernon_Asimov
        Link Parent

        I'm not buying it.

        This task obviously does not require intelligence, because we can instruct a non-sapient computer to do it. ChatGPT is just a probability machine. Even if I didn't already know it before, this article right here on Tildes told me this.

        ChatGPT is just a regurgitator. If we fed it a million essays with the factually incorrect phrase "apples are blue", it would predict that the most likely word to follow the phrase "apples are..." is "blue". It wouldn't know any different. It's the ultimate "garbage in, garbage out" machine. It's not showing any intelligence. It's just using text that humans wrote to analyse what words are most likely to follow other words. And it does this fast, because it's tied to a lot of processors. So it looks smart, typing out word after word of mostly understandable text.

        But, with each word it predicts, it's doing exactly the same thing that it did with the previous word and the same thing it's going to do with the next word. It's just the same trick, over and over and over again. It's a one-trick pony. There's no strategy, because it's not thinking beyond the single next word. It has no idea where a sentence will go or how it will end when it starts out. It... just... types... one... word... at... a... time... It could as easily end up with "apples are blue" or "apples are sky" as "apples are red", and it wouldn't know the difference.
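
        The whole trick fits in a few lines. Here it is in miniature as a toy bigram table in Python - deliberately the simplified picture I'm describing, not a real LLM, which conditions on far more context than the previous word:

            from collections import Counter, defaultdict

            # Train on a tiny corpus where the "fact" is wrong on purpose.
            corpus = "apples are blue . apples are blue . apples are red .".split()

            counts = defaultdict(Counter)
            for prev, nxt in zip(corpus, corpus[1:]):
                counts[prev][nxt] += 1

            def next_word(prev):
                # Emit the most frequent follower: garbage in, garbage out.
                return counts[prev].most_common(1)[0][0]

            sentence = ["apples"]
            for _ in range(2):
                sentence.append(next_word(sentence[-1]))
            print(" ".join(sentence))  # apples are blue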

        If it did this trick with numbers or some other form of data, no one would think it was intelligent. But because it churns out words, which is what humans use to communicate, people think it's talking.

        3 votes
        1. [12]
          DataWraith
          Link Parent

          This task obviously does not require intelligence, because we can instruct a non-sapient computer to do it.

          Ah. I guess this is where we differ in opinion.

          If we step back from language generation for a second and focus on chess, you can either say 'a chess engine is artificially intelligent', or you can say 'playing chess does not require intelligence at all'. Either could be true, depending on your definition of intelligence.

          Coming back to language modelling, to me it seems distasteful to say that using language well does not require intelligence though, so I'd rather go with "this thing is artificially intelligent" rather than "using language requires no intelligence whatsoever" (because, as you say, we can construct a computer that does it).

          8 votes
          1. [11]
            Algernon_Asimov
            Link Parent

            Coming back to language modelling, to me it seems distasteful to say that using language well does not require intelligence though,

            I agree! Using language does require intelligence!

            However, my definition of using language includes understanding the meanings of words, and trying to communicate a message via those words - not just putting random meaningless words together in a common pattern.

            A baby can use a hammer. So can an adult. But an adult understands that using a hammer on a glass window is not a good thing, whereas a baby doesn't understand that. They're both using a hammer, but one knows what a hammer is for and the other doesn't.

            ChatGPT doesn't know what it's saying when it types "apples are red" or "apples are blue". "Apples", "are", "red", and "blue" are just meaningless chunks of data as far as ChatGPT is concerned. All ChatGPT knows is that, in a large corpus of text, whenever the text "apples are" appears, it is most commonly followed by the text "red" - so it spews out "red" after "apples are" when it's writing its own text. It doesn't know what an apple is or what red is. It's just constructing a sequence of words that copies previous sequences of words that it has seen. If its algorithms accidentally produced "apples are blue", it wouldn't bat a metaphorical eye.

            It's not using language, in the sense of trying to communicate. It's just constructing text, based on patterns found in previous texts.

            3 votes
            1. [7]
              Adys
              Link Parent

              ChatGPT doesn't know what it's saying when it types "apples are red" or "apples are blue". "Apples", "are", "red", and "blue" are just meaningless chunks of data as far as ChatGPT is concerned.

              You are making incorrect assumptions about how the model works.

              5 votes
              1. [6]
                Algernon_Asimov
                Link Parent

                So ChatGPT does know what an apple is? And it knows that apples are not blue? It actually understands what it's writing?

                3 votes
                1. [5]
                  Adys
                  Link Parent

                  Do you understand what you're writing? What is "understanding", exactly?

                  You know an apple is not blue because... you haven't seen a blue one, I guess; and you've seen a lot of apples in a variety of colors. You also have learned that things in nature are not often blue, and you can extrapolate from there. All this is learned from inputs, which are stored in your brain in a very compressed form. Do you see yet where I'm going with this?

                  ChatGPT knows what an apple is probably far better than you do. It is blind and hasn't tasted one, but it is able, through language alone, to tell you why an apple is not blue. It can also react to a statement such as "The apple is blue", whereas you may or may not recognize it as a weird statement depending on how quickly you read it.

                  ChatGPT has given me a better understanding of how much of my intelligence comes from words alone; how I structure my sentences. I remember somebody on HN mentioning "to get past writer's block, just set a timer for 60 seconds, start writing, don't stop, don't correct, don't edit, just write". And the more I think about it, the more I think this works because we function similarly to GPT.

                  What are some reasons an apple is not blue?

                  There are several reasons why an apple is not blue:

                  Genetics: Apples are genetically programmed to produce pigments that give them their characteristic colors, such as red, green, yellow, and sometimes a combination of these colors. There are no genes that code for blue pigment in apples.

                  Absorption and reflection of light: The color of an object depends on the wavelengths of light that are absorbed and reflected by its surface. Apples absorb light in the blue region of the spectrum and reflect light in the red and green regions. This results in the characteristic red, green, or yellow colors that we see in apples.

                  Evolution: Apples have evolved to be attractive to animals that help in seed dispersal, such as birds and mammals. These animals are more attracted to the red, green, or yellow colors of ripe apples than to blue or other colors.

                  Natural selection: Over time, apples that produce pigments that make them more attractive to seed-dispersing animals have a better chance of reproducing and passing on their traits to their offspring. This natural selection has led to the evolution of the characteristic colors of apples that we see today.

                  Overall, the absence of blue pigment in apples is due to a combination of genetics, the physics of light, and the evolutionary history of the fruit.

                  The apple is blue.

                  I'm sorry, but that statement is not true. Apples are typically red, green, or yellow, but not blue. If you have seen or heard of a blue apple, it may be a product of artistic imagination or a result of special treatments, such as dyeing or genetic modification. However, in reality, apples are not naturally blue.

                  1 vote
                  1. [4]
                    Algernon_Asimov
                    Link Parent

                    Do you understand what you're writing?

                    Of course I do. I'm creating this text. I'm imagining a message I want to convey to you, then translating that message into words, then organising those words into grammatically sound sentences, which I type into Tildes. The process starts with me imagining something I want to say. The words are the medium for conveying my message to you.

                    I could choose a different medium. I could convey this message with pictures. I could create a cartoon which makes these points pictorially... because I understand what I'm saying, and I'm just choosing the labels that convey that meaning, whether those labels are textual or pictorial.

                    The chatbot has no idea what it's saying. It's just lining up data chunks in their most probabilistic order, according to its algorithms. It has no purpose, no intention. It's not trying to tell you anything. It's not choosing the words according to their meanings, but according to their probabilistic distribution in a corpus of text. Scratch the surface, and there's nothing there.

                    It couldn't suddenly change to convey its message in a cartoon, because it doesn't know what it's saying.

                    What are some reasons an apple is not blue?

                    The apple is blue.

                    I assume those chunks of text came from the chatbot. You assume it knows what it's talking about, because it's producing statements that are true. Meanwhile, I wonder which bits of the corpus it's copying, to get those true statements from. It has no knowledge of apples or blueness.

                    We've had computer experts explain to us how the chatbot works: how it probabilistically puts one word after another. Why do you assume this equates to intelligence? Is that how you work? Do you just think, "Well, the most common word after X is Y, so I'll write X then Y?" Is your writing process that simple?

                    3 votes
                    1. Gaywallet
                      Link Parent

                      We've had computer experts explain to us how the chatbot works: how it probabilistically puts one word after another. Why do you assume this equates to intelligence? Is that how you work? Do you just think, "Well, the most common word after X is Y, so I'll write X then Y?" Is your writing process that simple?

                      While at a high level it 'probabilistically puts one word after another', thinking that this means 'the most common word after X is Y, so I'll write X then Y' is a misunderstanding. Of key note, attention heads are paying attention to the sum of what has been written. This is why GPT3 doesn't fall into the same traps that GPT2 and earlier LLMs did, where they'd repeat a phrase ad nauseam. What it suggests for the next word is partially random (through a concept known as temperature), is influenced by the sum of words it has written before this word (attention heads), and is influenced by functional categorization (accomplished by multiple layers). You can draw a lot of analogies from neural networks to how we functionally process information, especially in certain spaces (perceptrons function much like the incremental image processing we see in the visual cortex, areas V1-V5).
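
                      Temperature is the easiest of these to show in code. Here is a toy sampler with made-up logits (the numbers are invented for illustration; this is only the sampling step, not a real model):

                          import math, random

                          def sample(logits, temperature=1.0):
                              # Scale logits by 1/temperature before the softmax:
                              # low temperature sharpens the distribution (near-greedy),
                              # high temperature flattens it (more surprising choices).
                              scaled = [l / temperature for l in logits]
                              m = max(scaled)
                              weights = [math.exp(s - m) for s in scaled]
                              return random.choices(range(len(logits)), weights=weights)[0]

                          words = ["red", "green", "blue"]
                          logits = [2.0, 1.0, 0.1]  # hypothetical scores for the next word
                          print(words[sample(logits, temperature=0.2)])  # almost always "red"
                          print(words[sample(logits, temperature=2.0)])  # noticeably more varied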

                      4 votes
                    2. tesseractcat
                      Link Parent

                      We've had computer experts explain to us how the chatbot works: how it probabilistically puts one word after another. Why do you assume this equates to intelligence?

                      Why not? The 'goal' of the model is to learn how to predict the next word as accurately as possible. This is done by tweaking weights/"parameters" in the model so it gets better. The idea is that after a certain point, in order to get better at predicting the next word, you need some understanding or model of what it is you are predicting.
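
                      As a toy illustration of what "tweaking weights" means (a two-word vocabulary and a bare softmax, nothing like a real transformer):

                          import math

                          vocab = ["red", "blue"]
                          weights = [0.0, 0.0]   # one logit per candidate next word
                          lr = 0.5               # learning rate

                          def probs(w):
                              exps = [math.exp(x) for x in w]
                              total = sum(exps)
                              return [e / total for e in exps]

                          # The training text says "apples are red", so the target is "red".
                          target = vocab.index("red")
                          for step in range(5):
                              p = probs(weights)
                              for i in range(len(weights)):
                                  # Softmax cross-entropy gradient: p[i] minus 1 for the target.
                                  weights[i] -= lr * (p[i] - (1.0 if i == target else 0.0))
                              print(step, [round(x, 3) for x in probs(weights)])
                          # The probability of "red" climbs every step; that is all "learning" is here.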

                      Since LLMs have gotten so good at next-word prediction, it's strange to think that this doesn't demonstrate at least some amount of hidden world-modelling.

                      Also, it's very human centric to think that all thought processes and paths to intelligence have to match how humans think.

                      2 votes
                    3. teaearlgraycold
                      Link Parent

                      I’d like to throw in that GPT3 would never be able to be a full model of human intelligence. It’s merely the ears and mouth. It’s got such a commanding understanding of grammar that it treats the entire dictionary as part of English grammar. But hooked up to a rewritable memory and sensory bits the conglomerate might start to behave with agency and show more interesting behavior.

            2. [3]
              skybrian
              Link Parent

              Language game fluency is a part of understanding, even if it's not everything. There is a sense in which it probably doesn't understand "apple," but also a sense in which it does. It "understands" how the word "apple" is related to lots of other words like "trees" and "fruit," it could tell you about different kinds of apples, it knows to translate it as "pomme" in French, it can give you a definition, and so on.
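
              That kind of relatedness is measurable. A sketch with hand-made word vectors (real models learn such vectors from co-occurrence in text; these particular numbers are invented purely for illustration):

                  import math

                  vectors = {
                      "apple":      [0.9, 0.8, 0.1],
                      "fruit":      [0.8, 0.9, 0.0],
                      "tree":       [0.7, 0.3, 0.2],
                      "carburetor": [0.0, 0.1, 0.9],
                  }

                  def norm(v):
                      return math.sqrt(sum(x * x for x in v))

                  def cosine(a, b):
                      dot = sum(x * y for x, y in zip(a, b))
                      return dot / (norm(a) * norm(b))

                  for word in ("fruit", "tree", "carburetor"):
                      print(word, round(cosine(vectors["apple"], vectors[word]), 2))
                  # "fruit" comes out closest to "apple": relatedness falls out of the geometry.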

              This is something like how people can sort of understand words when they've only heard them used in conversation or in books, but never seen such a thing in real life. You could write a story about Julius Caesar or Batman or a unicorn, and so could ChatGPT.

              Yes, it could be misled by things it's read, but then again, so could you. Reading any reasonable subset of the Internet should be enough to say that apples are conventionally considered red (even though they may be green).

              The part where it gets weird is that as part of its training, it's learned enough to try to imitate anyone on the Internet. To the extent that the personas in a chat session have opinions, they are just those of whatever characters it's attempting to imitate now.

              These personas have the same sort of existence as the characters in an unfinished novel. Or maybe it's like an NPC in a tabletop role-playing game, since the personas aren't real, but you can "talk" to them and the game master will simulate them for you.

              OpenAI has tried to tame ChatGPT so that it only plays one somewhat boring character, that of an intelligent assistant. But the character it plays is not the same as the thing that simulates the character.

              Going back to "apple" you could ask whether the word "apple" in a novel refers to the same thing as "apple" in real life. How much grounding do you need to make that happen? Suppose the author who wrote the novel had never seen one in real life, and just researched them on the Internet? Maybe it's enough to say that the "apple" entry in Wikipedia does refer to real-world apples, and you've read about it?

              2 votes
              1. [2]
                Adys
                Link Parent

                The part where it gets weird is that as part of its training, it's learned enough to try to imitate anyone on the Internet. To the extent that the personas in a chat session have opinions, they are just those of whatever characters it's attempting to imitate now.

                I don't think this is quite correct. In its current form, you can't truly say that GPT has an "opinion". It took me a while to understand why that didn't sound right: an opinion is a very human trait that requires belief or judgement. It can't really be said that GPT has either of those. It does have preferences, driven by its training (just as our preferences are driven by experience, hormones, etc).

                It can act as if it has an opinion, but if you can get it to change that opinion with very little input, that's not a real opinion. A good comparison is: "That politician who was paid to say X, do they really hold that opinion about X, or are they just play-acting that they have it?"

                1 vote
                1. skybrian
                  Link Parent

                  I agree that a language model fundamentally isn't a person and fictional characters aren't real.

                  Still, you can make statements about the opinions of characters in stories and people will know what you mean.

                  I think people really want to be talking to someone, so OpenAI has created a rather bland corporate character called an AI Assistant and tried pretty hard to get their system to always remain in character, but it doesn't do that naturally.

                  Does everyone really want to talk to a generic servant character? Maybe other companies will decide to simulate different characters, and some of them will be more popular?

                  With better continuity, these fictional characters might start to seem pretty real and that would be pretty weird.

                  1 vote
    5. teaearlgraycold
      Link Parent

      AI doesn’t cover just legitimate artificial general intelligence. It can cover pretty much any self-tuning program. Maybe it shouldn’t be that way, but once the marketing people got ahold of the term they warped it. So if you want to talk about human-like intelligence in a computer, the term is AGI.

      3 votes
    6. [21]
      PantsEnvy
      Link Parent

      Have you ever read Alan Turing's Computing Machinery and Intelligence?

      I am astounded at how relevant it is now 73 years after it was first published, and how clearly he thinks through the various problems.

      I PROPOSE to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.

      He then goes on to propose a game, now known as the Turing test, which you must be familiar with. If you haven't interacted with ChatGPT, you should. ChatGPT is able to converse just like an incredibly versatile, long-winded amnesiac who thinks he is an AI machine and occasionally makes stuff up.

      3 votes
      1. [20]
        Algernon_Asimov
        Link Parent

        I am certainly aware of the Turing Test.

        If you haven't interacted with ChatGPT, you should.

        Every time I get slightly curious, I end up at a page requiring me to sign up, and I baulk at creating yet another login for yet another software app, and giving yet more data to yet more big corporations. I'm not that motivated to interact with a text prediction program - especially when the most interesting interactions end up making their way out to the wider internet for me to see.

        Nothing I've seen indicates that ChatGPT could pass the Turing Test. Well, not for an informed and expert tester. I know about the Google engineer who thought Google's chatbot was sentient. I wouldn't be surprised if some people believe ChatGPT is sentient. But anyone with enough common sense can tell that ChatGPT doesn't know what it's talking about. And, the longer any interaction continues, the more off-kilter the chatbot gets: "It can fool some of the people all of the time, and all of the people some of the time, but it can not fool all of the people all of the time." (Abraham Lincoln was obviously interested in artificial intelligence a century before Alan Turing!)

        3 votes
        1. [17]
          PantsEnvy
          Link Parent

          If all you rely on are social media posts, your view is naturally going to be slanted.

          I think you would struggle to tell human vs machine with greater than 70% accuracy after five minutes of Q&A.

          Would you like to participate in a Tildes version of the imitation game? I am throwing down the gauntlet.

          3 votes
          1. [2]
            Algernon_Asimov
            Link Parent

            If all you rely on are social media posts,

            Do you consider this thread to be a social media post? Remember: this is the article that was posted at the top of this thread.

            Is this post a social media post? And this post? These are the types of sources I'm getting my information from.

            2 votes
            1. PantsEnvy
              Link Parent

              It wasn't a criticism.

              I've read the same posts you have.

              I am simply stating that interacting with the chatbot is another experience entirely.

              1 vote
          2. [14]
            NaraVara
            Link Parent

            Would you like to participate in a Tildes version of the imitation game? I am throwing down the gauntlet.

            This would be a great recurring megathread where someone gets to draw the short straw each week.

            1 vote
            1. [13]
              PantsEnvy
              (edited )
              Link Parent

              I was thinking something specific. What is interesting is @Algernon_Asimov (AA) hasn't interacted with GPT, yet is confident it can't pass the test.

              I was thinking he could ask maybe 20 questions that can be answered succinctly, and 5 in depth questions, then various users could respond. Some users would give human responses. Some users would give answers from the bots. Then AA can guess which is which. If he is 70% accurate, he is right.

              There are a couple of issues with this test. ChatGPT wasn't trained to pretend to be a human, but I think we can jailbreak it, or use a GPT trained to act more human. AA has learned about all of ChatGPT's deficiencies, which goes against the spirit of a brief conversation, so either certain questions are not allowed or responders are allowed to prompt ChatGPT to respond more like a normal human would, as it would if it were trained to pass the Turing test. Lastly, AA would have to ask all the questions at once, which again goes against the spirit of a brief conversation, so perhaps AA is allowed some follow-up questions.

              Edit: @NaraVara did you have something else in mind? Not sure what you meant by the short straw...

              2 votes
              1. [9]
                Algernon_Asimov
                Link Parent

                I was thinking he could ask maybe 20 questions that can be answered succinctly, and 5 in depth questions, then various users could respond.

                No. That's not how to tell if a computer program responds like a human being. I'm not going to ask informational questions like "Why is the sky blue?" or "What is a duck?" which can be answered by copy-pasting chunks from Wikipedia. That's a format that requires a human to emulate a computer, so of course the computer will answer like a human, because a human would be responding like a computer.

                I would have an open-ended rambling conversation with it, starting with an open-ended non-Wikipedia question like "How was your day?" The point of a Turing Test is that a person can not distinguish between a computer's responses and a human's responses. That means the computer is supposed to be able to respond like a human. So "How was your day?" is a great starting point.

                I just had the thought that the ELIZA bot would be a perfect tool to use in a Turing Test. Why should I have to waste my time interrogating a computer about its feelings, when there's already a computerised tool to do that for me! Let the computer out the computer. :)

                (And there's something the chatbot probably can't do - include a hyperlink when appropriate. It does not know what it's typing, so it doesn't recognise when it's making a reference that might not be commonly known.)

                ChatGPT wouldn't last 5 minutes in my Turing Test.

                Another test would be to put the tester in front of a computer screen with three entities to interrogate (Ada, Bob, Eve): two humans and one chatbot. The tester would not know whether each entity was human or computer. The tester could send any message/s to any entity/s. They could have an extended interaction with each entity. They could switch their attention from entity to entity, and back again, and so on. They could spend a whole day, if necessary, just chatting with Ada and Bob and Eve. Or just five minutes. Whatever they choose. And, at the end of the test, the tester has to work out which one of the three entities is the chatbot.

                2 votes
                1. [8]
                  PantsEnvy
                  Link Parent
                  • Exemplary

                  OK, I will kick off another thread. It will be two humans and two chatbots answering your questions. You will be able to ask 5 minutes of questions. This will be your chance to interact with ChatGPT without having to share your info with another social media site. Interested?

                  2 votes
                  1. [7]
                    Algernon_Asimov
                    Link Parent

                    Wow. You're a lot more invested in this than I am!

                    I'm truly sorry if I've given you the wrong impression.

                    If I was even half as invested in this chatbot as you seem to think I am, then I would have already created a throwaway email address, and signed up to the ChatGPT interface. I'm not that invested. I've read the articles about how it works. I've seen the type of output it creates. My curiosity, such as it is, has been sated.

                    But thank you very much for the offer.

                    2 votes
                    1. [4]
                      cfabbro
                      (edited )
                      Link Parent

                      I'm not that invested.

                      Really? Could have fooled me, given how much you have been arguing with people in this topic. ;)

                      1 vote
                      1. [3]
                        Algernon_Asimov
                        Link Parent

                        There's a difference between spending a couple of minutes typing out a random opinion when I feel like it, and sitting down for a dedicated 20-minute question & answer session (4 test subjects x 5 minutes each).

                        There's also the aspect that typing out some random opinions just wastes my time. I don't feel right making someone else do the work that would be required to make such a test happen.

                        Everyone else around here is much more impressed by this chatbot than I am. I'm more interested in trying to discuss the misconception that this text-prediction algorithm knows what it's saying. And, somehow, because of that, I got dragged into doing a Turing-type test that I never asked for and don't want. I keep forgetting how helpful certain people on the internet are, and how much they assume that everyone shares the same interest in technology that they do.

                        2 votes
                        1. [2]
                          cfabbro
                          (edited )
                          Link Parent

                          Have you ever stopped to consider that maybe it's actually you that has the misconceptions about ChatGPT, LLMs, and how they actually work? Because I know for a fact that you have had several software engineers and data/computer scientists (who presumably know what they're talking about) reply to you in this topic trying to correct/inform you, and yet despite that you have completely dismissed all of what they had to say and continue to argue with them instead.

                          p.s. And as for people trying to be helpful, if you don't want that to keep happening then stop replying to everyone! Just ignore the people who annoy you, and walk away from the discussion.

                          1. Algernon_Asimov
                            (edited )
                            Link Parent

                            Have you ever stopped to consider that maybe it's actually you that has the misconceptions about ChatGPT, LLMs, and how they actually work?

                            Me? Wrong? What have you been smoking? :P

                            I've also read articles by people like "an AI expert and professor emeritus of psychology and neuroscience" saying “It doesn’t really have a clue what it’s saying.” At the very least, opinion is divided about ChatGPT's sapience and intelligence.

                            One of the earliest chatbots was ELIZA (which I linked above). It basically took people's statements, converted them into questions, and reflected them back to people. For example:

                            ELIZA: How are you?

                            Person: I'm feeling a bit blue today.

                            ELIZA: Why are you feeling a bit blue today?
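
                            The whole trick is shallow enough to fit in a few lines. A minimal Python sketch of that reflection pattern (not Weizenbaum's actual script, just the flavour of it):

                                import re

                                def eliza_reply(statement):
                                    # Reflect a first-person statement back as a question.
                                    reflected = statement.rstrip(".!").replace("I'm", "you are").replace("my", "your")
                                    match = re.match(r"you are (.*)", reflected, re.IGNORECASE)
                                    if match:
                                        return "Why are you " + match.group(1) + "?"
                                    return "Tell me more."

                                print(eliza_reply("I'm feeling a bit blue today."))
                                # Why are you feeling a bit blue today?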

                            People thought ELIZA was intelligent / sapient / a real person / etc. So, I'm not surprised they think a similar thing about ELIZA's great-great-great-great-grandkid which has a whole lot more code & data under the hood. That doesn't make them right, though. We anthropomorphise everything from cats to cars - and they can't even talk to us. This bot can talk. Naturally, people respond to it as if it's a real boy. That's just what we do as human beings.

                            3 votes
                    2. [2]
                      PantsEnvy
                      Link Parent

                      @NaraVara, were you interested in being in the barrel?

                      1 vote
                      1. NaraVara
                        Link Parent

                        Sure I'd give it a spin.

                        1 vote
              2. [2]
                NaraVara
                Link Parent

                Oh the short straw just meant someone gets a turn in the barrel to try their hand at doing the Test.

                1 vote
                1. PantsEnvy
                  Link Parent

                    If Algernon is not interested, I am happy to have you take the test. Or if Algernon is interested, I can do you next. I am not sure about answering 20x2 questions every week, however. Unless you get the short straw the following week, in the case that someone else wants a shot at the barrel :)

                  2 votes
        2. [2]
          Greg
          Link Parent

          I've seen fairly few conversations with ChatGPT that I'd definitively be able to pick out as a bot if asked in a blind test, and even fewer that I'd actively think to call out if I'd been primed to assume they were from a human.

          I've seen plenty that are bad for one reason or another, but I don't see a lot that diverge in a specifically bot-like way rather than ways that could just as easily be a tired/inattentive/confused/distracted human. If your Turing test has a panel of alert, erudite, well-informed expert testers and it involves fooling all of them all of the time, I'd expect a good double digit failure percentage from humans as well.

          1 vote
          1. NaraVara
            (edited )
            Link Parent

            One of the things people who study online disinformation have started to realize is that identifying the bots and paid trolls becomes really difficult because the behaviors of bots, paid troll/click farm accounts, and a lot of (probably not very emotionally well adjusted but nonetheless sincere) people can be quite indistinguishable.

            We tend to hold the bots up to the standards of a smart, articulate person who cares about what they're doing and is attempting to be good at it. We don't tend to hold them up to the standards of a marginally interested grunt just trying to shit out a unit of essay or question response or whatever because you told them to. (Or, in the case or maladjusted people, because their content feed has gotten them too high on outrage to think straight anymore).

            Indeed, part of how social media services tend to operate is to condense human interactions--which rely heavily on background knowledge, elements of tone and body language, and a bunch of other extremely subtle nuances--into forms that are "machine readable." Hence, an affirmative nod and polite chuckle when someone cracks a not-very-funny joke turns into a thumbs-up button or an LOL. What may have been a statement like "Yeah you make a good point" turns into an upvote.

            But these are lossily translated. That upvote could be "That's a good point" or it could be "Oh I hadn't thought of it that way before" or it could be "FUCK YES!" or it could be "I don't agree with this but think you laid out your argument really well." But eventually, a few generations trained on interacting this way may not actually have developed the social-cue-reading muscles to read the nuances anymore. So they'd functionally get worse at being "Turing Test Administrators."

            4 votes