15 votes

What should a layperson know about AI?

Asking for a friend. 😉

In all seriousness, the question was inspired by the news out of Google and specifically @Whom's comment here.

What should non-technical laypeople know about AI?

The info doesn't have to be limited to just this particular news item either. What information would you want included in an AI 101 rundown? What is it currently used for? What will it do in the future? What are its limitations? What are its potentials?

And, of course, how should people interpret stuff like today's big news item?

24 comments

  1. [8]
    Adys
    Link
    • Exemplary

    I actually want to highlight this comment on HN, which I think perfectly sums up the Google AI situation: https://news.ycombinator.com/item?id=31728348

    It's a bit futile to argue about sentience at this point, I think, because the thing is, we still don't have a good grasp on what sentience is. So answering the question "Did we create, in software, something we don't fully grasp on a biological level yet?" is very difficult. The comment, and the ensuing discussion, highlight the more immediate issues that are actually likely to come.

    With that said, what should a layperson know about AI...

    In no particular order:

    • AI is not magic. It's just sufficiently advanced.
    • At its core, machine learning (which is at the heart of the current wave of AI) is not that conceptually different from a Markov chain. Markov text chains were sometimes eerily "human-like" even decades ago (see the toy sketch just after this list).
    • What is a neural network? - By 3Blue1Brown
    • The Stillwell Brain - vsauce simulates a "human brain" using a handful of actual humans in a football stadium. A fascinating video you should watch now if you've never seen it.
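
    To make the Markov chain point concrete, here's a toy word-level text chain in Python (a sketch with my own made-up function names, not any particular library). All it remembers is which word tends to follow which, yet it already produces vaguely sentence-shaped output:

    ```python
    import random
    from collections import defaultdict

    def build_chain(text):
        """Map each word to the list of words that were seen right after it."""
        words = text.split()
        chain = defaultdict(list)
        for current_word, next_word in zip(words, words[1:]):
            chain[current_word].append(next_word)
        return chain

    def generate(chain, start, length=15):
        """Walk the chain: repeatedly pick a random word seen after the current one."""
        word, output = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

    corpus = "the cat sat on the mat and the dog sat on the rug and the cat slept"
    print(generate(build_chain(corpus), start="the"))
    ```

    Feed it a few gigabytes of text instead of one sentence and the output starts to look surprisingly plausible, which is the intuition to carry into the rest of this.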

    You see the word "AI" everywhere and it's often a misnomer; a catch-all for a type of software technology (machine learning) which is extremely useful for solving certain types of pattern-recognition problems when you have access to a LOT of data about said pattern.

    Machine learning is a way of reducing a math problem down to "Here are millions, maybe hundreds of millions of examples of The Problem. Now, figure out a good answer for this new variation of The Same Problem that you haven't seen before".

    An early application of this technology was handwriting recognition / OCR: Computer has never seen this particular number 4 being written, but it has seen many many other variations of it, so it is "taught" to react to it.
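
    A minimal sketch of that OCR idea, assuming scikit-learn and its small bundled digits dataset: train on a couple thousand labelled images, then ask about one the model has never seen.

    ```python
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # ~1800 tiny 8x8 images of handwritten digits, each labelled 0-9
    digits = load_digits()
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.2, random_state=0)

    # a small neural network: "compress" the examples into its weights
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)

    print(model.predict(X_test[:1]), y_test[:1])  # a digit it has never seen before
    print(model.score(X_test, y_test))            # typically well above 90% accuracy
    ```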

    The secret behind deep learning is algorithms that do this highly efficiently -- "compressing" all the examples into trillions of little "triggers" that will very quickly point the algorithm to an answer when reading whole words, whole sentences, etc. Then, it's about "tuning" those algorithms: more data, higher-quality data, and tuning of various parameters to give better results for the problem.
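
    Continuing the sketch above (and reusing its X_train/y_train split), "tuning" in practice often just means trying different knob settings and keeping whichever scores best. A minimal example with scikit-learn's GridSearchCV:

    ```python
    from sklearn.model_selection import GridSearchCV
    from sklearn.neural_network import MLPClassifier

    # try a few network sizes and regularization strengths, keep the best
    search = GridSearchCV(
        MLPClassifier(max_iter=500, random_state=0),
        param_grid={
            "hidden_layer_sizes": [(32,), (64,), (128,)],  # network size
            "alpha": [1e-4, 1e-3, 1e-2],                   # regularization strength
        },
        cv=3,  # 3-fold cross-validation to judge each combination fairly
    )
    search.fit(X_train, y_train)   # X_train, y_train from the digits sketch above
    print(search.best_params_, round(search.best_score_, 3))
    ```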

    "The Problem" has then evolved to more than just OCR. Maybe you remember using Waifu2x before? This is an AI trained on a lot of manga/anime input, that essentially learned how to upscale comic style drawings. The problems can be very specific such as that one, or more general.

    The state of the art is scary cool now: With GPT, you see text generation based on prompts. With DALL-E, you see image generation based on prompts. Or surely you've heard of AlphaFold, which yields incredible results on protein folding. We haven't "solved" these problems so much as trained a computer to yield results based on many, many, many examples.

    It's still the same. The "AI" was trained on so, so many gigabytes of text, natural sentences, etc... It now knows how to respond to any prompt, really absurdly well.
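
    If you want to poke at this yourself, here's a small sketch assuming you have the Hugging Face transformers library installed, using the freely available GPT-2 weights -- a far smaller cousin of the models in the news, but the same idea of completing a prompt:

    ```python
    from transformers import pipeline

    # downloads the small, public GPT-2 model on first run
    generator = pipeline("text-generation", model="gpt2")

    result = generator("What should a layperson know about AI?",
                       max_length=60, num_return_sequences=1)
    print(result[0]["generated_text"])
    ```

    Run it a few times and you'll see both the fluency and the shallowness: grammatical, plausible-sounding text that often doesn't hold together.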

    Again, at its core, all these AIs are not any different from the previous, dumber versions, many of which would fool people already; it's just that the training data has evolved to absurd amounts.

    So where does that leave humans? Well, again, I have to stress that nothing is "solved". AlphaGo didn't learn how to play Go like a human would (over some explanations, and hundreds, then thousands of matches). AlphaGo needed several orders of magnitude more example matches to learn to beat grandmasters. If it had played those matches at the same speed as the grandmasters, it would have needed millions of years to learn.

    Computers have a cheat code: They can go really fast. So it might not matter what is solved and what isn't solved if at the end of the day, humans need not apply.

    12 votes
    1. [2]
      Comment deleted by author
      Link Parent
      1. Akir
        Link Parent

        A while back I heard someone define AI as any machine that is aware of its environment and is capable of reacting to it. It's so broad that even a simple if statement could qualify as an AI.
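
        To make that concrete, a toy example (the function names are mine): by that definition, this thermostat "senses" its environment and "reacts" to it.

        ```python
        def thermostat(current_temp_c, target_temp_c=20):
            # "aware of its environment"...
            if current_temp_c < target_temp_c:
                return "heater on"   # ..."and capable of reacting to it"
            return "heater off"

        print(thermostat(17))  # -> heater on
        ```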

    2. [6]
      whbboyd
      Link Parent

      That's a great set of bullet points. I'll add two more:

      • AGI ("Artificial General Intelligence", i.e. human-level intelligence) is not imminent. It's a good bet that our current paradigms are insufficient to achieve it. For at least the next decade, you should not believe news (or press releases from "AI" companies) that says otherwise.
      • The risks of "AI" are completely unlike Skynet or a paperclip maximizer. Instead, the risk is that humans will use "AI" technologies to do the regular shitty human things more widely and efficiently. And actually, this isn't a risk; it's actively happening as we speak.

      The first bullet is just my professional opinion (disclosure: while I'm a highly experienced software developer, I don't work in "AI"), and would be very difficult to prove (the questions of "what can't our current systems do" and "what unpredictable breakthroughs will happen" are obviously very open-ended), but here's a heuristic argument. AGI fundamentally requires reason: the ability to construct novel facts coherent with the rules governing the intelligence's environment. Modern "AI" techniques structurally don't reason; they are incredibly sophisticated pattern matching (and extension) systems, and generate "new" information only in the same sense (though obviously in a far more sophisticated manner) as a Markov model, or shuffling a deck of cards.

      6 votes
      1. vektor
        Link Parent

        I'll bite: I work in AI. I think you're pretty much spot on about reasoning being a major missing piece. I think the "not in the next decade" heuristic is also pretty good. However, from down here in the trenches it's quite hard to judge. There's a lot of good ideas floating around all the time, and the right combination of ideas could come around any moment now, it feels like. A lot of promising avenues are being explored. However: If that right combination of ideas does materialize, you'll hear it loud and clear. Those will be major breakthroughs and in all likelihood, we'll see a substantial shift in the techniques we use in the field.

        9 votes
      2. arghdos
        Link Parent

        And actually, this isn't a risk; it's actively happening as we speak.

        On top of the bias issue you linked, it’s worth recalling that China is attempting to use AI to place an entire ethnicity under surveillance:

        https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html

        4 votes
      3. [2]
        Adys
        Link Parent

        I wrote a theory several years ago (some time around the Trump election) that Facebook is a paperclip maximiser for engagement.

        Of course, thanks to its limited set of tools, it didn’t eat the universe in its conquest for engagement but I don’t think I was far off. They’ve since corrected explicitly for this as far as I’ve heard.

        3 votes
        1. Macil
          Link Parent

           Ultimately, businesses and other institutions are made up of people that do their reasoning for them, and with enough coordination those people can choose to modify or override the institution's goal (profit maximization etc). AGI is scarier because it can do all the reasoning it needs without people in the loop. I often see people compare the issues of capitalism to AI risk, seemingly to imply that because we're running into these issues already, AI risk can't be that much worse, but that really doesn't follow.

          Imagine businesses that increasingly cede control at the very top levels to AGI for competitive advantage: there's less and less possibility for people involved to override the institution's goal. AGI risk is like the same problems of capitalism but turned up to nightmare difficulty, with the possibility of institutions that are run by truly inhuman profit maximizers.

          1 vote
      4. onyxleopard
        Link Parent

        Modern "AI" techniques structurally don't reason; they are incredibly sophisticated pattern matching (and extension) systems, and generate "new" information only in the same sense (though obviously in a far more sophisticated manner) as a Markov model, or shuffling a deck of cards.

        Some recent work may change your mind on this. It may be the case that some large language models have enough parameters that they actually can learn to reason. Or maybe they are just overparameterized and what they are actually learning is some sort of ersatz reasoning that isn’t like the symbolic reasoning that we believe humans perform. 🤷

        2 votes
  2. mihaitodor
    Link

    Gary Marcus wrote an article on this yesterday which I think is relevant and I hope doesn't come off as condescending. He's trying very hard to push against all the hype and, while I'm not in any way experienced in the state of the art of AI, I think more people with his background need to speak up against this, because the hype creates the wrong incentives, where people will end up using the technology as-is for mission-critical applications (e.g. self-help apps or self-driving cars) and they'll cause real harm to real people. If you'd like a more detailed version of it, Sean Carroll interviewed him recently for the Mindscape podcast: https://www.preposterousuniverse.com/podcast/2022/02/14/184-gary-marcus-on-artificial-intelligence-and-common-sense/

    9 votes
  3. [3]
    skybrian
    Link

    That's a very broad question but I'll try.

    The present

    First, it's good to learn how easy it is to fool people using chatbots that aren't intelligent, particularly if they want to believe. This has been true for a long time. One of the earliest projects was Eliza, a therapist chatbot that was built in the 1960s. It was very simple by modern standards, but it was able to fool a few people using simple tricks.
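
    For a sense of just how simple those tricks can be, here's a toy Eliza-style responder (a sketch in its spirit, not Weizenbaum's actual rule set): keyword matching plus canned reflections, with no understanding anywhere.

    ```python
    import re

    # a few pattern -> canned-response rules, in the spirit of Eliza
    RULES = [
        (r"\bi am (.*)", "Why do you say you are {0}?"),
        (r"\bi feel (.*)", "How long have you felt {0}?"),
        (r"\bmy (.*)", "Tell me more about your {0}."),
    ]

    def respond(message):
        for pattern, template in RULES:
            match = re.search(pattern, message, re.IGNORECASE)
            if match:
                return template.format(match.group(1))
        return "Please, go on."  # fallback when no rule matches

    print(respond("I feel like nobody listens to me"))
    # -> How long have you felt like nobody listens to me?
    ```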

    It's particularly easy to fool people who don't have the chance to try it out for themselves. Someone who is trying out a chatbot or an image generator will try a bunch of things, keep whatever they like, and discard the rest. You should assume that any output that people share is much better than average output. We naturally like to share successes rather than failures. Only someone who is being very conscientious is going to try to do an unbiased test.

    I also think it would be helpful for people to actually try these things out for themselves, under supervision. A good teacher would be able to demonstrate a chatbot's known limitations, showing people things that they might not think to try on their own. This seems sort of like explaining a magic trick? First you show how impressive it is, and then show them how it fails. (And maybe Eliza would be a good place to start?)

    And actually, teaching people about magic tricks seems valuable in itself.

    The future

    There's unfortunately little we can say definitively about the future since we don't know what people will invent. Science fiction stories tend to be more imaginative than predictive. What we can say, though, is that there will be plenty of "magic tricks" that aren't quite as good as they seem.

    But true intelligence is not necessary for this technology to be very dangerous, and we can't put any bounds on how bad it will get. It's sort of like asking "how bad can a virus be, really?"

    Learning how to protect your computer accounts is important. Learning how to keep backups is important.

    It's also important to learn to be skeptical of what you read, without descending into nihilism and then latching on the next crazy theory that comes along. But I don't know how that can be taught.

    6 votes
    1. vektor
      (edited )
      Link Parent

      It's also important to learn to be skeptical of what you read, without descending into nihilism and then latching on the next crazy theory that comes along. But I don't know how that can be taught.

      This is a very good question that maybe deserves its own discussion. But to give a bit of a starting point, I think it can be practiced by asking the following questions and using the resources at your disposal to seriously try to answer them. Say you read the article above. Start with the following: Who is saying this, and why should I believe them? Who is doing the reporting of what is being said? Do I trust that entire chain of telephone game? Can I get closer to the source material? Can I find contrasting accounts of the same topic? What motive would the people I have to trust here have to lie to me? Unfortunately, all of those might not get you very far if it boils down to "a self-proclaimed expert says we should all panic", but maybe that tells you enough as it is.

      Edit: What to practice it on? Things that you find suspicious, but you don't know why they're wrong. A recent example from my personal experience is all the Germany-bashing regarding Ukraine. In that example, I'd take an article that claims Scholz messed up this or that and see what's actually there and what is journalistic fabrication. That way, I'd have a vague notion (from German media) of what the ground truth is, which is important if you're just starting to practice this: you can check your work. Basically, any time you see reporting that arrives at conclusions you know to be bullshit, you know to dig deeper. Not to dismiss an article you already know to be BS, but to find out what to look for in articles that you can't verify or dismiss outright.

      3 votes
    2. skybrian
      Link Parent

      One thing I wonder is how do you teach people to live with impending doom? That is, the terrible scenarios that might happen. When I grew up it was nuclear war. It didn't affect me too much, but some people got seriously bent out of shape by it.

      Today it might be climate change, and I think AI takeoff is a reasonable concern. I don't think it's something kids should be worrying about, though; that's just likely to lead to psychological problems.

      You might say that we are, all of us, already pre-doomed because we will die someday. (That's a 99% chance I'd guess - I'm an optimist.) Learning to live with catastrophic scenarios that haven't happened seems similar?

      1 vote
  4. [2]
    rogue_cricket
    (edited )
    Link

    A lot of people more proficient than me are speaking to the technical and philosophical details of AI and speculation on its future that I feel under-qualified to comment on. What I haven't seen mentioned yet though, and what I think is absolutely something a layperson should know about current commercial AIs, is the often heavy involvement of human-powered intelligence throughout the process (sometimes veering into AI claims being outright lies). I don't just mean the programmers creating the ML models, I mean the humans involved in feeding those models - those humans often get ignored or hidden behind a curtain.

    Mechanical Turk is an example of a service which provides humans en masse to complete surveys, categorize data, and answer questions about input and output. It is run by Amazon and it is mind-numbing, low-paying work. Well - someone's got to do it, right?

    But sometimes companies go a bit further than using it to train an AI, and just sell you human work dressed up in a robot suit while the tech catches up with their aspirational business model. Many services supplement AI with human work at the very least, and while I don't think that's a bad thing necessarily, it makes me uncomfortable when they are not open about it, because I think it contributes to an unrealistic perception of what computers are currently capable of.

    (ETA: Not to mention the privacy concerns.)

    6 votes
    1. MimicSquid
      Link Parent

      Yeah. I interviewed at a firm that was claiming that they were going to be the next big thing in bookkeeping because they had cracked machine learning for financial data entry. Turns out there was a good bit of "oversight" being done overseas, but they talked a good game.

      1 vote
  5. [6]
    vektor
    Link

    Honestly, I didn't read that article beyond the title, so consider that my advice on how to interpret the news item: not at all.

    I think an important thing to note is that to date, AI is generally only ever used as an automation tool for (mind-numbingly) boring tasks. AI is not nearly at the level where it can seriously compete with human intelligence, and it will stay that way for the next few years.

    Another thing is that AI is currently all about data and very little about computation. It's pattern matching. It's this xkcd. You take a table of X and Y, call X the query and Y the response. You take a nebulous computational structure that shuffles these data around an awful lot according to what the parameters W tell you. You get answer Y'. Y' is wrong. Dang it. Well, what did you expect? Turns out, your computational structure is differentiable. So you can compute the gradient of Y''s error with respect to W. And you can use that to change W to make your new Y' slightly less wrong. You do that on a lot of data X and Y and you get "AI".
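
    Here's that loop as a bare-bones sketch (assuming NumPy), with the "nebulous computational structure" shrunk down to a single line y' = w*x + b:

    ```python
    import numpy as np

    X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # queries
    Y = np.array([1.0, 3.0, 5.0, 7.0, 9.0])   # responses (secretly y = 2x + 1)

    w, b = 0.0, 0.0        # the parameters W, initially clueless
    learning_rate = 0.01

    for step in range(2000):
        Y_pred = w * X + b                 # Y', the current (wrong) answer
        error = Y_pred - Y
        grad_w = 2 * np.mean(error * X)    # gradient of the squared error w.r.t. w
        grad_b = 2 * np.mean(error)        # ...and w.r.t. b
        w -= learning_rate * grad_w        # nudge W so Y' is slightly less wrong
        b -= learning_rate * grad_b

    print(round(w, 2), round(b, 2))        # heads towards 2 and 1
    ```

    Real systems differ mainly in scale: billions of parameters instead of two, and automatic differentiation instead of hand-written gradients, but the loop is the same.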

    The problem with this approach is that we haven't really figured out how to make AI generalize over different problem instances (X,Y) yet. If you provide X1, X3, Y1 and Y3, there's no guarantee that the AI can figure out that Y2 belongs with X2 at all. We're getting better at this, but slowly and only where there's relatively strong and straightforward connections between X1, X2 and X3. Deriving a complex relationship between X and Y and applying that to new X successfully is super hard.
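
    A quick illustration of that (a sketch assuming scikit-learn, not a rigorous benchmark): fit a flexible model on a narrow slice of a simple relationship and it does fine between its examples, but tends to fall apart outside them.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X_train = rng.uniform(0, 5, size=(500, 1))   # examples drawn only from [0, 5]
    y_train = np.sin(X_train).ravel()            # the relationship to learn

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
    model.fit(X_train, y_train)

    print(model.predict([[2.5]])[0], np.sin(2.5))  # inside the training range: close
    print(model.predict([[9.0]])[0], np.sin(9.0))  # far outside it: usually way off
    ```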

    And I should add that this entire process, because it generalizes to X2 so poorly, is incredibly data- and computation-hungry. Like, ridiculously hungry. If you can't find tens of thousands of pairs of (X,Y), well, I hope your task is very simple. If you've got a complex task, you better bring millions of data points. Or find a clever way of creating (X,Y)-pairs out of unlabeled data.

    Meta: I think the prompt is a bit vague as of now, so I gave a very rough rundown. Feel free to fill it out with more concrete questions, or ask those in the comments.

    2 votes
    1. [5]
      vektor
      Link Parent

      To maybe elaborate a bit more on the news item in question, and because I don't want to rework the flow of my original post:

      In light of what I wrote about generalizability and training data, consider this an interpretation of the news: I find it extremely likely that whatever resembles sentience in the chat logs was not "authored" by the AI; instead, what you're listening to is a remix of all the sentient humans represented in its training data. The AI is just the DJ.

      7 votes
      1. [4]
        onyxleopard
        Link Parent

        I don’t mean to be judgmental, but I think you may be falling victim to the AI effect? Where is the separation between a DJ and a musician? Do we have a reliable assay for sentience? What if my own language competence is just “remixing” as you call it? Surely the vast, vast majority of all sentences uttered are using words from a shared lexicon that the utterer did not invent. On a surface level, one might call all speech “remixing”.

        1. [3]
          vektor
          (edited )
          Link Parent

          If I am falling victim to that, then the entire field is. Language models are well studied and it's understood that they have a strong bias towards repeating fragments of the training data. Which is to say, they repeat what they've previously heard, irrespective of whether it's appropriate. Their skills lie in finding which fragment to repeat and in merging different fragments seamlessly. Their weakness is that they have little understanding of what they are saying.

          I'd assume that this model is no different; claiming that it is would be an extraordinary claim, requiring extraordinary evidence.

          I hope that clarifies what I mean with DJ. Spotify's Recommender System is a DJ as well, but not a musician. The "remixing" we humans do happens on a way lower level.

          As for a reliable assay for sentience, there's a few ideas that are a lot more promising than text chat. I've noted my ideas here - TL;DR: Training data for text models is contaminated with human sentience, so we have to strip it away and train intelligence otherwise. In the case of all current language models, sadly that leaves nothing behind. Oops. If an AI would develop evidence of sentience without being exposed to human sentience in training, then that's a very strong case, comparatively.

          1 vote
          1. [2]
            onyxleopard
            (edited )
            Link Parent

            You are correct that there is evidence that larger language models do tend to memorize and there are studies that show we can do prompt engineering to get these models to regurgitate text they’ve memorized during training. I’m not sure that’s as strong as your claim that they have a strong bias to plagiarize, though. Human beings that we consider intelligent also regurgitate text they’ve memorized. Just because someone can prompt me to quote Family Guy or Shakespeare, does that mean I’m not intelligent?

            I agree that we should not confuse the ability to read and write bits of grammatical, coherent natural language with general intelligence or sentience. At the same time, I also don't think it's necessarily impossible for such models to actually exhibit intelligence. Humans who we consider intelligent can learn about domains they have no other experience with except via reading. Maybe a human would have to read a lot about some arbitrary domain that they have no real-world experience with in order to talk about it intelligently, but I don't think we can say for certain that they can't be intelligent or sentient categorically. Like, if I ask you to read the Harry Potter series and prompt you to answer questions about it and I ask a large language model to do the same thing, why should we give the human being more credit than the machine in that situation if they both perform equally well in the QA task? You may say that we need a more rigorous and general test than QA about J.K. Rowling's fiction, and I'd agree. But, I think we need a better definition of intelligence so we can assay for general intelligence (which I think is where you're hung up?).

            There is some evidence that large language models may be capable of performing symbolic reasoning when you prompt them to explain how they arrived at their answers. Do you consider symbolic reasoning skills the same as DJing?

            Ultimately, I think your claim that current language models are contaminated with human sentience is too strong—human beings don’t learn language in a vacuum either, so we are just as contaminated (even if we may require less training data, currently).

            1. vektor
              Link Parent

              You may say that we need a more rigorous and general test than QA about J.K. Rowling's fiction, and I'd agree.

              That is exactly my point when I say that text chat is insufficient. Like, no, I don't think anymore that we can assess intelligence by proxy of natural language question answering. That problem is solved(ish) and I don't see the AI takeover yet. And in the case of LaMDA, same thing. What looks like introspection and consciousness of the AI is remixed introspection and consciousness of the humans that provided the training data. Hell, the Turing test for text chat can be considered solved, but that's not because we have AI but because that format of Turing test is flawed. Maybe if you were to ask all the right questions you could use it, but that's too complicated to figure out imo. Better to go for a better format instead. Hence the artificial environments. You could, without betraying human consciousness, entice AI agents in a completely artificial world to speak English (just so we can understand them). Once they start having consciousness-revealing conversations amongst them, then you've got a good case for consciousness. And yes, I'm deliberately avoiding the question of how to define intelligence or consciousness. I leave that to the people who study that in humans and do my best to give them an environment where their methods can be applied directly.

              The paper on symbolic reasoning is interesting, but without diving in deep, probably oversold. Language models (i.e. transformers) are quite limited in their computational structure. For example, I think the length of any reasoning chain they can perform is tightly constrained by the model size. What would impress me is if we sidestepped that issue, for example with a rewrite system, where the transformer would rewrite the steps of that reasoning chain to break it down, thus tackling very long reasoning chains. Kind of like what humans do when they write down intermediate results. Basically, have the transformer loop over its notes, simplifying expressions to save memory while adding more expressions to solve additional steps or incorporate information that previously didn't fit. That would impress me.

              There's an interesting artifact, by the way, in the fact that prompting the model to justify an answer changes the answer. That is a useful property, but it also calls into question the method: you're not justifying what you did previously, you're doing it again, but this time a different way. I think that kind of betrays that transformers don't do symbolic reasoning unless you ask for it.

              Ultimately, I think your claim that current language models are contaminated with human sentience is too strong—human beings don’t learn language in a vacuum either, so we are just as contaminated (even if we may require less training data, currently).

              That is probably fair, and we have to account for that when we make up a benchmark for sentience. We learn at least some aspects of how we express consciousness socially, so it's probably unreasonable to expect the same degree from a machine, unless we give its society the same amount of time to develop the social component. That said, don't think of it as demanding too much; think of it as establishing a sufficiency limit. I'm not setting this as the lower benchmark, but the upper benchmark. If machines can do what I ask, they are (if you believe my argument, and maybe refine my setup a bit) definitely sentient. I'm not saying that a machine that cannot do that is definitely non-sentient, but that I would not pretend to know.

              3 votes
  6. [2]
    Eric_the_Cerise
    (edited )
    Link

    I'd like to put in one tidbit that may or may not have been mentioned already in this thread (I've only skim-read it so far), but which usually gets overlooked.

    AGI means "human-level intelligence" (or something like that; definitions vary) ... but it does not mean "human-like intelligence" ... such an AI will lack (true) emotions and hormones and a gut microbiome, and a host of other things we may not even know about yet, that contribute to human thinking and behavior.

    Quite likely, the first alien intelligence humans meet, will be an AGI of our own making, and it will quite likely think and behave in ways that are very different from what humans expect from "intelligent life". That could be either good or bad, but at the very least, it will lead to many misunderstandings, miscommunications, and bad assumptions on both sides of the aisle.

    ETA: Loosely speaking, think about how humans from different cultures often misunderstand each other, occasionally with catastrophic consequences, and multiply that by at least a thousand, perhaps as much as a million.

    2 votes
    1. vektor
      Link Parent

      I have to disagree a bit. It certainly could be the case, and a priori it should be. But considering how we use enormous amounts of data to train state-of-the-art AIs these days, and that data betrays a lot about human-like intelligence, if we want to make the AI human-like, we certainly can. And considering how we currently train AIs, we couldn't even avoid it, except by excluding vast amounts of "human-contaminated" data.

      However, our current tools might not be relevant anymore by the time we create the first AGI, in which case maybe we won't need as much data and instead rely (as I previously wrote in the other thread) on artificial environments. In that case, your argument will apply. Again, unless we decide (and manage) to bake human-likeness into the AI.

      3 votes
  7. lou
    Link

    That's an interesting post. I noticed that no one suggested books on the subject. Would anyone be willing to suggest a book on the subject that is adequate for laypersons? Thanks!

    2 votes
  8. onyxleopard
    Link

    Personally, I think it's really tough for laypeople to know much of anything about AI. It's hard enough for domain experts to keep up with the field.

    To garner any productive takeaways from the Blake Lemoine fiasco, I think one needs to have some foundations in various fields, including at least philosophy of mind, psychology, ethics, and linguistics, in addition to machine learning. For a layperson, I think it's infeasible to expect to be informed in all of these specialized fields. Laypeople are not expected to know much about other specialized fields, such as nuclear energy or quantum physics, either (even if such fields have an impact on their daily lives), so it's not necessarily a problem for laypeople to be uninformed about AI.

    Where systems are claimed to possess qualities like intelligence or sentience, I think we need to first develop reliable, objective measures before we can engage in conversations about how we should think about systems like LaMDA. There are real experts at Google (and elsewhere) who are seriously thinking about these problems, so I take some comfort in that. E.g., François Chollet’s proposal for measuring intelligence, generally: On the Measure of Intelligence.

    1 vote