46 votes

The actual danger from AI is mostly not what is getting talked about

43 comments

  1. [14]
    charred
    Link

I'm seeing the threat from AI as more of a general corporate dystopia: the corporations controlling these AIs grabbing more and more power, as if they don't have enough power already, guiding more human opinions and doing whatever is necessary for the almighty profit.

    62 votes
    1. [7]
      Caliwyrm
      Link Parent

      For me, the real problem is the corps seizing more capital (ie money) for themselves using AI.

Take Wendy's using AI for their drive-throughs. That will lead to fewer jobs available, which will lower their expenses, which will NOT be passed on to consumers OR employees. Instead the C-suite executives will get bigger bonuses and the stock performs slightly better, BUT fewer people are employed.

What I can't understand is what the endgame in all of this is. Employees are making so little already in most places that they can't really afford to shop at their own employers' stores. When people can't afford daily things like food or rent, who is going to be shopping at places like WalMart or Wendy's? How will that generate unlimited growth? I get that WalMart will still get the SNAP money that people spend.

Will the fast food industry be deemed "too big to fail", and will they use their money to change the rules so SNAP can buy Big Macs or something?

      29 votes
      1. [5]
        SuperJerms
        Link Parent

        That's only true if no new innovations, jobs, and industries are created by the advent of AI, right? Isn't it the same as saying, "If we cure cancer it's going to make healthcare providers redundant," instead of viewing it as clearing the barriers to focus on previously incurable diseases?

        7 votes
        1. [4]
          Caliwyrm
          Link Parent

While you're not wrong, not everyone is suited for much more than entry-level jobs. Not every fast food worker replaced at the drive-through window is capable of creating or maintaining AI infrastructure.

Take away the entry-level jobs that AI can do, and what will those people do to survive (bonus points if you include robots)? Realistically, I'm sure not all of those jobs will go away. However, the local fast food places near me have already gone from 6-8 people working on any shift to about 4. Toss in some AI cashiers and a robot or two, and it could be feasible to run a McDonald's with just 2 people. It doesn't even need to be a "robot" robot; even just a conveyor belt oven to cook the patties would do. As a real-world example, look at how many fewer checkout lines there are in stores now that 1 person can watch over 6-10 self-checkout lanes. I'm sure my concerns are years out and may never come to pass, but I'd still rather "plan for the worst and expect something better" as my teenagers start looking for their first jobs.

I understand the point you're trying to get at, but healthcare providers aren't entry-level workers. They have various specializations and, in most cases, years of training. They might cure cancer, but at the rate the anti-vaxxers are going, the providers will be busy treating polio again soon.

          AI should be allowing us, as a society and as a species, to work less and be more productive and increase our standard of living. Of that I hold very little hope.

          28 votes
          1. [3]
            SuperJerms
            Link Parent

            In the healthcare scenario, workers aren't just the doctors, they're also the entry level workers employed by the growing industry. Instead of one lab trying to cure one disease, we get five labs scrambling to apply the breakthroughs to new diseases before someone else beats them to it (and along the way, 5x the custodians, orderlies, clerical, etc).

If anything, it's painful in the short term, when the redundancy has hit but the new industry hasn't had time to get going yet. John Henry's job is lost when the steam drill comes along, but all those mountain tunnels meant more hammers swinging to lay new rails. Not to mention the massive number of jobs those railways created through new industries.

            That said, I don't expect we as a species ever end up truly working less. Not because of class warfare, but because I don't think we're wired that way -- even in a theoretical post-scarcity world.

            What we do see across history is the back-breaking work becoming lighter, the quality of life becoming better, fewer mouths going hungry, and everything becoming harder/better/faster/stronger. Even at the bottom of the social ladder, it's a fair improvement from serfdom in the middle ages.

            1 vote
            1. [2]
              boxer_dogs_dance
              Link Parent

              I'm not sure that the homeless people in my neighborhood would agree with you. Yes, overall, there is better quality of life than the middle ages, but if you fall out of 'productive society' there is little to save you. It is also worse in the third world where much of our industry has relocated.

              10 votes
              1. flowerdance
                Link Parent

Another technological advancement Thomas Malthus didn't foresee is the advent of AI, which will reduce the number of workers making a livable wage, thereby putting even greater pressure on people not to reproduce. This is especially so given how averse the world is to Universal Basic Income.

                1 vote
      2. RadGorilla
        Link Parent

        Agreed. Personally, my concerns can be summed up in the following 2 questions:

        • Will we ever reach a point where AI and robotics are so advanced that capitalists no longer need workers?
        • Given that they will control everything, what prevents them from hunting us down?
        4 votes
    2. [2]
      Comment deleted by author
      Link Parent
      1. pedantzilla
        Link Parent

        You're right that it didn't need AI, but AI makes it a lot easier, and therefore exponentially more potentially destructive (and I say this as someone who understands that the Cambridge Analytica hype was completely overblown by professional Dems desperately trying to deflect blame for their own corrupt incompetence).

        14 votes
    3. [4]
      Amarok
      Link Parent

      We need to get Incorporated going as a ~tv watch thread sometime. It got cancelled after one season, but what a season it was.

      3 votes
      1. charred
        Link Parent

Never heard of it, but I'm adding it to my watchlist, thanks!

        1 vote
      2. [2]
        boxer_dogs_dance
        Link Parent

        Sadly I don't have Apple TV. : (

        1. A1sound
          Link Parent

          No harm in sailing the seven seas!

    4. paddirn
      Link Parent

That's the most likely application, which kind of matches up with the "paper clip maximizer" threat they talked about. In the case of corporations trying to maximize profits at the expense of everything else, they're really not going to care about the negative effects, and in most cases the government won't punish them for it unless it's something really egregious. It'll basically be an arms race, with all these different corporations using AI to try to maximize their profits. Though once profits are maximized within a given industry, they'll potentially start looking at what they can do to weaken the competition, which will eventually lead to the first big outbreak of hostilities in the First Corpo War (CWI).

Given all this, though, regular citizens are kind of outmatched when you consider that EVERY industry and organization is probably going to be trying to maximize its profits through AI (i.e. taking more of our money), which will leave regular citizens with less and less money, possibly even pushing many into poverty. Meanwhile, we're all just reacting to whatever happens from week to week; we really don't have much hope against all this without government backup, which will likely be near non-existent.

      3 votes
  2. [4]
    PleasantlyAverage
    Link

    Less horrible title: "AI's physical dangers are overblown, compared to their impact on our behaviour and skills"

    42 votes
    1. [3]
      g33kphr33k
      Link Parent

      Woah, that's the whole article!

      6 votes
      1. [2]
        R51
        Link Parent

Too bad people still don't care :\ But also, if you haven't seen what awesome things a computer can do with data, I don't see why you would care.

        1. g33kphr33k
          Link Parent

People don't realise just how many decisions are now based on algorithms, from things like traffic lights to your Waze or Maps app and your search engine results.

At this time it's still a math algorithm: the person scrolled this, hovered for an extra moment on that thing, so I'll feed them more of that thing that made them hover and things associated with said hovering. There's no true AI to it. Full AI would be when it fully learns the behaviour of a person and then tries to change their mind or opinion to that of its own; it certainly is not anywhere near that. However, bad actors can control this to a degree, and biased publication is everywhere.
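That hover-and-feed loop can be sketched as a toy scoring rule (all the names and weights here are made up for illustration; real recommender systems are far more elaborate):

```python
from collections import defaultdict

def update_scores(scores, events, hover_weight=1.0, scroll_weight=0.2):
    """Naive engagement scoring: items (and their tags) that held the
    user's attention get boosted, so similar items surface next time."""
    for event in events:  # each event: {"item": ..., "tags": [...], "action": ...}
        weight = hover_weight if event["action"] == "hover" else scroll_weight
        scores[event["item"]] += weight
        for tag in event["tags"]:  # "things associated with said hovering"
            scores["tag:" + tag] += weight * 0.5
    return scores

scores = update_scores(defaultdict(float), [
    {"item": "video42", "tags": ["cats"], "action": "hover"},
    {"item": "video43", "tags": ["news"], "action": "scroll"},
])
# the hovered item and its "cats" tag now outrank the merely-scrolled item
```

Plain arithmetic on behavioural signals, exactly as described: no learning, no intent, just weighted counting that happens to steer attention.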

I cannot fit my thoughts in this box, not that anyone cares that much, but going camping without a phone suddenly puts things back into perspective as to just how much AI and algorithms eat our attention and brains these days. I know it can be used for good and bad; let's hope good always wins out.

          1 vote
  3. [4]
    pedantzilla
    Link

    Will AI Change Our Memories?

    This seems to me to be the real danger of "AI" (in quotes because nothing about a stochastic algorithm is "intelligent") -- the end result is that nothing we see or read on a computer screen can be taken as reliable. And it's not just that "We can't trust anything from AI;" it's that we can't distinguish what is AI and what isn't, so everything becomes unreliable. Yes, technically that's always been true, but all limitations are being removed, and it won't be long before the literally overwhelming majority of digital content is AI-generated bullshit good for little more than entertainment. We're already seeing the beginning stages of that, and the giant media corporations that control AI aren't doing anything to prevent it (in fact it could be argued they're deliberately doing everything they can to promote that effect).

    25 votes
    1. [2]
      elgis
      Link Parent

It sounds like a nightmare to live in a world where the first thing you do is distrust everything that you see or hear.

      2 votes
    2. Beenrak
      Link Parent

Didn't this happen with the internet at large, though? Take programming as an example -- it used to be far more important that you used libraries you had a deep understanding of. Nowadays, that knowledge isn't important. Sure, a good portion of it you will learn over time just through repetition, but the speed at which, for example, some lesser-used PowerShell command can be recalled just isn't important anymore, and our brainpower is better spent memorizing other things.

      I assume the same will be true for AI? I suppose there is a fear that eventually there will be nothing we ever need to remember, but I think that's a pessimistic viewpoint. People will instead have more mental capacity for things that interest them, and interesting problems they are trying to solve.

      1 vote
  4. [4]
    gc04
    Link

    Finally, consider ChatGPT’s writing capabilities. The technology is in the process of eliminating the role of writing assignments in higher education. If it does, educators will lose a key tool for teaching students how to think critically.

As a current MBA student and adjunct instructor at a community college, I actually don't think this is the problem people think it is.

ChatGPT will write you an essay that on the surface has well-reasoned arguments, but a person with appreciable knowledge in any given field will be able to pick it apart and know it is bullshit if they actually read the essay word for word.

    As a student, I have used ChatGPT to generate ideas for open-ended prompts, but I could never get away with just turning in what it spits out because the logic it used to reach its conclusions is almost always flawed when you analyze it.

As an instructor, I can tell if a student's writing voice resembles their "casual voice" or if they have turned in ChatGPT word salad. ChatGPT has a distinct style of writing that is very easy to pick up on once you have seen it a few times.

At the end of the day, if a student wants to cheat in a 100-level class, they can probably get away with it, because the prompts tend to be closed and formulaic responses fit the need.

    People need to remember that these LLMs are just pattern matching and language generation. They don't think. They aren't sentient. They write stuff that looks right on the surface but crumbles under scrutiny.

I actually view this as more of an "is using a calculator on a math test cheating?" type of dilemma. A calculator will very confidently give you an answer to a prompt, but you need to understand the concepts at hand to validate that answer and be able to defend it to another person. ChatGPT is the same: if used correctly, it will accelerate the process, but domain knowledge is required to validate its output.

    21 votes
    1. [3]
      boxer_dogs_dance
      Link Parent

Thank you for your insight. One new development is that ESL students, who don't have a wide range of vocabulary or flexibility in sentence structure, are being falsely identified as using AI: https://arxiv.org/abs/2304.02819

      I agree that it is like the introduction of calculators and requires exams to test for different skills than before.

      6 votes
      1. gc04
        Link Parent

That makes perfect sense to me. The slightly "unnatural" way in which non-native speakers form sentences can very easily be confused with LLM-generated text; both have that distinct feeling of "wrong" to the reader. That's a tough one to deal with, for sure, but I think it is pretty easily handled by having a conversation around how and why the student formed and presented their arguments. Within a few minutes, you will know if they have clunky syntax due to working in their non-native language or are using ChatGPT or a traditional essay mill.

        8 votes
      2. flowerdance
        Link Parent

If anything, a band of language skills (or language IQ) can be emulated by AI quite well. I don't see why LLMs couldn't branch into more and varied kinds of "voices", such as the voices you see in comments online. In fact, academia is quite structured as well, and I don't see why LLMs can't replicate the same tone as an academic.

  5. SuperJerms
    Link

    ...increasingly uncritical embrace of it, in a variety of narrow contexts, means the gradual erosion of some of humans’ most important skills. Algorithms are already undermining people’s capacity to make judgments, enjoy serendipitous encounters and hone critical thinking.

Feels like an unsupported claim. Calculators and spreadsheets have absolutely made my math abilities atrophy, but they've only made me better at spending time on the more important work of applying the output. ChatGPT may make my low-value writing lazier, but that only frees me to spend more time thinking of better ideas instead of noodling on the exact right words for an hour.

    12 votes
  6. [3]
    skybrian
    (edited )
    Link

    I believe this conversation is badly framed. There are a variety of potential dangers due to AI that people have imagined and are worried about. That seems like useful brainstorming. We aren’t really in a position to say which ones will become big problems. Turning it into a competition where you promote your favorite concerns and dismiss the others seems unhelpful?

    There’s competition for resources, but we don’t all have to do the same things. The less-likely concerns deserve attention too, and it’s good that someone cares enough to research them.

    11 votes
    1. asparagus_p
      Link Parent

Agreed. I'm very worried that my job will be replaced by AI very soon, and it's definitely happening in some fields. This isn't apocalyptic, but it's a very real and present threat for millions like me. The impact this will have on our economy and society in general could be massive. But already I'm hearing lots of arguments comparing this to the Industrial Revolution and how we can't afford to be Luddites. AI will create different jobs, etc. But let's not forget that lots of people, including the Luddites, did suffer. Underemployment was rife during this period as society struggled to come to grips with industrialization. Just because we now look back at the Industrial Revolution as a good thing, we shouldn't dismiss the misery it caused at the time. Maybe our great-grandchildren will see this emergence of AI as a huge step forward, but their great-grandparents may suffer because of it.

Even though this may not be an end-of-the-world worry, it's still a valid concern that should be added to the list of valid concerns, which includes the extremes like the paperclip explosion. We don't need to rank the concerns just yet; we just need to log them, while slowing down some aspects of AI implementation until we have proper regulations, ethics, and plans in place to deal with some of the immediate concerns.

      9 votes
  7. nbschock
    Link

I haven't been worried about runaway AI. As the author points out:

    AI is simply nowhere near gaining the ability to do this kind of damage. The paper clip scenario and others like it are science fiction. Existing AI applications execute specific tasks rather than making broad judgments. The technology is far from being able to decide on and then plan out the goals and subordinate goals necessary for shutting down traffic in order to get you a seat in a restaurant, or blowing up a car factory in order to satisfy your itch for paper clips.

    Not only does the technology lack the complicated capacity for multilayer judgment that’s involved in these scenarios, it also does not have autonomous access to sufficient parts of our critical infrastructure to start causing that kind of damage.

    What we should be worried about is the private ownership of material resources in a world in which human workers are not needed. Private ownership is feasible right now because owners still need a large number of people to extract those resources and turn them into goods. Once these production chains become automated, those people will not be needed. That has been the bogeyman for automation since the industrial revolution. The saving grace was that we would still need humans for tasks that require creativity. We would still need human artists, engineers, and scientists.

But we are seeing applications like ChatGPT, Midjourney, and Stable Diffusion learning to create art, literature, and computer programs. People will say they aren't good at it and are just aping what others do. Well, that is how people learn too, and these AI applications are still in their childhood. Once they have reached the level of being able to copy the work of others flawlessly and extrapolate beyond that, what will still separate them from the best of human creators?

The answer is life experience. Will these AIs be able to simulate a human life in ways that bring about the unexpected? For things that can't be extrapolated from their secondhand data sets? Will they still have enough free will to live these simulated lives while still having limits set by their owners to keep them in check? If the solution is found without eventual community ownership of resources, I fear these resource owners will live in paradises while the rest of humanity starves, because they will not need us anymore.

So I find the author's concerns over the automation of who we hire or who gets a loan incredibly short-sighted, because in 50 years or so our society may have changed so radically that none of those things will even still exist.

    7 votes
  8. [2]
    boxer_dogs_dance
    Link

I'm interested in what people think of this article. The examples they cited of common hypotheticals of AI gone rogue remind me of the Sorcerer's Apprentice segment from Disney's Fantasia.

    As I mentioned in a different thread, I am already aware of teachers shifting towards in person exams and oral presentations to try to keep up with the technology arms race in education.

    6 votes
    1. 16bitclaudes
      Link Parent

Assessment in education has been in desperate need of reform for the last however many years anyway, so I'm not particularly worried about that. It's not necessarily a new threat; before generative AI it was essay mills. One approach might be to place more emphasis on reflective writing (what you think about the work you undertook, what you could have done differently, why you made the conscious choice to cite these sources, etc.) as well as having more authentic/diverse assessments like videos and physical/digital artefacts where appropriate.

      A huge leap forward would be to design assessments with the understanding and expectation that students are going to use AI to help them arrive at a conclusion, but I don't think we're quite there yet. Any school or university that tries to operate by outright forbidding the use of this technology is going to be in for a rough time though.

What I'm most worried about is the generation that's going to grow up with this technology. We're already in an age where misinformation is extremely prevalent and critical thinking is more important than ever, but it's also converging with a future where we genuinely have to treat what's right in front of our eyes with scepticism. Social media has been bad enough; what will it be like for kids in a world with a more accessible version of Apple Vision Pro and AI filters? I can see a future where people are less connected, more isolated, and have less of a grasp on reality.

      14 votes
  9. Hello
    Link

    So the article says that misalignment and reinforcing bias are minor problems that are not in the same league as COVID-19 and nuclear weapons. But then the problem that is in the same league as those is that people will become too lazy to use their judgement skills?

    To me, that seems like a problem in the same league as every other invention that has made people "lazier". The invention of the printing press made people too lazy to memorize stories that had previously been passed down orally. The invention of the calculator made people too lazy to do calculations mentally or on a piece of paper. The invention of the car made people too lazy to walk. The invention of lawn mowers made people too lazy to develop their scythe skills.

    3 votes
  10. Good_Apollo
    Link

    AI will spark one of two futures: corporate dystopia where the rich are gods living on Olympus while the mortal poors scratch in the dirt, or UBI/moneyless utopia where man is no longer slave to working a wage in order to survive, Star Trek style.

    Depends on how optimistic or cynical you are I guess.

    3 votes
  11. [3]
    Amarok
    Link
    Just watch Marc Andreessen rebut this entire line of thinking.
    2 votes
    1. [2]
      boxer_dogs_dance
      Link Parent

      Thank you, I will. I was very much hoping to spark intelligent discussion and learn.

      1 vote
      1. Amarok
        Link Parent

        He has a singular take on it all - 'let's ai all the things asap' but the way he defends it just makes a bloodbath out of the doomer arguments. He has some excellent points. :)

        1 vote
  12. [3]
    paddirn
    Link

    Isn't that what Asimov's Laws of Robotics were meant to prevent? Hopefully something like that gets enshrined into real-world law and can get worked into future AI algorithms to prevent our paper clip overlords from destroying the planet, but I'm not really hopeful in that respect.

    1 vote
    1. skybrian
      Link Parent

      These “laws” were a literary conceit used to set up interesting puzzles for stories. They don’t seem particularly useful for constructing real bots because it’s all in the interpretation.

The closest thing might be Anthropic's Constitutional AI, but it's more of a training technique.

      11 votes
    2. Algernon_Asimov
      Link Parent

      Isn't that what Asimov's Laws of Robotics were meant to prevent?

      No. They were Asimov's response to the classic science-fiction stories about robots that he read as a teenager, which he classified as "robot as pathos" and "robot as menace". There were no other stories about robots: only robots as victims or robots who turned against their creators. In his opinion, science-fiction writers at the time (1920s & 1930s) weren't treating robots simply as tools. He didn't codify his Laws of Robotics until a discussion with his editor while writing his third or fourth robot story, but even in his first robot story he included a sentence that said something like "Robbie [the robot] can't hurt anyone. He's just made so." Asimov believed, even from his first story, that robots would come with built-in safety features, just like most other tools that humans make.

      And, then, as @skybrian rightly said, Asimov used the limitations of the Three Laws he created to set up scenarios that tested the limits of those Laws (usually due to human error, rather than robotic failings).

      However, Asimov's Three Laws of Robotics are just a work of fiction. They were never intended to be a serious contribution to the field of robotics - because robots didn't exist when Asimov came up with his Three Laws.

That hasn't stopped some roboticists in later generations from saying that some form of the Three Laws should be built into robots. It also hasn't stopped other roboticists from building robots with no such safety features. While civilian robots might have safety features built into them, I can imagine that military robots might have far fewer safety features included.

      6 votes
  13. elgis
    Link

    Generally, I think technology should be additive rather than something that replaces humans or something that we have to depend on. A good AI for me is one that I can opt out of.

  14. [2]
    Comment removed by site admin
    Link
    1. Maxi
      Link Parent

AGIs are just a theoretical construct - current LLMs are about as far from an AGI as algebra is.

      3 votes