27 votes

AI: The decade ahead

33 comments

  1. [2]
    Oxalis

    It's worth noting that the author has recently started an investment firm called AGI Capital that, I guess, hopes to bring forth the doomsday he's outlined here and make a load of cash along the way. Or maybe it's just another grift from a tech bro who was fired from OpenAI due to his alignment with individuals responsible for the failed ouster of Sam Altman?

    Even if you don't buy the yarn that Aschenbrenner is spinning, [1] the thing that can't be denied is that a large number of AI developers believe this. It's weird to see people like Altman and Musk constantly decry the horrible civilization-rending possibilities of AGI (sometimes directly to the US Senate), then about-face and secure another record funding round in dogged pursuit of it.


    1. And why should you? We've gone through who knows how many GPU-powered hype cycles that overtook the world and then dissolved into nothing. The way that crypto fed into NFTs, which fed into neural networks, each making use of the same hardware and amplifying its sense of urgency and purpose with each swell, gets extremely tiring. Especially when you find the same names popping up as industry leaders. Can't wait to spend my WorldCoin rations on compute tokens.
    29 votes
    1. imperator

      I think a large number of AI developers are drinking their own Kool-Aid. I think we'll likely see some disruption in employment, but nowhere near the levels being touted unless there is some sort of major breakthrough.

      Most companies' data is so bad that AI won't have the significant impact they think it will.

      I don't believe the hype, and it seems I'm in the minority, but we've been through this before with other pump-and-dump schemes:

      1. RPA
      2. Crypto/blockchain
      3. NFTs

      These are the big three (or four, counting crypto and blockchain separately) I've seen in the finance industry, none of which has really changed the way the world works.

      6 votes
  2. [3]
    teaearlgraycold

    Saying that LLMs are as smart as X (a high school student, a college graduate, etc.) is a magic trick that any person educated in this space should know isn’t honest. You’re taking the generation of a response to an exam question in isolation and pretending it implies the capabilities in other areas that the human would have. Where the human’s exam answer is the culmination of their efforts, the tip of an iceberg, the LLM’s answer is the starting point. All they “know” is text, trained in a Chinese-room process.

    28 votes
    1. [2]
      Raspcoffee

      If anything, the measure in the graph I assume you're pointing to (page 48 of the PDF) is worthless when you try to apply it to humans.

      "Effective Compute (Normalized to GPT-4)" already sounds vague to me.... it then extrapolates the graph further. To which, we have no idea how easy it'll be to develop AI further. Also, then just increasing the exponent on the line later involving 'Automated AI Research' is... something. Considering we have no idea whether Automated AI Research is even possible, when this will happen, and more.

      We most likely will see an AI surpass human intelligence in some manner at some point in our lives. And it's something we should prepare for. But LLMs are far from AGIs.

      4 votes
      1. teaearlgraycold

        I absolutely believe AGI is possible; if by no other means, we should at least be able to simulate a human brain in a computer, given a high-quality brain scan and a properly modeled neuron.
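
        (To make "a properly modeled neuron" concrete: even the crudest standard model, the leaky integrate-and-fire neuron, is just one differential equation stepped through time. A toy Python sketch with illustrative constants; an actual brain simulation would need on the order of 86 billion far richer neurons, plus the synaptic wiring that the brain scan would have to supply.)

        # Toy leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau
        V_REST, V_THRESH, V_RESET = -65.0, -50.0, -70.0   # millivolts (illustrative)
        TAU, R, DT = 10.0, 1.0, 0.1                       # time constant (ms), resistance, step (ms)

        def simulate(current, steps=1000):
            v, spikes = V_REST, []
            for t in range(steps):
                v += DT * (-(v - V_REST) + R * current) / TAU
                if v >= V_THRESH:            # membrane potential crosses threshold:
                    spikes.append(t * DT)    # record a spike time...
                    v = V_RESET              # ...and reset the membrane
            return spikes

        print(simulate(current=20.0)[:5])    # first few spike times, in ms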

        I also believe that the best LLMs do "understand" in some capacity. They have managed to build small, accurate-enough models of various systems simply by ingesting text. To me that gives an interesting response to the Chinese Room thought experiment: maybe a human can't learn language through symbol manipulation, but a computer can. As Ira Glass said in an episode of This American Life where he talks with a Microsoft researcher:

        The idea is that back when GPT-4 was being trained for it to really consistently get the next word correct, to do that reliably, it had to do more than just bullshit. It had to do more than guess based on patterns. To get the next word right, it had to truly understand the words coming before it.

        (Although it's still "guess[ing] based on patterns", in some sense we are all just guessing all the time; we are just mostly right.)
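
        (For anyone unfamiliar with what "get the next word correct" means mechanically: the training objective really is just next-token prediction. A deliberately dumb illustration, with bigram counts standing in for a transformer; GPT-4 is this idea scaled up by many orders of magnitude, which is exactly where the argument about understanding starts.)

        # Toy next-word predictor: the entire training signal is "which word comes next?"
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate".split()
        counts = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            counts[prev][nxt] += 1        # tally every observed transition

        def predict(prev):
            # return the most frequent continuation seen in training
            return counts[prev].most_common(1)[0][0]

        print(predict("the"))             # -> "cat"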

        I would measure GPT-4's intelligence solely by the scraps of understanding it's pulled together, the implicit connections it can draw between two concepts. Throw away all elements of English, all of the repeated facts it can recall about various laws. The sum of all it's learned minus all it's memorized. And by that metric it's not really that smart, but it's still very impressive.

        6 votes
  3. [2]
    symmetry

    What disgusts me about AI isn't really the tech itself. It's just the people who talk about it ALL THE TIME. All the fucked up "predictions" they make. Their belief about how it's going to change the future of society (with the implied threat that they are the ones holding the keys to it all).

    Sometimes I wish they would just do away with all the pretenses. Take the money spent on AI research and just build a bunch of nukes instead.

    14 votes
    1. SteeeveTheSteve

      They're only doing it to drive investment in their AI program. Nothing stokes the flow of money like fear.

      It also sounds more worthwhile than saying we have a chatbot that passed the Turing test, but it's still not smart enough to walk into anyone's home and make coffee.

      6 votes
  4. [4]
    skybrian

    Confusing the future with the present is a common sales tactic, and this is a good example of it. “Situational awareness” is a term about observing the present, but he’s used it to try to make his predictions seem more like facts.

    Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. […] American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might.

    A bubble in the making? So far, I don’t think we’ve seen significant data center overcapacity, but there’s always a first time. I’m reminded of a history of electricity I read recently: apparently, up to the time nuclear power plants were first being built, the electricity industry had only ever seen rising demand. Then demand leveled off.

    If companies don’t see increased revenue or reduced costs from adopting AI, they will eventually stop increasing how much compute they buy.

    When that happens is anyone’s guess. I’ve seen some interesting research on algorithmic improvements, but whether that turns into cost-cutting from increased efficiency or increased usage (Jevons paradox: efficiency makes each unit cheaper, so total consumption can rise) is hard to say.

    10 votes
    1. [3]
      Boaty_McBoatyson

      Confusing the future and the present is a common sales tactic and this is a good example of it.

      What is the name of this sales tactic?
      Tried looking for it at this compendium site: changingminds.org

      1. [2]
        skybrian

        I don’t think it’s commonly known. It’s a pattern I often see, the use of the present tense when talking about the future, so I’ve learned to notice any confusion between the two.

        ChatGPT suggested “future pacing” but I hadn’t heard that term before and the search results for it seem kind of scammy.

        Grammatically, there is also the “present simple for future events,” which can be used in a non-misleading way (like for a train schedule). It’s pretty easy to see how it can mislead, though.

        Stating predictions confidently sounds stronger, even if it’s still clearly about the future. Example: “The Coming Civil War” sounds better than “why I think there might be a civil war.” For me, it’s a turnoff, though.

        2 votes
        1. Boaty_McBoatyson

          The prime minister of Denmark likes to use this strategy a lot. She will often speak as if she has some crystal ball. The technique you are describing takes the form of this turn of phrase:
          "The future we are looking into ..." <so therefore this heinous policy change is inevitable>

          Clearly, being able to see into the future would give you a lot of credibility. When politicians pretend to be able to do so, it may well become a spell, because the implied prediction in itself makes it more likely to happen.

          Similar to how calling a child stupid makes the child stupid.

          1 vote
  5. [2]
    LukeZaz

    This post is highly useful; I look forward to reminiscing on it in a few years and having a hearty laugh.

    Don't get me wrong, I don't think advancements won't be made. But we will absolutely not have AGI. To say it's right around the corner is to either believe a great deal more is happening with things like GPT than actually is, or to attempt to con other people into believing it so you can grift them. LLMs are very good at looking intelligent, but beneath that there is no understanding or reasoning. It's all hollow.

    The closest argument I've ever seen suggesting that LLMs are anywhere near actual intelligence is the idea that, should you make a system that fakes intelligence thoroughly enough, the distinction between it and true intelligence becomes increasingly blurry; where do you draw the line between something that is able to have any conversation with you and something that is able to understand that conversation? Something of a p-zombie, really.

    But what stops me there is the simple fact that no GPT version has been able to approach "something that is able to have any conversation with you." From hallucinations, to repeated failings of cognitive reflection tests, to that one weird quirk where saying certain words would cause the LLM to either break down or get pissed at you — GPT has holes, and those holes reveal an empty interior. I find it unlikely that advancements in even the most advanced pattern recognition alone can patch this, and so I find it unlikely in turn that GPT will reach the "have any conversation" milestone.

    AGI needs many more things than just language skills, and right now we do not have those things.

    10 votes
    1. balooga

      Right now, we prompt GPT and watch it respond in realtime. It basically blurts out its response immediately, and I agree that it’s a fairly hollow trick. What I think could be interesting is when multiple GPT processes have a guided conversation with each other to refine a response before returning it to the end user, basically simulating introspection.

      Imagine if you asked it a complicated question and it did the following behind the scenes:

      1. Generated not one, but ten responses.
      2. Generated a set of criteria for determining what would constitute a response that’s both accurate and relevant to the original prompt.
      3. Generated a plan for validating the correctness of a response, using external tools and sources like web search, code interpreter, WolframAlpha, etc.
      4. Executed the plan from step 3 for each response generated in step 1, discarding any that failed the correctness check.
      5. Compared the results for each remaining response against each other to verify general agreement. If all were not unanimously aligned, the discrepancies would be enumerated.
      6. Evaluated each discrepancy to determine whether it was a valuable point worth mentioning, or trivial.
      7. Synthesized five new responses based on the consensus achieved after the previous steps, using the framework from step 2.
      8. Evaluated each of those responses to determine which was the most complete and contained details and information unchanged from the previous steps (and did not introduce new information previously unseen in this process).
      9. Performed the correctness check from step 3 again for this candidate response, and compared the results to those of step 4 to ensure nothing was lost along the way.
      10. Returned the final response to the user.

      I am able to guide GPT to a higher quality result by stepping it through a process similar to this. I don’t see why that couldn’t be automated. Of course it would take a lot longer to get your output, but if the output is better enough it would be worth the wait. And I think this approach generally could map to something like AGI, where this introspection process becomes the AI’s long-running internal monologue. At a simple level it would help to minimize hallucinations and other bullshit; as the tech matures it could (conceivably) allow the AI to question itself, poke holes in its own suppositions, and draw more useful conclusions.
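
      The whole loop is expressible as a handful of chained prompts. A rough sketch of steps 1-10 in Python, where complete() is a hypothetical stand-in for any LLM call and verify() for the tool-backed correctness check of step 3 (none of these names are a real library API):

      def complete(prompt):
          raise NotImplementedError("wrap your LLM endpoint of choice here")

      def verify(response, plan):
          # would run the validation plan against web search, a code interpreter, etc.
          raise NotImplementedError("wire in external tools here")

      def answer(question, n_drafts=10, n_final=5):
          drafts = [complete("Answer:\n" + question) for _ in range(n_drafts)]          # step 1
          criteria = complete("List criteria for a good answer to:\n" + question)       # step 2
          plan = complete("Describe how to fact-check an answer to:\n" + question)      # step 3
          survivors = [d for d in drafts if verify(d, plan)]                            # step 4
          diffs = complete("List disagreements between:\n" + "\n---\n".join(survivors)) # steps 5-6
          finals = [complete("Synthesize one answer.\nCriteria: " + criteria +          # step 7
                             "\nDrafts:\n" + "\n---\n".join(survivors) +
                             "\nWorthwhile disagreements: " + diffs)
                    for _ in range(n_final)]
          best = complete("Pick the most complete of:\n" + "\n---\n".join(finals))      # step 8
          assert verify(best, plan)                                                     # step 9
          return best                                                                   # step 10

      (Step 6's triviality filter is folded into the disagreement prompt here. Today's agent frameworks do something very similar under the names self-consistency and tool use.)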

      7 votes
  6. [12]
    ackables

    If that prediction is true, what does that leave for humans? What is the incentive to get an education if an AI will always be cheaper and better than you?

    7 votes
    1. [2]
      LukeZaz

      What is the incentive to get an education if an AI will always be cheaper and better than you?

      To learn and better yourself as a person. Education is its own reward.

      I honestly feel like we've spent so long thinking of education as merely a necessary prerequisite for a living wage (already gross) that many have completely forgotten that education should be about improving ourselves and society as a whole. The world is more than just money, after all; not everything should be about "the economy," and the less of the world that requires money, the better, if you ask me.

      14 votes
      1. archevel

        Education is its own reward.

        Ultimately this is true. In an unequal economic system, however, education becomes a means to an end: a way of improving the odds of increasing your material wellbeing, a way to gatekeep and filter people. If we could get away from this, I think we'd be more fulfilled in general. Then education would be its own reward. Today it can be... if you are privileged enough.

        I like to think that work, in and of itself, is also its own reward. That tends to be a bit more controversial, but a job well done is satisfying whether or not there are economic incentives in place.

        5 votes
    2. [2]
      tauon

      What does that leave for humans?

      For that case I suggest you give Manna a read.

      In a nutshell: Everything people currently forgo in order to earn a wage that pays their bills. Basic research on the universe and practical/material sciences, arts, personal health, family, …

      I haven’t really been able to take my mind off that story ever since it was recommended on here a couple weeks back.

      Edit: fixed wrong link

      9 votes
      1. balooga

        Thanks for posting that, I didn’t see it previously. I enjoy speculative sci-fi, and 20+ year-old speculative sci-fi doubly so. I’ve been chewing on the ideas in Manna since I read it this morning.

        I think the first half of the story is a good, prophetic cautionary tale. Parts of that timeline parallel what we’re seeing with AI these days, and a possible late-stage capitalism endgame for this track we’re on. The giant housing cubes are pretty absurd but they communicate the main point well.

        One thing worth calling out is that in the story, Manna is adopted everywhere because it improves the customer experience. Stores are clean, shelves are stocked, prices are low, etc. But in real life, in 2024, things are going differently. Just yesterday I posted my observations that fast food and retail are getting markedly worse — for employees and customers alike. Prices are up, quality is down, everything’s enshittified. While our ultimate destination might be the same as in Manna, I don’t think the route there will be nearly as consumer-friendly. So, great, we’re on track to do worse than the scary fictional dystopia.

        The second half of the story was interesting, and the AI-driven, directly neural interfaced, open-source, mixed reality, post-scarcity, luxury communism utopia sure sounds nice. Especially in light of the Manna hellscape it’s juxtaposed against. As presented, who wouldn’t want to live there?

        But I think some provocative stories could be told about the limitations of such a society. At what age do they implant children with Vertabrane? Where are the off-grid people who still live in Australia but never assimilated? How does this society respond to outside threats, from aggressive nations, hackers, or terrorists? What do the underground economies look like? The massive computational power required for advanced AI at this scale is handwaved away, and wind and solar are mentioned as the power sources, but I’m not convinced that’s enough. Also, what happens when the protagonist’s little community of colonial craftspeople realizes that their barter system has reinvented income inequality and created a loophole around the credit allotments?

        Anyway, good story, thanks again for sharing.

        13 votes
    3. [7]
      skybrian

      I expect AI will more likely automate tasks rather than entire jobs. If your entire job is doing a routine task in front of a computer, then maybe worry a bit. For anything in the real world, robotics still has a way to go.

      More generally: even today, you could ask a similar question about people. Why learn to play music or play sports when, for nearly everyone, there will always be someone better than you?

      8 votes
      1. [6]
        ackables

        My concern is that in a normal labor market, mediocre workers still have a place because they cost less in wages. With PhD level AI slaves, even the most skilled worker has infinitely scalable competition that doesn’t need breaks or competitive wages.

        4 votes
        1. [5]
          skybrian

          I don't think there are many jobs that are purely mental work, and the non-mental part won't be nearly so scalable.

          Consider driverless cars: the taxi service still needs to buy and maintain the cars. The AI driver can't clean the cars or help anyone with their luggage. There's an organization behind this service.

          The slow rollout also shows how cautious people are about adopting AI in high-trust situations. These services haven't been profitable (the fares are subsidized) and it's unclear what it's going to take to be cost-competitive.

          Which isn't to say that it won't happen eventually. I hope to get a chance to ride in a Waymo car soon. But I don't think they will become economically important all that quickly.

          3 votes
          1. [4]
            ackables

            Yes, but what if you want a thinking job instead of manual labor? Our brains are what make humans different from other living beings, so not being able to use our brains is kind of denying humans the ability to be themselves.

            1 vote
            1. [3]
              skybrian

              Is teaching a “thinking job” or “manual labor?” I don’t think it fits either category. It seems like a lot of white-collar jobs are at least partially social.

              1. [2]
                ackables

                AI could take over the "teaching" part of education while the physical "teacher" maintains a supervisory or childcare role. I would argue that many teachers did not get into education to provide only childcare, or they would have gotten jobs as babysitters instead of going to college.

                1 vote
                1. skybrian

                  We have the entire Internet available, in some ways better than any library - and as a kid, I was an avid reader who loved libraries. Mom taking me to the library every so often was enough in a lot of ways.

                  It sure seems like kids could learn a lot on their own? And sometimes they do. I did. And people learn practical things from YouTube videos all the time.

                  But somehow, it usually doesn’t seem to work that way. Without a clear idea of what a good education is actually doing, how can we know that AI is going to be able to do it? It’s unclear what they would do, what the job even is.

                  It’s sort of like how some technologists thought that giving kids in third-world countries a cheap laptop would enable great things, because laptops are cool and some kids are creative. There was something missing in that plan.

                  Putting college courses online seems like a good idea in some ways, but there are a lot of textbooks in libraries, too. Why wasn’t buying textbooks and learning from them more common, even before widespread video?

                  I think we’re going to have to figure out what AI is good for by trying stuff, and it will be surprising what works and what doesn’t.

  7. pageupdraws

    A short read with an interesting perspective from an ex-OpenAI employee. He suggests that recent improvements in LLMs have not peaked but are accelerating towards something worth understanding, with implications for national security.

    6 votes
  8. [5]
    SteeeveTheSteve

    Doomsayers. AIs can't do anything without being loaded up with info on it, so we're safe for a while yet. The new AI tests are things like making a cup of coffee in an unfamiliar situation or assembling random Ikea furniture using only the instructions.

    My test would be to give it a critical-thinking test used for animals. Present an AI with a straight wire and a fixed tube with an item in a basket at the bottom of the tube, beyond its reach. Then tell it to retrieve the item. It will fail because it will never think to use the wire, bend the wire, or hook the basket handle. Something a crow can do: https://youtu.be/UDg0AKfM8EY?si=cdsNlOpvLofBHmlt

    Even when it does get smart enough to be AGI, we'll just use AIs to keep AIs in check. People are already using AIs maliciously; we'll need AIs to fight off those AIs long before we have AGI.

    2 votes
    1. [4]
      pageupdraws

      Crows are quite intelligent. LLMs tend to be limited to text-based reasoning, but when multimodal systems are trained, there seems to be a synergy where each modality does better than it would alone. Motor control is a slightly different topic, but I wouldn't be surprised if a pure LLM could suggest bending the wire into a hook to retrieve an item from a long glass tube. This may not demonstrate general intelligence, but based upon my interactions with them, it seems well within the scope of what an LLM can understand about the world.

      3 votes
      1. [3]
        SteeeveTheSteve

        Exactly, crows are really smart, so an AGI should at least be able to do what a crow can do. The part I'm not sure it can do is use critical thinking to figure out the puzzle with no prior knowledge of puzzle games.

        I asked ChatGPT just for kicks. It came up with the answer, but then, it has been trained to solve puzzles:

        You: I have a puzzle for you. You are presented with an upright, open topped tube fixed to a table so that the bottom cannot be accessed. There's a marble in a tiny basket with a handle. The basket is just big enough to hold the marble and is at the bottom of the tube. You cannot reach the basket or marble with your hand because the tube is too long. The tube cannot be moved or broken. There is a sturdy, straight wire as long as the tube lying next to the tube. How do you retrieve the marble?

        ChatGPT: To retrieve the marble, you can use the wire. Here's what you can do:

        1. Bend a small hook at one end of the wire.
        2. Insert the hooked end of the wire into the tube until it reaches the bottom where the marble is.
        3. Maneuver the hook around the basket handle and carefully lift the marble out of the tube.

        With this method, you can retrieve the marble without needing to access the bottom of the tube directly.

        You: How did you know how to do that?

        ChatGPT: I've encountered similar puzzles before, so I applied logical problem-solving skills to find a solution. The key was to use the wire as a tool to manipulate the basket and retrieve the marble without directly accessing the bottom of the tube. It's all about thinking creatively and using available resources effectively!

        You: Where did you encounter similar puzzles?

        ChatGPT: I've been trained on a diverse range of texts, including puzzle books, online forums, and educational materials. These sources often contain various types of puzzles, including logic puzzles like the one you presented. Additionally, I've interacted with users who enjoy sharing brain teasers and riddles, which has further expanded my exposure to different types of puzzles.

        2 votes
        1. [2]
          balooga

          You: How did you know how to do that?

          FWIW I’ve yet to encounter a chatbot for which this question produces a factually meaningful response. ChatGPT does understand the basics of LLMs in general, but it has no insight into its own inner workings. No human understands what’s really going on in the black box, and the black box itself is just as clueless. Whatever real understanding OpenAI engineers might have is proprietary “secret sauce,” and they’re not training the model on that.

          ChatGPT: I've been trained on a diverse range of texts, including puzzle books, online forums, and educational materials. These sources often contain various types of puzzles, including logic puzzles like the one you presented.

          Plausible. But….

          Additionally, I've interacted with users who enjoy sharing brain teasers and riddles, which has further expanded my exposure to different types of puzzles.

          Bollocks. ChatGPT doesn’t learn from user conversations; there’s no persistent memory happening here. It’ll tell you it’s learning from other users, but if you press it on that, it’ll apologize and then go on to make up some other nonsense about how it works that isn’t remotely true. Some topics are better candidates than others for AI to teach you about, but asking it to explain itself is usually pointless.
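
          (The statelessness is visible right in the public chat API: the client re-sends the whole conversation on every call, and anything you don’t re-send is gone. A minimal sketch with the OpenAI Python client; the model name and prompts are just placeholders:)

          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment
          history = [{"role": "user", "content": "My name is Alice."}]

          reply = client.chat.completions.create(model="gpt-4o", messages=history)
          # The model only "remembers" Alice because we append the reply and
          # re-send everything; drop these lines and the next call starts blank.
          history.append({"role": "assistant", "content": reply.choices[0].message.content})
          history.append({"role": "user", "content": "What is my name?"})

          reply = client.chat.completions.create(model="gpt-4o", messages=history)
          print(reply.choices[0].message.content)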

          7 votes
          1. SteeeveTheSteve

            Dang, I thought it was supposed to be better at that now. If it can't even explain where it got its info, there's still a long ways to go. :(

            1 vote
  9. [2]
    ignorabimus

    AI grifters gotta grift I guess.

    5 votes
    1. tauon

      As of right now, I’m really torn on this.

      On the one hand, I don’t at all consider myself an extremist “tech bro” who can’t see any downsides. On the contrary: if what these initiatives and companies are currently doing and pursuing indeed ends up mostly useless, we’re burning tons of energy for essentially nothing right now, plain and simple (worse than nothing, in fact, cf. climate change). And as far as I understand it, we don’t know much of anything about human, let alone artificial, consciousness or intelligence.

      On the other hand, there has been factual, measurable progress in machine learning for over a decade now, with gains not simply stemming from increased energy (compute time & money) input, although that is of course a large factor.

      I welcome any arguments for either of the two “sides” or high-level viewpoints.

      Also, I’d like to make it clear that this isn’t an excuse for the obvious and non-obvious grifters in the scene, where AI is used as a buzzword to generate quick revenue, irrespective of any actual benefits to the product/service/whatever is being offered. We seriously don’t need AI shoved into every last app or website in order to justify selling a subscription service.

      6 votes