16 votes

Rabbit R1 is a scam

13 comments

  1. [2]
    Rudism
    Link

    I think a good rule of thumb (at least for the next several years) is that if an "AI powered" device comes out that looks and sounds too good to be true, it almost certainly is. LLMs have only just exploded into public consciousness, so you're going to get the usual gamut of scammers and corner-cutters trying to be first to market with the "next big thing" in order to score a huge payday while there's still very little competition. There's no reason AI devices are going to follow a path any different from other consumer tech like computers, cell phones, etc., where we start off with basic functionality and improve gradually over time. Anyone claiming to have created a sci-fi conversant AI assistant that interacts with arbitrary external agents on your behalf at this stage in the game is pretty much guaranteed to be full of shit.

    22 votes
    1. papasquat
      Link Parent

      The laziest implementation of this kind of thing I've seen so far:
      Someone on a cybersecurity subreddit pitched an "AI vulnerability scanning app" he developed for Cisco IOS devices. You pasted your config into it, and it would tell you the notable security flaws in your configuration. How did it actually work?

      It just passed your entire configuration to GPT-4 after appending "is this configuration vulnerable to cyber attacks?" to it.
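
      Reconstructed in code, the entire "scanner" probably boiled down to something like this (a minimal sketch, assuming the OpenAI Python client; the function name and exact prompt wording are guesses, not the actual project's code):

      ```python
      # Hypothetical reconstruction of the "AI vulnerability scanner":
      # the whole product is one prompt wrapped around a GPT-4 call.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def scan_config(config_text: str) -> str:
          """Paste in a Cisco IOS config, get back whatever GPT-4 says."""
          response = client.chat.completions.create(
              model="gpt-4",
              messages=[{
                  "role": "user",
                  "content": config_text
                  + "\n\nIs this configuration vulnerable to cyber attacks?",
              }],
          )
          return response.choices[0].message.content
      ```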

      At least this project was open source and free. I have to imagine that a large portion of companies selling these amazing new innovative AI apps aren't doing much better; you just can't look under the hood as easily.

      19 votes
  2. [8]
    papasquat
    Link

    The annoying thing about AI is that, because a computer that seems to be able to think is so unintuitive to most people, they unconsciously chalk the whole thing up to magic and start believing that it can do basically anything, and that it's perfect.

    A few minutes of critical thought would suggest that a device that can do basically whatever you want with regards to online services just by asking isn't possible. The interfeces used to interact with those services are brittle, and even designing a third-party web client that reliably interacts with DoorDash or Google Calendar is difficult.

    When you throw an LLM, which is notoriously error prone, into the mix, it was pretty clear to anyone who had any idea how the guts of this stuff work that this product wouldn't work without a ton of constant engineering massaging to make the LLM work with only a few bespoke APIs.

    It was never going to be the "do anything magical personal assistant" that was pitched. That kind of thing just isn't possible in the current landscape of how internet services work.
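
    For a sense of what that massaging looks like in practice, here's a minimal sketch of a single hand-wired integration, assuming an OpenAI-style function-calling setup; "create_calendar_event" and its fields are illustrative, not anything Rabbit has published:

    ```python
    # Every service the assistant "supports" needs a hand-written, rigid
    # schema like this, plus validation, because the model's output can't
    # be trusted to match it. (All names here are illustrative.)
    import json

    CALENDAR_TOOL = {
        "name": "create_calendar_event",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "start": {"type": "string", "description": "ISO 8601 datetime"},
                "end": {"type": "string", "description": "ISO 8601 datetime"},
            },
            "required": ["title", "start", "end"],
        },
    }

    def validate_tool_call(raw: str) -> dict:
        """Reject any model output that doesn't match the schema exactly."""
        args = json.loads(raw)  # may raise: the model can emit invalid JSON
        missing = [field for field in CALENDAR_TOOL["parameters"]["required"]
                   if field not in args]
        if missing:
            raise ValueError(f"model omitted required fields: {missing}")
        return args
    ```

    Multiply that by every service, every API version bump, and every way the model can go off-script, and "do anything" quickly shrinks to "do a few carefully supervised things."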

    13 votes
    1. [2]
      EgoEimi
      Link Parent

      interfeces

      This is my favorite new accidental word for describing shitty interfaces.

      18 votes
      1. papasquat
        Link Parent

        Caused by one of my most hated interfeces of all time: the touchscreen. Very fitting!

        10 votes
    2. Rudism
      Link Parent

      I think ignorance of how computers and software work in general is a huge contributor (not just AI or computers that can think). To someone like you and me, who understand how software works, the engineering feats required to create something like what the R1 claims to be are well inside the realm of the miraculous, so the claims sound absurdly outlandish. But to most people, computers are already these magic pocket screens that can do all kinds of outlandish things, so using LLMs to do those same things via voice commands instead of tapping a screen probably doesn't seem like that huge a leap.

      8 votes
    3. [4]
      hobbes64
      Link Parent

      Yes. Not only magic, but just something more sophisticated than it really is. "AI" companies are taking advantage of the fact that people tend to anthropomorphize objects and project emotion and intention onto things. It's good that we have "theory of mind" so we can empathize with other people, but we also tend to empathize with our cars, with rocks that have googly eyes glued on, and with lots of other inanimate objects.

      When I see some article claim that AI "hallucinates", I assume it is a manipulation of our feelings. A large language model doesn't hallucinate; it just has some bad data.

      4 votes
      1. [3]
        ThrowdoBaggins
        Link Parent

        I thought “hallucination” was the term developed by PR firms who didn’t like the earliest reports where people found LLMs would “lie” and “make stuff up”, and decided they needed a nicer name for when “the AI constructed bullshit from nothing”.

        5 votes
        1. Wes
          Link Parent

          The term was coined by Andrej Karpathy, one of the foremost computer scientists in the field, in his 2015 blog post The Unreasonable Effectiveness of Recurrent Neural Networks. It wasn't created by a PR firm nor intended for spin.

          It's a common misunderstanding to think that hallucinations are a bug or in some way unexpected. LLMs are sophisticated token generation machines. That's all they're really designed to do. The fact that emergent behaviours appear after significant training is very impressive, but it doesn't change their nature.

          It's probably easier if you think of every line of text being produced as a hallucination. It just so happens that some of it is accurate. Karpathy wrote about this as well.
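
          A toy sampler makes that concrete (my illustration, not Karpathy's code; the "learned" probabilities are invented):

          ```python
          # The same sampling mechanism produces both correct and incorrect
          # continuations; the model has no separate notion of truth.
          import random

          # Pretend these are learned statistics for the token that follows
          # "The capital of France is".
          next_token_probs = {"Paris": 0.9, "Lyon": 0.1}

          def sample_next(probs):
              """Sample one token in proportion to its probability."""
              tokens, weights = zip(*probs.items())
              return random.choices(tokens, weights=weights, k=1)[0]

          # Roughly 1 run in 10 "hallucinates" Lyon, generated by exactly
          # the same process that usually produces Paris.
          print("The capital of France is", sample_next(next_token_probs))
          ```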

          9 votes
        2. GunnarRunnar
          Link Parent

          Yeah it's a marketing term that the media ate up (why, I would like to know). It's there to support the narrative that AIs are more sophisticated than they actually are.

  3. [2]
    mtgr18977
    Link

    I posted this on Hacker News (and I got flagged) to see how the discussion would go. The majority of people there agree that the Rabbit R1 is a SCAM, or at least a misleading product. The whole selling point of the R1 is the LAM, an AI model that is supposed to deliver automation for your everyday tasks.

    But the thing is: there is no LAM in the Rabbit R1. Or, you could say that an automation is a LAM (it's not).

    Bringing up the definition from Silvio Savarese’s article, the first one to mention the LAM model:

    To be clear, an LAM’s job isn’t just turning a request into a series of steps, but understanding the logic that connects and surrounds them. That means understanding why one step must occur before or after another, and knowing when it’s time to change the plan to accommodate changes in circumstances. It’s a capability we demonstrate all the time in everyday life. For instance, when we don’t have enough eggs to make an omelet, we know the first step has nothing to do with cooking, but with heading to the nearest grocery store. It’s time we built technology that can do the same.

    The definition provided by Silvio Savarese highlights the ability of a LAM to not only transform a request into a series of steps but also to understand the underlying logic that connects and surrounds these steps. This includes the ability to adjust the plan as circumstances change.

    Based on this definition, claiming that the Rabbit R1 is a LAM-oriented assistant seems inaccurate. If it does not demonstrate the ability to understand and adapt to contextual changes in a logical and effective manner, it cannot be classified as a genuine LAM.

    For a true LAM, it is crucial that the technology not only follows a predefined sequence of steps but also understands the logic and purpose behind each step, adjusting as necessary to achieve the desired goal. If the Rabbit R1 does not meet these criteria, its classification as a LAM indeed needs to be reviewed.
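
    To make the distinction concrete, here is a minimal sketch of the replanning behavior Savarese describes, using his own omelet example (the function and step names are mine, purely illustrative):

    ```python
    # The plan is recomputed from the current state of the world, and the
    # precondition (having eggs) dictates step ordering.
    def plan_omelet(eggs_on_hand: int, eggs_needed: int = 3) -> list:
        steps = []
        if eggs_on_hand < eggs_needed:
            # The planner "knows" shopping must precede cooking.
            steps.append("go to the grocery store and buy eggs")
        steps += ["heat the pan", "beat the eggs", "cook the omelet"]
        return steps

    print(plan_omelet(eggs_on_hand=0))  # shopping is inserted before cooking
    print(plan_omelet(eggs_on_hand=6))  # circumstances changed: no shopping
    ```

    Even this trivial example adapts its plan to circumstances; a system that merely replays recorded UI scripts does not.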

    And with that in mind, I can assure you that the Rabbit R1 is not the LAM-oriented assistant they claim it to be.

    4 votes
    1. Minori
      Link Parent

      LAM

      For anyone else wondering what this stands for: LAM is an abbreviation of Large Action Model. The idea is an AI model that understands actions and puts them together (imagine something like an AI that knows every step to make coffee and then simply has to chain them together).

      8 votes