20 votes

DeepSeek FAQ

6 comments

  1. [4]
    creesch
    Link

    Interesting read with some nice background information.

    Though this specific bit

    So are we close to AGI?

    It definitely seems like it. This also explains why Softbank (and whatever investors Masayoshi Son brings together) would provide the funding for OpenAI that Microsoft will not: the belief that we are reaching a takeoff point where there will in fact be real returns towards being first.

    Seems out of place for something that is otherwise a fairly well-reasoned and sourced article. In fact, I'd say this reads more like wishful thinking. Both R1 and o1 do demonstrate some neat ability to "reason" in very specific scenarios and with a very specific way of querying them, but not nearly to the degree that I'd think AGI is around the corner. The author does mention o3, but it remains to be seen how well it actually performs. Even more so as it turns out that OpenAI had access (although they pinky promised not to use it) to the data of one of the benchmarks it is measured by.

    Mind you, I am not dismissing the entire premise of the article. As I said, I think it is an overall interesting and worthwhile read. I am just mentioning this because I have noticed a tendency from some here to think that if you have one critical note you are dismissing everything. I just think it is good to remain critical towards articles in general, certainly when they concern subjects that are at the center of a hype.

    14 votes
    1. [2]
      Raspcoffee
      Link Parent

      Seems out of place for something that is otherwise a fairly well-reasoned and sourced article. In fact, I'd say this reads more like wishful thinking. Both R1 and o1 do demonstrate some neat ability to "reason" in very specific scenarios and with a very specific way of querying them, but not nearly to the degree that I'd think AGI is around the corner. The author does mention o3, but it remains to be seen how well it actually performs. Even more so as it turns out that OpenAI had access (although they pinky promised not to use it) to the data of one of the benchmarks it is measured by.

      Two days ago I compared the current advancements to electronics - that many developments are often unexpected, and more about clever usage of what's available rather than raw power/capital. Now, the past is rarely a perfect indicator of the future, but getting things this general this quickly is easier said than done. Especially since AGI is such a broad term, I could see the goalposts moving a few times too.

      I have to say though, the way it 'reasons' is interesting and has a lot of potential in both good and bad. Though much of my thoughts on that matter have already been said many times.

      3 votes
      1. onceuponaban
        (edited )
        Link Parent

        I've already said my piece on what I think of AGI and the advent thereof within the current context (especially since I suspect disingenuous marketing claims on the matter, and uninformed people getting misled as a result, are going to become the equivalent of a child constantly asking "Are we there yet?" during a road trip, which will further poison the discourse in most discussion spaces) so I won't restate it, but yeah, we don't need to reach the level of "we found how to create a truly sapient digital being" for LLMs and machine learning algorithms in general, even in their current state, to be able to cause massive damage. All you need is to put one, deliberately or not, in a position to make harmful decisions. In fact, this is already happening; no true intelligence needed here.

        And on the other end, I do see the potential of AI being capable of lowering the barrier of access to the "minimally viable" amount of knowledge for someone to get interested and invested in a subject, which, at scale, would be beneficial to humanity. If kept in check, LLMs aren't just misinformation engines, and machine learning can help automate things we couldn't reliably do before. The field itself is far from new (one of my school projects recently was about training a neural network, in this case for a text-to-speech engine, and my primary source introducing the use of neural networks for this was a paper from the 80s), but while most people's eyes are on generative AI specifically, things are improving on that front as well, for better and for worse regarding the societal impacts.

        EDIT: ...Added an entire paragraph. One day I'll manage to post a comment without instantly realizing I wanted to say more several times in a row, but that's not today.

        5 votes
    2. DynamoSunshirt
      Link Parent

      I feel the same. It seems like we've moved the goalposts on AI in the last few years (what OpenAI etc. are calling 'AI' now is not the sci-fi concept I had always imagined). Specifically, I grew up imagining AI as a general, humanlike intelligence capable of independent thought, reasoning, and some level of motivation. 'AI' chatbots in the industry today have no independence whatsoever, let alone true autonomy.

      And it's likely that we'll see them shift the AGI goalposts as well before long. All so they can hit some timelines to brag about during quarterly reports.

      But hey, we already saw the same thing happen with self-driving cars. Companies love to brag about how many millions of (remote human-assisted) (mostly Bay Area and Phoenix) miles their 'self-driving cars' have driven. But IMO it's not a self-driving car until I can get in one at my driveway, take a nap for 6 hours, and wake up in NYC metro area (preferably a train station to get into the urban core, for sanity and speed's sake). Reasonable weather, construction, and lighting conditions -- essentially anything besides a blizzard or horrendous downpour -- shouldn't be an issue.

      1 vote
  2. onceuponaban
    Link

    Thank you for sharing this article. Drawing a parallel to another thread on the topic where I expressed my confusion in the comments, the points addressed really helped me understand why investors thought DeepSeek R1 was significant enough for NVIDIA to be impacted the way it was (less so the scales involved, but that wasn't within the scope of the article, and that's probably mostly just me overthinking things anyway).

    2 votes