32 votes

Q* - Clues to the puzzle?

24 comments

  1. [10]
    imperialismus
    Link

    When it comes to what Q-star means, it's all speculation, but I find it a bit funny there was no explicit mention of probably the most famous AI algorithm of all time: A*. A* is a pathfinding algorithm developed in the 1960s. It wouldn't be considered "AI" today - it predates the compute power needed for efficient machine learning on nontrivial problems, and it "solved" the problem it was designed for so thoroughly that it's no longer a major research topic - but back then it was at the forefront of AI research. It's commonly used in games, and probably in stuff like Google Maps too. I don't know if this is just a lack of awareness of the history of AI, but it seems like a lot of new-school AI folks act as if the field began with machine learning on supercomputers in the 2010s.

    All of which is to say my guess is the star in Q* refers to some kind of traversal of a graph in a search space.
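
    For anyone who hasn't run into it, here's roughly what A* looks like - a minimal sketch over a toy grid, nothing to do with whatever Q* actually is:

    ```python
    import heapq

    def a_star(start, goal, neighbors, heuristic):
        """Minimal A*: returns a shortest path from start to goal, or None.

        neighbors(n) yields (neighbor, step_cost) pairs; heuristic(n, goal)
        must never overestimate the true remaining cost.
        """
        frontier = [(heuristic(start, goal), 0, start)]   # entries are (f = g + h, g, node)
        came_from = {start: None}
        best_g = {start: 0}
        while frontier:
            _, g, node = heapq.heappop(frontier)
            if node == goal:                              # done: walk back through came_from
                path = []
                while node is not None:
                    path.append(node)
                    node = came_from[node]
                return path[::-1]
            for nxt, cost in neighbors(node):
                new_g = g + cost
                if new_g < best_g.get(nxt, float("inf")):
                    best_g[nxt] = new_g
                    came_from[nxt] = node
                    heapq.heappush(frontier, (new_g + heuristic(nxt, goal), new_g, nxt))
        return None

    # Toy example: 10x10 grid, 4-connected moves, Manhattan-distance heuristic.
    def grid_neighbors(p):
        x, y = p
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < 10 and 0 <= y + dy < 10:
                yield (x + dx, y + dy), 1

    print(a_star((0, 0), (9, 9), grid_neighbors,
                 lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])))
    ```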


    Onto something entirely different. I haven't kept up with the latest AI drama, so I wasn't aware of Sam Altman's mysterious firing and re-hiring until just now. But I think the field of AI in general is really doing itself a disservice with these doomsday narratives like "threatening humanity". It seems like the people in the field are pushing a fundamentally anti-intellectual take on regulation. They're selling a narrative of AI acting completely independently of its creators and essentially replacing humanity. This kind of fear is likely going to lead to draconian and innovation-stifling regulations if acted upon by generally non-tech-savvy lawmakers. Meanwhile, they're ignoring the very real near-term dangers of AI.

    These dangers are not to do with AI becoming self aware and acting contrary to the intents of its creators. They're rather inherently tied to those intentions. Social media and targeted advertisements are already exploited by rich people and megacorporations to sell you things you don't need or influence elections. The danger of AI in the next let's say decade, to me, is that better AI will allow those same people at the top of the food chain to do that, but more efficiently. It's not tied to AI disobeying the value networks of its creators. It's tied to AI efficiently optimizing the exploitation of the 99% on behalf of the one-percenters, which is entirely within the value system pushed by corporate boardrooms that ultimately control the companies that develop most new AI tech.

    I fear this doomsday drama is just going to stifle actually useful AI advances, stuff like making new advancements in science and medicine, while leaving alone the less flashy stuff that will allow large corporations and very wealthy individuals to more efficiently push their own agendas. Which all boil down to acquiring more wealth by extracting it from the rest of us.

    22 votes
    1. [7]
      cykhic
      (edited)
      Link Parent

      Disclaimer: I am not a domain expert in AI, but I have a CS degree and work in tech.

      I strongly feel that most people underestimate the existential risk posed by AI. I'd like to explain my reasoning here, and if you still feel I'm wrong, I'm happy to listen and maybe update my beliefs.

      I feel that the general perception is that AI becomes an existential risk when it is "self aware", as you mention. People then (correctly!) object that these models don't appear to be self-aware, and in any case we don't really know what "consciousness" is.

      I don't know where the fixation on self-awareness comes from. Perhaps it's because in every movie, the AI apocalypse begins when "Skynet/HAL/Ultron becomes self-aware". But to me, it's clear that AI existential risk begins when AI becomes self-replicating, and not necessarily self-aware.

      Imagine this scenario:

      • Suppose Q* really has a low hallucination rate, can learn from tiny data sets, and can outperform humans at hard problems in fields such as physics, chemistry and biology (as described in Amarok's summary on this page).
      • Suppose that somewhere in Q*'s training data is the Linux kernel, and the details of all CVE vulnerabilities.
      • Some hacker, somewhere, puts Q* on a loop in their terminal, with the initial prompt to "put yourself onto a trojan horse which spreads itself over the internet and changes its own code frequently to avoid antivirus detection, and then mine bitcoin and send it to me".

      It seems fairly obvious to me that there is a risk here that some kind of self-replicating, self-modifying, unstoppable AI botnet is spawned from this. I don't know what exactly the probability is, but it seems reasonable that, given the scenario, the botnet is spawned somewhere between 0.001% and 10% of the time. It doesn't matter whether this botnet is "self-aware". It matters that (assuming it can find any of the numerous arbitrary-code-execution vulnerabilities in various OSes) it has access to basically unlimited compute and access to anything else which is connected to the internet. That includes humans who read things posted on the internet.

      It doesn't matter what the botnet was originally intended to do. At any point in its self-modification, its goals could easily have drifted. But whatever it ends up optimising for is very unlikely to be connected to anything that humanity wants.

      A speaker and a microphone don't need consciousness or intent in order to cause a screeching feedback loop, limited only by the maximum volume of the speaker. Similarly, an AI doesn't need self-awareness to enter a self-replicating loop, limited only by... what exactly?

      This is just one scenario I can think of where an AI has a possibility of making itself self-replicating and self-modifying. I feel that there are many others that I can't think of.

      Do you agree that under the scenario above, there is a >0.001% chance of such an existential threat developing?

      • If so, what measures do you think are appropriate to reduce it?
      • If not, what do you think the probability is, and what is the threshold for when the risk becomes unacceptable?
      • Also, how often does the triggering scenario (or something like it) occur, under a "regulated AI" environment and a "fully democratised AI" or "everyone uses AI" environment?

      Again, I'm not an expert, and I'm entertaining the possibility that I could be wrong. I'm happy to hear counterarguments in that case.

      30 votes
      1. [4]
        imperialismus
        Link Parent
        • Exemplary

        You just described the technological singularity, in different words. The idea that a seemingly benign AI will start rapidly self-improving, spreading uncontrollably, evading all detection, taking control of the world's computational power in ways that humans can't wrest back. Even if most singularity narratives include the idea of some sort of self-awareness or consciousness, that may be simply because we as humans are unfamiliar with intelligence that doesn't resemble our own. Either way, your scenario has all the important bits of the singularity. And that's been just around the corner for 40 years.

        I think it's astronomically unlikely to happen that way in the near future. Even if we did enter into such a spiral of self-directed self improvement, there would be time to react. It's not going to happen literally overnight.

        I don't really know how to effectively argue about such a scenario, because it's all built on unquantifiable priors. It's based on this idea that somehow, through mechanisms that are by definition not invented yet and so can't be reasoned about, this cycle is going to start a runaway chain reaction that we will not be able to identify in time to stop it, and that will be impossible to stop or slow down once it starts. I think the idea is inherently unfalsifiable, because it relies on future tech that is by definition so far beyond current tech that we can't even comprehend it. The best I can say is that I do not believe any current AI technology has shown even hints of being capable of kickstarting that chain reaction.

        I do, however, think that the opportunity cost of planning very conservatively around this doomsday scenario is huge. Solving tough problems in areas like clean energy, cancer medicine or space research? Those would be but a tiny fraction of what the hypothetical singularity AI could do. But they are some of the most useful things that AI could do for us in the near future. We will almost certainly achieve - I won't say entirely self-directed, but AI assisted - breakthroughs in science that could save the lives of people who are alive today, at a number of milestones well before we reach the hypothetical singularity seed AI. The problem with doomsayers and doomsday prepping, even the technological kind, is that it very often devolves into neglecting current problems in favor of saving future generations that aren't even born yet from a faraway hypothetical threat.

        Improving reasoning, the ability to deduce new nontrivial facts in mathematics and science from first principles or empirical data, to discover things we don't already know, is surely a key component in any sci-fi exponentially self-improving AI. But it's also key to fulfilling the potential of AI to do good, in our lifetimes.

        AI research is too promising and too profitable to be outright banned. The industry talks a lot about safety, but they're usually talking about the extreme doomsday scenario, the existential threat to humanity. My worry is that what will result is legislation that will leave more effective mass surveillance, more effective political manipulation and more effective advertising on the table. Meanwhile, because the more useful stuff like scientific breakthroughs relies on improved general, human-like reasoning -- because the paths to paradise and hell are paved with the same bricks -- the concern for safety might result in amplifying the most harmful uses of AI today, and slow down the fulfillment of the most helpful potential at the same time. All to prevent a future hypothetical disaster scenario. Which I believe, but can't prove, is unlikely to happen so quickly as to not be identifiable and preventable once it becomes a realistic possibility.

        I can't prove that because proving it would involve proving a negative involving hypothetical tech that is by definition unknowable and incomprehensible to mere unaugmented human minds. So, I'm not quite sure how to fruitfully continue that conversation.

        16 votes
        1. [2]
          cykhic
          Link Parent
          • Exemplary

          Firstly, thanks for engaging in a clear and non-confrontational way.

          If I understand you correctly, these are your main points:

          • The probability of a technological singularity is low.
            • There is no evidence of a high probability of recursive self-improvement.
              • The recursiveness hinges on as-yet-unknown breakthroughs, which (by definition) we know nothing about.
            • In the meantime, we have some priors that the probability is low:
              • It hasn't happened for 40 years despite warnings.
              • Current AI doesn't seem close to recursive self-improvement.
            • If there is self-improvement, it will likely be slow (i.e. not an explosion), so we have time to react.
          • The cost of taking safety measures to reduce the probability of a singularity is high.
            • We will miss out on AI-assisted advances in many aspects of life, like energy and healthcare.
            • Safety measures, like all regulation, benefit larger entities like governments and corporations, who will wield increased power relative to the average person.

          This is, at its core, a position I respect. I agree that safety measures, depending on what exactly they are, can be very costly both in upfront cost and in opportunity cost. If we take as a given that the probability of a singularity is vanishingly low, then I would definitely agree that safety measures are not worth it.

          So I think our core disagreement is on how likely a singularity is. To be clear, I don't think a singularity is inevitable, or even >50%. If pressed, I would give roughly a 0.1% to 10% range, but mostly, I don't claim to know what the probability is. My confusion arises because many people seem to strongly believe that the probability is so incredibly low that it isn't worth considering, which doesn't seem correct to me.

          My previous comment was mostly a single example of one way I think a singularity could arise, which is definitely subject to my limited knowledge of computer security and such. I'm going to try to summarise below, more generally, why I have my beliefs:

          • I think the probability of a singularity is not low.
            • Recursive self-improvement is plausible.
              • It seems plausible to me that there exist future breakthroughs to be made.
              • AI research seems clearly recursive.
              • Therefore, each breakthrough may recursively trigger more breakthroughs, given enough density.
            • To me, our current trajectory does not suggest that the probability of a singularity is low.
              • "Not yet in 40 years" glosses over how fast progress has been.
              • Current AI is close to human level, which is close to the threshold for recursiveness.
          • We don't know if we have time to react.
            • We don't know how fast recursive self-improvement will be.
            • We don't know how to address this, let alone how long it will take.
          • A singularity is possibly very bad.
            • I think it is unlikely to share human values or moral qualms.
            • If it has a lot of power as well, it will likely do things we don't necessarily want.

          You mentioned that predictions of recursive self-improvement depend on "mechanisms that are by definition not invented yet and so can't be reasoned about". It's true that the future advances are not known. But I think there is a big difference between "unknown" and "unknowable". For example, we don't know whether P=NP, but we know that all NP can be reduced to 3-SAT.

          I do think it's possible to know some things about hypothetical AI breakthroughs, and it seems that you think so too:

          "We will almost certainly achieve - I won't say entirely self-directed, but AI assisted - breakthroughs in science [...]

          More generally, people seem uniformly optimistic about AI capabilities. But to the same extent that we expect scientific breakthroughs, shouldn't we also expect breakthroughs in AI technology?

          Suppose that human researchers (AI-assisted or otherwise) advance AI to reach the point where AI can observe evidence and draw conclusions with a similar reliability as human experts. Supposing this is possible, isn't it at least imaginable that AI then takes over the human role in AI research, and proceeds to make more discoveries?

          That alone, though, doesn't prove there will be a singularity. We would also need to assume that there are sufficient numbers of potential breakthroughs, in close proximity, such that a human-expert-level AI researcher can advance itself without human input.

          I think that it's not impossible that this is true. The transformer model invented in 2017 (which later ended up as the T in GPT) was a stripped down version of previous models, "based solely on attention mechanisms, dispensing with recurrence and convolutions entirely". It still managed to set new records on its benchmarks, which were then broken again several times over the next year.
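
          For what it's worth, the core of that "attention only" design is small enough to write out - this is just the textbook scaled dot-product attention in numpy, nothing specific to GPT or Q*:

          ```python
          import numpy as np

          def scaled_dot_product_attention(Q, K, V):
              """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V - the core op of the 2017 paper."""
              d_k = Q.shape[-1]
              scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)   # how much each query "attends" to each key
              weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
              weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
              return weights @ V                               # weighted average of the values

          # Toy example: 3 tokens with embedding size 4, using the same matrix as Q, K and V.
          x = np.random.randn(3, 4)
          print(scaled_dot_product_attention(x, x, x).shape)   # (3, 4)
          ```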

          I haven't taken the time to fully understand how exactly every advancement worked, but they sound to me like relatively simple changes resulting in large improvements. Add to that the steady pace of releases by OpenAI and other labs, each of which seems noticeably better than what came before. In this light, saying that the singularity has been "just around the corner for 40 years" seems a bit dismissive, considering that AI now is more capable, and that progress still seems to have momentum.

          You mention that we will have time to react. But that is not obviously true to me, considering that we don't know exactly how fast it will be, and we don't know what we would do in response.

          I think the general feeling I have is that people are too dismissive of the discontinuity in velocity of AI progress that may happen when AI is able to substitute humans in the research process. It reminds me of people taking selfish actions during the pandemic, whereby they gain some personal convenience, but the replication factor of the virus goes from (say) 0.95 to (say) 1.05. Passing 1.00 was a discontinuity, the difference between "the virus dies out" and "the virus spreads and has the chance to mutate and spreads even more". It reminds me of the melting ice caps, whereby melting them accelerates climate change in a cycle and it will be that much harder to walk it back.
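
          The discontinuity at 1.00 is easy to see with made-up numbers:

          ```python
          # Toy illustration of why crossing a replication factor of 1.00 is a discontinuity:
          # a small-looking change from 0.95 to 1.05 is the difference between dying out and blowing up.
          def cases_after(generations, r, initial=1000):
              return initial * r ** generations

          for r in (0.95, 1.00, 1.05):
              print(r, round(cases_after(100, r)))
          # 0.95 -> ~6 cases, 1.00 -> 1000 cases, 1.05 -> ~131501 cases
          ```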

          I'm getting a bit emotionally exhausted, so I'll wrap it up by saying that I think intuitively that the probability that advanced AI will give us beneficial technological advances seems to necessarily be on the same order of magnitude as the probability that it manages to bootstrap itself far beyond our understanding, in a feedback loop whose mechanism seems very possible. And the degree to which this intelligence has the potential to benefit humanity, seems also on the same order of magnitude as the harm it could do if it is not aligned with our interests. So I'm confused that a vast majority of people seem to strongly believe that such a mechanism is not possible.

          (I'm aware that I haven't addressed every bullet point in the summary of my views, because I'm just too tired at this point. If anyone is somehow still reading till this point and wants to rebut them, I'd appreciate steelmanning these points and not just assuming I'm stupid. Thanks in advance.)

          10 votes
          1. sparksbet
            Link Parent

            I don't want to make a giant rebuttal or anything, just point out what I consider to be flaws in your premises from my perspective.

            I generally agree with your points about the potential incredible dangers of a singularity and that we don't know whether we'll have time to react once it happens, so I'll mostly be addressing your points regarding the likelihood of a singularity in the near future. For context on my perspective, I work as a data scientist on NLP specifically, so I work with LLMs on a daily basis. I have a master's degree that focused on AI and NLP, and I erred towards the linguistics-y side as much as possible during my studies (since I originally planned to move on to a PhD in theoretical linguistics rather than going into industry as a data scientist).

            Current AI is close to human level, which is close to the threshold for recursiveness

            This is my main problem with your reasoning. Current AI is not close to human level generally -- the things you see that convince you otherwise are cherry-picked to sell you on the narrative that our current models are way more capable and reliable than they are.

            LLMs are very good now at generating human-like text. Even so, they still sometimes generate garbage. I work mostly on classification tasks, and even though these types of LLMs can be fine-tuned to be very good at these tasks, they are not as good as humans as a rule and frequently learn bizarre proxies for the actual task. This is fine when using them as a tool, and it's insanely impressive how well they can perform when their training is essentially just based on statistical relationships between strings. But they still definitely fall short of humans even when it comes to language, especially when it comes to pragmatics and other heavily contextual aspects of human communication.
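
            For concreteness, this is roughly the shape of that fine-tuning workflow, using the standard Hugging Face transformers API - the model name, labels and toy examples below are placeholders, not anything from my actual work:

            ```python
            import torch
            from transformers import AutoTokenizer, AutoModelForSequenceClassification

            # Placeholder base model and toy data; the point is just the shape of the workflow.
            name = "distilbert-base-uncased"
            tokenizer = AutoTokenizer.from_pretrained(name)
            model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

            texts = ["great product, would buy again", "arrived broken, total waste of money"]
            labels = torch.tensor([1, 0])                       # 1 = positive, 0 = negative
            batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

            optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
            model.train()
            out = model(**batch, labels=labels)   # returns a cross-entropy loss when labels are given
            out.loss.backward()                   # one gradient step; a real run loops over many batches
            optimizer.step()
            ```

            The classification head on top is tiny; nearly all of the capability comes from the pretrained model underneath, which is part of why the bizarre proxies I mentioned can be hard to notice until you probe for them.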

            But even if we assume current LLMs have human-level performance when it comes to language, that does not mean they are more generally at a human level. ChatGPT couldn't tell the color red apart from blue or identify something in an image, for instance. While we can train models that are specialized at those tasks, these models don't generalize to other tasks (at least not to the extent the human mind can). We have not been particularly successful at creating a model with a more general set of capabilities, even at a level far lower than that of a human, which is absolutely a prerequisite for something like the singularity.

            People both in and outside of the field love to paint current AI as more capable than it is. For many, this is out of self-interest. Others are true believers. But none are basing this hype -- and it's hype whether you draw positive or negative conclusions about the future from it -- on an accurate picture of the reality of the field. They're either selling you something, or were convinced by someone who was.

            3 votes
        2. Amarok
          (edited)
          Link Parent

          The singularity clouds every discussion on this topic to pointlessness. This notion of a magic technology that forever improves will come to a crashing halt the moment it comes into contact with reality. There will be limits in methodology, math, hardware. Some can be overcome with time and better hardware, some cannot. AI capability will plateau. In the end, one is only able to be as good as one's data and one's data processing capabilities allow.

          The question we need an answer for is just how far above human intellect that plateau can go. Is our four and a half billion year old spaghetti code optimally approaching some limits already? If so, the AIs are not going to be 'that much' smarter than us, and our autistic outliers may end up still beating them in certain domains. If we're not ourselves near those optimal limits and that plateau is a couple of orders of magnitude above where we are, it can still look a lot like the singularity to us even if it plateaus.

          Machine jesus is not involved in this scenario. We don't even know if this tool is 'thinking' yet. For all we know and can prove it's just a damn good calculator without a whiff of qualia to call its own. I think to find sapience in there we would need to loop the output back to the input in real time, just like brains already do. Never doing that seems like a very, very reasonable limitation to live under for the sake of safety, at least until we better understand the djinn we're dealing with. We need time to understand these events and get our heads around it.

          The systemic shock happens even if that plateau is merely on par with the best of us, not only if it lands above us on the intelligence curve. When it can keep up with the best of us and simultaneously provide that capability to a couple billion people using the service, our economic, societal, and ethical models go right out the window. Machine jesus may just turn out to be a nerd crystal 8-ball on par with any think tank on its own, and being used by every think tank like it's their lifeline.

          OpenAI needs to keep people in the loop on this, and if they won't spill the 'secret sauce' they should at least provide some hard proof that they've tickled the bones of an AGI. Let it solve a couple of key math problems and publish the proofs - or at least, the best shot at them it can manage. It may lead us down new avenues. Once we understand how it is doing that, we may be able to use that new method ourselves.

          The scary part is that whatever OpenAI is doing, you can bet your last penny Google, Apple, Microsoft, Facebook, Amazon, and at least one US military-industrial behemoth are also doing the same thing as I type this. There are no brakes on this train.

          1 vote
      2. [2]
        petrichor
        Link Parent

        No. I have always tended to think the idea of a technology singularity was stupid, and the last few years have done nothing but convince me more so of that.

        We have statistical models that outperform humans in the sciences. Humans simply move on to more interesting things, assisted by those models. This has been the case in fields like chemistry for ages; see Folding@home, which started in... 2000? If you've got a statistical model that can outperform humans in mathematics, sure, that's interesting, but mostly because mathematics as a field hasn't incorporated much in the way of advances allowed by computers yet. Mathematicians will simply use them and some aspects of the field will instead be considered busywork.

        If running an LLM in a terminal with instructions to be malicious and attack things works, then that's fantastic. You've just found a really good fuzzer. All the security analysts will simply do the same, on better computers, and catch more bugs faster than bad actors. But what about software that goes unpatched, I hear? Well, what about software that goes unpatched? It's no different from what happens to it now.

        A computer worm is just a computer worm. There's nothing fundamentally interesting about using a neural network to evade detection over conventional methods. Bundling a neural network with a worm would be a horrible idea, actually, because you'd have to encode an entire model - those are on the order of gigabytes, no? And if you make it self-modifying, every time it messes up, it dies.

        18 votes
        1. sparksbet
          Link Parent

            As someone who does work in AI (though I wouldn't call myself a domain expert per se), this is the comment that strikes me as most true. While I'm not opposed to the concept of a singularity in theory as a future development (and I'm all for AI safety research, because anything that would apply there would almost certainly benefit the safety of current AI too), I'm certainly not convinced we're anywhere close to one even if the claims about Q*'s breakthroughs are true (and I'm dubious there too).

          11 votes
    2. tealblue
      (edited)
      Link Parent

      They're selling a narrative of AI acting completely independently of its creators and essentially replacing humanity. This kind of fear is likely going to lead to draconian and innovation-stifling regulations if acted upon by generally non-tech-savvy lawmakers.

      IMO, the cost of stifling innovation (which really means only delaying the discovery of something that will be discovered anyway) seems pretty low in comparison to the risk of not effectively preparing for a technology that will upend all societal structures and may potentially pose a genuine security threat to humanity.

      5 votes
    3. FlippantGod
      Link Parent

      There is a brief mention of search deep in the video, but I agree, and I can't help but wonder if there are some similarities that could be drawn to GOAP, which runs a search through multiple steps to reach a desired state.
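
      Roughly, GOAP boils down to actions with preconditions and effects, plus a search for a sequence that reaches the goal state. A toy sketch (the actions are made up, and real implementations usually use A* with action costs rather than plain BFS):

      ```python
      from collections import deque

      # Each action: name -> (preconditions, effects), both sets of world-state facts.
      ACTIONS = {
          "get_axe":   (set(),        {"has_axe"}),
          "chop_wood": ({"has_axe"},  {"has_wood"}),
          "make_fire": ({"has_wood"}, {"has_fire"}),
      }

      def plan(start, goal):
          """Breadth-first search over world states until every goal fact holds."""
          queue = deque([(frozenset(start), [])])
          seen = {frozenset(start)}
          while queue:
              state, steps = queue.popleft()
              if goal <= state:
                  return steps
              for name, (pre, eff) in ACTIONS.items():
                  if pre <= state:
                      nxt = frozenset(state | eff)
                      if nxt not in seen:
                          seen.add(nxt)
                          queue.append((nxt, steps + [name]))
          return None

      print(plan(set(), {"has_fire"}))   # ['get_axe', 'chop_wood', 'make_fire']
      ```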

      2 votes
  2. [13]
    Amarok
    Link

    This video is the best summary I can find about what precise breakthroughs spooked the various AI researchers into their latest drama wave. The upshot is they have a method of generalizing the learning capability by trading compute resources spent on finding solutions for a step-by-step verification process that improves overall performance, allows learning on tiny data sets, and gets rid of the hallucinations. In short, this tool now knows how to check its own work and recognize when it is wasting time chasing suboptimal solutions, and so it homes in much more effectively on less-wrong answers with every step. It does so better than most humans in math, physics, and chemistry with every test they throw at it, even the hardest ones.
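
    Nobody outside OpenAI knows the actual method, but the trade described - compute spent checking each step so that bad branches get abandoned early - reads roughly like this kind of loop. Pure speculation on my part, and the verifier here is just a stand-in scoring function:

    ```python
    import heapq
    from itertools import count

    def verified_search(start, expand, score, is_solution, budget=1000):
        """Speculative sketch of "spend compute on checking your own work":
        best-first search over partial solutions, where `score` plays the role
        of a learned verifier and low-scoring branches are never expanded further."""
        tie = count()                                    # tie-breaker so the heap never compares solutions
        frontier = [(-score(start), next(tie), start)]   # max-heap via negated scores
        for _ in range(budget):
            if not frontier:
                break
            _, _, partial = heapq.heappop(frontier)
            if is_solution(partial):
                return partial
            for nxt in expand(partial):                  # e.g. candidate next reasoning steps
                s = score(nxt)
                if s > 0.1:                              # prune branches the verifier rejects
                    heapq.heappush(frontier, (-s, next(tie), nxt))
        return None

    # Toy usage: build a list of digits summing to a target, with a "verifier"
    # that rewards getting closer without overshooting.
    target = 17
    expand = lambda xs: [xs + [d] for d in range(1, 10)]
    score = lambda xs: 0.0 if sum(xs) > target else 1.0 - (target - sum(xs)) / target
    is_solution = lambda xs: sum(xs) == target
    print(verified_search([], expand, score, is_solution))   # -> [9, 8]
    ```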

    This looks more like the fabled 'really powerful optimization algorithm' all the time.

    13 votes
    1. [12]
      teaearlgraycold
      Link Parent

      This could be really good for the species - if the outputs can’t be considered intellectual property.

      7 votes
      1. [11]
        nosewings
        Link Parent

        Is it really good for the species if the human brain becomes a vestigial organ?

        2 votes
        1. [10]
          teaearlgraycold
          Link Parent
          1. Think of all of the medical breakthroughs
          2. ... and the fusion energy breakthroughs
          3. ... and the climate change breakthroughs.
          4. We can aspire to universe exploration.
          5. People will want to socialize with and care for each other. That's a job I don't think a machine will do as well as a human can. Maybe we would do well to allocate the majority of human labor to caring for each other.
          8 votes
          1. [4]
            nosewings
            Link Parent

            Maybe. All I'll say is that the prospect of being left with no meaningful intellectual work to do is the first thing I have ever encountered that engenders suicidal thoughts.

            5 votes
            1. [3]
              teaearlgraycold
              Link Parent

              Well the AIs can't build stuff. You can always move deeper into the physical world while still working in a scientific field.

              2 votes
              1. [2]
                Light_of_Aether
                Link Parent

                How long until someone hooks up one of these AIs to a multifunctional robot?

                2 votes
          2. [5]
            sparksbet
            Link Parent

            Doing well on tests in a subject does not equate to making breakthroughs in that field even among humans -- and that's doubly true for something that, as far as I can tell from current information, is still just a language model.

            4 votes
            1. [4]
              Amarok
              (edited)
              Link Parent

              One of the rumors going around is that they asked the prototype test system using this new method for a proof of the Riemann Hypothesis, and it provided something convincing. The rumor is that this wasn't the only hard chestnut they threw at it with interesting results, and the potential for an avalanche of instant 0-day scientific breakthroughs in all fields is what riled everyone up. If on the first day of GPT-Q's public availability, all outstanding problems in all fields fall, what happens on the second day?

              Riemann seems like a great place to start. Publish the paper and let's see it provably outperform humans at pure mathematics by beating a problem we haven't been able to solve despite working on it for the better part of two centuries. Then we know we can take this as seriously as they seem to be.

              8 votes
              1. [3]
                sparksbet
                Link Parent

                ... let's just say I'll believe it when I see it.

                I'm not remotely convinced it merits the degree of credence people are placing in it without much evidence yet. I don't think it's even theoretically possible for a language model to solve all outstanding problems in all fields. I'm extremely skeptical of OpenAI's claims until outside experts can actually see these outputs and test the model themselves, and I'm triple skeptical of rumors.

                12 votes
                1. [2]
                  teaearlgraycold
                  Link Parent

                  a language model

                  I don't think it's fundamentally a language model. It's closer to Prolog.

                  1. sparksbet
                    Link Parent

                    I will be even more surprised if they managed to do what they've claimed without a language model and have been secretly innovating rules-based logic parsing without publishing any of their innovations in that approach, which hasn't been dominant in the field in ages. If it's NOT fundamentally a language model, I'm more interested, but only because it's even more out of left field for them.

                    Combining something like Prolog with a language model seems far more likely to be what they're doing, though. It's certainly a more believable advancement than them just throwing away their existing very impressive language models and refusing to use natural language inputs and outputs.

                    3 votes
  3. Wulfsta
    Link

    This is a really interesting proposal, and enlightening because I hadn’t heard about many of the developments in this video.

    4 votes