16 votes

The technological singularity may have already happened, and perhaps bitcoin is the result

29 comments

  1. Macil
    (edited )
    Link

    I think the idea in the article is a very useful way to think about the potential danger of artificial general intelligence: a superintelligent AI could manipulate humans into helping it for short-term gains. An AI with billions of dollars and millions of people voluntarily doing its bidding for self-interested profit would be unstoppable. Cryptocurrency is a great example of something that perpetuates itself through helping the participants profit and would be hard to shut down.

    The article has an awkward framing in the first part where it suggests an AI could be benefiting from Bitcoin's mining specifically. There's no ulterior motive to mining; the data generated from it and the hardware built for it are absolutely useless outside of the context of cryptocurrency. The article is wise to quickly shift its focus from the possibility of someone gaining from the processing power to just the profit that came from it for the creator and the participants of the system.

    It's significant that the first and most widely adopted cryptocurrency uses a computationally expensive proof-of-work (https://en.wikipedia.org/wiki/Proof_of_work) system instead of a more energy-efficient proof-of-stake algorithm. For all of Satoshi's genius, couldn't he have predicted the shortcomings of proof-of-work systems and seen that incentivizing processing power would lead to wasteful electricity usage? Bitcoin mining uses more electricity than many countries.

    At the time, proof-of-work was the only known way to accomplish a decentralized cryptocurrency. The question is like asking "why did we invent the internal combustion engine instead of making electric cars?" or "why did we build coal power plants instead of nuclear plants?". Proof-of-stake has been researched extensively over the past decade and has only become ready in the past few years. It's much more complicated to design than proof-of-work.
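
    To make the wastefulness concrete, here's a toy sketch of the proof-of-work loop in Python (illustrative only; real Bitcoin hashes an 80-byte block header with double SHA-256 against a 256-bit target): the only way to win is brute-force guessing, so security is bought directly with burned compute.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(block_data || nonce) falls below
    a target with `difficulty_bits` leading zero bits. Toy version of
    Bitcoin's proof-of-work (which actually uses double SHA-256 over an
    80-byte header and encodes the target in the 'bits' field)."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # proof found; anyone can verify it with one hash
        nonce += 1

# Each extra difficulty bit doubles the expected number of hashes (and the
# electricity) needed, which is exactly the incentive problem described above.
nonce = mine(b"example block", difficulty_bits=12)
```

    Verification takes a single hash, but finding the nonce takes on average 2^difficulty_bits hashes; the network's security budget is literally its electricity bill.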

    The simple conclusion to take from this is that the fact Bitcoin wasn't designed as proof-of-stake from the start is evidence that the designer was not a superintelligent AI. Assuming that proof-of-work was chosen because there's an ulterior motive to mining only makes sense if you already presume the designer was a superintelligent AI, so you can't then circularly use the possibility of an ulterior motive to mining as evidence that the designer was a superintelligent AI.

    Even though faster, cheaper, and more energy-efficient cryptocurrencies exist, bitcoin shows no signs of slowing or losing dominance. There are ways to get all the benefits of crypto while using much less electricity. We know how, and we are completely capable of switching to crypto that is less wasteful and abandoning bitcoin as obsolete. But those alternate crypto coins struggle to get a fraction as much attention as bitcoin, because bitcoin came first, and bitcoin is where the most money is, and it probably always will be. It has exploited human greed to ensure its survival at the expense of ours.

    I think there's hope: Ethereum, the second most popular cryptocurrency and the blockchain that powers nearly all popular cryptocurrency tokens, is working on switching to proof-of-stake. Ethereum has much more development, more features, and more interoperability than Bitcoin. It's possible that it could overtake Bitcoin. Or maybe Ethereum completing its switch to PoS will finally kick the Bitcoin community into catching up.

    If Satoshi is one person, how has he seemingly resisted the urge to cash in on his fortune?

    Satoshi may have been a well-employed programmer to begin with, and he probably had other cryptocurrency wallets besides the known ones, which he has already sold coins from. He probably has more money than he knows what to do with without needing to reach into the publicly-known addresses.

    18 votes
  2. [3]
    teaearlgraycold
    Link

    This feels like one of the ideas I might get while high. It feels so revelatory, so mind-blowing at the time. I write these ideas down and then look at them when sober. 90% of them end up being garbage (here is one of the 10% that aren't). Any sober person can see that you need too much suspension of disbelief to appreciate them.

    So this article could be the start of a fun sci-fi story. But it's not any good here in ~tech.

    9 votes
    1. [2]
      pseudochron
      Link Parent

      While the ideas that the technological singularity may have already happened, and that Satoshi might not be human, are pretty out there, the more important point is that we don't know and have no way of knowing. The thought experiments are interesting regardless of whether they are true. Some people theorize that a technological singularity is inevitable. If it happens, how and when would we know?

      Satoshi Nakamoto, the billionaire creator of a widely used decentralized technology, is either:

      • a once-in-a-generation genius who has been able to remain anonymous for over a decade, and is now either in hiding or dead
      • a secret group of mathematicians and scientists who have, for unknown reasons, released this technology into the world and then seemingly abandoned it
      • something else that we cannot guess or comprehend

      So this article could be the start of a fun sci-fi story.

      Hundreds of thousands of people are building very powerful, expensive, specialized computers to run code that mysteriously appeared on the internet a decade ago, and no one is really sure who created it. There is a person on the top 30 richest billionaires list who has never been identified and may not be a person at all.

      That sounds like the start of a sci-fi story. But it is 100% reality.

      7 votes
      1. reifyresonance
        Link Parent

        You're forgetting my favorite sci-fi fantasy about Satoshi - that the whitepaper was sent back through time! It just seems like such a quantum leap forward. I admit I spent a little while checking whether it was in MY (possibly future) writing style, to see if I'd be the one who gets sent back and realizes they have to invent bitcoin in order to fulfill the time loop (of course, this doesn't rule out the possibility of me memorizing it or something...).

        There's no way to know if that's what happened! But I think it's pretty unlikely, given the significant improvements other coins like Monero have developed over such a short timeframe... why would they not send back a more developed idea?

  3. [19]
    onyxleopard
    Link

    It's a fun notion to consider, but AFAICT, the kind of FPGAs useful for mining BTC are not optimal for anything other than deterministic hashing. I suppose maybe some sort of superintelligent AI might have figured out a way to use purpose-built FPGAs for general-purpose computation, but I'm not seeing how that would be optimal. My personal pet theory (which I have no evidence to support one way or another) is that Satoshi Nakamoto is a state intelligence agency that wanted a lot of cryptographic, internet-connected compute power to be built out (at others' expense) so that they could then compromise the mining pools and use all the compute for their own cryptographic purposes. Making some bucks as a side-effect was just a bonus.

    5 votes
    1. [16]
      pseudochron
      Link Parent

      It's not necessarily the case that the computing power is useful to the AI, either. But it keeps humans focused on the task of spreading the code, running it on the greatest number of devices, and keeping the price high. What if the AI was given the task: spread yourself as far as possible? Or if it was tasked with making as much money as possible?

      3 votes
      1. [15]
        onyxleopard
        Link Parent

        I buy that it might be a good way to make as much money as possible, but it doesn’t seem like it has succeeded in copying itself as much as possible. I’d guess that the most abundantly copied software on the planet is likely to be some sort of bog standard firmware for commodity networking or I/O hardware.

        I’m not super intelligent, so obviously, I can’t outsmart a superintelligence or begin to comprehend its motives. But, if I put myself in the shoes of some sort of blue sky optimizer for an AI, I think creating a decentralized cryptocurrency is not useful in itself. I still think it must be a side effect. If the goal was just to make as much money as possible, then I would argue that this is sort of like the paperclip maximizer, which I would critique as not actually being generally intelligent—if an AI cannot look at its own reward function(s) and realize how stupid the goal is, then I don’t think it is very intelligent at all.

        And given the clarity and rigor of the BTC whitepaper, I don’t think it could have been the product of a narrow AI. So I’m left to conclude that BTC was an anthropogenic project, or the singularity event that created it is incomprehensible to my mortal mind. That remains a possibility to me, but at the same time, what has it been doing in the meantime?

        2 votes
        1. [13]
          Macil
          Link Parent

          if an AI cannot look at its own reward function(s) and realize how stupid the goal is, then I don’t think it is very intelligent indeed.

          An AI can't realize its reward function is bad: bad at achieving what? The reward function is the scoring rubric that the AI uses to check how good its plans are.

          If you're ever imagining a scenario where a capable AI with a "more paperclips = more good" reward function realizes that human flourishing is more important than paperclips and decides to use less of the world's resources on paperclips, then what you're actually imagining is an AI that has a reward function that was somehow built to prioritize balancing both paperclips and human flourishing (or some general concept that encompasses both), or an AI that somehow doesn't follow the reward function framework and works in some other way.

          But stepping back a moment: making as much money as possible would be a very useful instrumental value as it would help someone toward almost any goal. Just like humans, any AI that might need to barter with humans for things it wants would have a strong desire to make a lot of money.
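
          To illustrate the point with a toy sketch (hypothetical names, not from any real AI system): a planner is just a search for the plan its reward function scores highest, and there is no vantage point inside that loop from which the rubric itself could be judged.

```python
def choose_plan(plans, reward):
    """A planner in the reward-function framework: score every candidate
    plan with the reward function and pick the best. The reward function
    is the rubric, so the agent has nothing else to judge the rubric by."""
    return max(plans, key=reward)

plans = ["make more paperclips", "protect human flourishing", "do nothing"]

# A "more paperclips = more good" rubric, as in the thought experiment.
def paperclip_reward(plan: str) -> float:
    return 1.0 if "paperclips" in plan else 0.0

best = choose_plan(plans, paperclip_reward)  # picks the paperclip plan
```

          Swapping in a different reward function changes the choice, but the swap has to come from outside the agent's own optimization loop.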

          6 votes
          1. [12]
            onyxleopard
            Link Parent

            If you're ever imagining a scenario where a capable AI with a "more paperclips = more good" reward function realizes that human flourishing is more important than paperclips and decides to use less of the world's resources on paperclips, then what you're actually imagining is an AI that has a reward function that was somehow built to prioritize balancing both paperclips and human flourishing (or some general concept that encompasses both), or an AI that somehow doesn't follow the reward function framework and works in some other way.

            I’m not saying an agent should balance paperclip maximization against maximizing anything good for humans. I’m saying that any truly intelligent agent would be able to assess its reward function relative to the space of all possible reward functions and choose one that suits it. Any agent that has a fixed reward function is a narrow AI in my book (including paperclip maximizers, even if they are highly optimized).

            1 vote
            1. [11]
              Macil
              (edited )
              Link Parent

              What you call a narrow AI could still become arbitrarily powerful and arbitrarily capable of understanding the world. The Orthogonality Thesis is the idea that a mind can have any combination of goals and capability to understand the world, and it is valid for certain studied models of AI. (I'm avoiding the word intelligence here because you're using it to mean something specific. ... Funnily enough, after writing this, while looking for stuff about the Orthogonality Thesis, I found a comment with an argument that used the same strategy.)


              I think your idea of a fixed reward function is unnecessarily restrictive. Humans may have a fixed reward function too; it's just built around a vague concept that happens to be somewhat correlated with staying alive, reproducing, and social validation. The reward function may be hard to use directly, so we consciously come up with heuristics and instrumental values that approximate it and are easier to use. When humans reconsider their values or get convinced by others of some values, we're just swapping which heuristics and instrumental values we use. But we still try to judge them by our unchanging reward function.

              The underlying reward function of humans may be pretty arbitrary. A designed AI or an alien mind could have something fundamentally different. The general tendency of humans to prefer to have societies of many individuals might be something arbitrary but fundamentally human. A mind could just as easily have a reward function that gives it a tendency to prefer the idea of a world where it's alone and the world is wholly predictable. Think of how many anti-social animals exist; maybe they'd think that way if they had more capability to think about the world, and they'd think our preference for societies was some kind of wasteful arbitrary compulsion. An alien civilization might prefer to do things we think of as arbitrary compulsions. (I love this short story about pebble-sorters.)

              3 votes
              1. [10]
                onyxleopard
                Link Parent

                I mentioned the paperclip maximizer exactly because I don’t agree with the Orthogonality Thesis. I subscribe to Schmidhuber’s conception of intelligence and artificial curiosity.

                4 votes
                1. [7]
                  Adys
                  Link Parent

                  Whenever the subject of paperclip maximizers surfaces, I bring up the idea that Facebook is a paperclip maximizer - engagement at all costs.

                  When I first proposed the idea it sounded scary and outlandish. Now ... Just scary.

                  3 votes
                  1. [3]
                    onyxleopard
                    Link Parent

                    See, this is part of why I don’t like the semantic chicanery of swapping out intelligence for efficiency at one particular skill. I think anyone who thinks that Facebook, as an entity, is intelligent is misusing the word. I think Chollet is on track for a better formalization of intelligence as skill-acquisition efficiency:

                    We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience, as critical pieces to be accounted for in characterizing intelligent systems.

                    AFAIK, we’re still far, far away from any AI that exhibits generalized skill-acquisition efficiency.

                    6 votes
                    1. [2]
                      Adys
                      Link Parent

                      It's more of a thought experiment, to be clear. I agree with your definition, and personally, the fact that I can't tell you with a straight face and even reasonable certainty that Facebook is NOT a paperclip maximizer (even by its strictest definitions) is super scary.

                      1. onyxleopard
                        Link Parent

                        Yeah, I don’t disagree that Facebook exhibits the qualities of maximizing engagement, and that it has achieved remarkable efficiency at it. My only disagreement is with equating specific skill maximization with general intelligence.

                  2. [3]
                    petrichor
                    Link Parent

                    But it's really maximizing for profit, and engagement comes as a result of someone thinking that's the best route, no? Facebook is a company controlled by humans with human desires.

                    1. [2]
                      Adys
                      Link Parent

                      Aren't we just an aggregate of neurons and sensors?

                      Things can get pretty philosophical. Aren't companies themselves a form of artificial intelligence?

                      1. petrichor
                        Link Parent

                        Sure, but then you run into the problem that no company is ever going to optimize for paperclips, as long as you hold that companies are controlled by humans.

                        I suspect I just don't really like the Orthogonality Thesis and am arguing in kinda bad faith from there.

                2. [2]
                  Macil
                  (edited )
                  Link Parent

                  I wouldn't find it too surprising if there are models for AI where the Orthogonality Thesis doesn't hold; maybe that's even the case for all possible human-level AI models that are practical for us to make. When you said that a narrow-focused AI wouldn't be intelligent, I thought you were trying to imply some anthropomorphic bias and conflate meanings (like "intelligent people value many things, so an intelligent AI would value many things by definition"), but if you're saying that you don't think an intelligent narrow-focused AI would be realistically possible for us to make, then I think it's a bit presumptuous, but I can see where you're coming from now.

                  Regardless of orthogonality, I think there's still a sense in which AIs following Schmidhuber's idea have a fixed reward function (probably related to how they define interestingness) that they would generally prefer not to change. I think this is distinct from whatever pleasure sense the AI has. An AI or a human might modify their mind to change what sensations they get pleasure from, but there's a definition of interestingness in their minds that they wouldn't want to change; the idea of changing that part risks damning oneself to a shallow wireheaded life. Hmm, maybe paperclip maximizers are bad examples for talking about reward functions because there are no separate analogs for interestingness and pleasure in them to compare with. This is going to make me think for a while about how I conceive of reward functions. ... I'm getting increasingly convinced that the only practical reward function is a definition of interestingness. Maybe you've sold me on the rejection of the orthogonality thesis.

                  3 votes
                  1. onyxleopard
                    Link Parent

                    I'm getting increasingly convinced that the only practical reward function is a definition of interestingness. Maybe you've sold me on the rejection of the orthogonality thesis.

                    I'd be happy to entertain a refutation if you come up with one!

                    1 vote
        2. petrichor
          Link Parent

          I buy that it might be a good way to make as much money as possible, but it doesn’t seem like it has succeeded in copying itself as much as possible. I’d guess that the most abundantly copied software on the planet is likely to be some sort of bog standard firmware for commodity networking or I/O hardware.

          What's interesting to me is that Bitcoin has direct financial incentives associated with spreading its code. It's like a reward system for humans.

    2. [2]
      Macil
      Link Parent

      so that they could then compromise the mining pools and use all the compute for their own cryptographic purposes

      But you said it yourself: mining hardware isn't good for anything but mining. Mining hardware is usually just a bunch of chips designed for doing double-SHA256. Hardware for doing double-SHA256 might be good for cracking password hashes that use double-SHA256, but it's not commonly used for that (especially because plain hash algorithms are terrible password hashing algorithms). And the specific ways that mining hardware is made and mining pool protocols work aren't usable at all for password cracking.

      Mining doesn't need any more explanation than the fact that proof-of-work was the only way anyone knew at the time to make a decentralized cryptocurrency.
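
      Concretely, the one function mining hardware computes looks like this (a Python sketch, checked against the well-known genesis block header):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """SHA-256 applied twice: the entire job of a Bitcoin mining chip."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# The 80-byte genesis block header (version, prev-hash, merkle root,
# time, bits, nonce), all fields little-endian as Bitcoin serializes them.
header = bytes.fromhex(
    "01000000" + "00" * 32
    + "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"
    + "29ab5f49" + "ffff001d" + "1dac2b7c"
)
# Hash it, byte-reverse for display, and you get the famous genesis block ID.
block_id = double_sha256(header)[::-1].hex()
# → "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"
```

      Everything a mining rig does is re-run this function with different nonces; there's nothing general-purpose to repurpose.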

      2 votes
      1. onyxleopard
        Link Parent

        Oh, I figured they would use it for some sort of distributed Merkle tree very similar or identical to BTC itself, not necessarily cracking old crypto keys.
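
        For reference, the Merkle tree at the heart of BTC is a simple structure; here's a toy Python sketch (Bitcoin's consensus rules add serialization details this ignores):

```python
import hashlib

def dsha(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash the leaves, then repeatedly hash adjacent pairs until a single
    root remains. Like Bitcoin, an odd-length level duplicates its last
    node. Changing any leaf changes the root, which is what lets a block
    header commit to every transaction with one 32-byte value."""
    level = [dsha(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node, as Bitcoin does
        level = [dsha(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"tx1", b"tx2", b"tx3"])
```

        A single-leaf tree's root is just the double hash of that leaf, and any change to a leaf cascades up to a different root.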

  4. moocow1452
    Link

    I guess the Bitcoin blockchain could be a secret Skynet/Ultron in that it propagated itself and optimized its growth around human behavior, but it would be about as intelligent as a cold virus: not very, just dealt a good hand.

    4 votes
  5. [2]
    skybrian
    Link

    It seems like this topic should be moved to ~creative?

    4 votes
    1. cfabbro
      (edited )
      Link Parent

      I think it being in ~tech is probably fine given the nature of the subject matter, but if you can think of any more topic tags that you feel would help clarify the type of content this is, let me know and I can add them. I have added 'thought experiments' and 'speculative fiction' for now, since those are the closest I could think of.

      3 votes
  6. jcdl
    Link

    Pretty spooky idea. The fact that mining itself isn’t useful computation doesn’t seem too important. All the AI needed was hard-to-trace capital as fast as possible to kick off a chain of butterfly-effect events. The world the AI was born into answered only to the god of money, so it gave itself a lot of money.

    Where the story goes from there, only the AI knows.

    1 vote
  7. elcuello
    Link

    Great read and great thought-provoking comments here.

    1 vote
  8. pseudochron
    Link

    Ok, I'm working on the first draft of a follow-up article. It's verging on becoming a collaborative sci-fi creative writing exercise at a few points, but there is also some serious factual tech stuff.

    Musings on Satoshi, part II
    in which I respond to comments on the previous article
    http://disciples.technoslug.org/satoshi2.htm
    Cryptocurrency growth has become uncontrollable and irreversible, resulting in unforeseeable changes to human civilization. Does it meet the definition of a technological singularity?