9 votes

The messy, secretive reality behind OpenAI’s bid to save the world

2 comments

  1. Death

    A very long read, but I think it tells an important story, both of the difficulties inherent in chasing these world-changing technologies and of managing such a company with an altruistic goal within a system that does not necessarily reward altruism.

    The parts about the GPT-2 limited release particularly spoke to me. I was one of the people offended by OpenAI's secretiveness surrounding the initial release; I felt it went against the company's initial promises of openness, and that jealously guarding research instead of sharing it and letting other researchers tackle the issue of AI disinformation wasn't a good way forward. Within my peer group people disagreed with this position, and I can't say it wasn't partly motivated by a gut feeling that the release was in part a publicity stunt to drum up more investment, so I figured I'd let myself be proven wrong. After reading this article, I'm still not sure I actually was.

    Another part that stood out to me:

    There are two prevailing technical theories about what it will take to reach AGI. In one, all the necessary techniques already exist; it’s just a matter of figuring out how to scale and assemble them. In the other, there needs to be an entirely new paradigm; deep learning, the current dominant technique in AI, won’t be enough.

    Most researchers fall somewhere between these extremes, but OpenAI has consistently sat almost exclusively on the scale-and-assemble end of the spectrum. Most of its breakthroughs have been the product of sinking dramatically greater computational resources into technical innovations developed in other labs.

    When GPT-2 was finally released in full, there seemed to be a weird lack of enthusiasm amongst the peers and academics around me. I didn't have time to dive deep into the code, but I had a sneaking suspicion that the model wasn't so much a leap forward in terms of paradigm shifts or improvements on current research as an incredibly luxurious and large implementation of existing techniques. That doesn't make the model worthless or poorly designed, but it's weird to see another gut feeling of mine turn out to be partly right after I had tried so hard to tell myself I was just missing something and letting my own biases get in the way of properly appreciating the work.

    I still kind of hope I'm just reading too much into this. And maybe OpenAI really will produce the kind of astronomical developments needed to pull the field ahead.

    5 votes
  2. skybrian
    (edited)

    It doesn't seem like there was much new in this article? It's more of a recap of recent history: they have a lot of money and ambition, the culture is a little culty, and they're not sure what they're doing, so they'll try stuff.

    If they did find something interesting, they would have to decide whom to tell about it and how to communicate it. Just publishing it on the Internet would be irresponsible, so they're figuring out how to keep things secret and do some kind of more gradual sharing. That's awkward for anyone used to a culture of openness. Maybe it's a bit premature, since they haven't come up with anything that interesting yet, but they're thinking ahead?

    1 vote