21 votes

Is OpenAI today’s Netscape? Or is it AOL?

6 comments

  1. [4]
    skybrian
    Link

    An important difference is that AOL was targeted towards consumers only, while all the big AI companies have APIs, allowing other companies to build on them. There do seem to be some successful companies going after specialized markets, like Harvey for law and OpenEvidence for doctors.

    These companies could switch to a different LLM when a new model appears, either because it’s better or because it’s cheaper.

    I suspect that, at least for businesses, using a general-purpose chatbot for specialized queries isn’t going to last.

    2 votes
    1. [3]
      Eji1700
      Link Parent

      I think smarter companies are using this time to "cache" their AI output. Take the most common requests/results, look at them, hard-code those, and congrats, you're no longer using "AI", or more accurately LLMs, for those requests.
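
      As a purely illustrative sketch of that "hard-code the common stuff" idea (the canned answers and the llm_call helper are made up):

      ```python
      # Hypothetical sketch: serve the most common, already-reviewed requests from a
      # hand-maintained table, and only fall back to the LLM for the long tail.
      CANNED_ANSWERS = {
          "reset my password": "Go to Settings > Security > Reset password.",
          "export my data": "Use the Export button on the Account page.",
      }

      def handle_request(text: str, llm_call) -> str:
          key = text.strip().lower()
          if key in CANNED_ANSWERS:      # reviewed once, hard-coded, no LLM involved
              return CANNED_ANSWERS[key]
          return llm_call(text)          # long-tail requests still go to the model
      ```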

      It leads me to wonder at what point the AI companies are going to turn their APIs into "black box" models where you just have them process on their side and don't get to see the output, so they can keep the reliance.

      1 vote
      1. [2]
        skybrian
        Link Parent

        I doubt that caching requests would work very well because requests are rarely exactly the same. A better approach might be to improve their documentation so that an LLM is needed less often. Having better documentation can also improve LLM query results, because LLMs can search the documentation too.

        They should also be coming up with their own benchmarks that they can use to see how well LLMs do on the problems they care about. Also, maybe collect training data to do their own fine-tuning, using an open-weights LLM.
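
        A minimal sketch of what such an in-house benchmark could look like (the test cases and the ask_llm callable are placeholders):

        ```python
        # Hypothetical in-house benchmark: a few cases the business actually cares
        # about, scored the same way against every candidate model.
        CASES = [
            {"prompt": "Which plan includes SSO?", "expected": "Enterprise"},
            {"prompt": "What is the refund window?", "expected": "30 days"},
        ]

        def score(ask_llm) -> float:
            hits = sum(1 for c in CASES
                       if c["expected"].lower() in ask_llm(c["prompt"]).lower())
            return hits / len(CASES)  # fraction of cases the model answers correctly
        ```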

        4 votes
        1. Eji1700
          Link Parent

          That's kinda why I had "cache" in quotes, as I realize it's more akin to what you're suggesting, although it depends a bit on what the LLM is being used for.

          You might improve docs, expand a library, etc. Hell, I'm seeing cases where they have an AI generating API calls EVERY TIME THEY NEED THEM, when it would take a single developer to say "hey, let me just save that code and execute it next time we need this". We're talking cases where "do this process" is the entire input, so it would be trivial to map.
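
          A rough sketch of that "generate once, save it, reuse it" idea (the file name and the generate_api_call_code helper are hypothetical):

          ```python
          import os

          SAVED_SCRIPT = "do_this_process.py"  # hypothetical path for the reviewed, saved code

          def run_process(generate_api_call_code):
              if not os.path.exists(SAVED_SCRIPT):
                  code = generate_api_call_code("do this process")  # the LLM runs exactly once
                  # a developer reviews `code` before it is saved and committed
                  with open(SAVED_SCRIPT, "w") as f:
                      f.write(code)
              os.system(f"python {SAVED_SCRIPT}")  # later runs reuse the saved code, no LLM call
          ```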

          2 votes
  2. [2]
    EgoEimi
    Link

    > Certainly the tech industry knows about this problem – and it has devised a solution: Agents. The next wave of AI innovation centers on “the agentic web,” with personalized agents that will do our bidding in every imaginable way. Every major AI company has announced agentic products, but unfortunately, they don’t work, because the ecosystem in which they operate is hostile to their success.

    There's one more solution: MCPs. They actually work really well in empowering LLMs to exit their silos — but really only for developer applications: referencing the latest documentation for a certain tech, getting designs from Figma, getting and updating tasks and tickets, etc.
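
    For a sense of what that looks like, here's a minimal MCP server sketch using the Python mcp SDK's FastMCP helper (the server name and the stubbed tool are made up, and this follows the SDK's quickstart pattern, so check the current docs):

    ```python
    # Minimal MCP server sketch: exposes a single tool that an LLM client can call.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("ticket-tracker")  # hypothetical server name

    @mcp.tool()
    def get_ticket(ticket_id: str) -> str:
        """Look up a ticket by id (stubbed out for this example)."""
        return f"Ticket {ticket_id}: status=open, assignee=unassigned"

    if __name__ == "__main__":
        mcp.run()  # serves over stdio so an MCP client (IDE assistant, chatbot) can connect
    ```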

    Unfortunately, they're a bit too 'technical' for consumers.

    2 votes
    1. skybrian
      Link Parent

      Sometimes companies will provide web APIs that customers can use to perform actions automatically. With LLMs and MCPs, there is more incentive to do this. Also, a nice thing about MCP is that the company can just update the API and the LLMs will adapt; they don't need to maintain backward compatibility to the same extent as with traditional programming.

      It seems promising when everyone cooperates, but often they don't. An example of a query that most chatbots fail on is "summarize this YouTube video." I've found that the best way is to go to the YouTube page, press the "Ask" button, then "Summarize this video." So, Google can do it, but there's no API for this.

      2 votes