29 votes

OpenAI is a bad business

28 comments

  1. [4]
    tauon
    Link

    After recent discussion on how AI companies may use your data for profit, here is a breakdown of OpenAI’s financials… And while I knew they weren’t profitable yet, I wasn’t aware it was quite that bad; I just had a guesstimate feeling of “sure, every aspect of LLMs is expensive.”

    It’s a great read, but quite long if you’re only “adjacently interested” in the topic, so here are some excerpts. Each separate quotation block means I left out a […] for the sake of reading flow:

    OpenAI's monthly revenue hit $300 million in August, and the company expects to make $3.7 billion in revenue this year (the company will, as mentioned, lose $5 billion anyway), yet the company says that it expects to make $11.6 billion in 2025 and $100 billion by 2029, a statement so egregious that I am surprised it's not some kind of financial crime to say it out loud. For some context, Microsoft makes about $250 billion a year, Google about $300 billion a year, and Apple about $400 billion a year. To be abundantly clear, as it stands, OpenAI currently spends $2.35 to make $1.

    (double emphasis not mine, but accurate)
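
    (Quick sanity check on that $2.35 figure, using only the numbers quoted above: $3.7 billion in revenue plus a $5 billion loss implies roughly $8.7 billion in total spend.)

    ```python
    # Back-of-the-envelope check of the "$2.35 to make $1" claim,
    # using only the figures quoted above.
    revenue = 3.7e9          # projected 2024 revenue
    loss = 5.0e9             # projected 2024 loss
    costs = revenue + loss   # ~$8.7B total spend
    print(f"spent per $1 of revenue: ${costs / revenue:.2f}")  # -> $2.35
    ```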

    Collectively, this means that OpenAI — the most popular company in the industry — can only convert about 3% of its users.
    While there's a chance that OpenAI could have a chunk of users that aren't particularly active, one cannot run a business based on selling stuff you hope that people won't use.
    OpenAI's primary revenue source is one of the most easily-commoditized things in the world — a Large Language Model in a web browser — and its competitor is Mark Zuckerberg, a petty king with a huge warchest that can never, ever be fired, even with significant investor pressure. Even if that wasn't the case, the premium product that OpenAI sells is far from endearing, still looking for a killer app a year-and-a-half into its existence, with its biggest competitor being the free version of ChatGPT.
    And so, [OpenAI] has two options [for the necessary growth]: Either it relies on partnerships and external sales channels, allowing it to potentially increase the gross number of customers, but at the expense of the money it makes, or it can build a proper sales and marketing team.
    Both options kinda suck. The latter option also promises to be expensive, costly, and has no guarantees of success.
    Let’s go back to Twilio — a company that makes it easy to send SMS messages and push notifications. Over the past quarter, it made around $1bn in revenue. That’s what OpenAI made from renting out its models/APIs over the past year. Twilio also made roughly $4bn over the past four quarters — which is more than OpenAI’s projected revenue for the entirety of 2024. OpenAI, I remind you, is the most hyped company in tech right now, and it’s aiming for a $150bn valuation. Twilio’s market cap is, at the time of writing, just under $10bn.

    And I cannot express enough how bad a sign it is that its cloud business is so thin. The largest player in the supposedly most important industry ever can only scrounge together $1 billion in annual revenue selling access to the most well-known model in the industry. This suggests a fundamental weakness in the revenue model behind GPT, as well as a fundamental weakness in the generative artificial intelligence market writ large. If OpenAI cannot make more than a billion dollars of revenue off of this, then it’s fair to assume that there is either a lack of interest from developers or a lack of interest from the consumers those developers are serving.
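
    (And the same exercise for the 3% conversion claim. The paying-subscriber count below is my ballpark assumption, not a number from these excerpts; the article works from figures in this neighborhood.)

    ```python
    # Rough conversion-rate check; the paying-user count is an assumption.
    paying_users = 10e6   # assumed ~10M subscribers
    free_users = 350e6    # the free-user count cited later in the article
    print(f"conversion: {paying_users / (paying_users + free_users):.1%}")  # ~2.8%
    ```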

    Around the halfway mark, before some of the above had even been mentioned, the thing that spoke to me the most:

    A note on “free” products: Some of you may suggest that OpenAI having 350 million free users may be a good sign, likely comparing it to the early days of Facebook, or Google. It’s really important to note how different ChatGPT is to those products. While Facebook and Google had cloud infrastructure costs, they were dramatically lower than OpenAI’s, and both Facebook and Google had (and have) immediate ways to monetize free users.

    Both Meta and Google monetize free users through advertising that is informed by their actions on the platform, which involves the user continually feeding the company information about their preferences based on their browsing habits across their platforms. As a result, a “free” user is quite valuable to these companies, and becomes more so as they interact with the platform more.

    This isn’t really the case with OpenAI. Each free user of ChatGPT is, at best, a person that can be converted into a paying user. While OpenAI can use their inputs as potential training data, that’s infinitesimal value compared to operating costs. Unlike Facebook and Google, ChatGPT’s most frequent free users actually become less valuable over time, and become a burden on a system that already burns money.

    A business model whose costs scale with the freemium user count is nothing new, but it seems their per-free-user cost is higher than the expected revenue a paying conversion will bring in, and that's definitely not how Big Tech got big. At other tech companies, adding a user costs almost nothing marginal (some storage and computation); here, the net value of adding a new user is a loss. (There’s a neat back-of-the-envelope calculation in the blog post comments comparing OpenAI to Dropbox’s free users.)
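
    Here’s a minimal sketch of that unit-economics argument; every number below is made up purely for illustration:

    ```python
    # Illustrative freemium unit economics. All cost figures are hypothetical.
    conversion_rate = 0.03   # ~3% of users convert, per the article
    monthly_sub = 20.00      # ChatGPT Plus price
    cost_paid_user = 10.00   # hypothetical monthly inference cost, paying user
    cost_free_user = 1.00    # hypothetical monthly inference cost, free user

    # Expected monthly value of one marginal signup:
    expected_revenue = conversion_rate * monthly_sub
    expected_cost = (conversion_rate * cost_paid_user
                     + (1 - conversion_rate) * cost_free_user)
    print(f"net value per new user: ${expected_revenue - expected_cost:.2f}")
    # With these made-up costs the result is negative: each new user is a loss.
    ```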

    This business is a fundamentally different one, even if the product/service being sold had already proven itself over time in a big market.
    Once the hype starts to cool down (not die off – just relax a little), I’m curious to see what will happen to all the funding that they so desperately need.

    30 votes
    1. [3]
      updawg
      Link Parent
      • Exemplary

      a fundamental weakness in the generative artificial intelligence market writ large

      WRIT LARGE IS NOT A FANCY, PRETENTIOUS WAY TO SAY "AT LARGE;" IT IS A FANCY, PRETENTIOUS WAY TO SAY SOMETHING IS A MICROCOSM.

      The correctly pretentious way to express this would be "This suggests the fundamental weakness in the revenue model behind GPT is a microcosm of the general AI market writ large," or perhaps "The fundamental weakness in GPT's revenue model, writ large across the entire generative AI market, reveals an industry-wide vulnerability."

      I knew you didn't write it, but I recently swore I'd scream or kill someone or something the next time I heard that error.

      17 votes
      1. [2]
        redbearsam
        Link Parent

        I had to look this phrase up again because I remembered that misusing it was someone on Tildes’ pet peeve from a couple of days back, and I'd felt it was misused here.

        Nice to scroll down and be proven so very very right 😂

        Your words clearly haunt me, updawg. Well done.

        1 vote
        1. updawg
          Link Parent

          Thank you lol I only even learned what the phrase actually means from looking it up after hearing people use it so often at work and getting sick of saying to myself "I feel like that's wrong..."

  2. [20]
    papasquat
    Link

    The problem with all of these AI companies is that they really have no differentiators. It's pretty well understood how to build an LLM now, and while I won't say the barrier to entry is low, any company with the resources can clear it at this point.

    Adding to that, there's no particular brand loyalty. A lot of companies could make a phone similar to an iPhone, but it's not an iPhone, so Apple fans won't buy it. Same goes for Ferraris, or Bose headphones, or really, most very successful products. ChatGPT is the same thing as Copilot is the same thing as Claude. They're all virtually identical in their capabilities, and even if they're not at a given moment in time, they very quickly achieve parity.

    Most importantly, people are starting to realize that this stuff isn't actually all that useful. Once everyone got over the initial awe phase, most people realized that they were just using these things as expensive search engines. You can't actually use them to write anything for you, because their output is immediately detectable by anyone with a little exposure to them; the code they write is bad and needs to be massaged and edited by hand afterwards; the pictures they draw are uncanny and cheap-looking. I have yet to see an actual use case, either in business or personal use, that is truly compelling for an LLM, so I don't see how, in their current iteration, they could ever be a successful product given the gargantuan cost it takes to develop and operate them.

    22 votes
    1. [11]
      Minori
      Link Parent

      I have yet to see an actual use case either in business or personal use that is truly compelling for an LLM

      I use LLMs at work, but they're really marginal. They're decent, if sometimes poor, text summarizers, so there's an internal tool that basically generates a summary section from something like a GitHub issue page. Engineers love the tool because they don't have to write pointless summaries for leadership anymore. I'm not sure I'd describe it as "truly compelling", but it did automate a frequent, boring task.
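
      For the curious, a minimal sketch of what such a tool might look like, assuming the OpenAI Python SDK; the prompt and model choice are my guesses, not the actual internal tool:

      ```python
      # Hypothetical issue-summarizer sketch; prompt and model are guesses.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      def summarize_issue(issue_text: str) -> str:
          """Condense a GitHub issue thread into a leadership-friendly summary."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[
                  {"role": "system",
                   "content": "Summarize this GitHub issue in 3-5 plain-language "
                              "bullet points for a non-technical audience."},
                  {"role": "user", "content": issue_text},
              ],
          )
          return response.choices[0].message.content
      ```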

      12 votes
      1. [6]
        DefinitelyNotAFae
        Link Parent

        Similarly I've seen some handy "break this task into smaller tasks" tools out there that can be good for neurodivergent folks.

        Not sure they're worth the trade off of scraping the entire internet and using more power than god.

        8 votes
        1. [3]
          Habituallytired
          Link Parent

          goblin.tools is the one I use. But I never considered OpenAI for the same task.

          4 votes
          1. [2]
            DefinitelyNotAFae
            Link Parent

            Yeah that's the AI tool I was thinking of actually. They use OpenAI on the backend but intend to pivot to "more ethical" options in the future.

            Sometimes it's helpful, sometimes not, but I think those are examples of more useful AI tools. (I was just invited to something AI-involved at work and I'm already dreading it.)

            3 votes
            1. Habituallytired
              Link Parent

              Good luck with the work AI thing! I have a new AI thing at work too, but I'm only tangentially involved, so I don't have to think about it every day.

              I'm glad they're going to pivot away from OpenAI. I really like using it, even if the breakdowns are sometimes overwhelming themselves, lol.

              1 vote
        2. [2]
          Minori
          Link Parent

          The power usage concerns really don't bother me because fundamentally it should be easier and cheaper to build renewable energy sources. Also, many small models are actually efficient enough they can be run on your cellphone. The beefiest models aren't necessary for simple summarisation tasks etc, aaaand all the necessary training data has already been scraped and loaded into the working models. No further development work is needed for the simplest summary tools.
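
          As a concrete example of the small-model point, here's a minimal sketch using the Hugging Face transformers library; the model choice is mine, picked only because it's small:

          ```python
          # Small-model summarization sketch; distilbart is one example of a
          # model compact enough to run far below datacenter scale.
          from transformers import pipeline

          summarizer = pipeline("summarization",
                                model="sshleifer/distilbart-cnn-12-6")
          text = open("issue.txt").read()  # hypothetical input file
          print(summarizer(text, max_length=80, min_length=20,
                           do_sample=False)[0]["summary_text"])
          ```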

          1 vote
          1. DefinitelyNotAFae
            Link Parent

            It's some real fruit of the poisonous tree though. Independent models that run separately are one thing, using the big LLMs to run your little tool, idk. I've not found the use in my work, especially given the things I need to summarize involve private information.

            2 votes
      2. [3]
        Lexinonymous
        Link Parent

        Engineers love the tool because they don't have to write pointless summaries for leadership anymore.

        I feel like this is the biggest use of AI - handling the unnecessary grungework to satisfy bad and un-engaged leadership, grungework that is likely write-only anyway and only exists for CYA reasons.

        5 votes
        1. [2]
          drannex
          Link Parent

          I know a CTO (a life-long network engineer) that I work with on a contract, high up at a company that is the number one technology vendor for $ExtremeMegaCorp. He figured out that he can only get the CEO to respond or take action seriously if he writes out all the technical information and then passes it through ChatGPT with something along the lines of "summarize these notes, place in bullet-list form, business friendly and professional". Ever since he started formatting his messages and intel through that, he has a batting average of 100% on getting shit done.

          Absolutely nuts. He only told me (and showed me!) because we were trying to convince the guy to move forward on something urgent. We got it approved within a day.
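
          The whole trick, roughly, as a sketch (the model name is my guess; the prompt is as he described it):

          ```python
          # Sketch of the "business translator" workflow described above.
          from openai import OpenAI

          client = OpenAI()

          PROMPT = ("Summarize these notes, place in bullet-list form, "
                    "business friendly and professional.")

          def business_translate(technical_notes: str) -> str:
              resp = client.chat.completions.create(
                  model="gpt-4o",  # a guess; he didn't say which model
                  messages=[{"role": "user",
                             "content": f"{PROMPT}\n\n{technical_notes}"}],
              )
              return resp.choices[0].message.content
          ```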

          14 votes
          1. chocobean
            Link Parent

            it's a business translator!! The Futurama Electronium Hat that Gunther, a monkey of moderate intelligence who is great at business, wears!

            1 vote
      3. sparksbet
        Link Parent

        I'm a data scientist, and while the models we deploy aren't LLMs (that would be unsustainably expensive even if we wanted to do it), they have a lot of utility in the process of curating a training data set for a simpler machine learning model. The ability to describe what you're looking for and get a few examples back can be useful in the early stages of training a model. Later on, it can be helpful to have an LLM look over your training data to see if it spots anything it thinks is mislabelled according to your description; even if it's not something you'd change the label on, that can often help you spot edge cases or potentially confusing examples in the training data. You could also have an LLM go over a large test set to see if it catches anything the production model missed. This can be very useful because it would be totally infeasible to have a human look through thousands and thousands of negatives for anything the classifier missed, so even imperfect results from an LLM are an improvement.
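
        A rough sketch of that label-auditing idea, assuming the OpenAI Python SDK; the label guide and the yes/no protocol are placeholders, not our actual pipeline:

        ```python
        # LLM-assisted label-audit sketch. LABEL_GUIDE is a placeholder;
        # anything flagged still goes to a human for review.
        from openai import OpenAI

        client = OpenAI()
        LABEL_GUIDE = "Label 'spam' if the text mainly advertises a product."  # hypothetical

        def flag_suspect_labels(examples):
            """examples: list of (text, label) pairs from the training set."""
            suspects = []
            for text, label in examples:
                resp = client.chat.completions.create(
                    model="gpt-4o-mini",
                    messages=[{"role": "user",
                               "content": f"{LABEL_GUIDE}\n\nText: {text}\n"
                                          f"Is the label '{label}' correct? "
                                          "Answer yes or no."}],
                )
                answer = resp.choices[0].message.content.strip().lower()
                if answer.startswith("no"):
                    suspects.append((text, label))
            return suspects
        ```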

        I don't think many of these things are generalizable outside of data science, ofc. But it is worth noting that there can be genuine use cases for these models -- they just aren't the "cure-alls" they're currently being marketed as.

        1 vote
    2. [8]
      chocobean
      Link Parent

      I have a use case for you.

      the pictures they draw are uncanny and cheap looking. I have yet to see an actual use case

      I needed a really cheap, really quick poster to put up on the local area bulletin boards to sell eggs. The alternative was a sharpie and white paper with "$4/dz" and a number written on it. I got one of these AIs to generate me something cute and colourful, and then I used photo-editing software to touch up the three-eyed chickens, mostly-bird-shaped garbage, spelling errors, and other text nonsense. I was willing to pay $0 and would have accepted sharpie on white, and I came away with a nicer product.

      I will use it again if I need a lost-cat poster, charity cookie sale, garage sale, etc.: cases where it's okay to look cheap and where the tools continue to be free.

      6 votes
      1. [2]
        DefinitelyNotAFae
        Link Parent

        Are those really the two options? Some clip art and Word Art in Word and you'd also have a reasonably eye-catching sign. That's what I would have done five years ago too.

        Way less work than the post-generation editing, IMO.

        8 votes
        1. chocobean
          Link Parent

          you got it! it's mere laziness on my part that makes this a valid use case -- I would have been happy with sharpie on white. I have the artistic ability to draw the full thing, do the layout, choose a font, etc., but I wasn't going to, and I wouldn't have, had this tech not existed. So it's more work than sharpie on white, but honestly less work than starting from scratch.

          1 vote
      2. [2]
        winther
        Link Parent

        Question is how much would you be willing to pay for such an image? Are there enough customers out there to make the billions in cost profitable?

        6 votes
        1. chocobean
          Link Parent

          as stated, free. absolutely would not in any world pay for this. there is no profitable use case

      3. [3]
        papasquat
        Link Parent

        I would propose that those use cases are valid for a very short time, before there's a general public understanding of what AI images look like.

        I imagine very soon they'll have a reputation sorta like clipart did: cheap, half-assed artwork from someone who can't be bothered to do anything better. Yes, there's still a slight use case for that stuff, but I think most people would rather see hand-drawn stick figures than clipart at this point, and I think it's going to be the same way with AI-generated art.

        6 votes
        1. skybrian
          Link Parent

          Yes, and it already has that reputation for some people. I think that much like special effects, there is a "look" that when you notice it, seems cheap.

          But if used more tastefully, it won't look like a special effect.

          (Similarly for clip art.)

          Maybe the cheaper look will seem retro someday?

          2 votes
        2. chocobean
          Link Parent

          Clip art was fun for a while, yeah, especially when teachers didn't know what they were and awarded more points to those lazy projects than to hand drawn stick figures.

          I imagine we're in the midst of that right now, and then eventually we'll hopefully all learn to use AI art as a medium, like oil pastels or watercolors, to make better stick figure art.

          Case in point: for that transition period, we saw a sharp rise of Macromedia Flash projects that used JPGs, clip art, and hand drawn silliness to make something new and better than just clip art. And then slowly that gave way to a divergence between much better animation and the abandonment of drawn art altogether in favor of human videos.

          My modest hope is that actual artists will be able to use AI to make their human art even better, and other content creators will commission artists for AI assisted human art more often.

          1 vote
  3. [3]
    rosco
    Link

    I'm going to strap on my tin foil hat and hope that someone has a logical explanation for why it's wrong.

    My initial theory on OpenAI is that it was a way for Microsoft to engage in free R&D. They donate a large sum of money to OpenAI and write it off their books, OpenAI then spends almost all of that money on Azure credits going right back into Microsoft coffers, and the development outcomes are released as open source and can be implemented and improved on by Microsoft. The end game was free R&D.

    Now that OpenAI has so much hype, there is a second opportunity for Microsoft. They own up to 49% of OpenAI after their recent investment. OpenAI never has to be profitable, kind of like Truth Social, but if they sell the hype by going public, then Microsoft could offload their shares (49% of the proposed $150 billion target, or ~$75 billion). OpenAI has such extreme hype, similar to Tesla or NVIDIA, that I don't see why that isn't entirely feasible.

    So now Microsoft has conducted intensive R&D at no loss, pulled off an incredible IPO with windfall profits, and all in all come out on top. Am I completely off base?

    8 votes
    1. [2]
      TommyTenToes
      Link Parent

      I have no expertise in the world of tax but I'll say that I think that there's a ton of misinformation about what companies can write off as donations. I don't believe it's "every dollar spent on donation is one less dollar of tax."

      On a business/finance note, OpenAI would have been purchasable by any player in the space while it was operating. It's not as if they can just decline any potential buyer until Microsoft is ready to buy their stake. When a compelling offer is made, the company has a fiduciary responsibility to accept it and can otherwise face shareholder lawsuits (Twitter was essentially put into this scenario). So this would've been a pretty huge risk by Microsoft.

      4 votes
      1. rosco
        Link Parent

        Totally, isn't it "every dollar spent on donation is one less dollar of taxable income"? (so if the effective tax rate is like 15%, that's 15% of whatever they donated?). Still seems like a win for conducting R&D.
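
        To put toy numbers on that (all hypothetical, just to show the shape of the deduction):

        ```python
        # Deduction arithmetic with made-up numbers.
        donation = 1_000_000_000    # say, a $1B donation
        effective_tax_rate = 0.15   # the rate floated above

        tax_saved = donation * effective_tax_rate   # $150M less tax owed
        net_cost = donation - tax_saved             # $850M actually out of pocket
        print(f"tax saved: ${tax_saved:,.0f}; net cost: ${net_cost:,.0f}")
        ```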

        It's not as if they can just decline any potential buyer until Microsoft is ready to buy their stake. When a compelling offer is made, the company has a fiduciary responsibility to accept it and can otherwise face shareholder lawsuits

        Again, totally, but (1) that wasn't a possibility until they had a for-profit arm, which is relatively recent, and (2) couldn't they either (a) cash out on the offer if it was over-inflated and make a profit on their investment ($3.3 billion for 49%, I believe), or (b) just outbid the offer if it was deemed too low or a loss?

        It just feels like they are running with a lot of leverage, partially because of their ownership stake and partially because of their enormous financial war chest.

        1 vote
  4. tesseractcat
    Link

    I think this article is interesting, but a bit myopic considering that it's basically missing the biggest part of OpenAI's strategy. They're hemorrhaging money because they're betting that they can train a model that is smarter. I think it's uncontroversial that the earnings potential increases the smarter the model gets (very few people would pay for a GPT-2 level model, for instance), so the only question is whether or not they can train a smarter model in time, and what people will pay for it at that point.

    One objection is that even if they do manage to train a smarter model, at that point there will be no differentiator from other companies who will also train equivalently smart models. I think this is sort of true, but I wouldn't be surprised if OpenAI can exploit their lead, and then maintain it through network effects.

    3 votes