43 votes

OpenAI is a bad business

61 comments

  1. [28]
    tauon
    Link

    After recent discussion on how AI companies may use your data for profit, here is a breakdown of OpenAI’s financials… And while I knew they weren’t profitable yet, I wasn’t aware it was quite that bad; I just had a guesstimate feeling of “sure, every aspect of LLMs is expensive.”

    It’s a great read, but quite long if you’re only “adjacently interested” in the topic, so here are some excerpts. Each separate quotation block means I left out a […] for the sake of reading flow:

    OpenAI's monthly revenue hit $300 million in August, and the company expects to make $3.7 billion in revenue this year (the company will, as mentioned, lose $5 billion anyway), yet the company says that it expects to make $11.6 billion in 2025 and $100 billion by 2029, a statement so egregious that I am surprised it's not some kind of financial crime to say it out loud. For some context, Microsoft makes about $250 billion a year, Google about $300 billion a year, and Apple about $400 billion a year. To be abundantly clear, as it stands, OpenAI currently spends $2.35 to make $1.

    (double emphasis not mine, but accurate)
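
    As a quick sanity check, the “$2.35 to make $1” ratio falls straight out of the revenue and loss figures quoted above:

```python
# Implied spend ratio from the quoted figures: $3.7bn revenue, $5bn loss.
revenue = 3.7            # projected 2024 revenue, $bn
loss = 5.0               # projected 2024 loss, $bn
spend = revenue + loss   # implied total spend, $bn
print(round(spend / revenue, 2))  # → 2.35
```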

    Collectively, this means that OpenAI — the most popular company in the industry — can only convert about 3% of its users.
    While there's a chance that OpenAI could have a chunk of users that aren't particularly active, one cannot run a business based on selling stuff you hope that people won't use.
    OpenAI's primary revenue source is one of the most easily-commoditized things in the world — a Large Language Model in a web browser — and its competitor is Mark Zuckerberg, a petty king with a huge warchest that can never, ever be fired, even with significant investor pressure. Even if that wasn't the case, the premium product that OpenAI sells is far from endearing, still looking for a killer app a year-and-a-half into its existence, with its biggest competitor being the free version of ChatGPT.
    And so, [OpenAI] has two options [for the necessary growth]: Either it relies on partnerships and external sales channels, allowing it to potentially increase the gross number of customers, but at the expense of the money it makes, or it can build a proper sales and marketing team.
    Both options kinda suck. The latter option also promises to be expensive, costly, and has no guarantees of success.
    Let’s go back to Twilio — a company that makes it easy to send SMS messages and push notifications. Over the past quarter, it made around $1bn in revenue. That’s what OpenAI made from renting out its models/APIs over the past year. Twilio also made roughly $4bn over the past four quarters — which is more than OpenAI’s projected revenue for the entirety of 2024. OpenAI, I remind you, is the most hyped company in tech right now, and it’s aiming for a $150bn valuation. Twilio’s market cap is, at the time of writing, just under $10bn.
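
    As a rough cross-check of the ~3% conversion claim: the 350 million free users figure is from the article, but the paying-subscriber count below is purely my own assumption for illustration, not a number from the excerpt:

```python
# 350M free users is quoted in the article; ~10M paying subscribers is an
# assumed round number for illustration, not a figure from the piece.
free_users = 350e6
paying_users = 10e6   # assumption
conversion = paying_users / (free_users + paying_users)
print(f"{conversion:.1%}")  # → 2.8%
```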

    And I cannot express enough how bad a sign it is that its cloud business is so thin. The largest player in the supposedly most important industry ever can only scrounge together $1 billion in annual revenue selling access to the most well-known model in the industry. This suggests a fundamental weakness in the revenue model behind GPT, as well as a fundamental weakness in the generative artificial intelligence market writ large. If OpenAI cannot make more than a billion dollars of revenue off of this, then it’s fair to assume that there is either a lack of interest from developers or a lack of interest from the consumers those developers are serving.

    Around the halfway mark, before some of the above had even been mentioned, the thing that spoke to me the most:

    A note on “free” products: Some of you may suggest that OpenAI having 350 million free users may be a good sign, likely comparing it to the early days of Facebook, or Google. It’s really important to note how different ChatGPT is to those products. While Facebook and Google had cloud infrastructure costs, they were dramatically lower than OpenAI’s, and both Facebook and Google had (and have) immediate ways to monetize free users.

    Both Meta and Google monetize free users through advertising that is informed by their actions on the platform, which involves the user continually feeding the company information about their preferences based on their browsing habits across their platforms. As a result, a “free” user is quite valuable to these companies, and becomes more so as they interact with the platform more.

    This isn’t really the case with OpenAI. Each free user of ChatGPT is, at best, a person that can be converted into a paying user. While OpenAI can use their inputs as potential training data, that’s infinitesimal value compared to operating costs. Unlike Facebook and Google, ChatGPT’s most frequent free users actually become less valuable over time, and become a burden on a system that already burns money.

    A business model whose costs scale with the freemium user count is nothing new, but it seems like their cost per free user is higher than the expected revenue a converted paying user will bring in, and that's definitely not how Big Tech got big. Adding users at other tech companies comes at almost marginal cost (storage and some computation) – here, the net value of adding a new user is a loss. (There’s a neat back-of-the-envelope calculation in the blog post comments comparing OpenAI to Dropbox’s free users.)
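
    The inverted unit economics can be sketched with a toy calculation – every number below is an illustrative assumption, not a figure from the article:

```python
# Back-of-envelope freemium unit economics. All inputs are illustrative
# assumptions, chosen only to show how the sign of the result can flip.
def net_value_of_free_user(serving_cost_per_month, conversion_rate,
                           subscription_price, margin_on_subscriber):
    """Expected monthly net value of one additional free user."""
    expected_revenue = conversion_rate * subscription_price * margin_on_subscriber
    return expected_revenue - serving_cost_per_month

# Classic SaaS freemium: near-zero serving cost, so conversions are profit.
saas_value = net_value_of_free_user(0.05, 0.03, 10, 0.8)   # positive
# LLM freemium: inference cost per active free user can exceed the
# expected revenue from eventual conversions.
llm_value = net_value_of_free_user(1.50, 0.03, 20, 0.2)    # negative
```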

    This business is a fundamentally different one, even if the product/service being sold had already proven itself over time in a big market.
    Once the hype starts to cool down (not die off – just relax a little), I’m curious to see what will happen to all the funding that they so desperately need.

    38 votes
    1. [27]
      updawg
      Link Parent
      • Exemplary

      a fundamental weakness in the generative artificial intelligence market writ large

      WRIT LARGE IS NOT A FANCY, PRETENTIOUS WAY TO SAY "AT LARGE;" IT IS A FANCY, PRETENTIOUS WAY TO SAY SOMETHING IS A MICROCOSM.

      The correctly pretentious way to express this would be "This suggests the fundamental weakness in the revenue model behind GPT is a microcosm of the general AI market writ large," or perhaps "The fundamental weakness in GPT's revenue model, writ large across the entire generative AI market, reveals an industry-wide vulnerability."

      I knew you didn't write it, but I recently swore I'd scream or kill someone or something the next time I heard that error.

      44 votes
      1. [26]
        redbearsam
        Link Parent

        I had to look this phrase up again because I remembered that misusing it was someone on Tildes' pet peeve from a couple of days back, and I'd felt it was misused here.

        Nice to scroll down and be proven so very very right 😂

        Your words clearly haunt me, updawg. Well done.

        7 votes
        1. [25]
          updawg
          Link Parent

          Thank you lol I only even learned what the phrase actually means from looking it up after hearing people use it so often at work and getting sick of saying to myself "I feel like that's wrong..."

          4 votes
          1. [24]
            tauon
            Link Parent

            Funnily enough, I also felt that it was wrong (not a native speaker), but didn’t bother looking it up anymore this time, because I’m not gonna correct a section quoted from the article either way, haha.

            4 votes
            1. [18]
              Promonk
              Link Parent

              There's actually a standardized way in English to flag that you recognize an error or incorrect usage in something you've quoted. Follow the faux pas with the Latin word sic, meaning "thus," in square brackets.

              For example:

              User @updawg really hates the incorrect usage of the phrase "writ large," and will scream when it is misused, irregardless [sic] of whether the commenter is merely quoting someone else misusing it.

              In most style guidelines, square brackets are used to denote comments from the editor or minor edits made to quotes in order to bring their grammar in line with the current piece. This is also why ellipses are supposed to be put in square brackets when editing out irrelevant passages from a quote, as you acknowledged in your thesis comment. It denotes that omissions were done intentionally and suggests that the missing text doesn't appreciably change the meaning of the quoted passage.

              Apologies if this is old news for you. I find this sort of thing interesting and useful, and am kind of sad that the democratization of writing in the Internet Age has led to the loss of such useful, though somewhat obscure, customs.

              8 votes
              1. [8]
                updawg
                Link Parent

                User @updawg really hates the incorrect usage of the phrase "writ large," and will scream when it is misused, irregardless [sic][sic] of whether the commenter is merely quoting someone else misusing it.

                6 votes
                1. [7]
                  Promonk
                  Link Parent

                  Begun, the Descriptivist/Prescriptivist Wars have.

                  I am for the most part descriptivist myself, excepting when I'm editing formal writing.

                  There are a couple usages that bug me, however, but "irregardless" is not among them. "Supposably" and, oddly, "is comprised of" get on my tits, for some reason. I also lament that "literally" has fallen on hard times of late.

                  8 votes
                  1. [6]
                    DefinitelyNotAFae
                    Link Parent

                    Literally has been used hyperbolically for quite a long time before its valley girl stereotype fame

                    4 votes
                    1. [5]
                      Promonk
                      (edited )
                      Link Parent

                      Sure. Language evolves, after all. That's why I tried to suggest my own reaction to these usages isn't exactly reasonable. Plenty of words in English have evolved to mean nearly the opposite of their original definition and don't make my skin crawl, though I think that in many of those cases, the antonym didn't have an already established word in the language.

                      I think what bugs me about them isn't the "misuse" of the words or the meaning they're intended to convey, but rather the motivation for their use. "Is comprised of" is a good example of this, because the only reason someone would use it is because they somehow think it sounds fancier than "is composed of," which is what they actually mean. Even that is oversimplifying things quite a bit.

                      I generally won't go around correcting people's usages because I know my distaste for some of these things is idiosyncratic and emotional, not reasonable. There are times when I'll jump in and say something, but it's usually when I find something interesting about the "correct" usage and want to share it, or when there's a specific meaning to the term misused that has bearing on the subject under discussion. "Begs the question" is sometimes one of the latter, since that rhetorical ploy is being used more and more by propagandists and people arguing in bad faith.

                      2 votes
                      1. [4]
                        DefinitelyNotAFae
                        Link Parent

                        Oh I was just responding to the "recently" part! Oddly, I wouldn't have considered "comprised" fancier than "composed." Both sound more formal than "made up of," but otherwise I don't think "composed" is used that way often anymore. "Comprises" reads oddly in its "correct" usage most of the time, so I understand why it's shifted.

                        1 vote
                        1. [3]
                          Promonk
                          Link Parent

                          I should probably have specified that "recently" in my feverish little English major brain meant "in the last century or two," though the abominable practice does seem to have had a notable uptick in the last decade or so.

                          I simply don't use "comprise" anymore. It's at best superfluous, and at worst confusing. It really only persists to satisfy the English grammarians' irrational distaste for the passive voice, which is one of those things about prescriptivism that bugs the shit out of me. Yes, passive voice is often used to prevaricate and obfuscate, but that's not the only reason it gets used.

                          Also, I'll end my independent clauses in a preposition if I damn well want to.

                          1 vote
                          1. [2]
                            DefinitelyNotAFae
                            Link Parent

                            Ok but I think if it started in the late 1700s, that's like halfway between the start of modern English and the present. That feels not very recent (⁠◠⁠‿⁠◕⁠)

                            I mostly talk how I like, with too many commas and subclauses and too many random British English bits thrown in sometimes if I've been watching a lot of Doctor Who.

                            2 votes
                            1. Promonk
                              Link Parent

                              I mostly talk how I like, with too many commas and subclauses and too many random British English bits thrown in sometimes if I've been watching a lot of Doctor Who.

                              I feel that. I'm also overly fond of dependent clauses and commas.

                              Nothing gives me a crisis of confidence quite like reading over my writing to evaluate rhythm. Do I have too many short, clipped sentences? Does emphasizing the logical connections of these ideas justify a sentence with three dependent clauses? It'll give ya the yips if you let it.

                              I'm suddenly having flashbacks to beating my head against periodic sentences in English composition class. I was taking Latin at the same time, and all I could think was, "this would be so much easier to do in Latin syntax."

                              2 votes
              2. [9]
                tauon
                Link Parent

                Apologies if this is old news for you. I find this sort of thing interesting and useful, and am kind of sad that the democratization of writing in the Internet Age has led to the loss of such useful, though somewhat obscure, customs.

                Agreed, and no need for apologies! I may have been aware already, but other/future readers perhaps won’t know about sic yet :-)

                I just didn’t apply it since I wasn’t 100% sure about it being a mistake like I’d mentioned, and while I would normally have looked it up out of curiosity, I was in a bit of a rush prior to putting up the topic post, haha.

                irregardless [sic]

                Fun choice for ensuring/provoking a response by updawg, I’m sure. :P

                1. [8]
                  Promonk
                  Link Parent
                  It would seem so. lol
                  1 vote
                  1. [7]
                    updawg
                    Link Parent

                    I thought about also going for not italicizing sic but different style guides give different guidance. I still always prefer to italicize foreign text.

                    4 votes
                    1. [6]
                      Promonk
                      Link Parent

                      I usually do as well, but that line between foreign word and borrowed word can be quite nebulous. For instance, why should "sic" or "dele" be italicized, but "parenthesis" or "ellipsis" not? All of those terms have been in use in English printing for centuries, but the Greek words are now proper English, yet the Latin words are still somehow Latin.

                      I think that terms of art taken from classical languages in medicine, biology and law should be italicized if for no other reason than to signpost them as terms of art, but I'm less picky in other cases. Italicization should be consistent within a work and should defer to the generally accepted style guidelines of a discipline where possible, but outside of that, the author calls the shots.

                      I maybe overemphasized retaining authorial voice in my time as editor, but I had my reasons.

                      2 votes
                      1. [5]
                        updawg
                        Link Parent

                        For instance, why should "sic" or "dele" be italicized, but "parenthesis" or "ellipsis" not? All of those terms have been in use in English printing for centuries, but the Greek words are now proper English, yet the Latin words are still somehow Latin.

                        After deleting my initial parenthetical, I have to assume it's because we're literally using Latin words for sic and dele, whereas "ellipsis" and "parenthesis" both have different meanings than they did in Latin (or Greek before that).

                        2 votes
                        1. [4]
                          Promonk
                          Link Parent

                          That's the explanation I've seen, but to be honest, I don't find it very convincing, especially in these two cases. It's true that when we say "ellipsis" we're generally referring to the typographical character rather than the rhetorical device, which is what Aristotle or Quintilian would've meant by it. But on the other hand, the typographic character is really only used to represent the rhetorical device. That's a pretty damned fine distinction upon which to base an orthographic rule.

                          I think the real reason is just that "parenthesis" and "ellipsis" simply have broader application than sic or dele, and so got used often enough to fully enter the lexicon. I think ideas of foreignness determining orthography are mostly post-hoc rationalizations to justify our compulsive categorization of things, and there are a ton of other rules in English orthography that I think boil down to the same thing.

                          It's a tension that runs throughout linguistics, really, and it hits editors particularly hard. Like, I can't imagine being a professional editor for years upon years and being a hard-line prescriptivist throughout. The dissonance would make me a bald, neurotic mess.

                          2 votes
                          1. [3]
                            updawg
                            Link Parent

                            I don't really see how that's any--or at least significantly--different from my guess. Seems to just be more words.

                            1 vote
                            1. [2]
                              Promonk
                              Link Parent

                              It's the difference between emphasizing the foreignness of a word versus emphasizing the familiarity. "Sic" gets italicized because it's foreign, but it's exactly as foreign in derivation as "versus," which isn't italicized. If it were the foreignness that determined the orthography, they'd both be all slanty. The relevant distinction between the two isn't their derivation, it's their familiarity.

                              I don't know, man. Once you get into linguistic pedantry, it's all tilting at windmills.

                              1 vote
                              1. updawg
                                Link Parent

                                Yeah, that's exactly the same thing I'm saying lol almost all words are derived from foreign words. The point is just "are you using it as an English word or as a foreign word/tool (like sic/dele)."

                                1 vote
            2. [5]
              redbearsam
              Link Parent

              I blame you 100% Tauon. Unbelievable behaviour. Beyond the pale.

              1 vote
              1. [2]
                updawg
                Link Parent

                I propose a second Tildes rule: always correct anything you see that uses "writ large" incorrectly.

                What's especially funny is that using "writ large" instead of "at large" pretty much always seems wrong, but I feel like you could totally use "at large" when it should be "writ large" and I wouldn't even notice.

                2 votes
                1. DefinitelyNotAFae
                  Link Parent

                  Considering you put it in all caps, could I consider your correction "wrote large"?

                  3 votes
  2. [21]
    papasquat
    Link

    The problem with all of these AI companies is that they really have no differentiators. It's pretty well understood how to build an LLM now, and while I won't say the barrier to entry is low, I will say that it's trivial for any company with the resources to do it at this point.

    Adding to that, there's no particular brand loyalty. A lot of companies could make a phone similar to an iPhone, but it's not an iPhone, so Apple fans won't buy it. Same goes for Ferraris, or Bose headphones, or really, most very successful products. ChatGPT is the same thing as Copilot is the same thing as Claude. They're all virtually identical in their capabilities, and even if they're not at a moment in time, they very quickly achieve parity.

    Most importantly, people are starting to realize that this stuff isn't actually all that useful. Once everyone got over the initial awe phase, most people realized they were just using these things as expensive search engines. You can't actually use them to write anything for you, because their output is immediately detectable by anyone with a little exposure to them, the code they write is bad and needs to be massaged and edited by hand afterwards, and the pictures they draw are uncanny and cheap-looking. I have yet to see an actual use case, in either business or personal use, that is truly compelling for an LLM, so I don't see how, in their current iteration, they could ever be a successful product given the gargantuan cost it takes to develop and operate them.

    27 votes
    1. [11]
      Minori
      Link Parent

      I have yet to see an actual use case either in business or personal use that is truly compelling for an LLM

      I use LLMs at work, but they're really marginal. They're decent, if sometimes poor, text summarizers, so there is an internal tool that basically creates a summary section out of something like a GitHub issue page. Engineers love the tool because they don't have to write pointless summaries for leadership anymore. I'm not sure I'd describe this tool as "truly compelling", but it did automate a frequent, boring task.
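
      A hypothetical sketch of what the glue for such a tool might look like – the function name and prompt wording here are invented for illustration, not the actual internal tool:

```python
# Invented sketch: flatten a GitHub issue thread into a single
# summarization prompt for whatever LLM backend such a tool uses.
def build_summary_prompt(issue_title, issue_body, comments):
    """Assemble one prompt from an issue's title, body, and comments."""
    thread = "\n\n".join(f"Comment: {c}" for c in comments)
    return (
        "Summarize the following issue for a non-technical leadership "
        "audience in 3 bullet points.\n\n"
        f"Title: {issue_title}\n\nBody: {issue_body}\n\n{thread}"
    )

prompt = build_summary_prompt(
    "Flaky deploy pipeline",
    "Deploys intermittently fail at the smoke-test stage.",
    ["Repro rate ~20%", "Suspect a race in the health check"],
)
# `prompt` would then be sent to the model; the response becomes the
# summary section.
```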

      19 votes
      1. [6]
        DefinitelyNotAFae
        Link Parent

        Similarly I've seen some handy "break this task into smaller tasks" tools out there that can be good for neurodivergent folks.

        Not sure they're worth the trade off of scraping the entire internet and using more power than god.

        14 votes
        1. [3]
          Habituallytired
          Link Parent

          goblin.tools is the one I use. But I never considered openAI for the same task.

          5 votes
          1. [2]
            DefinitelyNotAFae
            Link Parent

            Yeah that's the AI tool I was thinking of actually. They use OpenAI on the backend but intend to pivot to "more ethical" options in the future.

            Sometimes it's helpful, sometimes not but I think those are examples of more useful AI tools. (I was just invited to something AI involved at work and I'm already dreading it.)

            4 votes
            1. Habituallytired
              Link Parent

              Good luck with the work AI thing! I have a new AI thing at work too, but I'm only tangentially involved, so I don't have to think about it every day.

              I'm glad they're going to pivot away from OpenAI. I really like using it, even if the breakdowns are sometimes overwhelming themselves, lol.

              2 votes
        2. [2]
          Minori
          Link Parent

          The power usage concerns really don't bother me because fundamentally it should be easier and cheaper to build renewable energy sources. Also, many small models are actually efficient enough they can be run on your cellphone. The beefiest models aren't necessary for simple summarisation tasks etc, aaaand all the necessary training data has already been scraped and loaded into the working models. No further development work is needed for the simplest summary tools.

          3 votes
          1. DefinitelyNotAFae
            Link Parent

            It's some real fruit of the poisonous tree though. Independent models that run separately are one thing, using the big LLMs to run your little tool, idk. I've not found the use in my work, especially given the things I need to summarize involve private information.

            6 votes
      2. [3]
        Lexinonymous
        Link Parent

        Engineers love the tool because they don't have to write pointless summaries for leadership anymore.

        I feel like this is the biggest use of AI - handling the unnecessary grungework to satisfy bad and un-engaged leadership, grungework that is likely write-only anyway and only exists for CYA reasons.

        12 votes
        1. [2]
          drannex
          (edited )
          Link Parent

          I know a REDACTED that I work with on a contract who is high up in a company, who is the number one REDACTED for $$$REDACTED, and he figured out that he can only get the REDACTED to seriously respond or take action if he writes out all the technical information and then passes it through ChatGPT with something along the lines of "summarize these notes, place in bullet-list form, business friendly and professional". Ever since he started formatting his messages and intel through that, he has a batting average of 100% on getting shit done.

          Absolutely nuts, he only told me (and showed me!), because we were trying to convince the guy to move forward on something urgent. We got it approved within a day.

          24 votes
          1. chocobean
            Link Parent

            it's a business translator!! The Futurama Electronium Hat that Gunther, a monkey of moderate intelligence who is great at business, wears!

            7 votes
      3. sparksbet
        Link Parent

        I'm a data scientist, and while the models we deploy aren't LLMs (that would be unsustainably expensive even if we wanted to do it), they have a lot of utility in the process of curating a training data set for a simpler machine learning model. The ability to describe what you're looking for and get a few examples back can be useful in the early stages of training a model, and later on it can be helpful to have an LLM look over your training data to see if it spots anything it thinks is mislabelled according to your description -- even if it's not something you'd change the label on, that can often help you spot edge cases or potentially confusing examples in the training data. Or you could have an LLM go over a large test set to see if it catches anything the production model missed -- this can be very useful, because it would be totally infeasible to have a human look through thousands and thousands of negatives for anything the classifier missed, so even imperfect results from an LLM are an improvement.

        I don't think many of these things are generalizable outside of data science, ofc. But it is worth noting that there can be genuine use cases for these models -- they just aren't the "cure-alls" they're currently being marketed as.
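        The mislabel-sweep idea above can be sketched in a few lines. To be clear, this is a hypothetical sketch, not any particular API: `ask_llm` is a placeholder name for whatever hosted-model call you would actually use, stubbed here with a toy keyword rule so the example runs at all.

        ```python
        def ask_llm(task_description: str, text: str) -> str:
            """Placeholder for a real LLM call.

            Stubbed with a toy keyword rule so the sketch is runnable; in
            practice this would send `task_description` and `text` to a hosted
            model and parse the label out of its reply.
            """
            return "positive" if "good" in text else "negative"


        def flag_possible_mislabels(task_description, examples):
            """Return (text, human_label, llm_label) triples where the two disagree.

            Disagreements aren't automatically wrong labels: they're a shortlist
            of edge cases for a human to re-check, which is far cheaper than
            reading thousands of examples by hand.
            """
            return [
                (text, human_label, llm_label)
                for text, human_label in examples
                if (llm_label := ask_llm(task_description, text)) != human_label
            ]


        data = [("a good movie", "positive"), ("a bad movie", "positive")]
        print(flag_possible_mislabels("label the sentiment", data))
        # [('a bad movie', 'positive', 'negative')]
        ```

        The same shape works for the test-set sweep: run the LLM over the production model's negatives and only surface the disagreements.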

        6 votes
    2. [8]
      chocobean
      Link Parent

      I have a use case for you.

      the pictures they draw are uncanny and cheap looking. I have yet to see an actual use case

      I needed a really cheap, really quick poster to put up on the local area bulletin boards to sell eggs. The alternative was a sharpie and white paper with "$4/dz" and a number written on it. I got one of these AIs to generate me something cute and colourful, and then I used photo editing software to touch up three-eyed chickens, mostly bird-shaped garbage, spelling errors and other text nonsense. I was willing to pay $0 and I would have accepted sharpie on white, and came away with a nicer product.

      I will use it again if I need a lost cat poster, to sell charity cookies, for a garage sale, etc.: cases where it's okay to look cheap and where it continues to be free.

      12 votes
      1. [2]
        DefinitelyNotAFae
        Link Parent

        Are those really the two options? Some clip art and Word Art in Word and you'd also have a reasonably eye-catching sign. That's what I would have done five years ago too.

        Way less work than the post-generative editing, IMO.

        14 votes
        1. chocobean
          Link Parent

          you got it! it's mere laziness on my part only that makes this a valid use case -- i would have been happy with sharpie on white. I have the artistic ability to draw a full thing and layout and choose font etc, but I wasn't going to and i wouldn't have, had this tech not existed. So, it's more work than sharpie on white, but it's honestly less work than starting from scratch.

          6 votes
      2. [3]
        papasquat
        Link Parent

        I would propose that those use cases are valid for a very short time, before there's a general public understanding of what AI images look like.

        I imagine very soon, they'll have a reputation sorta like clip art did: as cheap, half-assed artwork from someone who can't be bothered with anything better. Yes, there is still a slight use case for that stuff, but I think most people would rather see hand drawn stick figures than clip art at this point, and I think it's going to be the same way with AI generated art.

        12 votes
        1. chocobean
          Link Parent

          Clip art was fun for a while, yeah, especially when teachers didn't know what they were and awarded more points to those lazy projects than to hand drawn stick figures.

          I imagine we're in the midst of that right now, and then eventually we'll hopefully all learn to use AI art as a medium, like oil pastels or watercolors, to make better stick figure art

          Case in point: for that transition period, we saw a sharp rise of Macromedia Flash projects that used JPGs, clip art, and hand drawn silliness to make something new and better than just clip art. And then slowly that gave way to a divergence between much better animation and the abandonment of drawn art altogether in favor of human videos.

          My modest hope is that actual artists will be able to use AI to make their human art even better, and other content creators will commission artists for AI assisted human art more often.

          6 votes
        2. skybrian
          Link Parent

          Yes, and it already has that reputation for some people. I think that much like special effects, there is a "look" that when you notice it, seems cheap.

          But if used more tastefully, it won't look like a special effect.

          (Similarly for clip art.)

          Maybe the cheaper look will seem retro someday?

          4 votes
      3. [2]
        winther
        Link Parent

        Question is how much would you be willing to pay for such an image? Are there enough customers out there to make the billions in cost profitable?

        9 votes
        1. chocobean
          Link Parent

          as stated, free. absolutely would not in any world pay for this. there is no profitable use case

          6 votes
    3. RobotOverlord525
      Link Parent

      The problem with all of these ai companies is that they really have no differentiators.

      There are subtle differences that are apparent to people that use the three major LLMs (i.e., ChatGPT, Claude, and Gemini). Claude is broadly understood to be the most "literary" of the three. And when I had access to both ChatGPT and a Gemini Pro trial, I could tell the difference between the output of the two. (ChatGPT-4o was better nine times out of 10.)

      while I won't say the barrier to entry is low, I will say that it's trivial for any company with the resources to do it at this point.

      Evidently those resources are huge, though. To the point where I wouldn't downplay them. Part of the reason why OpenAI is burning through cash so quickly is because building/training and operating these LLMs is outrageously expensive. The small/open source LLMs are apparently not comparable to the big three in large part because of their hardware limitations. It's why Sam Altman was looking for trillions of dollars in investments to build out chip manufacturing, for example. Granted, yeah, some of that is forward-looking as part of his quest to build AGI, but there's no denying that, in looking at their financials, their operating costs are huge.

      Most importantly, people are starting to realize that this stuff isn't actually all that useful.

      I find ChatGPT handy. Not indispensable, mind you, but handy. I use it for D&D prep, as a novice DM. And my D&D group uses it to make art for various moments in our campaigns. Yeah, most of that art is crap, but every once in a while it produces something great. At work, I've used it to help me with code for the system I'm an administrator of (I'm not a developer, so my code writing capabilities are pretty limited).

      That said, I saw a recent thread on Reddit where people talk about what they use ChatGPT for (There are a million of these threads on /r/ChatGPT), and I found some of it a little disturbing. I don't trust anything ChatGPT ever says if it's supposed to be something factual. Yeah, it's fine for D&D prep, because, as a DM, anything I want to be canon in my version of the Forgotten Realms is; there's no cost to getting things "wrong." (There's also no expectation of originality when it comes to DMing, so if it produces suggestions that are highly derivative, it doesn't matter because my whole group has only been playing D&D for a year and a half anyway.) But as a replacement for Google? LLMs still hallucinate too much. (Arguably, 100% of their output is a hallucination, but that's a whole separate topic.)

      And that's to say nothing of the people who are using it as a therapist or a friend or what have you. But I suppose that's the future: all of us siloed into our own private worlds, afraid of real life.

      3 votes
  3. [2]
    vczf
    (edited )
    Link

    I was finding myself nodding along as I read this article, but it's missing something big, and it's no longer assuaging my doubts about the future.

    Both Meta and Google monetize free users through advertising that is informed by their actions on the platform, which involves the user continually feeding the company information about their preferences based on their browsing habits

    This isn’t really the case with OpenAI.

    The author is not thinking like a surveillance capitalist. An AI assistant is the perfect vector to inject brand recognition and make paid suggestions for products and services. Infinitely more valuable than having ads on a Google search.

    Imagine an entire generation of children hooked on ChatGPT for advice and information. You could learn so much more about them than Google could. They will confide in you when they have nobody else willing to listen, and trust you because your AI product pretends to care about them.

    You'll know exactly what they want, need, and crave, and exactly when. That's the perfect opportunity to sell. Imagine a 22-year-old who's been talking to an AI for the last 15 years. When they want to move to a new apartment, or need to buy a car, where will they turn?

    That’s just the optimistic way surveillance capitalism could turn out. Even darker, the AI could make suggestions to manipulate people. Could plant seeds in their minds about where they should move, where they should shop. It could encourage you to start drinking alcohol, try a nicotine product, start gambling or playing an addictive gacha game—and take a slice of that lucrative recurring revenue stream.

    As the technology improves the accessible context and personalization, this will only become more effective. This model also negates the “commoditization” argument from the author, because a user who has formed a parasocial dependency on an AI is not going to jump ship to a different service.

    On a more positive note, this future could be mitigated by e.g. FTC regulations or consumer protection legislation. However, I do think this is "where the money is" in consumer AI, so it will likely be an uphill battle.

    15 votes
    1. tauon
      Link Parent

      Definitely concur. I had similar thoughts in the other thread recently about how chat transcripts may contain data that should be highly privileged in access/usage rights, even if given voluntarily… and how this information may be used for nefarious purposes.

    Similar to the Googles and Facebooks of today, customers/users may be tricked into growing up with a system whose harms, or at least dangers, they don't even begin to understand, let alone how it works (for most people).

      5 votes
  4. [9]
    rosco
    Link

    I'm going to strap on my tin foil hat and hope that someone has a logical explanation for why it's wrong.

    My initial theory on OpenAI is that it was a way for Microsoft to engage in free R&D. They donate a large sum of money to OpenAI and write it off their books, OpenAI then spends almost all of that money on Azure credits going right back into Microsoft coffers, and the development outcomes are released as open source and can be implemented and improved on by Microsoft. The end game was free R&D.

    Now that OpenAI has so much hype there is a second opportunity for Microsoft. They own up to 49% of OpenAI after their recent investment. OpenAI never has to be profitable, kind of like Truth Social, but if they sell the hype for going public then they could offload their shares (49% of the proposed $150 billion target, or ~$75 billion). OpenAI has such extreme hype - similar to Tesla or NVIDIA - I don't see why that isn't entirely feasible.

    So now Microsoft has conducted intensive R&D with no losses, pulled off an incredible IPO with windfall profits, and all in all come out on top. Am I completely off base?

    14 votes
    1. [6]
      TommyTenToes
      Link Parent

      I have no expertise in the world of tax but I'll say that I think that there's a ton of misinformation about what companies can write off as donations. I don't believe it's "every dollar spent on donation is one less dollar of tax."

      On a business/finance note, OpenAI would have been purchasable by any player in the space while it was operating. It's not as if they can just decline any potential buyer until Microsoft is ready to buy their stake. When a compelling offer is made, the company has a fiduciary responsibility to accept it and can otherwise face shareholder lawsuits (Twitter was essentially put into this scenario). So this would've been a pretty huge risk by Microsoft.

      8 votes
      1. [5]
        rosco
        Link Parent

        Totally, isn't it "every dollar spent on donation is one less dollar of taxable income"? (so if the effective tax rate is like 15%, that's 15% of whatever they donated?). Still seems like a win for conducting R&D.
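        As a toy illustration of that arithmetic (made-up numbers, not Microsoft's actual figures): a deduction reduces taxable *income*, so the saving is the donation times the effective tax rate, not the donation itself.

        ```python
        # Toy numbers: a $1B charitable deduction at a 15% effective tax rate.
        donation = 1_000_000_000
        effective_tax_rate = 0.15

        tax_saved = donation * effective_tax_rate   # 150,000,000.0
        net_cost = donation - tax_saved             # 850,000,000.0

        print(f"tax saved: ${tax_saved:,.0f}, net cost: ${net_cost:,.0f}")
        # tax saved: $150,000,000, net cost: $850,000,000
        ```

        So even in the most favorable reading, the "free R&D" is only discounted by the tax rate, not zeroed out.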

        It's not as if they can just decline any potential buyer until Microsoft is ready to buy their stake. When a compelling offer is made, the company has a fiduciary responsibility to accept it and can otherwise face shareholder lawsuits

        Again, totally, but (1) that wasn't a possibility until they had a for-profit arm relatively recently, and (2) couldn't they either (a) cash out on the offer if it was over-inflated and make a profit on their investment ($3.3 billion for 49%, I believe), or (b) just outbid the offer if it was deemed too low or a loss?

        It just feels like they are running with a lot of leverage, partially because of their ownership stake and partially because of their enormous financial war chest.

        4 votes
        1. saturnV
          Link Parent

          R&D tax credits are already significant (apparently 20% of QREs, i.e. qualified research expenses), so even if this did save them money, I wouldn't expect it to be a massive gain.

          5 votes
        2. [2]
          sparksbet
          Link Parent

          (so if the effective tax rate is like 15%, that's 15% of whatever they donated?).

          It's more complex than this, because lowering one's taxable income can change things like tax brackets and how you interact with other systems. For a personal income tax example, I'm able to have $0 taxable income on my US taxes bc I live abroad, and this means I'm able to make no payments on my student loans through my income-driven repayment plan. Idk anything about business taxes, but I would be surprised if there aren't benefits to lowering your taxable income for them beyond just the tax rate on that income specifically.

          4 votes
          1. rosco
            Link Parent

            Tinfoil off! Thanks for the explanation.

            1 vote
        3. R3qn65
          Link Parent

          You also can't just donate money to another business and write that off - it has to be to a charitable organization.

          2 votes
    2. [2]
      Greg
      Link Parent

      Microsoft only got involved in 2019, after OpenAI converted to a for-profit legal entity: https://openai.com/our-structure/

      The more recent changes were around how much control the non-profit had over the for-profit, but legally the money’s been flowing through a normal company the whole time.

      I forget whether they’ve explicitly confirmed this, but given that the non-profit managed $130M total over four years, and then the for-profit got a billion dollars from Microsoft pretty much as soon as it was formed, I’d assume that MS were actually the ones insisting on the for-profit structure as a condition of their investment.

      6 votes
      1. rosco
        Link Parent

        I think I can decidedly take off my tin foil again. Thanks for the runthrough!

        5 votes
  5. tesseractcat
    Link

    I think this article is interesting, but a bit myopic considering that it's basically missing the biggest part of OpenAI's strategy. They're hemorrhaging money because they're betting that they can train a model that is smarter. I think it's uncontroversial that the earnings potential increases the smarter the model gets (very few people would pay for a GPT-2 level model, for instance), so the only question is whether or not they can train a smarter model in time, and what people will pay for it at that point.

    One objection is that even if they do manage to train a smarter model, at that point there will be no differentiator from other companies who will also train equivalently smart models. I think this is sort of true, but I wouldn't be surprised if OpenAI can exploit their lead, and then maintain it through network effects.

    11 votes