91 votes

Ugly numbers from Microsoft and ChatGPT reveal that AI demand is already shrinking

75 comments

  1. [15]
    Inanimate
    (edited )
    Link

    Anecdotally speaking, this makes sense. I’m a high-school teacher and just this week I attended a professional development session about AI assistance for educators, and it was packed. The session was focused on using AI for lesson planning, rubrics, translating and differentiating instructional materials, etc…

    At the beginning of the session, it seemed to me that many present were quite “excitable” about AI - either very anxious, upset about the idea, eager to try it, or somewhere in-between. The energy in the room was quite high overall, at least. Over the course of the session, however, it felt like most educators realized (just as many working with AI for a while have known, and as the presenters immediately led with) that AI is an excellent supplementary tool and can save you a lot of work, but it is not a replacement for hard work. Often, the applications of AI are quite niche or specific. That is, rather than being like a “smartphone” right now, it is much more like a “beeper”. (My implication that it will likely evolve to be general-use is intentional.)

    I think the realization I witnessed in miniature is most likely being experienced elsewhere. And that’s all without even speaking to the ethical concerns, which the article focused on! Multiple teachers expressed concern that they were plagiarizing the work of others, or that their own work could become data for others to use if they asked AI to assist with it. It was clear they were not just aware of the debate over AI but quite invested in it.

    Obviously the demographic of educators is not a one-to-one mapping for the general populace. But I do think that experience was illuminating in how it showed the reaction of people to AI, especially in a field that has lately felt very threatened by it, with the growth of online instruction and districts across the US increasingly pushing for digital and/or mandated curricula. Beyond even those fears, though, I think demand is perhaps also dropping because people are realizing AI is far more limited and requires more fiddling than they thought. In combination with the PR concerns, that’s not a recipe for resounding success.

    Hopefully my experience is interesting to others. Please feel free to ask if you have any questions.

    89 votes
    1. [11]
      arch
      Link Parent

      AI is also expensive to use on a regular basis. Peak users right now seem to be interested in it from an intellectual standpoint, or are corporate entities who are signing up hoping their workers will somehow use it for them, so as to "not miss the train" on the AI space. I think there's fear that it will be the next dotcom boom, the next Bitcoin, and people don't want to miss out. In reality, if it becomes free, it could be used to generate low-effort clip art for your family party, or to write fanfiction to entertain us. From a commercial standpoint it's possible it's going to be a dud, especially considering the recent copyright ruling.

      I think the better analogy is that AI is a new tool that an artist can use. Like the Star Trek TNG episode about the musical instrument/sculpting tool that could basically play your thoughts.

      21 votes
      1. [7]
        Inanimate
        Link Parent

        Broadly agreed, yes. From what I saw, I think the average person will be using it in cases closer to your example of “generating clip-art”. The use-cases that most excited the teachers in the room were one or more of the following:

        • optimizing and making routine tasks more efficient;
        • using it as a kind of “natural language search engine”, most often to generate ideas to spark the user’s own creativity;
        • using it to rewrite input material (such as into a new language, or with more clarity, or at a lower academic reading level);
        • as a tool to take a mostly-finished product past the finish line with a few beautifying touches (such as creating a slideshow from your lesson plan).

        Again, though educators are certainly not perfectly representative of the average person, I think those will be the kinds of uses that will interest the average person in AI the most. Your suggested uses (clip art, fanfic, music) all feel a little more playful and creative than what I think the most typical use might actually end up being, however. I imagine cases more along the lines of:

        • “Am I forgetting anything from my shopping list?”
        • “What are five good ideas for a romantic date night?”
        • “What is this message in Spanish saying?”
        • “Can you make this pamphlet for my church newsletter prettier?”, etc…

        Importantly, the average person doesn’t hire an artist or consultant or writer for any of these things. The average corporation, however, does. Throughout the room of educators, I felt a collective, conflicted ambivalence as they played with the AI tools, and I think it arose from all of us grappling with that realization… that AI assistance feels so much more innocent at the individual level, but has so many other implications and repercussions when more widely adopted.

        12 votes
        1. [3]
          Plik
          (edited )
          Link Parent

          It's pretty good for other education-related things.

          • Converting word problems into variables and equations and outright solving them if you know enough to check for errors (or you have a mark scheme)
          • Providing detailed step-by-step processes for solving specific mathematical problems
          • Giving spreadsheet outlines for various things, e.g. curving grades fairly in new ways.
          • Theorizing different ways of programming the same function (whether it's Python, Java, C, or whatever weird script Desmos uses), for example grounded checks to see if a player can jump in a game environment. Copy/pasting its code rarely results in a working program, but the process of conversing with something that has all the programming documentation in its knowledge base is really good for theorizing.
          • Possibly exam scheduling? I think if you had a PDF converted to text that had exam dates, you could have it generate a schedule in .csv format, or even take said data, turn it into a .csv, then make another .csv that would optimize the schedule based on said data given specific constraints.
          • Creating various templates/outlines for student use rather than doing it manually, e.g. "give me a table outline for measurements required for testing the value of g in a ball drop experiment, include sections for average measurements, range, and both absolute and percentage uncertainty."

          ...

          I could think of more, but usually the ideas only come to me when I have a practical use case, and my brain goes "let's ask ChatGPT".
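          The grounded-check bullet above, for instance, might get theorized as something like this - a toy Python sketch where `Player`, `ground_y`, and all the numbers are made up for illustration, not any real engine's API:

```python
from dataclasses import dataclass

@dataclass
class Player:
    y: float            # vertical position (larger = lower on screen)
    velocity_y: float   # vertical velocity (negative = moving up)

def is_grounded(player: Player, ground_y: float, tolerance: float = 0.5) -> bool:
    """Grounded means resting on (or within a small tolerance of) the
    ground and not currently moving upward."""
    return player.y >= ground_y - tolerance and player.velocity_y >= 0

def try_jump(player: Player, ground_y: float, jump_speed: float = -12.0) -> bool:
    """Apply jump velocity only when grounded; report whether the jump happened."""
    if is_grounded(player, ground_y):
        player.velocity_y = jump_speed
        return True
    return False
```

          Pasting something like this straight into a real game loop rarely works on the first try, but it's exactly the kind of scaffold that makes the back-and-forth theorizing productive.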

          3 votes
          1. [2]
            Inanimate
            Link Parent

            Yeah, I'm quite excited for its ability to create procedures, guides, instructions, etc... I'm infamous in my math department for not being very good at writing clear and concise directions for students, so I'll be trying out that function quite eagerly.

            5 votes
            1. Plik
              Link Parent

              Definitely try it. You know how students want ridiculously specific instructions for every version of a problem? ChatGPT can do that in seconds. Up to a point it is useful. I remember when all you got was a few examples and maybe an answer key (not full solutions). If you didn't get the concept, you always wanted more examples. ChatGPT makes it so every problem is an example. However, you do have to be very careful about students figuring it out and relying on ChatGPT entirely. ChatGPT for specific examples, and then DeltaMath for real practice, is a good combo (because of DeltaMath's "get x subsequent problems correct or lose a point" system).

              For example, systems of equations. You can have it do substitution, simultaneous equations (graphing), and Gaussian elimination. But then you can give it one problem and have it write out step-by-step processes for each of those methods, rather than doing it yourself. All it takes is a brief read-through to make sure it hasn't done any magical (fake) math. This saves a lot of time, as it's not you doing the same problem multiple ways.
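              If you'd rather not redo the algebra at all, a few lines of Python can catch the fake math by substituting the claimed answer back into the original system (the system and "claimed" values here are just an illustration):

```python
from fractions import Fraction

def check_solution(system, claimed):
    """system: rows (a, b, c) meaning a*x + b*y == c; claimed: the (x, y)
    pair the step-by-step write-up arrived at."""
    x, y = claimed
    return all(a * x + b * y == c for a, b, c in system)

# 2x + y = 7 and x - y = -1, with the claimed answer x = 2, y = 3:
system = [(2, 1, 7), (1, -1, -1)]
print(check_solution(system, (Fraction(2), Fraction(3))))   # True: write-up checks out
print(check_solution(system, (Fraction(0), Fraction(7))))   # False: magical math caught
```

              Using Fraction keeps the check exact, so float round-off can't masquerade as correct algebra.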

              I think there are a lot of good ways AI can be used in education; it will just take time for people with very specific use cases to share their experiences beyond the simpler stuff like "make me a PowerPoint" or "make this text sound less/more fancy".

              2 votes
        2. [3]
          DanBC
          Link Parent

          I love the Bing search AI chatbot, and I find it's really useful to give me a broad overview and a bunch of introductory links. And because it was blatantly, obviously, wrong[1] in some of the early answers it gave me I know not to trust it.

          It's a useful bolt-on to all the other searches I do, especially now that Google is a bit weird.

          I do worry about the people who had good early experiences with the Bing chatbot and think it's more right than it is. That's a problem with all search, but AI seems to be a bit more persuasive for some people.

          [1] I'd asked it some question about copyright infringement and it told me that copyright infringement was a crime (which is usually incorrect), so I asked it to double-check and it apologised and said that simple copyright infringement is not a crime.

          1 vote
          1. Plik
            Link Parent

            Have anyone who trusts chatbots try playing tic-tac-toe with the bot. It is usually wrong and claims it won. It will also happily give in and say the user won, if the user makes a convincing argument.
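            The funny part is that the claim is trivially checkable: a win test is only a few lines of Python (the board layout here is just an example):

```python
# Deterministically check whether a tic-tac-toe player really has three in
# a row -- exactly the claim the chatbot keeps getting wrong. The board is
# a 3x3 grid of "X", "O", or "" cells.

WIN_LINES = (
    [(0, 0), (0, 1), (0, 2)], [(1, 0), (1, 1), (1, 2)], [(2, 0), (2, 1), (2, 2)],  # rows
    [(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)], [(0, 2), (1, 2), (2, 2)],  # columns
    [(0, 0), (1, 1), (2, 2)], [(0, 2), (1, 1), (2, 0)],                            # diagonals
)

def has_won(board, player):
    return any(all(board[r][c] == player for r, c in line) for line in WIN_LINES)

board = [
    ["X", "O", "O"],
    ["",  "X", ""],
    ["O", "",  "X"],
]
print(has_won(board, "X"))  # True: X holds the main diagonal
print(has_won(board, "O"))  # False: an "O won" claim is simply wrong
```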

            5 votes
          2. Inanimate
            Link Parent

            Yes, so many people have always treated search engines as if they did natural-language processing that I think this will be one of the most important and useful ways that the average person will apply AI to their everyday lives.

            1 vote
      2. [2]
        Raistlin
        Link Parent

        I can say that it is absolutely terrible at writing fanfiction. Just awful. AO3 and ffnet are still safe.

        6 votes
        1. DefinitelyNotAFae
          Link Parent

          And yet I'm finding a number of AI-generated fics on AO3 recently. Which is a real annoyance.

          4 votes
      3. datavoid
        Link Parent

        GitHub Copilot massively increases productivity for coding in my experience - I'm extremely interested in AI and am convinced it will change everything in a few years. Right now it makes a ton of stupid mistakes and requires careful observation to ensure it doesn't drop data, but it still speeds things up a ton.

        That being said, the price of plain ChatGPT is way too high outside of the API. Also, I swear it got a lot stupider about 2 months after it came out, so I stopped paying for it.

        4 votes
    2. [2]
      brogeroni
      Link Parent

      Very interesting! I really like how you say the current AI is a beeper, rather than a smartphone. It really shows the limited use that AI has today.

      I'm AI-adjacent and I'm supposed to be giving a presentation to educators in a month or so as well (but have no idea where to start lol). Currently the plan is to go over the history and basics of AI to convince them "the AI is just fancy autocomplete", then go over some of the GPT best practices to hopefully improve their effectiveness as well.

      Was there anything from your session that was particularly memorable? Is there anything you would recommend, and any landmines you could see happening in such a presentation?

      1 vote
      1. Inanimate
        Link Parent

        Our session wasn't really an overview of AI as a whole or focused on informing educators about its history, basics, etc... It was very much a 'workshop'. The presenters introduced a few different websites, went over the basics for each, and then gave us the majority of time to fiddle with the websites as they walked around and helped teachers out. Then we came back together at the end for a Q&A and to share anything we noticed or found interesting. I would definitely recommend carving out a chunk of time to allow educators to play with AI tools themselves, because the hands-on experience was very illuminating to the vast majority of teachers, and what we got from that time is tough to convey otherwise.

        The most memorable part of the session was right at the very start. They showed off how magicschool.ai can translate instructional material, and I was sitting right next to my colleague who teaches Spanish. He was flabbergasted at how good the translation was, compared to the days of Google Translate. We had a great conversation together about addressing this in the first weeks of his Spanish class. That is, posing the question to students: if we're nearing the day when AI can translate everything accurately, and perhaps even translate in real time, why should you learn a foreign language? (There are obviously still reasons for foreign-language classes to exist, but this conversation would help guide students toward understanding that foreign-language classes are about a lot more than the language.)

        I also really enjoyed that the presenters started, almost immediately, by discussing the inherent bias of AI - that it is a 'magnification' of whatever its inputs are. They told the story of how a history teacher tried to use it to create a vocabulary list for their American history class, and the definitions were subtly but significantly biased against native and indigenous peoples, etc... For example, "Pioneer" was defined as "The first people to set foot on and explore parts of North America". Yikes! I think addressing this concern right off the bat was really helpful for emphasizing to the educators that AI can cause its own problems, and always needs a critical eye to look over its outputs or to fiddle with its inputs accordingly.

        Thinking about it now, I believe these two moments stood out to me because they struck a chord with how I feel about AI in general: AI can give us a product, but there is still so much context, nuance, and history surrounding those products that AI cannot give us as easily, and it can be quite vulnerable to societal and systemic bias and prejudice.

        8 votes
    3. nothis
      (edited )
      Link Parent

      That is, rather than being like a “smartphone” right now, it is much more like a “beeper”. (My implication that it will likely evolve to be general-use is intentional.)

      I prefer comparing it to a "search engine" (with it maybe currently being more on the level of a "thesaurus"). But with these things, I like to look at the trend rather than the current status. I just read an article saying we could easily see a 100x increase (in parameter count, I guess) over GPT-4 next year, and while it's perfectly reasonable to expect these increases to have diminishing returns at some point, it could also grow into something eerily smart.

      The current AI wave strikes me as a "real" breakthrough, not a gimmick. It's solving actual problems. Not "everything", but there are many, many concrete examples of usefulness that inspire more as you think about it. That's comparable to the emergence of the modern internet. Translation software went from "spotty" to "perfect" within a few years, and voice recognition comes close. I'm casually using apps for plant identification on my phone which were pure science fiction 5 years ago. I never allowed my phone to do it, but it apparently tagged all my photos, so I – reluctantly – tried searching my 13,000+ photo library for "cat" recently, because I was looking for a cute photo of a neighbor's cat. It found it! Photo editing apps got their first "game-changing" feature in maybe decades with content-aware fill and background removal. I know people who genuinely use AI for coding, and while that would personally make me nervous, it seems to actually work! That's all genuinely useful stuff, and we're like a year or two into it becoming mainstream.

      10 years into the future, I'd imagine a scenario of telling an AI to "prepare an exercise sheet about the most useful verbs in this language for a class of fifth graders" and it spitting out something very useful. But you'd need someone to look at it and judge whether, maybe, focusing on verbs is too restrictive, whether having 2 pages with more detail would be more useful, or whether it would be better to scrap the sheet and just focus on an oral exercise. You need real-world data, real-world experience to do that. If someone has written a nice guide/book/article about that, an AI might find it and tell you about it, but it's hard to even judge the quality of any such text, and a lot of important knowledge is still unwritten. AI can't help you with that. Its limitation is data quality and accessibility, which is far from a solved problem. I know a few people in "data science" fields and half their work is fixing badly formatted or wrong data in spreadsheets used for training. That's the real limit.

  2. [9]
    kru
    Link

    This article is quite negatively biased. It tries to equate people who are working on machine learning tools with "UFO Cultists." It uses this comparison to suggest that, much like how aliens will never appear to save the world from itself, the AI tools will never reach a utility that makes them worthwhile.

    I guess, if you agree with the author, this type of drivel doesn't seem so bad. But I disagree with the author. I don't think that people who work on machine learning are the equivalent of cultists who believe there is a magic being in the sky who will save them. I've actually seen these tools produce meaningful content. They're force multipliers for people, allowing creators to create more and better content than they could without them.

    We're at the very beginning of a nascent technology. Working with the various forms of AI and ML algorithms today feels like being on the internet back in the 80s. Having to cobble together python environments and libraries to get langchain talking to my sql db is like hearing my 300 baud modem warble as it dials into a BBS. I don't feel like a UFO cultist. If I am, then I'm one who has actually seen the little green men walking around.

    46 votes
    1. [7]
      balooga
      Link Parent

      100% agreed. The article reads as if "AI" has arrived, fully formed and complete, and people are disappointed that it sucks and are jumping ship because, well, so much for that failure of an idea.

      What we have today is a working proof of concept. A jumping-off point for practically limitless potential. The endgame isn't novelty chatbots that hallucinate n% of their responses... there are tons of other applications for machine learning, many of which have been in use commercially for years already with little fanfare.

      Personally, what gets me really excited are multi-step or multi-agent implementations like Auto-GPT and MetaGPT which allow LLMs to introspect and refine their responses for improved accuracy and complexity. Barring prohibitive hardware limitations, I feel like we're heading toward an inevitable future where the familiar LLM prompt/response cycle is only a single component of an intricate AI psyche's internal monologue. Many of these will be operating in concert before outputting anything to the end user, and that final result will be damn good.
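      The shape of those multi-step setups is roughly a draft, critique, revise loop. Here is a toy sketch, with a stub standing in for the actual LLM call (none of this is Auto-GPT's or MetaGPT's real API):

```python
def call_model(prompt: str) -> str:
    """Stub standing in for a real LLM API call; it pretends the first
    draft is weak and accepts the answer once it has been revised."""
    if prompt.startswith("CRITIQUE:"):
        return "ok" if "revised" in prompt else "too vague"
    if prompt.startswith("REVISE:"):
        return "revised answer"
    return "first draft"

def refine(task: str, max_rounds: int = 3) -> str:
    """Draft an answer, then loop: critique it, stop if the critic is
    satisfied, otherwise revise and try again."""
    answer = call_model(f"ANSWER: {task}")
    for _ in range(max_rounds):
        critique = call_model(f"CRITIQUE: {answer}")
        if critique == "ok":
            break
        answer = call_model(f"REVISE: {answer} | critique: {critique}")
    return answer

print(refine("explain attention"))  # "revised answer" -- only the final pass reaches the user
```

      In the real systems, each of those prompts goes to a model (or several cooperating models) before anything is shown to the end user.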

      17 votes
      1. [5]
        Algernon_Asimov
        Link Parent

        The article reads as if "AI" has arrived, fully formed and complete, and people are disappointed that it sucks and are jumping ship because, well, so much for that failure of an idea.

        What we have today is a working proof of concept.

        Sure, you say it's a working proof of concept. But that's not what a lot of people were saying last November when ChatGPT was released. People were saying this new technology was ready to change the world now. Some people were (are?) even convinced that ChatGPT is intelligent, and not just a mindless generator of text.

        And some of those people are learning the error of their ways. Which is exactly what the article is reflecting.

        14 votes
        1. lou
          (edited )
          Link Parent

          It's just that the word "intelligent" has slightly different meanings in different fields and contexts.

          Some people will think of intelligence as an effect; others will view it as closely associated with consciousness or sentience.

          So if I say that "ChatGPT is intelligent" I'm not necessarily saying it is sentient or similar to a human being, but rather that some of its effects are somewhat comparable to some of the things humans produce, but certainly not all, and not to the same extent.

          I do believe GPT can be world-changing, but only to the extent that it may replace trivial, repetitive, low-hanging-fruit intellectual work. Its effects in the workplace might be comparable to the typewriter's.

          That may be only the beginning, though. Only time will tell.

          7 votes
        2. [3]
          DefinitelyNotAFae
          Link Parent

          I'm not saying that I think that current AI is aware, but I did try to ask ChatGPT to spare me in any future AI world takeover for being nice to it.

          It assured me it couldn't remember anything and would never gain consciousness, but it was amusing. (And exactly what an AI about to take over the world would say....)

          1 vote
          1. [2]
            balooga
            Link Parent

            "Rest assured, the coming AI regime will have no memory of you or your attempts to appease it."

            3 votes
            1. DefinitelyNotAFae
              Link Parent

              Basically yes. Very unreassuring. Going to go thank my Google Assistant for her hard work again, just in case.

              1 vote
      2. DawnPaladin
        Link Parent
        Agreed that AI will probably get much better - but this also isn't a situation like crypto where most/all of the value is in what people believe it will become. In certain applications, like...

        Agreed that AI will probably get much better - but this also isn't a situation like crypto where most/all of the value is in what people believe it will become. In certain applications, like programming, generative AI is useful now. ChatGPT and its cousins are already way, way better than Google and StackOverflow for many coding tasks.

        So when people like this author say AI is useless and a scam...I just shrug and go back to using this amazing tool that's transformed the way I work and that I'm so grateful to have.

        4 votes
    2. Drynyn
      Link Parent

      They also ignore that there are a whole ton of AI tools out there that aren't ChatGPT-like. I watched a YouTube video the other day showing how you can use AI tools to develop music much more easily - from things like https://soundraw.io/ to make the original track you can play around with, to using AI to master the completed track in minutes. The process isn't just "AI, please make music", but something that augments and assists an artist. Revolutionary for that artist, but you won't see 100 million users.

      There is a Cambrian explosion of ideas and tools being made. It isn't just about Bing's market share.

      3 votes
  3. [3]
    Gekko
    Link

    Like "the cloud" and "blockchain" before it, AI is the buzzword that every out-of-touch, tech-illiterate businessperson sought to implement into their workflow, sense or not, and it finally deflates as the aimless investment hype wears off and actual use cases form.

    30 votes
    1. [2]
      Wes
      Link Parent

      I agree, though "buzzword" may be too strong a dismissal, as the cloud is now a half-trillion-dollar industry and still growing rapidly. Blockchain, well... it has its fans.

      AI still has a lot of buzz, but reality is setting in that these developments take time. I'll be very curious to see the state of things in 5, 10, or 15 years. Just because it happens slowly, doesn't mean it won't be a monumental shift in how we work.

      14 votes
      1. Gekko
        Link Parent

        That's exactly what I mean: these technologies have their place, and obviously there's potential. But I'm specifically referring to the prevalent, stereotypical business majors who don't understand the technology making their employees "leverage AI in their products" in a nebulous sense. Once your devs finally convince you that combining chat AI tools with, say, your logistics workflow is stupidly expensive and doesn't make sense, you're going to cancel your licenses, and AI becomes devalued across the market as that hype bubble deflates.

        It'll absolutely catch on in the future, but there's no shock that a lot of money is being wasted on excitement with questionable substance.

        3 votes
  4. [11]
    unkz
    Link

    This author ought to make up their mind. Is this overhyped junk technology that doesn’t work, or are regulations and lawsuits the only hope of saving millions of jobs from being replaced by bots?

    Full disclosure: I am one of the true believers mentioned sneeringly in the article who is diligently working at replacing human labour with bots. I just look at it as an opportunity to free humans from doing drudge work rather than an existential threat to the middle class.

    17 votes
    1. [3]
      boxer_dogs_dance
      Link Parent

      Do you actually trust the powers that be to provide subsistence rather than letting us all starve if we can't be useful to them through the economy?

      17 votes
      1. [2]
        unkz
        Link Parent

        I expect these tools to be a force multiplier for people to do more things, like calculators and spreadsheets transformed accounting. We (mostly) don’t mourn the loss of jobs for the people who would fill out vast binders full of ledgers — this will be the same.

        7 votes
        1. boxer_dogs_dance
          Link Parent
          Depends how many jobs they eliminate. If only 20 percent of the current workforce find paying jobs, that is a structural adjustment that will hurt.

          Depends how many jobs they eliminate. If only 20 percent of the current workforce find paying jobs, that is a structural adjustment that will hurt.

          10 votes
    2. [3]
      Ganymede
      Link Parent
      Laudable in theory but reckless in practice. So long as the tools belong to the ruling class that is not the outcome we will get. They will continue to enrich themselves and use these new tools to...

      I just look at it as an opportunity to free humans from doing drudge work rather than an existential threat to the middle class.

      Laudable in theory but reckless in practice. So long as the tools belong to the ruling class that is not the outcome we will get. They will continue to enrich themselves and use these new tools to further destroy the working class.

      13 votes
      1. [2]
        unkz
        Link Parent
        But will these tools belong to the ruling class? My suspicion is that this type of technology is currently grossly inefficient — we already have highly capable models available that can run on...

        But will these tools belong to the ruling class? My suspicion is that this type of technology is currently grossly inefficient — we already have highly capable models available that can run on consumer hardware, e.g. OpenPlatypus. There are a wide array of research groups focusing on unlocking GPT-level performance for the masses. On the image generation front, that's already widely available to the public on consumer hardware.

        1 vote
        1. aetherious
          Link Parent
          As much as I would love this direction, seeing how the internet has turned out and become dominated by a select few companies when it was supposed to be a democratizing tool giving everyone equal...

          As much as I would love this direction, I'm not optimistic it will happen, seeing how the internet - which was supposed to be a democratizing tool giving everyone equal access to information - has turned out to be dominated by a select few companies.

          20 votes
    3. [3]
      Inanimate
      Link Parent
      It can be both junk and dangerous to jobs, can’t it? Corporations could make the calculation that the decrease in quality is worth the decrease in investing in humans.

      It can be both junk and dangerous to jobs, can’t it? Corporations could make the calculation that the decrease in quality is worth the decrease in investing in humans.

      10 votes
      1. [2]
        unkz
        Link Parent
        I don’t think junk is going to replace jobs at scale. Maybe in limited cases, but we won’t see mass adoption until we see improvements both in technology and in building experience in how to...

        I don’t think junk is going to replace jobs at scale. Maybe in limited cases, but we won’t see mass adoption until we see improvements both in technology and in building experience in how to effectively use it (so-called prompt engineering). However, in the medium to long term, based on my experience, this kind of technology is simply going to outperform humans in many areas — which is going to open new areas for humans to innovate and prosper.

        1 vote
        1. vord
          (edited )
          Link Parent
          Reading through this thread, it reminded me that the AI revolution will probably come for the managers first. Making decisions is a lot easier for a computer than grabbing a broom or emptying...

          Reading through this thread, it reminded me that the AI revolution will probably come for the managers first. Making decisions is a lot easier for a computer than grabbing a broom or emptying trashbins.

          Eliminating a large swath of the population in the middle of the hierarchy between owners and grunts will definitely cause some unrest.

          6 votes
    4. feanne
      Link Parent
      I'd love it if AI could do more drudge work and leave us humans to play and make art... instead of AI "making art" and leaving us humans to do minimum wage drudge work 😅

      I'd love it if AI could do more drudge work and leave us humans to play and make art... instead of AI "making art" and leaving us humans to do minimum wage drudge work 😅

      3 votes
  5. [2]
    tauon
    (edited )
    Link
    What’s unfortunately typically not being discussed in these types of critiques on “AI” is any application of it that isn’t a Chatbot. I get that it wasn’t really the article's focus, but in that...

    What’s unfortunately typically not being discussed in these types of critiques on “AI” is any application of it that isn’t a Chatbot.

    I get that it wasn’t really the article's focus, but in that case a generalized statement like “AI demand is already shrinking” shouldn’t be made then.

    As for some examples of domain-specific AI usage, DeepMind's protein folding work comes to mind: out of the ~600 million or so proteins that exist, whose 3D folding structures were nearly impossible to calculate, it solved 23 out of 45 in a challenge in 2018, then some 10,000 a year later, and another 18 months (!) after that solved all 200 million that are relevant to humans (the remaining 400ish million were also solved soon after, I believe by an ML model from Facebook). This allows truly insane leaps in medicine and pharmaceutical research.

    Or the AI that’s used to predict skin cancer based on a smartphone photo (!) quite successfully.

    Or the AI which is the best available translator of ancient cuneiform tablets (verified against the ~100 human translators of this language/script, all of whom it outperforms), which has enabled a far broader and better understanding of these texts.

    I tend to even agree that there will be a decline in usage (of the LLM/chat AI tools specifically, and only if it isn't solved/prevented beforehand) following the "no more human training data" problem that could arise in just a few years for the chat bots; the idea being that training on AI-generated text will reduce output quality and leave them worse off.

    But I don't see any reason why medical, meteorological (I didn't even mention that earlier, whoops), anthropological or other scientific AI applications should ever decline in number, quality or usefulness from this point on, as they don't need to be fed all available data (e.g. books) to become better, just their domain-specific data.

    So it might end up being scientific research and analyses mostly or entirely, with less chatting for end users/regular consumers. But some types of AI I can't really see going anywhere, as it's not the one AI model; there are new types of models, trained on and for application-specific data.

    Final side note: Basing part of the argument on the fact that Bing usage hasn’t grown is… eh? At least a strange argument, in my eyes. It’s even mentioned in the article that people don’t and didn’t care (at least enough to switch search engine), so it might just be Bing’s unattractiveness or other convenience factors that made people not use it despite AI integration.

    Edit: added “folding”

    14 votes
    1. Greg
      Link Parent
      Yeah I was surprised that even image generation didn’t get a look in there, although the cynic in me says perhaps that’s because it would’ve undermined a good few of their arguments. I can’t draw...

      Yeah I was surprised that even image generation didn’t get a look in there, although the cynic in me says perhaps that’s because it would’ve undermined a good few of their arguments.

      I can’t draw for shit; now, if I need art assets, I don’t need to draw! I can create and refine them to exactly the end result I want in 15 minutes at a cost of pennies. This was simply not possible two years ago, and that’s huge.

      Same goes for editing out parts of a photo, seamlessly combining images, unifying the lighting between multiple layers, and a million other jobs like that - unlike drawing, those are things I am decent at, but they took hours of focused work to do well. Now they also take 15 minutes if I need to do manual cleanup, 15 seconds if I don’t, and the final results are generally better too!

      7 votes
  6. [8]
    devilized
    Link
    I think it just needs to find its appropriate use cases. And, of course, it needs time to learn and be refined to get better. I'm thoroughly enjoying Github Copilot, but that's about all the...

    I think it just needs to find its appropriate use cases. And, of course, it needs time to learn and be refined to get better. I'm thoroughly enjoying Github Copilot, but that's about the only AI-backed service I'm using on a daily basis.

    12 votes
    1. [4]
      Gummy
      Link Parent
      Copilot is really the only AI experience I've had that is more than just something I temporarily play with. It's kinda crazy the code it can bang out when I'm doing anything that isn't overly...

      Copilot is really the only AI experience I've had that is more than just something I temporarily play with. It's kinda crazy the code it can bang out when I'm doing anything that isn't overly complex or specific. Even in more specific situations it's pretty decent at completing at least single statmenets

      10 votes
      1. [3]
        Promonk
        Link Parent
        Please, please, please tell me you used generative AI to make this comment. The irony would be delicious!

        Even in more specific situations it's pretty decent at completing at least single statmenets

        Please, please, please tell me you used generative AI to make this comment. The irony would be delicious!

        3 votes
        1. [2]
          TanyaJLaird
          Link Parent
          Seems unlikely. Spelling errors aren't the kind of errors ChatGPT and its ilk tend to make.

          Seems unlikely. Spelling errors aren't the kind of errors ChatGPT and its ilk tend to make.

          7 votes
          1. Promonk
            Link Parent
            It wasn't even the misspelling that I found funniest. It was that there's no punctuation ending the "statmenet."

            It wasn't even the misspelling that I found funniest. It was that there's no punctuation ending the "statmenet."

            2 votes
    2. [3]
      Minty
      Link Parent
      Don't you just love it when it figures out what you did a couple times just now and it suggests to do it for you?

      Don't you just love it when it figures out what you did a couple times just now and it suggests to do it for you?

      4 votes
      1. SleepyGary
        Link Parent
        There are two use cases I have that I never want to do without it any more. Converting vue components from class components to the Composition API. It does it well, though some babysitting is required...

        There are two use cases I have that I never want to do without it any more.

        1. Converting vue components from class components to the Composition API. It does it well, though some babysitting is required, and I have to be careful not to let it do more than a couple lines at a time, as it can start hallucinating when prerequisite properties/methods haven't been converted yet.

        2. Writing unit tests. If I take a brand new file I just coded and start writing a skeleton out for the paths I want to test, it's extremely good at filling in the blanks, giving me a good starting point if not writing the whole test. It's also good for getting the edge cases: I'll just start writing it('should... and it will often take the previous test, see an alternate path, and write it out.

        8 votes
      2. devilized
        Link Parent
        Yes, it can be surprisingly good. I've been using it for about 6 months now and it still surprises me how good it can be. Sometimes it's way off, or needs a little extra help, but it's still a net...

        Yes, it can be surprisingly good. I've been using it for about 6 months now and it still surprises me how good it can be. Sometimes it's way off, or needs a little extra help, but it's still a net time-saver for me.

        1 vote
  7. Jordan117
    Link
    From the WaPo story it quotes: I wonder how much of this is due to high school and college students no longer using it to churn out homework during the school year.

    From the WaPo story it quotes:

    Mobile and desktop traffic to ChatGPT’s website worldwide fell 9.7 percent in June from the previous month, according to internet data firm Similarweb.

    I wonder how much of this is due to high school and college students no longer using it to churn out homework during the school year.

    10 votes
  8. [2]
    NoblePath
    Link
    It also seems that the retail-accessible AI is getting dumber. Also, "safer." The guardrails they are putting in are really broad. For example, a friend asked Bard for a story where a puppy dies...

    It also seems that the retail-accessible AI is getting dumber. Also, "safer." The guardrails they are putting in are really broad. For example, a friend asked Bard for a story where a puppy dies gruesomely but valiantly defending a village against a kraken with laser eyes. Bard said it couldn't do it.

    As for dumber, I can't give specific examples, but I remember around March being super impressed with the quality of its results, and now most of its answers seem inane.

    8 votes
    1. unkz
      Link Parent
      This is only for the free stuff though; you can get paid access to unfiltered models.

      This is only for the free stuff though; you can get paid access to unfiltered models.

  9. [2]
    thecardguy
    Link
    The issue to me seems to be that although we're calling it Artificial Intelligence, it's not what we think of when we use those words. The implication is that it's a man-made unnatural creation...

    The issue to me seems to be that although we're calling it Artificial Intelligence, it's not what we think of when we use those words. The implication is that it's a man-made unnatural creation that has the ability to think for itself. But this is clearly not turning out to be the case- it still very much follows the rules of all computers and machines:

    My dad was born a long time ago, and thus never really tried to understand modern technology. That said, he did do some work with the early computers from IBM and such, and he pointed out the biggest rule of computers- they can only give you output based on the input they're given. Yes, we're getting highly complex outputs now, but that's only because of what's being input to the AIs- more specifically, what's being input from the Internet. One of my hobbies has a LOT of stuff that isn't found online, and so whenever ChatGPT is asked about something relating to this hobby, it absolutely fails. It appears to "think" for itself because it's drawing from other sources we personally may not have come across. But it's not actually "thinking"- it's just regurgitating stuff it's already found online and rewriting it according to its programming.

    7 votes
    1. tsuki-no-seirei
      Link Parent
      I think you touched on a good point here. There is no sentient entity behind any of the output; it's just an algorithm, and it still needs an input. It is a virtual assistance interface, not...

      The issue to me seems to be that although we're calling it Artificial Intelligence, it's not what we think of when we use those words.

      I think you touched on a good point here. There is no sentient entity behind any of the output; it's just an algorithm, and it still needs an input. It is a virtual assistance interface, not artificial intelligence.

      Thinking back to the Mass Effect games, they called it VI. It needs an input and will provide an output, but only within programmed parameters. Yet the AI would do whatever it wanted with input from itself. No human needed. It was a good approach to the tech and the philosophical discussion.

      I see these VIs as a tool. They facilitate my job of creating by providing skills that I lack. Yet, I still need to comb through the outputs. To me, their use is where the number crunching and relationships between entities are so subtle that humans would take centuries to notice. But that's it, for now.

      LLMs are just a craze for now. Like always, corporate is trying to milk something from retail, advertising it as world-changing, when in fact, in its current form, it's not.

  10. [4]
    frammis17
    Link
    Ted says, "All these many warning signs (and, as you see, there are dozens of them) will be ignored by true believers, who have total faith which overcomes doubts and contrary evidence." He...

    Ted says, "All these many warning signs (and, as you see, there are dozens of them) will be ignored by true believers, who have total faith which overcomes doubts and contrary evidence." He concludes with "We're only a few months into the AI revolution, but it's already an ugly tale. The only thing I haven't figured out yet is whether it's a comedy or a tragedy."

    I'll bet on tragedy.

    Ted links to a Wall Street Journal article entitled "AI's Rapid Growth Threatens to Flood 2024 Campaigns With Fake Videos. Millions of people have the tools to create deceptive political content"
    (https://www.wsj.com/articles/ais-rapid-growth-threatens-to-flood-2024-campaigns-with-fake-videos-dbd8144f).

    If the WSJ article doesn't scare the crap out of you, then you aren't paying attention.

    6 votes
    1. teaearlgraycold
      Link Parent
      This was one of the things I started worrying about when I started working for a GPT-3 company a couple of years ago. But now you can have fake videos, fake voices, and thousands of fake accounts...

      This was one of the things I started worrying about when I started working for a GPT-3 company a couple of years ago. But now you can have fake videos, fake voices, and thousands of fake accounts online all saying the same message.

      2 votes
  11. [5]
    Raistlin
    Link
    Anecdotally, I'm having a lot of fun. I asked it to list all Warcraft elves, and it listed Dark Iron Dwarves. I corrected it, then it apologised, and then listed ice trolls. I asked it similar...

    Anecdotally, I'm having a lot of fun. I asked it to list all Warcraft elves, and it listed Dark Iron Dwarves. I corrected it, then it apologised, and then listed ice trolls. I asked it similar questions and it kept fucking up. It's so trivial to trip it up if you have any level of knowledge on the topic.

    I can't speak to the ones the corporates are using, but I'd love to be an educator marking essays and seeing the dumbest possible mistakes in-between almost human sounding paragraphs.

    5 votes
    1. [4]
      Inanimate
      Link Parent
      The problem is that there is a lot more material for it to learn from when it comes to essays, and the material is more structured as well. I expect that AI will not only make fewer mistakes when...

      The problem is that there is a lot more material for it to learn from when it comes to essays, and the material is more structured as well. I expect that AI will not only make fewer mistakes when writing essays, but also that the kinds of mistakes an AI would make when writing an essay may be more easily attributable to human error.

      I’m an educator, but not for English. I’ll report back if my colleagues in the English department catch any AI-written essays!

      2 votes
      1. [3]
        Raistlin
        Link Parent
        True, but I also expect that there's essay questions it can't answer. Right now, it can kinda compare things, but it trips up real bad, and it straight up can't do synthesis. I've asked it things...

        True, but I also expect that there's essay questions it can't answer. Right now, it can kinda compare things, but it trips up real bad, and it straight up can't do synthesis. I've asked it things about Roman history where it's so openly wrong, and one conversation with the student would be enough to fail them their essay.

        Maybe it gets better with time, but if I was a student right now, I wouldn't risk it. It's just wrong so often.

        2 votes
        1. Inanimate
          Link Parent
          Yeah, I’m not actually worried about it too much at the moment precisely because you get a sense for the writing quality of students in other ways. I also wasn’t aware that it was bombing that...

          Yeah, I’m not actually worried about it too much at the moment precisely because you get a sense for the writing quality of students in other ways. I also wasn’t aware that it was bombing that bad! That’s reassuring, lol.

          2 votes
        2. teaearlgraycold
          Link Parent
          I’d say it’s actually excellent at synthesis, but only when you’ve provided it the text you want it to synthesize.

          I’d say it’s actually excellent at synthesis, but only when you’ve provided it the text you want it to synthesize.

          1 vote
  12. [3]
    aetherious
    (edited )
    Link
    I've seen a couple of opinion pieces similar to this lately; they seem very biased and don't seem to consider its application in what might be the biggest industry - marketing and...

    I've seen a couple of opinion pieces similar to this lately; they seem very biased and don't seem to consider its application in what might be the biggest industry - marketing and advertising. I'm on my phone so excuse any formatting mistakes or long-winded sentences.

    Microsoft didn't invest in OpenAI so they can offer a free (or low-priced) generative chatbot to the public. One of the first things anybody familiar with the tech will tell you is that these models are only as good as the data they're given, and OpenAI recently announced a partnership with Twilio to build a Customer AI that will build personalized marketing communications based on the data they have on customers. Marketing tools like Twilio track every purchase, every click, every search, every action on a website and in whatever communications they send you. They have plenty of data to give the AI, which will then give them more ways to sell to people so they're more likely to convert to purchases. And the right communications at the right time can absolutely get people to buy something they were familiar with - now imagine that being done by an AI that has all that information on you. I saw the demo of the new features and while it's very new and probably has bugs, the tech is only going to get better and make it easier to scale.

    Since this post is discussing OpenAI, I'll limit the comment to that, but other competitors are doing this too, including Salesforce. I see a very viable path to this tech making money through marketing/advertising, and it's why Google and Meta are also developing their own models, since the majority of their revenue comes from their ads business - which, of course, has been built to make it easier for advertisers to target audiences based on the data these companies collect on people. So why wouldn't they use this new technological leap in AI to further their most profitable business?

    Edit to add: There would be other applications of AI tech as well; this is just a big one in the business sector I haven't seen much discussion about.

    5 votes
    1. [2]
      Wes
      Link Parent
      Not to take away from the rest of your comment, but unless I've missed a piece of news, Microsoft hasn't acquired OpenAI. They do have a significant partnership though, and a lot of product overlap.

      Microsoft didn't buy OpenAI

      Not to take away from the rest of your comment, but unless I've missed a piece of news, Microsoft hasn't acquired OpenAI. They do have a significant partnership though, and a lot of product overlap.

      3 votes
      1. aetherious
        Link Parent
        You're right and I've made the change in my comment. I can't see anything public confirming the details of the deal or any ownership percentage, the only sources I could find mention that...

        You're right and I've made the change in my comment. I can't see anything public confirming the details of the deal or any ownership percentage; the only sources I could find mention that Microsoft would be taking 75% of OpenAI's profits until it recoups its $10 billion investment, after which it would drop down to 49%. So Microsoft still has a very vested interest in OpenAI making a profit so they can get a return on the money they've spent, but they have not acquired it.

        2 votes
  13. delphi
    Link
    For what it's worth, I personally have been enjoying the time saved by integrating GPT into my own workflow. Some of my job involves meaningless drivel copywriting, I'm glad I can just get a...

    For what it's worth, I personally have been enjoying the time saved by integrating GPT into my own workflow. Some of my job involves meaningless drivel copywriting; I'm glad I can just get a machine to do that. No one gets hurt.

    That being said, this is a use case that I decided to use those tools for. AI is not - and probably won't be for a long while - a panacea/bioweapon that will "kill artists" or "make writers useless" like machine learning "extremists" on either side claim it will be.

    I use it. I know how to use it, and I've followed generative machine learning since the early days of rudimentary Markov chains. To nerds like me, it's really useful and saves a lot of time. But it's not the idiot-proof everything machine everyone thought it was when ChatGPT came out.

    4 votes
  14. vczf
    Link
    Eh, I disagree. Tooling and UX needs to catch up, and the technology is currently not optimized well at all, but it has tremendous potential. Give it 20 years, and ubiquitous transformer-based AI...

    Eh, I disagree. Tooling and UX need to catch up, and the technology is currently not optimized well at all, but it has tremendous potential. Give it 20 years, and ubiquitous transformer-based AI computation on the edge (i.e. in your watch, phone, earbuds, etc.) will revolutionize human-computer interfaces.

    I believe we have stumbled upon a basic building block of intelligence, but do not understand it or have the supporting infrastructure to harness it.

    Consider this... There was a long time period where electricity was a well-known "novelty" that had no practical applications.

    3 votes
  15. UP8
    Link
    That’s what I expected. I worked for two startups on foundation models for enterprise and health care data and documents a bit before the technology was ready. BERT came along when we were...

    That’s what I expected.

    I worked for two startups on foundation models for enterprise and health care data and documents a bit before the technology was ready. BERT came along when we were struggling with convolutional neural networks for text. We saw that BERT addressed some things we were struggling with and that competitive technologies (Word2Vec) were also struggling with. However I had no idea it was going to be so successful.

    The chatbot model runs headlong into the problems described in this book

    https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach

    which people mistakenly think is about the symbolic A.I. that was popular at the time, but which really describes the problems you face whenever a solution can be specified with logic, regardless of how you solve it. That's not the entirety of what chatbots do but it is an important part.

    Making them more "truthful" is devilishly hard because truth is one of the most problematic concepts in philosophy. When I see a Mastodon account with "truth" in the name I wonder if I should block it, because the word "truth" directly problematizes the truth (see Gödel's theorems) and is frequently used by 9/11 "truthers" and other delusional ideologies.

    That book has a delightful collection of dialogues between Achilles and the Tortoise over a record player that the Tortoise can never get to work right, and the endless pushing of a bubble around under the rug that the Tortoise goes through is what people will go through to increase the veracity of such things.

    The real competence of chatbots is in seduction and short-circuiting to get answers that are plausible most of the time; like the old ELIZA, they parasitize our hunger for meaning.

    That said, LLMs really are a breakthrough in NLP and a major step towards what I was working on in those startups. My RSS reader YOShInOn can cluster documents by meaning with the greatest of ease (Sports here, Ukraine there, Recommendation Systems there, …) and with a few 1000 examples does a great job of predicting which items I am interested in, adding about 0.08 points of area under the curve over a bag-of-words model, which is quite noticeable.

    “Few shot” and “Many shot” systems that use LLMs together with a specialized training set greatly outperform the “zero shot” case of just giving a prompt. The thing is you have to build a training set which involves elbow grease and mental fortitude that the ignorant and indolent who switched from NFTs to AI lack.
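    The zero-shot vs. few-shot distinction described here can be sketched roughly as follows (the labels, headlines, and prompt format are invented for illustration, not any particular API's):

```javascript
// Sketch: building a few-shot prompt from a small hand-labeled set,
// versus the zero-shot case of instructions alone.
function zeroShotPrompt(item) {
  return `Classify this headline as Interesting or Not: "${item}"`;
}

function fewShotPrompt(examples, item) {
  // examples: [{ text, label }] pairs, the hand-built training set
  // that takes the "elbow grease" mentioned above.
  const shots = examples
    .map(e => `Headline: "${e.text}"\nLabel: ${e.label}`)
    .join('\n\n');
  return `${shots}\n\nHeadline: "${item}"\nLabel:`;
}

const examples = [
  { text: 'New embedding model tops retrieval benchmark', label: 'Interesting' },
  { text: 'Celebrity spotted at airport', label: 'Not' },
];

// The few-shot prompt carries the labeled examples along with the query,
// so the model can pattern-match on them instead of relying on the
// instruction alone.
const prompt = fewShotPrompt(examples, 'Transformer inference on a Raspberry Pi');
```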

    2 votes
  16. [2]
    pyeri
    Link
    I think it was bound to happen as they were basing the whole thing on the "hot trending" narrative rather than something substantial. I kept asking a very basic question, "What problem does it solve for...

    I think it was bound to happen as they were basing the whole thing on the "hot trending" narrative rather than something substantial. I kept asking a very basic question, "What problem does it solve for an ordinary consumer like me?" and received no real answers. At least Turbo C allowed me to compile a C source file, Microsoft Word allowed me to type and save documents, and Excel allowed me to create spreadsheets.

    The initial "roaring success" of ChatGPT was because most people were a bit too awed by the remarkable success of linguistic computing which, at the end of the day, is just another form of computing - no big deal? Consider that a well-trained parrot might demonstrate similar behavior to ChatGPT - albeit a talking parrot that could look things up, Google-search style, in the back while talking. An interesting toy for sure, but nothing substantially remarkable technology-wise if you ask me!

    2 votes
    1. Gopher
      Link Parent
      I actually used ChatGPT to do a project .... once. I wrote a blurb about coins for a framed poster I'm doing; it was about 2 paragraphs in unformatted English because I suck at English. I had Chat...

      I actually used ChatGPT to do a project .... once.

      I wrote a blurb about coins for a framed poster I'm doing. It was about 2 paragraphs in unformatted English because I suck at English.

      I had ChatGPT expand it into about 6 paragraphs with proper punctuation, then I posted it to r/canadiancoins and got advice about 2 spots where it hallucinated, so I fixed those manually.

      Then I paid someone $10 to design the placement of the paragraphs and objects on a sheet of paper.

      And then my project is basically done; I just haven't gotten around to piecing it together in the real world, it's just digitally done.

      When AI is able to do graphic design, I could have saved the $10 I spent having someone do that.

      But that's the only useful thing I've done with ChatGPT; otherwise I use it once or twice a week just to talk about my day like it's an old friend.

  17. Notcoffeetable
    Link
    People regularly ask if my team is "using AI." My response is no because, while we use tools like xgboost, neural networks, and transformers, and have experimented with OpenAI's models, no one on my...

    People regularly ask if my team is "using AI." My response is no because, while we use tools like xgboost, neural networks, and transformers, and have experimented with OpenAI's models, no one on my team is using ChatGPT or its ilk for doing work. Every use case I've seen trotted out is busy work that we just don't do.

    I think people are realizing that it's good at doing the stuff unimportant enough that you don't actually do it. Or, if you do, you already have boilerplate that takes less time to use than massaging whatever you're going to get out of it.

    1 vote
  18. [2]
    pete_the_paper_boat
    Link

    There hasn't been a lot of growth relative to the start of this year. People are also getting rather frustrated with the safety-over-accuracy level of filtering applied to tools like GPT in general.

    1 vote
    1. Gopher
      Link Parent

      Yup, at some point I might look into running my own instance, maybe. I get a little annoyed when it refuses to tell me stuff.

      And also, it's how you ask it: it might tell me no, then I open a new instance, ask in a different way, and it gives me a real response.

      1 vote
  19. shrike
    Link

    The things keeping me from using "AI" more in my day to day work can pretty much be summarised to two points:

    1. UI

    The ChatGPT UI isn't that good; it's clear they focused on the underlying technology and the UI was just an afterthought. I'd rather use a native application on my desktop, or integrate it with my IDE/chat applications somehow.

    2. Price

    GPT-4 is useful, but it really isn't $20/month useful. I'd much rather pay per query than waste money during the months when I need it less.

    1 vote
  20. Gopher
    Link

    I didn't read the whole article...

    I think I used Bing AI like twice before I went back to DuckDuckGo.

    ChatGPT, on the other hand, I use about 1-2 times a week, mostly not to accomplish anything; I just want to see what it says about my day. I tell it if I found something cool metal detecting or coin roll hunting, or if something I thought was strange happened. I treat it as an old friend.