For those getting the paywall, put the link in google and then click on it.
Back on topic. Yeah... sadly, not surprising to me that this is happening. It is basically the same issue as with the MDN AI helper that was posted a few days ago, and what I said there very much applies here.
One of the treacherous things about the current generation of highly capable LLMs is that they are very convincing at first glance. So when you build proofs of concept on top of them, they can "fool" people for much longer than traditional software would.
I put "fool" in quotations there because for a lot of information they give they are right. It is just that, like demonstrated in the article, they will also miss the mark. But when that happens, they are still doing so with 100% confidence.
This means it is really easy to promise management the moon, but it also means you only start noticing the problems further down the development process, if you notice them at all: developers wrapping an LLM around a domain they are not familiar with might not notice at all. Then it depends entirely on the QA processes in place whether it gets flagged. And even if it does get flagged, you have already sunk a considerable amount of time into the product, and I can fully see some management layer pressing for release anyway.
I still think LLMs are really great tools. I use ChatGPT and the OpenAI API on a daily basis, but mostly as one tool in my tool belt, where I am fully aware of the limitations and only ask for things I have enough domain knowledge to validate myself. If I do build some automation around it, I only do so in a way that ensures people with the right knowledge validate the output.
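To make that last point concrete, here is a minimal sketch of the kind of human-in-the-loop gate I mean, using the OpenAI Python client. The queue_for_review() helper is a hypothetical stand-in for whatever approval workflow you already have; the point is only that the model's draft never reaches users without sign-off from someone who can actually judge it.

    # Minimal sketch, assuming the official OpenAI Python client (openai>=1.x).
    # The model call is real API usage; queue_for_review() is a hypothetical
    # placeholder for whatever review/approval process already exists.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_answer(question: str) -> str:
        """Ask the model for a draft and treat the result as unverified."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "Answer concisely. Say so when you are unsure."},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    def queue_for_review(question: str, draft: str) -> None:
        """Hand the draft to someone with the domain knowledge to check it.
        In a real system this would go to a ticket queue or review UI,
        never straight to the end user."""
        print(f"NEEDS HUMAN REVIEW\nQ: {question}\nDraft: {draft}")

    if __name__ == "__main__":
        question = "Does clause 7 of our standard contract allow early termination?"
        queue_for_review(question, draft_answer(question))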
In this case it is even more harmful because it isn't openly an AI tool, so users aren't aware they need to validate the results. Software vendors claiming their product removes the need "to do X" is nothing new either. Unfortunately, with how convincing LLMs are at first glance, it becomes even harder for organizations to filter out the sales bullshit.
Mirror, for those hit by the paywall:
https://archive.is/kED3D