I think articles like these might just be noise now.
There is demand for it. I work at an analytics company; our customers are demanding it, and when we deliver, they use it.
You wouldn't think that customers of a product that's meant to give them accurate answers would accept something like an agentic AI that often makes mistakes, but they are actually demanding it, paying for it, and then using it.
I hate it. I think the whole thing is gobbling up land and resources, using rocks to replace human workers; it's not sustainable, and the products being created are stupid. But the customer does actually want this nonsense.
So many people are using agentic browsers. Mozilla has no choice but to keep up.
but the customer does actually want this nonsense.
It depends on how they weigh "customer". His poll is clearly biased, but here's DDG from the article:
Another great example came from DuckDuckGo, which opened a poll asking whether you are for or against AI. After more than 175,000 votes, 90% of respondents said no to AI.
Again, skewed. But general polls I see land around 60-70% of users.
Now if you treat customers as money, then yes: a lot of money wants to say it wants AI. I think even that reached an inflection point with Pinterest's reaction to their layoffs, but we'll see.
They don't let you turn it off; hence the article title. Options are always nice, and we aren't given one. The bare minimum to let me hide my dissatisfaction is letting me turn the feature off. Meanwhile, Microsoft keeps pushing Copilot onto the search bar, and Google pushes pop-ups every dozen searches (I blocked most of those, but not the "are you interested in AI mode?" one). Nadella's reaction to the negativity doesn't suggest we'll get such options.
We need to get beyond the arguments of slop vs sophistication and develop a new equilibrium in terms of our “theory of the mind” that accounts for humans being equipped with these new cognitive amplifier tools as we relate to each other.
I'll be kinder than the article's interpretation and say this amounts to "the user's input doesn't matter; this is going to make us so much more productive!" Which still supposes a god complex, as if they know what's best for all of us. But "all of us" are very diverse. What use does a plumber have for AI day to day? Maybe some device they use? What about a grade-school teacher? A politician (past the bribes to deregulate it)?
This quote actually scared the shit out of me
We need to get beyond the arguments of slop vs sophistication and develop a new equilibrium in terms of our “theory of the mind” that accounts for humans being equipped with these new cognitive amplifier tools as we relate to each other.
I interpreted it as “we humans will no longer communicate with each other without the help of computers”
Which, like, fair: there are already spelling suggestions on mobile phones and most people use them. But I guess later on there will be entire "what do I say now" suggestions, which is especially nefarious considering how much of my high-school relationship chatter, and even my breakups, happened over text.
Plumbers have a business to run. The AI ghosts can't do the actual plumbing, but I imagine they might be able to handle some of the office work? Whatever software they have now might not seem as good in comparison?
It’s sort of like asking who needs their own website. Some businesses get by with a Facebook page, but if it’s easy to improve on that, they might see the appeal.
The OpenClaw hoopla shows that at least a large minority of semi-technical users are losing their minds over this stuff.
You sound like you work in B2B. In that case the users are very different from the buyers: your buyers are clamoring for AI, but if your AI sucks, the users most definitely aren't. Still, that surprises me. Vibe-coding analytics is right up there among the things LLMs can do reasonably well.
It's a complicated microservice infrastructure; Claude Code doesn't help us code around the token limitation from AWS, unfortunately.
I think it works like shit because I see all the bugs for it every day, but I guess it must work well enough for customers or they'd stop using it.
I think a lot of humans just have really low standards for what we accept as good enough when it comes to stuff like this, which is surprising, because our customers are healthcare providers.
Yeah, there was that study recently that found experienced coders (familiar with the codebase) were sure they were more productive with AI but weren't; they just thought they were.
I saw that, and I think it takes a bit of mindfulness to really use AI to be more productive when coding.
A lot of it boils down to whether you can get your entire codebase into Claude Code so that it has the full context of what you're doing.
Even then there are some limitations, because of context outside the codebase, like company infrastructure and feature requirements.
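On the "does the codebase fit in context" question, a rough way to check is a character count with the common chars-per-token heuristic. A minimal sketch; the 4-chars-per-token ratio and the 200k token budget below are assumptions, since real tokenizers and model limits vary:

```python
import os

CHARS_PER_TOKEN = 4        # rough heuristic; actual tokenizers vary
CONTEXT_BUDGET = 200_000   # assumed token budget; check your model's real limit

def estimate_tokens(root, exts=(".py", ".js", ".ts", ".go")):
    """Walk a repo and estimate the total token count of its source files."""
    total_chars = 0
    for dirpath, dirnames, filenames in os.walk(root):
        # prune typical junk directories in place so os.walk skips them
        dirnames[:] = [d for d in dirnames if d not in {".git", "node_modules", "venv"}]
        for name in filenames:
            if name.endswith(exts):
                try:
                    with open(os.path.join(dirpath, name), encoding="utf-8",
                              errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    pass  # unreadable file; skip it
    return total_chars // CHARS_PER_TOKEN

def fits_in_context(root):
    return estimate_tokens(root) <= CONTEXT_BUDGET
```

The tools mostly handle this for you now; the point is just that "entire codebase in context" is a budget question, and the budget runs out fast on real repos.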
I think lots of people are lazy and would rather spend more time having someone or something else do something instead of doing it themselves in less time.
I don’t mean lazy as an insult either. I think there is something inherent to humans that makes us want to accomplish our goals with as little energy expenditure on our end as possible.
That used to lead to the development of tools that allowed us to be more productive. Now, with agentic AI systems, an inefficient process that produces mediocre results scratches the same itch since the human involved still spent less energy to achieve their goal.
I'd replace "humans" with "mammals", and it's a well-known part of evolution: calories are historically expensive, and both movement and cognition use a lot of them, so organisms evolve to be as lazy as they can get away with while still surviving effectively.
I have the same theory, and I try not to think about it, because if you can afford to be that lazy about how you do your job, does your job even need doing?
I agree with a lot of what you're saying. Definitely agree that AI is being forced into many places where it just doesn't belong. Medium-sized companies (like Proton or Atlassian) are building their own chatbots/models to try to compete with the big players, and it just makes no sense to me. I think very few people are going to choose Lumo when big-name models like GPT, Claude, and Gemini are available.
With that said, I think you're understating the usefulness of AI a little bit. Maybe it's not "revolutionary" (I don't really know how to quantify that), but it is pretty damn useful. I've been using Claude Opus for work and for personal projects. It's good enough now to create an entire small project for you, and if you're creative about the information you give it, it can also help a ton with large codebases.
Not to soapbox too much, but as an example: this past weekend I got interested in identifying home-installation solar panels in aerial imagery, essentially replicating the Stanford DeepSolar study. Within a day, I had a complete Python pipeline that would take imagery I downloaded from state government sources (116 GB of GeoTIFFs), split the images into tiles, run image classification on the tiles to identify solar panels, and present the results in an HTML Leaflet app (including a tile server to display the GeoTIFF on the map for easy verification of results). I wasn't super pleased with the performance of the image classifier (identifying metal roofs as solar panels was one issue), so the AI even built me a labelling tool so I could work through a few hundred examples and label false positives to fine-tune the classifier with. Throughout the process, I could even take screenshots of the map and give those to Claude and, with enough explanation, it could identify issues in the screenshots and provide fixes.
It's very far from a perfect tool, and anyone calling an AI model their "friend" needs to get outside more and talk to some real people... but as a tool for software engineering and automation, it is really, really good. Better than it was a year ago, and way beyond a "glorified autocomplete".
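The tile-and-classify step described above can be sketched in miniature. This is a toy illustration, not the commenter's actual pipeline: the tile size is a guess, and the "classifier" here is a stand-in heuristic where the real project would call a trained model:

```python
import numpy as np

TILE = 256  # tile edge in pixels (an assumption; real pipelines vary)

def split_into_tiles(image, tile=TILE):
    """Yield (row, col, tile) for non-overlapping tiles of an (H, W, C)
    image array, dropping ragged edges for simplicity."""
    h, w = image.shape[:2]
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            yield r, c, image[r:r + tile, c:c + tile]

def looks_like_solar(tile_px):
    """Stand-in for the real classifier: flag dark, low-variance tiles.
    The actual project would run a trained CNN here instead."""
    return tile_px.mean() < 60 and tile_px.std() < 25

def detect(image):
    """Return (row, col) offsets of tiles flagged as 'solar'."""
    return [(r, c) for r, c, t in split_into_tiles(image) if looks_like_solar(t)]
```

The real version would presumably read windows out of the GeoTIFF on demand (e.g. with rasterio) rather than holding 116 GB in memory, which is exactly why tiling comes first.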
With that said, I think you're understating the usefulness of AI a little bit.
Economically, it doesn't make sense. It's a billion-dollar tool with trillions of dollars of investment behind it. It's not sustainable as it is now, from that angle alone.
That's why I expect a bubble pop and a soft reset, meaning we stop stuffing money into anything that mentions AI. Bring out real customer demand and real (current, not "in 10 years") benefits, which won't be as omnipresent as the industry is trying to portray.
I think you're probably right about the economics and I hope you're right about a soft reset. There is certainly a tremendous amount of waste happening right now for the sake of "the investors are all excited about this AI thing, so let's spend money on it".
I was more just trying to make the case that what we have today is useful and there is real demand for it today. But yeah, probably not enough demand to justify the trillions that are being thrown at it.
The financial markets have gotten rather enthusiastic, but I don’t think that tells us anything about whether AI makes sense for a particular customer. It also doesn’t tell us what prices for customers will look like after the froth boils off, because in the meantime there will be algorithmic improvements bringing costs down.
We're in the same boat! I use AI almost daily, although for simpler tasks, like translation (English isn't my native language) and simple JavaScript snippets (I don't code except for HTML and CSS).
My opposition — which I tried to express in the article — is more aligned with what @raze2012 said in other comments here: the overpromising and the bad practices of corporations that don't accept "no" for an answer.
Software pushing AI features is annoying even for people who sometimes like AI. Just because I’m using it in one way doesn’t mean I have any interest in using it in a different product.
Just because I’m using it in one way doesn’t mean I have any interest in using it in a different product.
It feels related to how, from Amazon's perspective, I'm a "customer" first and a "person" second (or never?). Their algorithmic suggestions just latch onto whatever they can to suggest more of the same.
I remember hearing someone talk about how they resisted using Amazon for years, but gave in to the same-day shipping because their toilet seat broke and they wanted a replacement ASAP. But because that was the only data point Amazon had on this customer, their follow-up emails were filled with suggestions for more toilet seats. From the simplistic algorithm's perspective, I get it: that's the only data point it has to work with, and it's gotta stuff something into the emails.
But as a person, they were like “thanks Amazon but I’m not some kind of toilet seat connoisseur, I don’t need dozens of toilet seat suggestions. I had a singular need, and that’s now been fulfilled.”
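A toy sketch shows why a single data point produces exactly that behavior. Assume the most naive "more of the same" recommender; the catalog and logic here are invented for illustration, and Amazon's real system is obviously far more involved:

```python
from collections import Counter

# invented toy catalog: item -> category
CATALOG = {
    "soft-close seat": "toilet seats",
    "wooden seat": "toilet seats",
    "heated seat": "toilet seats",
    "phone case": "electronics",
    "usb cable": "electronics",
}

def recommend(purchases, n=3):
    """Suggest unpurchased items from the customer's most-bought category.
    With a single purchase on record, the 'top' category is the only
    category, so every suggestion is more of the same."""
    if not purchases:
        return []
    top_category, _ = Counter(CATALOG[p] for p in purchases).most_common(1)[0]
    return [item for item, cat in CATALOG.items()
            if cat == top_category and item not in purchases][:n]
```

Feed it one toilet-seat purchase and every recommendation is another toilet seat, because the only category it has ever seen wins by default.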
My iPhone already suggests entire replies to me in iMessage. Gmail does similar stuff.
My iPhone is old and doesn't have the new AI stuff; could you actually have a whole personal conversation with those suggestions?
I see the Google suggestions in my Gmail, and honestly I forgot about that because the suggestions are like "that's great! See you then" or some other really standard chatbot reply you'd get when calling your bank.
Teams also does that. So far it is simply short rote phrases, but it exists.
Ah, I haven't used Teams since before ChatGPT existed. My company uses Slack; they probably offer it, but we didn't buy that feature, I guess.
Interesting times.