This is an interesting argument. Since you asked for new perspectives, I would argue that the biggest issue with this piece is that it is maximally positive on human efficacy and maximally negative on AI efficacy. Under those bounds, of course implementing more AI is going to look like a bad idea. But reality is more complicated.
Here's what I mean. The author cites the current accuracy of pilots (>99.999%) to demonstrate the accuracy of human-run systems and then writes, in the context of government services,
Even if the system is designed generously and cautiously, you're still replacing an accountable human decision with an automated system. Is it worth it if it's faster for the average user, but some percentage of people end up having to laboriously appeal a bad LLM decision? Again: thinking this works requires making huge assumptions about improvements in accuracy.
But government services aren't currently at 99.999% accuracy. Right now, more than 10% of Medicare claims are appealed and more than 80% of those appeals are accepted. That is abysmal in every direction and makes any claim that AI agents can only worsen the system immediately suspect. The whole piece is kind of like that.
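To make the gap concrete, here's a back-of-envelope sketch of what those cited figures imply (the 10% appeal rate and 80% acceptance rate are the numbers quoted above; the calculation only counts errors that were actually appealed and overturned, so it's a lower bound):

```python
# Lower bound on the Medicare error rate implied by the cited figures
appeal_rate = 0.10     # share of claims that are appealed (cited as >10%)
overturn_rate = 0.80   # share of appeals that succeed (cited as >80%)

# Claims known to have been wrongly decided: appealed AND overturned.
error_floor = appeal_rate * overturn_rate
print(f"Implied error floor: {error_floor:.0%}")  # 8%

# Versus the pilot accuracy the piece uses as its human baseline.
pilot_accuracy = 0.99999
print(f"Pilot error rate: {1 - pilot_accuracy:.4%}")  # 0.0010%
```

So the system being defended is, at minimum, thousands of times less accurate than the human benchmark the piece cites.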
And I agree with you re: maliciousness. LLMs are demonstrably less likely to be malicious than humans.
accuracy of pilots (>99.999%)
to add to that, pilots don't actually do very much piloting anymore.
and the other example, nurse, was very funny to me. I love all nurses but I really wouldn't rate their accuracy as anywhere close to 99%. We just don't advertise that to the patient; it would scare them off...
This blog post (not mine) argues that even assuming LLMs do become reliable enough, both sides of any dealing will be using them (customers and companies, administrations and citizens).
Since each LLM is chosen to act in favor of its own user only, this would push all dealings toward maximal maliciousness in a race to the bottom. Using an LLM would then become necessary while providing no (or even negative) utility compared to a world where no LLMs existed.
This sounds somewhat convincing to me, but apart from some "magical" human touch, couldn't whatever has stopped humans from being maximally malicious so far also apply to our LLMs? Couldn't we see some kind of reputation system emerge, making agents wary of dealing with humans known to employ unusually "malicious" agents?
I'm hoping to learn some new perspective here.
I think it’s quite hard to “imagine the traffic jam” accurately and in enough detail to do much in the way of planning, even if you’re pretty sure there will be traffic jams in some vague sense.
But it does seem likely that our AI minions will largely try to do what they've been told to do. They don't have enough context to know when to act against their owners. At best you can build in some ethical rules about things the AI should never do, for anyone. But even those could probably be overridden by giving the AI some misleading context.
So maybe we will end up with a system where everyone has their own AI lawyer advocating for their interests? And we’ll take for granted that this is an adversarial process.
Perhaps in such a world, the reliable reporting of verifiable facts becomes more important? We take it for granted now, but the overwhelming success of Wikipedia was surprising at the time. I wonder what the equivalent will be in this new era?