I'm not going to bother finishing this article, but just to argue with the headline a bit, some people can be sincerely worried about AI while others cynically use it to achieve their goals. It doesn't have to be one or the other.
More generally, every political movement will have true believers and exploiters. For example, politicians are often taking advantage of concerns that the public actually has. The existence of exploiters doesn't mean those concerns are illegitimate, though it does make the political situation more complicated.
Also, it seems like if Anthropic refuses to allow their tech to be used by ICE then that's a good thing? People are so certain that Palantir is evil. It seems good that a company might be deciding not to go a similar route?
The chattering classes are talking about AI because its trajectory is genuinely uncertain and unsettling. This is especially so for people working in the industry.
Unfortunately, I don't expect all this debate to result in any sort of coherent plan. The trajectory is going to be closer to "fuck around and find out" as all the different people involved act on their own agendas. The "Possessed Machines" article I shared yesterday shows what that looks like from inside the industry, where many people are trying to be thoughtful about it but it's hard to really change the system.
It's notable that the founders of both OpenAI and Anthropic tried to make sure that they weren't ordinary for-profit companies, but alternate corporate governance schemes are apparently not enough when the employees themselves have a good chance of becoming extremely wealthy.
Society is too divided to really put together a plan and act on it in the face of strong political and financial incentives. We don't know what's going to happen, which makes it hard to agree on what to do about it. It's a whole lot easier to gamble about what's going to happen or make money off of it than to come up with the collective will to do anything in particular about it.
You didn't read the article, so it's not surprising that you missed the central thesis:
As AI leads to significant changes in search, email, social networking, app stores, advertising, and so forth, Google will have the data and context to create a chokepoint that is even more dominant than the one it has today.
That is, unless we act. One obvious solution is to make sure that all agents have a clear fiduciary obligation to the person who deploys them. To force that, we would have to ban all advertising and all payments to finance agents except fees from the ultimate client. This policy would kill many of the bad incentives that Google has to manipulate pricing and the flow of commerce across the economy.
On the subject of the rest of your post, I'll quote the article here:
There’s also a full-on bullying effort towards anyone who doesn’t buy these extraordinary claims.
We're not saying LLMs are useless. They clearly can produce code and lots of text. But when bigwigs are spouting incredible claims about godlike AI destroying all white collar work, those claims require incredible evidence. I have not seen this evidence, except perhaps in the realm of plumbing/boilerplate code generation.
Matt, and myself, object to Anthropic's posturing about not using their AI research for war because it's obvious propaganda meant to trick people into presupposing that their technology IS incredibly dangerous and therefore valuable. I don't object to the words being said, I loathe the way that Anthropic and Amodei are twisting those words to suit themselves and trump up their valuation. Journalists take these CEOs and powerful people at their word, worsening the issue.
I'm having a harder and harder time talking to my family about this technology because the media narrative has completely jumped the tracks; most of my family think that ChatGPT is an oracle whose output can simply be trusted as truth. Who should they believe: the news, or me?
[...] most of my family think that ChatGPT is an oracle whose output can simply be trusted as truth. Who should they believe: the news, or me?
In case it helps, there are some recent, decent examples of frontier models displaying incredible incompetence. For context, that Reddit thread is discussing asking several LLMs to answer the following question (paraphrased): "The car wash is only 50 metres from my house. I want to get my car washed, should I drive there or walk?".
Many LLMs suggested walking, since it'd be a waste to drive 50 metres to the car wash.
(edit) Swapped out my link; someone went and tested this on a tonne of models.
Thanks. Unfortunately it has proven difficult to disprove optimistic marketing and propaganda with any amount of facts and anecdotes. With the AI booster crowd, I always hear "skill issue" or "well of course THAT model will get it wrong, you need to use GPZ 5.X Opus Dei available for $200/month". With the less technical, they nod, pretend to understand, and then go right back to trusting the Google AI Summary that shows up at the top of their search results. I keep hoping they'll grow distrustful if they get burned too many times with half-truths and inaccuracies, but it hasn't happened yet.
If you combine this with anecdotes of people getting great results, I think this is evidence that performance is uneven.
Mmhm, but that's the topic, no? I think you're agreeing with this line of discussion ...
[...] most of my family think that ChatGPT is an oracle whose output can simply be trusted as truth.
If the LLM can fumble such common sense questions, it would follow that its output cannot be simply trusted as truth. Unless you disagree?
Well, yes, not simply trusted, but trust is not simple. It depends on the domain and what you mean by trust.
I don't entirely trust my coding agent. I run it in its own VM and I keep backups, in case it decides it should delete the home directory or something. I don't have any important secrets in the VM. But sometimes I do run code it generated without reviewing it.
And if you combine anecdotes of alien abductions being real with anecdotes of Elvis still being alive you get nonsense. Trying to find the middle ground between two anecdotes does nothing to determine the actual truth.
If you prefer benchmarks, there are plenty to choose from.
The situation is more like there are lots of good, clear, real pictures of UFOs going around and also lots of fakes.
It's fair to say that these are extraordinary claims and sometimes they're an overreach. Also, I get annoyed when people don't clearly distinguish between reporting on what's happening now and imagining what might happen soon. It's particularly annoying when there's an interview with a CEO and they spend the whole time talking about science fiction instead of what their company is actually doing. I guess talking about science fiction sells?
On the other hand, this is mostly based on vibes, but I think there's a good chance that something big is coming soon. I don't think we have the sort of evidence we had at the start of the pandemic and I don't know what we could practically do to prepare for something so vague, but I don't think it should be dismissed, either.
Is it responsible or irresponsible to sound the alarm based on vibes? I guess it depends on how you do it. Making people uneasy without saying anything about what should be done isn't helpful. What's the equivalent of stocking up on masks that you may never need, or buying Bitcoin when it was extremely cheap and few people knew about it? We should be looking into no-regrets ways to prepare.
My thinking is that whatever great harm might happen would require a solution that is a good in and of itself, one we need in any case: restoring democratic governance and restoring "economic democracy" (i.e. reducing massive inequality). I'd also personally argue that part and parcel of maintaining democracy in the future will be putting constraints on the expansion of energy production. Energy is the fundamental constraint on ambition. When energy becomes virtually limitless, the ambitions of powerful individuals, nations, and organizations will necessarily fall outside of democratic control (if democracy is defined as the pooling of coercive power, used according to the will of a given constituency). I'd argue that democracy is impossible in a virtually-infinite-energy world, and that we should focus on fair distribution (though not necessarily perfectly equal distribution) over infinite growth. Limiting energy is, I think, also necessary to ensure that any new dangerous technological development (e.g. autonomous drones surveilling and policing society, self-replicating AI systems) can still be realistically tamed and halted.
"Restoring democracy" is a big nebulous project. It's one I support, but it's not really actionable by individuals. While aiming in that direction, we need to think smaller scale than that.
I read the majority of the article, though I started skimming about 2/3 of the way in. I found it incredibly rambling, and it kind of falls foul of the same problem it criticizes: stating "truth" without evidence, just in the opposite direction. Well, I guess a mix of "this is happening (unsourced)" and "this prominent figure said this", neither of which goes any way toward refuting the claims the article reports on. I think a big part of that is because we're just going to have to wait and see how things shake out, but I think this kind of unsubstantive reporting is just another part of the modern sensationalist news cycle.
To give some insight into what the author could have done, they could have pulled together claims by various people, documented the lack of evidence behind any of those claims as they did, and then used that to compare to other historical instances of groups driving a narrative without evidence and how those situations shook out, basically making the case for AI hype being a Ponzi scheme.
Do the rest of us really have to thoroughly disprove wild claims like this?
US Ambassador to NATO says AI "can find new physics laws that we didn't know existed, it can extend human intelligence beyond what we can even fathom"
the world you knew is pretty much over... we crossed a one way bridge as a species & most people on earth haven’t realized that fact yet... there are now non trivial odds the economy gets drastically disrupted
Humanity has advanced more in the past 3 weeks than the previous 100 years combined:
• OpenClaw: greatest AI application ever
• Opus 4.6: smartest AI model ever
• Codex 5.3 Spark: greatest coding model ever
• MiniMax 2.5: greatest super intelligence on your desk
You are no longer the smartest type of thing on Earth; We will be well-cared-for pets.
These claims all read a bit like timecube fever dreams; it would take enormous amounts of time and effort to disprove each of them, so it is effectively impossible to do so. Simply calling out these statements for the FUD that they are seems adequate to me.
I can tell you only read 2/3 of the article, since you missed the core idea: how AI is being used to further centralize power and information into the hands of a small number of people. In fact this article doesn't talk about Ponzi schemes at all, and in some way suggests that AI is indeed useful for quite a few tasks. Do we really need to link mistruths and propaganda today to historical propaganda just to prove that it is, indeed, propaganda?
Compare with a politician saying that the Internet is a series of tubes. This timecube stuff discredits the person saying it. It's not proof that there's nothing to it.
Regarding the centralization of power: I see a lot of trends in the opposite direction. First of all, there are hundreds of millions of people using AI chatbots. There's also strong competition from both US and Chinese firms, and a few in Europe. Local models aren't good enough for me to bother with yet, but they're getting better.
It would be surprising to me to see a rerun of what happened with Internet search, where one company gets a 90% share.

How dare you disparage the all mighty time cube. No man on earth has no belly button. Belly B proved 4 corners. 1 of God is only 1/4 of God!
Local models aren't good enough for me to bother with yet, but they're getting better.
In case you're interested in trying again, I've had some pretty good results with Qwen3-Coder-Next (the unsloth 4-bit quant), incidentally! They've gotten tremendously faster and more accurate over the last ~six months, imo, and especially the mixture of experts models may be worth giving a try, if you're on commodity hardware.

Maybe I'll buy more memory next time I buy a new computer.
Ach, right; I swapped my gaming rig's motherboard just before the prices of everything skyrocketed, so I'm at 64 GB. If it's any consolation, though, my GPU is still super out of date 😅

Yeah, my Mac laptop is great for everything except running AI locally, so I'll be running agents in the cloud for now.
If the core idea is so far into a repetitive, insubstantial piece that it gets prohibitively tiresome to read, it is a bad article.
I guess I feel like just calling out things that are clearly bunkum as clearly bunkum is not providing value.