"The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused," the petition added.
This is what I was wondering about. On the low-side (the unclassified networks), GenAI.mil uses Google Gemini. While I've yet to use it, several of my coworkers have. And from their comments, it does alright (though they're just using it for "typical" workplace purposes; summarizing, writing policies/SOPs, etc.). Would it be that difficult for Google (or OpenAI) to simply step in if Anthropic is kicked to the curb?
I'm not saying that if Google and/or OpenAI are willing to allow "all lawful uses" and can easily replace Anthropic, Anthropic shouldn't be making a stand. If I were at Anthropic, whatever other companies do is their business. Living by your own principles is always worthwhile. Hopefully both Google and OpenAI are like "Yeah, nah, no go," too. However, my fear, as we've already seen with numerous companies, is that they'll simply say "Yeah, do whatever! Just gib monies."
Would it be that difficult for Google (or OpenAI) to simply step in if Anthropic is kicked to the curb?
Google might have trouble providing the same level of functionality; Gemini just isn't as good right now (benchmarks aside). But there's certainly nothing stopping them from saying yes. OpenAI models are close enough that they'd probably be able to swap in pretty easily.
Just gib monies
Agreed, I'd be amazed if the Pentagon couldn't find a replacement; the US government is the ultimate enterprise customer if you're willing to deal with the regulations. The solidarity from the employees is still great to see, though. One sliver of hope: the Trump admin is unpopular and this is a good PR opportunity.
"They're trying to divide each company with fear that the other will give in," the petition said, referring to the Department of War.
"That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War," it added.
xAI conspicuously absent from the conversation. Hmm, I wonder where they stand on the matter…?
I would say that if all the others refuse to give in, and there's enough public concern about this, it could have enough impact to dissuade anyone from caving. But people still use X despite Threads (which isn't much of a better alternative on the ownership front, thanks to fuckerberg) and Bluesky existing. I doubt anyone will back off from using xAI's Grok at that point.
I'mma be honest: even though people were quite frantic about it, I have no idea how Claude is supposed to be used effectively for surveillance or for "weapons." Still, it's quite risky to publicly decline this US administration in this way. So it's a message, for sure.
I have no idea how Claude is supposed to be used effectively for surveillance or for "weapons"
Like much of this administration: very poorly and incompetently. Reading some of the legal and medical AI horror stories out there makes me shudder at whatever they're going to do with it for military weapons.
At least there's one company here on the record of pushing back. For the time being.
LLMs would be fantastic for a probable cause generator. Poke it until it says that whatever person/group you're targeting is the suspect, and now that an advanced AI system has said it you've got all the justification needed to go after them.
Douglas Adams used this exact idea, down to the software being bought out by the Pentagon, as a satirical plot point almost 40 years ago (relevant excerpt). I am not pleased to think that it’s now entirely realistic…
Probable cause isn’t really the domain of the Pentagon, though? Sure, for the alphabet agencies that’s a problem, but this doesn’t strike me as even being on the Pentagon’s radar.
In a lengthy blog post on Thursday, Amodei wrote: “I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.”
Amodei said Anthropic understands that the Pentagon, “not private companies, makes military decisions.” But “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” He also said use cases like mass surveillance and autonomous weapons are “outside the bounds of what today’s technology can safely and reliably do.”
In response, Emil Michael, the Pentagon’s Undersecretary for Research and Engineering who had been part of the negotiations, wrote on X: “It’s a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company.”
I mean, it doesn’t really answer how that would be done. As people often note when Anthropic’s blog posts come up, Anthropic really likes to pretend they’re tech-priests summoning something from the Dark Age of Technology.
That answer implies that Claude could be used for surveillance or whatever in an unsafe and unreliable way, but as it currently is, I don’t see how it could do anything remotely relevant, let alone unsafe.
I think this is the right take. I want to read this as Anthropic saying they object to autonomous weapons and mass surveillance… but I think what they’re actually saying is that LLMs are just the wrong tool for the job (and that they actually have no problem with those things).
Which, I mean, that’s an objectively correct assessment of the tech, but not exactly a bulwark of moral fortitude.
I think that saying "LLMs are the wrong tool for the job" has a much bigger chance of succeeding than "we think this is unethical." So strategically it would be bad not to include that argument, regardless of the actual reason. It doesn't have to be just one of the reasons, either; it can be both.
Well, here's the government's response: President Trump bans Anthropic from use in government systems. Someone else can post this as a separate post if they want.
Hegseth also tweeted that he is
directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.
This presumably means Microsoft developers will have to stop using Claude Code and start using Microsoft's own GitHub... I am feeling a little bit of schadenfreude over this...
Why does the DoD need Anthropic, or any AI company, for that matter? They have the wherewithal and resources to spin up all the LLMs and ML they could want, and develop models that are totally optimized for their use cases, like performing game theory or killing people.
Maybe in theory, but that's pretty seriously understating what it takes to build a functional model. For comparison, Apple just failed to do so, gave up, and contracted with Google. And despite Meta having access to effectively infinite compute and offering ludicrous salaries to try to poach talent, they haven't been able to keep up with the frontrunners.
My impression, from what I've heard from DOE researchers, is that the government, despite running multiple exascale machines, is actually lacking in compute power compared to industry. There used to be a lot of talk in academia about how we need to build our own LLMs to not be beholden to industry, and I've noticed a lot of that discussion evaporating...
The US government buying COTS from American companies basically supports the US economy.
Also, why reinvent the wheel?
That said, there are definitely groups within the US government that have been working on internal AI tools. And there are numerous internal AI tools across government already in use. Some of which predate this administration. I work within the US government and even our small-d department has been looking into spinning up our own AI tool, separate from our parent agency and big-d Department's tools.
So somewhere, deep within the Pentagon or Fort Meade, there's definitely a group or groups who are working on AI that's optimized for the business of war. But again, if COTS products are better -- and they definitely are, compared to some of the internal AIs I've used, at least for typical mundane office stuff -- why not just buy and use those?
I work within the US government and even our small-d department has been looking into spinning up our own AI tool, separate from our parent agency and big-d Department's tools.
I've got to imagine that you're still using a big model's API though, right? You're not literally building it from scratch?
I'm not involved in that project (nor do I really want to be), so I'm not sure which route they're taking. I'm not sure what "from scratch" entails, but I'm sure the idea they have is more like taking an existing model, hosting it locally, and training/tuning it with whatever we toss into it.
We definitely don't have the expertise on hand to build an entirely new model. I even wonder if we have the expertise to take an existing model and make something of it! Either way, I think the group that's working on this is still exploring things.
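For what it's worth, the "toss our documents at an existing model" idea in practice often means retrieval rather than retraining: index the internal documents, pull the most relevant ones for each query, and prepend them to the prompt of whatever locally hosted model gets chosen. The retrieval half can be sketched with just the standard library; this is a bare-bones TF-IDF scorer, and the policy snippets are invented for illustration:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def build_index(docs):
    """Precompute per-document term counts and document frequencies."""
    term_counts = [Counter(tokenize(d)) for d in docs]
    df = Counter()
    for counts in term_counts:
        df.update(counts.keys())
    return term_counts, df

def top_k(query, docs, k=2):
    """Return the k documents scoring highest against the query (TF-IDF)."""
    term_counts, df = build_index(docs)
    n = len(docs)
    scores = []
    for i, counts in enumerate(term_counts):
        score = sum(
            counts[t] * math.log((n + 1) / (df[t] + 1))
            for t in tokenize(query)
        )
        scores.append((score, i))
    scores.sort(reverse=True)
    return [docs[i] for s, i in scores[:k] if s > 0]

# The retrieved snippets would then be prepended to the prompt sent to
# whatever locally hosted model the team settles on.
policies = [
    "Leave requests must be submitted two weeks in advance.",
    "All laptops must use full-disk encryption.",
    "Visitors must be escorted at all times in secure areas.",
]
print(top_k("how do I submit a leave request", policies, k=1))
# → ['Leave requests must be submitted two weeks in advance.']
```

This sidesteps the expertise problem the commenter raises: no training run at all, just indexing plus prompt assembly around an off-the-shelf model.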
Sounds like your typical AI company making their product out to be Skynet when it's really just an advanced Autocorrect.
That said, the Pentagon has a legit reason. AI has a lot of possibilities for both cyber warfare and regular warfare. If we don't at least look at the possibilities, we could show up to the next major war without proper cyber defenses and as the only side without aimbots for its soldiers.
Sounds like your typical AI company making their product out to be Skynet when it's really just an advanced Autocorrect.
It's the government that wants to redo a contract and give itself the option to use LLMs to power fully autonomous killer robots or... more likely, IMHO... perform mass surveillance on Americans.
Anthropic is refusing to adjust these terms and conditions. They are underselling their AI.
I imagine this is going to get thrown out by the courts, but by then the damage will be done.
Man, when did Democrats become the Antidisestablishmentarianists, and when did Republicans buy into Disestablishmentarianism?
LLMs are good at pattern matching and never become tired. That makes them powerful tools for surveillance if hooked into a larger system feeding them data. Just hook them up to feeds of information (such as those supplied by Palantir) and you have a machine that can keep people sorted into buckets and even automatically take action when someone new gets added to a particular bucket or some threshold is crossed.
An LLM can be given millions of documents and asked how they might relate to an investigation. An LLM could be given an image and write-ups of an ongoing situation and asked whether the image represents something new to be reported on, or whether it might contain targets to shoot. Obviously, anyone wanting to put firing control in the hands of today's LLMs is crazy, but combined with human operators checking their work, they could plausibly surface leads to be investigated.
We know how well that works. It's mildly useful as an assist to existing investigations, but it's hardly going to be the difference-maker that enables some kind of super spy network.
Anthropic didn’t make any objection to “looking into” those possibilities. The government could research them all they want. They also didn’t object to using AI for cyber defenses.
An option the government has is to classify LLM technology as export controlled under the ITAR or EAR regulations. Though it's not clear to me what it would mean for them to exercise it, or whether they could selectively do so without bringing Gemini or OpenAI under it as well.
Wow I did not expect that
This is a nice touch:
OpenAI and Google employees sign on
Sifting through all of that Prism data? Find the dissidents using NLP.
How would Claude help in any way? The Pentagon is more than capable of running BERT.
But they want advanced NLP!
LLMs have come a long way since then.
That answers your question @stu2b50.
I think that's the point. The unsafe way is to ask it to do something it can't really do.
My guess is that their strategy in this negotiation is to make themselves look like the reasonable party by minimizing their requirements.
"request" ... apparently, Hegseth wasn't quite blunt enough.
FWIW, I 100% agree with Anthropic here.
Got it - that makes more sense.
How would an LLM be remotely useful for either of those purposes?
I wouldn't lose any sleep over it.