The title says "admits", but it seems like it was more of a boast during a conference. That framing feels a little slimy from the Daily Beast.
Regardless, this is clearly an inappropriate method for choosing anything of such importance. And while they don't specify which AI was used (only that it's an "intelligence community chatbot"), at this point I wouldn't be surprised if it was a cloud-hosted option, which would raise serious data policy concerns. There are very few examples of proper data handling techniques being used by this administration.
Cool cool cool cool cool.
Well, guess it can't get deleted due to the court case?
I'm sure there are no security implications in this whatsoever. She probably just uploaded a ton of classified files to DeepSeek to get an opinion from one of the latest AIs. No biggie...
What conference was she presenting at? I can't find the details in the articles.
https://www.dni.gov/index.php/newsroom/speeches-interviews/speeches-interviews-2025/4079-dni-transcript-at-aws-summit
Man. I really, really hope this means she was using AWS's self-hosted LLMs. Those are actually private and keep your data isolated to your own cloud. AWS also offers even stricter GovCloud services, so here's hoping she used them.
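For what it's worth, if it was Amazon Bedrock in GovCloud, the calls would look roughly like this. This is a minimal sketch, purely my assumption: the region, the model ID, and even Bedrock being the service used are all guesses on my part, not anything from the article.

```python
import json

import boto3  # AWS SDK for Python

# Assumption: Amazon Bedrock in the AWS GovCloud (US-West) region.
# With Bedrock, prompts and responses stay inside your own AWS
# account and region, and AWS doesn't use them to train base models.
client = boto3.client("bedrock-runtime", region_name="us-gov-west-1")

# Hypothetical model ID -- whatever models are actually approved in
# a given GovCloud environment would go here instead.
response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [
            {"role": "user",
             "content": "Does this page mention any living family members?"},
        ],
    }),
)

print(json.loads(response["body"].read())["content"][0]["text"])
```

Point being, nothing in a setup like that leaves your own cloud boundary, which is a very different story from pasting documents into a consumer chatbot.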
I understand exactly why a director would invite her to speak at the conference, but this is probably not the kind of press Amazon wants.
Found the relevant parts from the transcript (thanks u/updawg!):

You know, there's been an intelligence community chat bot that's been deployed across the enterprise. Opening up and, and making it possible for us to use AI applications in the top secret clouds has been a game changer.

...

We have released thousands, tens of thousands of documents related to the assassinations of JFK and Senator Robert F. Kennedy. And we have been able to do that through the use of AI tools far more quickly than what was done previously, which is to have humans go through and look at every single one of these pages. So looking at classification and declassification you know, looking for things, for example, that might be sensitive for living family members to be made aware of.
I am going to assume she used an LLM hosted on Top Secret GovCloud servers. If that's the case, this is a perfectly secure and legitimate use of LLM tooling in my opinion. I think it's stupid to trust them to process that much information or treat them as authoritative tools, but there are no national security risks.
The use cases where this makes sense are problem sets where a false negative isn't a big deal and humans are reviewing all output for false positives.
Meaning: if a document is fine to release but the LLM doesn't flag it as such, it simply stays unreleased, which isn't a problem. And when the LLM does flag a document as fine to release, a human reviewer confirms that before it goes out.
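To make that concrete, here's a minimal sketch of that kind of pipeline. Every name in it is mine and purely illustrative; nothing here is from the transcript:

```python
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Document:
    doc_id: str
    text: str


def triage(
    documents: Iterable[Document],
    llm_flags_releasable: Callable[[Document], bool],
    human_confirms_releasable: Callable[[Document], bool],
) -> list[Document]:
    """Release a document only if the LLM flags it AND a human agrees."""
    released = []
    for doc in documents:
        # False negative: the LLM misses a releasable document.
        # Cost: it stays classified for now, which is acceptable.
        if not llm_flags_releasable(doc):
            continue
        # False positive: the LLM wrongly flags a sensitive document.
        # The human review step catches it; nothing gets released on
        # the LLM's say-so alone.
        if human_confirms_releasable(doc):
            released.append(doc)
    return released
```

The key property is that the LLM can only shrink the human review queue; it can never release a document by itself.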
This specific situation probably fits that problem set, as long as a human is actually reviewing all of the documents as thoroughly as they would via any other declassification process.
I'm not very confident that was actually done, but AI being involved doesn't inherently mean there was anything problematic with the process.