The National Security Agency is using Anthropic's Mythos AI model despite the Pentagon placing the company on a restricted list, according to multiple media reports published Sunday and Monday.
The NSA's adoption of Mythos—Anthropic's next-generation model known for its advanced coding and autonomous task execution capabilities—puts the intelligence agency at odds with Defense Department officials who have flagged Anthropic as a "supply chain risk" and pushed for phasing out its technology across federal systems. The Information was first to report that the NSA has continued using the model even as the Pentagon's restrictions remain in place.
The dispute between Anthropic and the Defense Department stems from the company's refusal to allow unrestricted use of its AI models in sensitive military and surveillance contexts. Anthropic has resisted deployments involving autonomous weapons and mass surveillance capabilities, leading to the breakdown in relations that prompted the DoD's supply chain designation, according to reports.
For national security agencies, the appeal of advanced AI systems like Mythos appears to outweigh the restrictions. Cybersecurity experts believe such models can identify vulnerabilities and enhance defensive operations—priorities that have kept the NSA turning to Anthropic despite the broader federal backlash. The deployment of Mythos has already drawn scrutiny from other governments; earlier this month, UK and US financial regulators held emergency meetings regarding the model's cybersecurity implications.
The situation illustrates a growing challenge for governments worldwide: balancing rapid AI adoption against security, ethical, and regulatory concerns. As AI systems become more capable and integral to defense infrastructure, policymakers face difficult trade-offs between operational effectiveness and risk management.
The NSA's continued use of Anthropic's technology marks a notable fracture in what had been a coordinated federal response to the company. While the Pentagon has moved to exclude Anthropic's products from wider defense procurement, intelligence community operations appear to have carved out their own path.
The controversy is unlikely to subside. Congressional scrutiny of the NSA's decision is expected, and technology policy advocates say the episode underscores how quickly AI capabilities are outpacing the regulatory frameworks meant to govern them.