Anthropic withholds AI model over security concerns amid government interest
Anthropic has declined to release its Claude Mythos Preview AI model, calling it too powerful for public use, even as reports suggest government interest in testing it.

Anthropic, the artificial intelligence company, has decided not to release its latest AI model, Claude Mythos Preview, to the public, warning that the program is "too powerful" for general availability. In a CBS Mornings interview, the company said it was concerned about the model falling into the "wrong hands."
The Claude Mythos Preview model was designed to identify security vulnerabilities in software systems, a capability with significant implications for cybersecurity. Anthropic's decision to withhold it reflects growing industry caution about releasing advanced AI systems without adequate safety measures.
Reports have emerged suggesting that Trump administration officials may be encouraging financial institutions to test the Mythos model, though the specific nature of any government involvement remains unclear. The reported interest comes at a time when Anthropic faces scrutiny from federal agencies.
The situation presents an apparent contradiction in government policy: the Department of Defense recently classified Anthropic as a supply-chain risk, a designation that typically signals concerns about security vulnerabilities or foreign influence in a company's operations or technology.
The withholding of Claude Mythos Preview highlights ongoing debates within the AI industry over the responsible deployment of advanced systems. Companies are increasingly grappling with how to balance innovation against safety concerns, particularly for models built for sensitive applications such as cybersecurity analysis.