OpenAI CEO Sam Altman has criticized rival Anthropic for its approach to marketing the new AI model Claude Mythos, suggesting the company is using fear to justify keeping the technology in limited hands.
Speaking on the Core Memory podcast, Altman argued that Anthropic’s strategy amounts to “fear-based marketing.” He said that while there are real safety concerns, such messaging can be used to push for restricted access. “If what you want is control of AI for just a few people, fear is probably the most effective way to justify that,” he noted.
Altman also said it’s not always easy to balance new capabilities with the goal of keeping AI accessible. He added that rhetoric about highly dangerous models will likely increase as the technology improves, but not all claims should be taken at face value.
Claude Mythos sparks debate
Anthropic’s Claude Mythos model has drawn intense attention since it was revealed last month. Tests suggest it can autonomously identify software vulnerabilities and carry out complex cyber operations. The model is available only to a limited set of organizations through a restricted-access program.
This rollout reflects a wider industry divide. Some companies push for controlled access, while others argue for broader distribution to speed up innovation and understanding of the technology.
Anthropic has framed Mythos as both a defensive breakthrough, allowing faster detection of software flaws, and a potential risk if misused. Early this month, it identified hundreds of vulnerabilities in Mozilla’s Firefox browser during testing and demonstrated the ability to simulate multi-stage cyberattacks.
Restricted access and security concerns
Anthropic has restricted access to Mythos through Project Glasswing, granting select companies including Amazon, Apple, and Microsoft the ability to test its capabilities. The company has also committed resources to supporting open-source security efforts, arguing that defenders should benefit from the technology before it becomes more widely available.
Security experts warn that the same capabilities that identify vulnerabilities could also be used to exploit them at scale. Tests by the UK’s AI Security Institute found the model could autonomously complete complex cyber operations. Anthropic has acknowledged that many current cybersecurity benchmarks are no longer sufficient to measure the capabilities of its latest system.
However, a group of researchers claimed last week they were able to reproduce Mythos’ findings using publicly available models. This has raised questions about how unique or dangerous the model truly is.
Calls to halt and government interest
Despite calls within parts of the U.S. government to halt use of the technology due to concerns about warfare and surveillance, the National Security Agency has reportedly begun testing a preview version on classified networks. On prediction market Myriad, users put a 49% chance on Claude Mythos being released to the wider public by June 30.
Altman suggested that while there will be very dangerous models that require careful handling, not all claims of extreme danger should be believed. “I’m sure Mythos is a great model for cybersecurity but I think we have a plan we feel good about for how we put this kind of capability out into the world,” he said.
Altman also dismissed rumors that OpenAI is scaling back infrastructure spending. “I don’t know where that’s coming from,” he said. “Very soon it will be again, like, ‘OpenAI is so reckless. How can they be spending this crazy amount?’”