Anthropic’s potent new AI model is a “wake-up call,” security experts say
Anthropic’s latest AI technology, called Mythos, is so powerful at revealing software vulnerabilities that the company is afraid to release the model publicly lest it fall into the hands of bad actors.

The company, the developer behind the Claude AI chatbot, said in a post on its website this week that the new tool has already uncovered thousands of weak points in “every major operating system and web browser.” That is stirring concern that hackers could exploit Mythos to attack banks, hospitals, government systems and other critical infrastructure.

Preparing for the “storm”

Rather than releasing Mythos to the public, Anthropic is sharing the tech with a select group of major companies, including Amazon, Apple, Cisco, JPMorgan Chase and Nvidia, so they can test the model and strengthen their own systems against cyberattacks. Called Project Glasswing, the effort is aimed at helping key companies harden their defenses before hackers get access to Mythos or similar AI models, according to Anthropic. 

“What we need to do is look at this as a wake-up call to say, the storm isn’t coming — the storm is here,” Alissa Valentina Knight, CEO of cybersecurity AI company Assail, told CBS News. “We need to prepare ourselves, because we couldn’t keep up with the bad guys when it was humans hacking into our networks. We certainly can’t keep up now if they’re using AI because it’s so much devastatingly faster and more capable.”

Mythos’ capabilities are also sparking concern among federal officials. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with top bank CEOs in a closed-door meeting on Tuesday to discuss Mythos and other emerging cybersecurity risks stemming from AI. Anthropic also briefed senior U.S. government officials and key industry stakeholders on Mythos’ capabilities, CBS News has learned.

Separately, IMF Managing Director Kristalina Georgieva said in an interview set to air Sunday on “Face the Nation with Margaret Brennan” that the world does not have the ability “to protect the international monetary system against massive cyber risks.” 

“The risks have been growing exponentially,” Georgieva said. “Yes, we are concerned. We are very keen to see more attention to the guardrails that are necessary to protect financial stability in the world of AI.” 

Anthropic didn’t respond to a request for comment. In its post, however, the company underscored the risks of misusing tools like Mythos. “The fallout — for economies, public safety, and national security — could be severe,” the company said.

The weakest link 

Such stark warnings mask another troubling reality: Hackers already have access to advanced AI models, according to cybersecurity experts.

Other AI tools, while not yet as effective as Mythos at exposing the soft underbelly of software, are already amplifying the risks to consumers, businesses and governments. For instance, hackers are already tapping AI to sharpen so-called phishing attacks aimed at prying loose confidential information, said Zach Lewis, the chief information officer at the University of Health Sciences and Pharmacy in St. Louis.

“It’s been used to really script those dialogues, those conversations, those phishing emails, to specific people — and really customize them to make them a lot more difficult to detect and identify if these are fake or not,” he told CBS News.

AI is also driving more ransomware attacks, with a recent PwC report finding that posts on ransomware leak sites — public disclosures of stolen data when a company does not pay a ransom — surged 58% in 2025 from the prior year.

“Once [Mythos] drops, we’re going to see a lot more vulnerabilities, probably a lot more attacks,” Lewis said. “Cyberattacks are definitely going to increase until we get to a point where we’re patching up all those vulnerabilities almost in real time.”

AI is more effective than humans at finding software bugs because it can quickly scan thousands of lines of code and detect problems, something people are not necessarily good at, Knight explained.

“Humans are the weakest link in security,” Knight noted. “Humans have the ability to make mistakes when we’re writing code. It’s possible for vulnerabilities in source code to have never been found by humans.” 

On brand for Anthropic?

Some security experts questioned the motives behind Anthropic’s incremental approach to rolling out Mythos, speculating that the limited release could be aimed at stirring interest from other prospective customers.

Meanwhile, both Anthropic and rival OpenAI are expected to launch initial public offerings by the end of the year, according to The Wall Street Journal — a possible incentive to drum up headlines, said Peter Garraghan, founder and chief science officer at Mindgard, an AI security platform.

“I suspect Anthropic may be using this as a marketing ploy, perhaps towards IPO,” he said.

Anthropic has sought to distinguish its brand from OpenAI and other rivals by publicly emphasizing AI safety, highlighting its guardrails for keeping the technology in line. Anthropic’s decision to hold off on releasing Mythos and launching Project Glasswing aligns with that image, noted Columbia Business School marketing lecturer Malek Ben Sliman.

“When facing the tough decisions, Anthropic has actually been true to its values,” he said. Curating the release of Mythos “does allow them to look to be the protectors of this responsible AI, but it also is a great marketing and advertising tool.”
