On April 7, 2026, Anthropic did something no major AI company has done before. It announced its most powerful model, Claude Mythos Preview, and refused to release it to the public.
The reason: Mythos is so capable at finding cybersecurity vulnerabilities that Anthropic concluded it would be irresponsible to make it broadly available before the software industry has time to patch its systems. Instead, they launched Project Glasswing, a defensive initiative with 12 of the largest technology and financial companies in the world, to use Mythos to find and fix critical security flaws before attackers develop equivalent capabilities.
I’ve spent the last several months tracking how AI is reshaping legal services, from the $60 billion in transactional legal work that Sequoia Capital has mapped to automation, to the sanctions wave that produced more than $145,000 in fines against attorneys in Q1 2026 alone. Mythos connects to both of those threads in ways that nobody in the legal industry is discussing yet.
Here’s what happened, what it means, and why every managing partner should be paying attention.
What Is Claude Mythos Preview?
Claude Mythos Preview is a general-purpose frontier AI model developed by Anthropic. It was not specifically designed for cybersecurity work. It is a language model with exceptional coding, reasoning, and agentic capabilities that happens to be devastatingly effective at finding software vulnerabilities.
How effective? In a few weeks of testing, Mythos autonomously identified thousands of zero-day vulnerabilities (flaws that were previously unknown to the software’s developers) across every major operating system and every major web browser. Many of these vulnerabilities had survived decades of human code review and millions of automated security tests.
Three specific examples from Anthropic’s disclosure:
A 27-year-old vulnerability in OpenBSD. OpenBSD is widely considered one of the most security-hardened operating systems in the world. It is used to run firewalls and critical infrastructure. Mythos found a flaw that allowed an attacker to remotely crash any machine running the operating system simply by connecting to it.
A 16-year-old vulnerability in FFmpeg. FFmpeg is used by virtually every piece of video software on earth. Automated testing tools had hit the vulnerable line of code five million times without catching the problem. Mythos found it.
A chain of Linux kernel vulnerabilities. The Linux kernel runs most of the world’s servers. Mythos found multiple vulnerabilities and chained them together to escalate from ordinary user access to complete control of the machine. No human was involved after the initial request.
The most striking example: Mythos fully autonomously discovered and exploited a 17-year-old remote code execution vulnerability in FreeBSD (CVE-2026-4747) that allows anyone on the internet to gain root access to a machine running NFS. “Fully autonomously” means no human was involved in either the discovery or exploitation after the initial prompt.
“I’ve found more bugs in the last couple of weeks than I found in the rest of my life combined.”
— Nicholas Carlini, Anthropic Researcher
On SWE-bench Verified (a standard coding benchmark), Mythos scored 93.9% versus Opus 4.6’s 80.8%.
What Is Project Glasswing?
Project Glasswing is Anthropic’s response to the capabilities they observed in Mythos. Rather than releasing the model publicly, they formed a consortium of 12 organizations to use Mythos exclusively for defensive cybersecurity work.
The launch partners are: Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks, and Anthropic itself.
An additional 40+ organizations that build or maintain critical software infrastructure also received access. Anthropic committed $100 million in usage credits for Mythos Preview across these efforts, plus $4 million in direct donations to open-source security organizations: $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation, and $1.5 million to the Apache Software Foundation.
Partners will use Mythos for vulnerability scanning of both proprietary and open-source systems, black-box testing, endpoint security, and penetration testing. Within 90 days, Anthropic will publish a public report on what they’ve learned, including vulnerabilities fixed and best practices for the industry.
Pricing for partners after the initial credits: $25 per million input tokens and $125 per million output tokens.
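To put those per-token rates in perspective, here is a back-of-the-envelope cost sketch. The rates are the quoted partner prices; the token counts are made-up illustrations, not real Mythos workloads.

```python
# Quoted partner rates: $25 per million input tokens,
# $125 per million output tokens.
INPUT_RATE = 25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125 / 1_000_000  # dollars per output token

def scan_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one job at the quoted per-token rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical job: a 2M-token codebase in, a 200K-token report out.
print(scan_cost(2_000_000, 200_000))  # 75.0
```

In other words, at these rates a single large-codebase scan costs on the order of tens of dollars, which is why the $100 million in usage credits goes a very long way.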
Anthropic explicitly stated: “We do not plan to make Claude Mythos Preview generally available.” Their plan is to develop new cybersecurity safeguards and launch them with a future Claude Opus model that does not pose the same level of risk.
“The window between a vulnerability being discovered and being exploited by an adversary has collapsed. What once took months now happens in minutes with AI.”
— Elia Zaitsev, CTO, CrowdStrike
Why This Matters for Law Firms
Law firms are not technology companies. Most managing partners reading about Mythos will see it as a cybersecurity story that belongs in their IT department’s inbox. That would be a mistake, for three reasons.
1. Law Firms Hold Concentrated Confidential Data on Vulnerable Systems
Law firms store trade secrets, M&A strategies, litigation playbooks, personal financial information, medical records, and criminal defense communications. This concentration of high-value confidential data makes them among the most targeted sectors for cyberattacks.
If Mythos-class capabilities find vulnerabilities in every major operating system and web browser, and those are the systems law firms use to store and transmit client data, the attack surface just expanded dramatically. The question is no longer whether someone could breach your systems. It’s whether the tools to do so are now accessible to a wider range of attackers.
Anthropic themselves acknowledged this risk: “Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely. The fallout, for economies, public safety, and national security, could be severe.”[3]
2. Morgan v. V2X Just Drew a Line Between Consumer AI and Enterprise AI
On March 30, 2026, Magistrate Judge Maritza Dominguez Braswell in the District of Colorado issued what has become the most detailed federal AI protective order to date in Morgan v. V2X, Inc. (No. 25-cv-01991-SKC-MDB).[7]
The order established two rules that matter here. First, AI-assisted work product is protected under the work product doctrine (Federal Rule of Civil Procedure 26(b)(3)). Second, before any confidential information is input into an AI tool, the tool’s provider must have contractually agreed not to store, retain, or train on that data.
Consumer-grade ChatGPT does not offer that contractual guarantee. Claude’s free tier does not offer it. Most $20/month AI subscriptions that attorneys are currently using do not offer it.
Morgan v. V2X effectively creates a legal distinction between consumer AI tools and enterprise AI tools. And it does so in the context of confidential discovery materials, the exact type of data that law firms handle every day.
Now combine that with Mythos. If AI models can find vulnerabilities in the operating systems and browsers that attorneys use to access consumer AI tools, and those tools don’t contractually guarantee data protection, the risk isn’t theoretical. It’s a chain: vulnerable systems, unprotected AI tools, confidential client data, and no audit trail.
3. The Governance Gap Is Now a Cybersecurity Gap
A March 2026 study by the NYC Bar Association and Northwestern University (the first random-sample survey of federal judges and AI, co-authored by U.S. District Judge Xavier Rodriguez) found that 24.1% of federal courts have no AI policy at all. If you include courts that “discourage but don’t prohibit” AI use, 41.7% of federal courts operate without meaningful AI governance.[8]
The situation inside law firms is likely worse. Most firms have no documented AI use policy, no audit trail for what data is being input into which AI tools, and no governance framework for evaluating the security posture of the AI tools their teams use daily.
This was already a compliance risk. After Mythos, it’s also a cybersecurity risk. The firms that cannot demonstrate what AI tools they use, what data flows into those tools, and what contractual protections exist with their AI providers are exposed on two fronts simultaneously: judicial sanctions for ungoverned AI use, and data breaches through vulnerable systems and unprotected AI pipelines.
What Should Law Firms Do Now?
The firms that are ahead of this are doing five things:
Audit which AI tools are being used across the firm. Not just the tools the firm purchased. The ones associates and paralegals are using on their personal subscriptions. The consumer-grade tools with no enterprise agreement. Every one of those is a potential data exposure point.
Establish a documented AI use policy with audit trails. Not a policy that sits in a handbook. A policy that logs which matters involved AI assistance, what tools were used, and what review steps were completed. Courts are already sanctioning firms that can’t demonstrate compliance. Over 300 federal judges now require some form of AI disclosure.
Evaluate the data protection terms of every AI provider. After Morgan v. V2X, the question isn’t whether your AI tool is good. The question is whether the provider contractually guarantees that confidential data won’t be stored, retained, or used for training. If you don’t have that in writing, you may be violating a protective order.
Assess your firm’s cybersecurity posture in light of AI-augmented threats. The vulnerabilities Mythos found in OpenBSD, Linux, FFmpeg, and FreeBSD exist in the infrastructure your firm depends on. Coordinate with your IT team and managed security provider to ensure patching is current and monitoring is active.
Separate consumer AI from enterprise AI in your workflows. Morgan v. V2X drew this line. The EU AI Act (next phase effective August 2, 2026) will formalize it with mandatory cybersecurity requirements for high-risk AI systems, incident reporting obligations, and penalties up to 3% of global revenue. Firms that build this separation now will be ahead of both the judicial and regulatory curve.
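The audit-trail recommendation above can be made concrete with a minimal sketch. Everything here (the field names, the example values) is hypothetical, illustrating the kind of record a firm might keep per matter; it is not a prescribed schema or any vendor’s product.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    """One audit-trail entry. All fields are illustrative, not a standard."""
    matter_id: str           # firm's internal matter number
    tool: str                # which AI tool was used, and under what terms
    task: str                # what the AI was asked to do
    reviewer: str            # attorney responsible for verifying the output
    review_steps: list[str]  # checks completed before the work product left the firm
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_entry(record: AIUsageRecord) -> dict:
    """Serialize a record for an append-only log (JSON lines, database, etc.)."""
    return asdict(record)

entry = log_entry(AIUsageRecord(
    matter_id="2026-0142",
    tool="enterprise LLM (no-training contract on file)",
    task="first-draft summary of deposition transcript",
    reviewer="J. Smith",
    review_steps=["citation check", "confidentiality scrub", "partner sign-off"],
))
print(entry["matter_id"])  # 2026-0142
```

Even a log this simple answers the two questions courts and regulators are starting to ask: which matters involved AI, and who verified the output before it went out the door.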
The Bigger Picture
Anthropic built a model so powerful at finding security flaws that they decided the responsible thing to do was keep it out of public hands and give it to Apple, Microsoft, Google, and JPMorgan to fix their systems first. That decision, regardless of what you think about it strategically, tells you something about where AI capabilities are heading.
The legal industry is navigating two parallel disruptions simultaneously. The first is the automation of intelligence work, the $60 billion in legal transactional and paralegal work that Sequoia Capital mapped to “autopilot territory” in their March 2026 thesis.[9] The second is the security and governance infrastructure required to use AI responsibly, as courts are now demanding case by case, from the $109,700 Couvrette sanctions to the Morgan v. V2X protective order.
Mythos sits at the intersection. It demonstrates that AI capabilities are advancing faster than the governance frameworks designed to contain them. For law firms, the question is no longer whether to use AI. It’s whether you have the infrastructure to use it without exposing your clients, your firm, and your license.