The UK High Court handed down its ruling in Getty Images v. Stability AI on November 29, 2025 — and it rewrote the playbook for AI copyright litigation worldwide. Justice Joanna Smith held that Stability AI's Stable Diffusion model weights don't constitute copies of Getty's 12 million training images. That single finding gutted Getty's primary copyright claim and forced the company to abandon it mid-trial.
But Getty didn't walk away empty-handed. The court found trademark liability for the ghostly Getty watermarks appearing in Stable Diffusion outputs — a narrow but real win. Getty is now appealing. This is the first common-law jurisdiction ruling on whether AI training constitutes copyright infringement, and every US judge watching the Thomson Reuters, NYT, and visual artists cases is paying attention.
What Did the UK High Court Actually Decide in Getty v Stability AI?
Justice Joanna Smith's ruling addressed two core questions. First: does training an AI model on copyrighted images create infringing copies? The court said no — model weights are mathematical parameters, not reproductions of the underlying works. Getty had originally claimed that the 3.16 billion parameters in Stable Diffusion 1.x constituted copies of its images. When the court rejected that theory, Getty abandoned its primary copyright infringement claim.
Second: does generating images that resemble copyrighted works create liability? Here the court was more nuanced. It found that outputs occasionally reproducing Getty watermarks constituted trademark infringement under the UK Trade Marks Act 1994. But it didn't find broad copyright infringement in the outputs themselves. The damages assessment for the trademark claim is still pending.
The ruling also addressed Stability AI's data sourcing. The court confirmed that Stability used LAION-5B, a dataset compiled by scraping the open web, which included millions of Getty-watermarked images. But scraping alone wasn't enough — the court needed to see actual copies, and model weights didn't qualify.
Why Getty Abandoned Its Primary Copyright Claim
This is the part US lawyers need to understand. Getty initially argued that every image ingested during training was reproduced inside the model. The technical evidence destroyed that theory. Expert witnesses demonstrated that Stable Diffusion's latent diffusion architecture compresses training data into statistical relationships — not retrievable copies.
Justice Smith wrote that model weights represent "learned mathematical relationships between concepts" rather than stored copies of any specific image. Once that finding landed, Getty's legal team made the strategic decision to withdraw the claim rather than risk a definitive adverse ruling on appeal. They pivoted entirely to the trademark and database right claims.
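The "weights aren't copies" finding rests on a point that is easier to see in miniature. Here is a deliberately crude analogy (ordinary least squares, not a diffusion model, and nothing from the actual expert evidence): fit a line to ten thousand points and the resulting "model" is two numbers, a fixed-size statistical summary from which no individual training point can be read back out.

```python
# Toy analogy: parameters are a fixed-size summary of training data,
# not a store of the data itself. This is NOT how diffusion models
# work in detail; it only illustrates "learned relationships vs copies".

def fit_line(points):
    """Ordinary least squares fit for y = slope * x + intercept."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# 10,000 training points, but the "model" is still just 2 parameters.
data = [(x, 2 * x + 1) for x in range(10_000)]
slope, intercept = fit_line(data)
print(slope, intercept)  # the entire model: two numbers
```

Scale the dataset up tenfold and the model is still two numbers. That fixed-capacity intuition, applied to billions of parameters trained on billions of images, is the shape of the argument Getty couldn't overcome.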
For US practitioners tracking Andersen v. Stability AI (N.D. Cal.) and Thomson Reuters v. Ross Intelligence, this is a signal. The "weights as copies" theory is dead in the UK. US courts haven't ruled yet, but the technical evidence is the same everywhere.
The Trademark Watermark Problem for AI Companies
Getty's trademark win is narrower than headlines suggest, but it's real. Stability AI's model sometimes generated images containing faint Getty Images watermarks — artifacts from training on watermarked images scraped from the web. The court held this constituted use of Getty's registered trademarks "in the course of trade."
The practical implication: AI companies that train on watermarked content face trademark exposure even if they dodge copyright claims. This isn't hypothetical. During trial, Getty presented over 100 output examples containing visible watermark artifacts. Stability AI argued these were unintentional, but intent doesn't matter under UK trademark law — use in commerce is enough.
For US firms, the parallel is Lanham Act Section 43(a) — likelihood of confusion from AI-generated content bearing source identifiers. If your client is training models on scraped data, audit for watermarks and source identifiers in training sets now.
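A first pass at that audit can start with dataset metadata before anyone looks at pixels. The sketch below is hypothetical: the domain list and the {url, caption} record layout are assumptions for illustration, not the schema of LAION or any real training set.

```python
# Hypothetical first-pass audit: flag training-set records whose source
# URL or caption carries a known stock-photo identifier. The domain
# list and record layout are illustrative assumptions.
WATERMARK_SOURCES = ("gettyimages", "istockphoto", "shutterstock", "alamy")

def flag_watermarked(records):
    """Return records whose URL or caption mentions a stock-photo source."""
    flagged = []
    for rec in records:
        haystack = (rec.get("url", "") + " " + rec.get("caption", "")).lower()
        if any(src in haystack for src in WATERMARK_SOURCES):
            flagged.append(rec)
    return flagged

sample = [
    {"url": "https://media.gettyimages.com/photos/123.jpg", "caption": "a dog"},
    {"url": "https://example.org/cat.png", "caption": "a cat"},
]
print(len(flag_watermarked(sample)))  # 1
```

Metadata matching won't catch everything (re-hosted images lose their source URLs), so it's a triage step to prioritize visual review, not a substitute for it.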
How Getty v Stability AI Impacts US AI Copyright Cases
The UK ruling isn't binding in the US, but it's the first fully litigated common-law decision on AI training and copyright. US judges in the Southern and Northern Districts of California, the Southern District of New York, and the District of Delaware are all watching.
The key US cases affected: Andersen v. Stability AI (Judge Orrick, N.D. Cal.), Thomson Reuters v. Ross Intelligence (D. Del., where Judge Bibas's February 2025 summary judgment rejected Ross's fair use defense), and NYT v. OpenAI (S.D.N.Y., Judge Stein). Each involves some version of the "training is copying" theory. The UK court's rejection of that theory — backed by detailed technical findings over a 14-day trial — gives defendants ammunition.
But there's a critical difference. US law has fair use (17 U.S.C. Section 107), which the UK doesn't. American defendants have an additional defense layer. The UK ruling is most useful for its factual findings about how diffusion models work — findings that US courts can adopt without importing UK legal conclusions.
Getty has confirmed it's appealing to the UK Court of Appeal. A hearing is expected in late 2026. The appeal will test whether the "weights aren't copies" finding survives scrutiny — and every AI company with US litigation exposure is funding amicus briefs.
What Law Firms Advising AI Companies Should Do Now
Three immediate action items. First, audit training data for trademark artifacts. The watermark finding is the easiest claim for plaintiffs to replicate. If your client's model produces outputs with visible source identifiers, that's trademark liability regardless of copyright outcomes.
Second, preserve the technical expert pipeline. This case turned on expert testimony about how diffusion models store information. The experts who testified — including Professor Michael Wooldridge (Oxford) on AI architecture — are now the most credentialed witnesses in the field. Retain comparable expertise early.
Third, update litigation hold protocols. Getty's team obtained Stability AI's training logs, LAION-5B download records, and internal Slack communications about data sourcing. If your client is training models, assume that every data decision is discoverable. Document the legal reasoning behind dataset choices — "we used LAION because it was publicly available" is better than no documentation at all.
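That documentation habit can be as simple as an append-only decision log. A minimal sketch, with illustrative field names and values (nothing here reflects any party's actual records):

```python
import json
import time

# Hypothetical append-only log of dataset decisions, so the legal
# reasoning behind each choice is documented contemporaneously.
# Field names and values are illustrative assumptions.
def log_dataset_decision(path, dataset, rationale, approved_by):
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "dataset": dataset,
        "rationale": rationale,
        "approved_by": approved_by,
    }
    # Append mode: earlier entries are never rewritten.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_dataset_decision(
    "dataset_decisions.jsonl",
    dataset="LAION-5B (filtered subset)",
    rationale="Publicly documented dataset; watermark-flagged entries removed per counsel review.",
    approved_by="legal@example.com",
)
```

One JSON line per decision, timestamped and attributed, is far more defensible in discovery than a reconstructed narrative written after the complaint lands.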
The Bottom Line: The UK High Court ruled that AI model weights aren't copies of training data — killing the primary copyright theory — but found trademark liability for watermark artifacts in outputs, giving both sides ammunition for the US cases that'll actually set global precedent.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
