The UK's Solicitors Regulation Authority (SRA) hasn't issued specific AI guidance for solicitors — and that silence is creating more risk, not less. Without dedicated AI rules, solicitors are navigating AI adoption under existing competence, integrity, and client service obligations that were written decades before ChatGPT. The question isn't whether those rules apply to AI. It's how aggressively the SRA will enforce them when AI-related failures occur.

The UK has already seen consequences. In 2023, solicitor Harshita Sheryl Handa and barrister Hasan Siddique Mallick faced professional discipline after submitting AI-generated fabricated case citations to a tribunal. The UK Bar Council has published AI guidance for barristers, but solicitors are still operating without specific direction — relying on general principles while the technology transforms how they practice.


What the SRA's Silence Actually Means

The SRA regulates through principles-based standards rather than prescriptive rules. Their position, as of early 2026, is that existing regulations adequately cover AI use. The SRA Principles require solicitors to act with independence, honesty, integrity, and in the best interests of clients. The Code of Conduct requires competent service delivery and effective risk management.

The practical effect of this approach is that every solicitor is individually responsible for figuring out how AI fits within these principles — with no specific guidance on acceptable tools, verification requirements, or disclosure obligations. Compare this to the US, where the ABA has published Formal Opinion 512 specifically addressing AI, and over 30 federal courts have issued standing orders on AI disclosure. UK solicitors have less clarity, not more flexibility.

The Handa & Mallick Case: What It Established

In Harjit Kaur v Secretary of State for the Home Department, heard before the Immigration and Asylum Tribunal, solicitor Harshita Handa and barrister Hasan Mallick submitted skeleton arguments containing fabricated case citations generated by AI. The tribunal identified the false citations and referred both practitioners for disciplinary proceedings.

Handa was suspended from practice. The case established principles that UK practitioners should treat as authoritative for professional conduct purposes. First, using AI does not transfer professional responsibility: the solicitor who files a document bears full responsibility for its accuracy. Second, failure to verify AI output is a competence failure; the SRA's competence requirements contain no carve-out for AI-generated work. Third, the tribunal treated the fabricated citations as a serious matter: not a minor administrative error, but a fundamental breach of the duty of candour.

UK Bar Council Guidance vs. SRA Silence

The Bar Council's guidance for barristers on AI use covers four key areas. Verification: barristers must independently verify all AI-generated content. Confidentiality: client information should not be entered into AI tools without appropriate safeguards. Disclosure: barristers should inform the court when AI has been used to generate substantive content. Competence: barristers must understand the limitations of the AI tools they use.

This guidance, while not binding regulation, creates an asymmetry in the UK legal profession. Barristers have a reference framework for AI use. Solicitors don't. In practice, many solicitors are adopting the Bar Council guidance informally as a floor for their own AI practices. That's a reasonable approach, but it's not a substitute for SRA-specific guidance — particularly on issues like client confidentiality in AI tools, where solicitors' obligations differ from barristers'.

How the UK Approach Compares to the US

The US approach is more fragmented but more specific. The ABA's Formal Opinion 512 explicitly maps existing ethics rules to AI tools. Individual state bars are issuing their own guidance — some more restrictive, some less. Federal courts are issuing standing orders that create disclosure requirements and verification obligations.

The UK's principles-based approach offers theoretical flexibility but practical uncertainty. A US lawyer can point to Opinion 512 and know that competence requires understanding AI limitations and verifying output. A UK solicitor knows they must be competent, but has to infer what that means for AI without authoritative guidance.

The enforcement trajectory is what matters. When the SRA eventually brings enforcement actions related to AI (and they will — the Handa case was just the beginning), the decisions will retroactively define what the existing rules required all along. Solicitors who assumed the silence meant permissiveness may find themselves on the wrong side of standards that were always there but never spelled out.

What UK Solicitors Should Do Now

Don't wait for specific SRA guidance. The existing Principles and Code of Conduct create obligations that clearly apply to AI use. Build your practice accordingly.

Adopt the Bar Council guidance as a minimum standard. It's the closest thing to authoritative AI guidance in the UK legal profession. Verify all AI output, maintain confidentiality, understand your tools' limitations, and consider disclosure obligations.

Document your AI governance. When the SRA comes knocking — whether through a thematic review, a complaint investigation, or an enforcement action — the first question will be what policies you had in place. Having documented policies demonstrates the competence and risk management the Code of Conduct requires.

Watch the EU AI Act. UK firms with EU clients face EU AI Act obligations regardless of UK domestic regulation. And the UK government's own approach to AI regulation, while lighter-touch than the EU's, is evolving. The UK's AI Security Institute (renamed from the AI Safety Institute in 2025) and ongoing regulatory consultations signal that more specific requirements are coming.

Monitor the disciplinary pipeline. The Handa case won't be the last. Each new SRA or SDT decision involving AI will add specificity to what the existing rules require. Treat each decision as guidance the SRA hasn't published yet.

The Bottom Line: The SRA's lack of specific AI guidance doesn't mean UK solicitors have fewer obligations — it means they have less clarity about obligations that already exist. The Handa case demonstrated that existing competence and integrity rules apply fully to AI use, with serious consequences for non-compliance. Solicitors who treat the regulatory silence as permission are betting that the SRA's eventual interpretation will be lenient. Given the trajectory of AI regulation globally, that's a bad bet.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.