On August 8, 2025, the Executive Office for Immigration Review issued Policy Memorandum 25-40 — its first-ever public guidance on generative AI in immigration proceedings. That memo didn't ban AI. It didn't require blanket disclosure. What it did was put every immigration attorney on notice that AI-fabricated citations in removal proceedings will trigger professional discipline, while leaving individual judges free to impose their own standing orders.

Immigration courts process over 3 million pending cases with roughly 600 judges, and the pro se rate in removal proceedings exceeds 60%. That combination — crushing volume, limited judicial resources, and a majority of unrepresented respondents turning to AI for help — makes EOIR the federal tribunal system where AI adoption is happening fastest and with the least oversight.

EOIR Policy Memorandum 25-40: What It Actually Says

PM 25-40 takes a deliberately measured approach: it imposes neither a blanket prohibition on generative AI nor a mandatory disclosure requirement. Instead, the memo does three things. First, it confirms that existing professional conduct standards, particularly the duty not to knowingly offer false evidence, apply to AI-generated content. Second, it authorizes individual immigration judges and the Board of Immigration Appeals to adopt their own standing orders or local operating procedures regulating AI use and disclosure. Third, it warns practitioners that submitting AI-generated content containing fake case citations or factually inaccurate information will trigger disciplinary proceedings. The practical effect is a decentralized enforcement model. Some immigration judges will require AI disclosure; others won't. Some will sanction aggressively; others will rely on the existing disciplinary framework. For practitioners appearing before multiple judges, the compliance burden is real: you need to know each judge's approach.

Pro Se Respondents and AI: The Access-to-Justice Tension

More than 60% of respondents in removal proceedings appear without counsel, and that number is higher for detained individuals. These pro se respondents are increasingly using AI tools to draft asylum applications, motions to reopen, and responses to Notices to Appear. The quality varies wildly — some AI-generated filings are competent summaries of legitimate asylum claims, while others cite nonexistent BIA decisions or misstate the elements of withholding of removal. EOIR faces the same tension the Tax Court confronts with self-represented filers: restricting AI use effectively restricts access to justice for people who can't afford attorneys in life-or-death proceedings. But allowing unrestricted AI use floods already-overwhelmed immigration judges with filings they can't trust. PM 25-40's approach — enforce accuracy standards without banning the tool — is a practical compromise, though it shifts the verification burden onto judges who are already managing dockets of 2,000+ cases.

RFE Automation and Removal Defense Applications

For immigration attorneys, AI's highest-value applications are in two areas: responding to Requests for Evidence and building removal defense packages. RFE responses are document-heavy, deadline-driven, and often follow predictable patterns — exactly the kind of work where AI excels at first-draft generation. An attorney who receives an RFE on an employment-based petition can use AI to generate an initial response framework, identify the specific deficiency alleged, and draft language addressing common RFE grounds. Similarly, removal defense requires assembling country conditions evidence, personal declarations, and legal arguments that synthesize across multiple sources. AI tools can accelerate the research phase — pulling current country conditions reports, identifying relevant BIA precedent, and flagging potential relief eligibility — while the attorney focuses on case-specific strategy and credibility assessment. The key limitation: immigration law changes rapidly through policy memos, interim rules, and executive action. AI models trained on data even six months old may miss critical changes in asylum standards, TPS designations, or prosecutorial discretion policies.
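The verification discipline this workflow demands can be made mechanical. Below is a minimal Python sketch of a pre-filing gate that flags any citation in an AI-generated draft that a human reviewer has not yet confirmed against a primary source. The `I&N Dec.` regex and the sample citations are simplified placeholders for illustration, not a complete citation grammar.

```python
import re

# Illustrative pre-filing gate for AI-assisted drafting: nothing is filed
# until every citation in the draft has been independently confirmed by a
# human reviewer. The pattern below matches only simplified BIA reporter
# cites (e.g. "27 I&N Dec. 123"); a real tool would need a far richer
# citation grammar.
CITATION_RE = re.compile(r"\d+\s+I&N\s+Dec\.\s+\d+")

def unverified_citations(draft_text: str, verified: set[str]) -> list[str]:
    """Return citations found in the draft that no reviewer has confirmed."""
    return [c for c in CITATION_RE.findall(draft_text) if c not in verified]

draft = "Relief is available under 27 I&N Dec. 123 and 28 I&N Dec. 456."
confirmed = {"27 I&N Dec. 123"}  # checked against the actual reporter

print(unverified_citations(draft, confirmed))  # ['28 I&N Dec. 456']
```

The point of the design is the direction of the default: a citation is treated as suspect until a human clears it, which matches PM 25-40's accuracy obligations better than spot-checking after the fact.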

When the Judge Uses AI: The Emerging Controversy

In a development that caught the immigration bar off guard, Migrant Insider reported in 2025 that an immigration judge had used AI to draft court decisions. This raises a fundamentally different set of concerns than attorney AI use does. When a judge uses AI to generate decisions in removal proceedings, where the respondent's life may literally be at stake, questions of due process, judicial independence, and individualized consideration arise that PM 25-40 doesn't address. The memo focuses on practitioner obligations, not judicial AI use, and EOIR's internal policies on judicial AI remain opaque. The National Association of Immigration Judges has raised concerns about judicial workload as a driver of AI adoption: when judges are expected to complete cases at rates that make individualized analysis nearly impossible, the temptation to use AI for decision drafting grows. Managing partners should be aware that AI-generated judicial decisions can contain the same hallucination and accuracy problems as AI-generated briefs, creating potential grounds for appeal.

Compliance Strategy for Immigration Practices

Immigration practitioners need a compliance framework that accounts for PM 25-40's decentralized approach.

Track individual judge requirements. Maintain a database of which immigration judges in your practice areas have issued AI-specific standing orders. The AILA practice advisories are a good starting point, but direct monitoring of local court notices is essential.

Verify country conditions and legal citations obsessively. Immigration AI errors are uniquely dangerous because they can affect asylum seekers' safety. Every country conditions citation must link to a current, verifiable source. Every BIA or circuit court citation must be Shepardized.

Build AI-assist disclosures into your practice. Even where PM 25-40 doesn't require disclosure, proactive disclosure builds credibility with judges who are skeptical of AI. A simple statement that AI tools were used for research assistance and all content has been independently verified costs nothing and prevents problems.
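The database of judge requirements can start as something very small: a registry that records each judge's standing order and, crucially, defaults to disclosure whenever a judge has no record yet. A hypothetical Python sketch follows; the judge names, court, and dates are invented for illustration and are not real EOIR data.

```python
from dataclasses import dataclass

# Hypothetical per-judge tracker for AI standing orders. All names, courts,
# and dates here are invented for illustration; none are real EOIR records.

@dataclass
class JudgeAIPolicy:
    judge: str
    court: str
    requires_disclosure: bool = False
    standing_order_date: str = ""  # ISO date of the standing order, if any

class StandingOrderRegistry:
    def __init__(self) -> None:
        self._policies: dict[str, JudgeAIPolicy] = {}

    def add(self, policy: JudgeAIPolicy) -> None:
        self._policies[policy.judge] = policy

    def disclosure_required(self, judge: str) -> bool:
        # Fail safe: with no record for a judge, assume disclosure is
        # required rather than risk violating an untracked order.
        policy = self._policies.get(judge)
        return True if policy is None else policy.requires_disclosure

registry = StandingOrderRegistry()
registry.add(JudgeAIPolicy("Doe", "Hypothetical City Immigration Court",
                           requires_disclosure=True,
                           standing_order_date="2025-09-01"))
registry.add(JudgeAIPolicy("Roe", "Hypothetical City Immigration Court"))

print(registry.disclosure_required("Doe"))        # True: order on file
print(registry.disclosure_required("Untracked"))  # True: no record, disclose
```

The design choice worth copying is the default: under a decentralized regime like PM 25-40's, the safe failure mode for an unknown judge is to disclose, since proactive disclosure costs nothing while violating an untracked standing order can cost a client.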

The Bottom Line: EOIR's PM 25-40 is the first formal AI guidance for immigration proceedings, and it takes a decentralized approach: no blanket ban, no mandatory disclosure, but clear consequences for false AI-generated content. With 3 million pending cases, 600 judges, and a 60%+ pro se rate, immigration courts are where AI adoption is outpacing regulatory frameworks fastest. Immigration practitioners should track individual judge requirements and verify every AI-generated citation as if lives depend on it, because in removal proceedings, they do.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.