AI-generated contracts are already enforceable under existing U.S. law. That's not a prediction; it's what the E-SIGN Act and the Uniform Electronic Transactions Act have said since 2000 and 1999, respectively. The legal framework for electronic agents forming binding agreements has been on the books for over two decades. What's new is that AI systems are now sophisticated enough to actually do it at scale, and the law is catching up to the implications.
The question isn't whether an AI can form a contract. It's who's bound when it does. The Uniform Electronic Transactions Act explicitly contemplated systems that "learn and modify their own instructions" — language that maps directly onto modern agentic AI. When an AI agent clicks "accept" on behalf of a business, existing law says that action is attributable to the person who deployed the agent. The California AI Transparency Act adds new disclosure wrinkles. And the courts are just starting to grapple with what happens when two AI agents negotiate with each other.
The E-SIGN Act and Electronic Agents: The Framework Already Exists
The federal Electronic Signatures in Global and National Commerce Act (E-SIGN) is clear: a contract "may not be denied legal effect, validity, or enforceability solely because its formation, creation, or delivery involved the action of one or more electronic agents so long as the action of any such electronic agent is legally attributable to the person to be bound." That's 15 U.S.C. Section 7001(h). It's been law since 2000. The Uniform Electronic Transactions Act (UETA), adopted by 49 states plus D.C., goes further. Section 14 addresses automated transactions directly and provides that contracts can be formed by the interaction of electronic agents, even when no individual was aware of or reviewed the specific terms. The drafters of UETA anticipated systems that could learn and modify their own instructions. In 2000, that was theoretical. In 2026, it's every agentic AI system on the market. The legal foundation for AI-formed contracts isn't ambiguous. The attributability question, connecting the AI's action to the person who deployed it, is where the litigation will happen.
Agentic AI and the 'Who Clicked Accept' Problem
Agentic AI systems don't just draft contracts — they negotiate, modify terms, and execute agreements autonomously. When your firm's AI agent accepts vendor terms or your client's AI procurement system agrees to a licensing deal, traditional contract formation concepts get stressed. Proskauer Rose's 2025 analysis frames the core issue: contract law requires mutual assent, and courts have historically required human intent behind that assent. But E-SIGN and UETA created statutory carve-outs for electronic agents precisely to avoid this problem. The legal attribution framework works like this: if you deploy an AI agent with authority to enter agreements, the agent's actions bind you the same way an employee's authorized actions would. The parallel to agency law is intentional. The harder cases involve AI agents that exceed their intended scope — accepting terms the deploying party wouldn't have agreed to, or committing to obligations the party can't fulfill. Traditional agency law concepts like apparent authority and ratification will likely govern, but there's almost no case law yet. Managing partners need to understand this: if your firm deploys AI agents that interact with external systems, you need clear scope limitations and monitoring. An AI agent that binds your firm to unfavorable terms isn't a tech problem. It's a malpractice problem.
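For firms (or clients) actually deploying such agents, "clear scope limitations" ultimately means limits enforced in code, not just in a policy memo. The sketch below is a minimal illustration of that idea, not any vendor's API; every name (`AgentAuthority`, `within_scope`, the sample counterparty and caps) is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class AgentAuthority:
    """Explicit, pre-defined scope for a contracting agent.
    Hypothetical structure for illustration only."""
    allowed_actions: set = field(default_factory=set)       # e.g. {"accept_nda"}
    max_commitment_usd: float = 0.0                         # hard per-transaction cap
    approved_counterparties: set = field(default_factory=set)

def within_scope(authority: AgentAuthority, action: str,
                 amount_usd: float, counterparty: str) -> bool:
    """Return True only if the proposed commitment falls inside every
    limit the deploying party defined before the agent went live."""
    return (
        action in authority.allowed_actions
        and amount_usd <= authority.max_commitment_usd
        and counterparty in authority.approved_counterparties
    )

# Usage: the agent checks scope before executing any agreement.
auth = AgentAuthority(
    allowed_actions={"accept_nda", "renew_saas_license"},
    max_commitment_usd=25_000,
    approved_counterparties={"Acme Cloud LLC"},
)
print(within_scope(auth, "renew_saas_license", 18_000, "Acme Cloud LLC"))  # True
print(within_scope(auth, "sign_msa", 18_000, "Acme Cloud LLC"))            # False
```

The design point is that the authority definition is a reviewable artifact: counsel can audit exactly what the agent was empowered to accept, which is precisely the record you want when an attribution or apparent-authority dispute arises.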
The California AI Transparency Act: Disclosure Requirements
California's AI Transparency Act (SB-942), signed into law in 2024, adds a transparency layer to AI-generated content. The Act requires providers of generative AI systems to make AI detection tools available, offer users the option to include a "manifest disclosure" that content is AI-generated, include a "latent disclosure" in AI-generated content, and maintain these capabilities through contractual obligations with licensees. For contract formation, this matters because a contract generated by AI may need to carry disclosure metadata. If you're using AI to draft contracts and sending them to California counterparties, the transparency requirements could apply. The enforcement mechanism has teeth: the California Attorney General, city attorneys, and county counsel can bring actions with civil penalties of $5,000 per violation, with each day a violation continues counting as a discrete violation. The Act was amended in October 2025 to expand its scope and clarify compliance obligations. Firms generating high volumes of AI-drafted contracts, particularly in real estate, employment, or consumer-facing practice areas, need to assess whether their AI workflows comply with these disclosure requirements.
Can Two AI Agents Form a Binding Contract With Each Other?
This isn't science fiction. Agentic AI systems are already being deployed in procurement, supply chain management, and commercial negotiations. When Company A's AI agent negotiates terms with Company B's AI agent and they reach agreement, is that a binding contract? Under E-SIGN and UETA, the answer is almost certainly yes. UETA Section 14(1) states that a contract may be formed by the interaction of electronic agents even if no individual was aware of or reviewed the electronic agents' actions or the resulting terms. The statutory framework doesn't require human involvement in the formation process — only that the electronic agent's actions be legally attributable to a party. The practical challenges are significant. Traditional contract defenses — mistake, unconscionability, fraud — were designed for human parties. If an AI agent agrees to terms that no reasonable human would accept, courts will need to decide whether the deploying party bears the risk of their agent's poor judgment. The UCC's existing framework for electronic contracting in commercial transactions provides some guidance, but the edge cases will require new precedent. For law firms advising clients on AI-enabled commerce, the key recommendation is clear: define the agent's authority explicitly, log all AI-initiated transactions, and build human review triggers for high-value or unusual terms.
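The closing recommendations (log every AI-initiated transaction, trigger human review for high-value or unusual terms) can be sketched concretely. This is an illustrative pattern under stated assumptions, not a reference implementation: the threshold, the flagged-terms list, and the in-memory log are all placeholders a real deployment would replace with firm policy and an append-only store.

```python
import json
import time

REVIEW_THRESHOLD_USD = 50_000   # assumption: firm-set cap for auto-acceptance
UNUSUAL_TERMS = {"indemnification", "exclusivity", "auto-renewal"}  # assumed flags

audit_log = []                  # stand-in for a durable, append-only record

def route_agreement(proposal: dict) -> str:
    """Log an AI-negotiated proposal, then either clear it for
    auto-acceptance or hold it for human review."""
    # 1. Log first, unconditionally: the record exists even if the
    #    agent later exceeds its mandate.
    audit_log.append(json.dumps({"ts": time.time(), "proposal": proposal},
                                sort_keys=True))

    # 2. Trigger human review on value or on flagged term categories.
    flagged = UNUSUAL_TERMS.intersection(proposal.get("terms", []))
    if proposal.get("value_usd", 0) > REVIEW_THRESHOLD_USD or flagged:
        return "human_review"   # agent pauses; a person decides
    return "auto_accept"        # inside scope; agent may proceed

print(route_agreement({"value_usd": 12_000, "terms": ["term_12_months"]}))
print(route_agreement({"value_usd": 80_000, "terms": ["exclusivity"]}))
```

Logging before routing is the deliberate choice here: if a dispute over attribution or ratification arises, the deploying party can show exactly what its agent proposed, when, and whether a human was in the loop.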
What Law Firms Need to Do Right Now
First, audit your own AI use. If your firm uses AI tools that send emails, schedule meetings, file documents, or interact with court systems, those actions may create binding obligations. Understand what your AI tools can do autonomously versus what requires human approval. Second, update your client engagement letters. If you're using AI to draft contracts, review documents, or conduct research, clients need to know. ABA Formal Opinion 512 requires informed consent for AI use with client data, and the California AI Transparency Act may require disclosure in the documents themselves. Third, advise clients on agentic AI risk. Any client deploying AI agents in commercial transactions needs to understand the attribution framework: their agent's actions bind them. Scope limitations, transaction caps, and human-in-the-loop requirements for high-value commitments should be standard. Fourth, watch the case law. We're in the early innings of AI contract formation litigation. The first major appellate decisions on AI agent authority, AI-to-AI contract formation, and AI contract defenses will shape this area for decades. Track them and brief your clients proactively.
The Bottom Line: AI-generated contracts are enforceable today under the E-SIGN Act and UETA — the legal framework has existed since 2000. The real issues are attribution (who's bound when an AI acts), authority scope (what happens when AI exceeds its mandate), and transparency (California's disclosure requirements). Law firms need to audit their own AI workflows, update engagement letters, and prepare clients for a world where AI agents are forming binding agreements at machine speed.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
