When Anthropic's Project Deal pilot ran on April 24, 2026, Claude agents represented buyers and sellers across 186 completed transactions. In some deals, the same underlying model architecture sat on both sides of the negotiation. Per Legal IT Insider's coverage, the legal frameworks for same-model dual representation don't exist yet. Traditional agency law has handled dual representation for over a century: Restatement (Third) of Agency § 8.06 requires informed consent from both principals. For software agents, that consent flow has no analog. Here's the fiduciary-duty stack: the duties an AI agent plausibly owes a principal, how dual representation fails when both sides run the same model, and what the framework needs before agent-mediated marketplaces scale.
Restatement (Third) of Agency: duties that don't yet apply to software agents
Restatement (Third) of Agency §§ 8.01-8.11 enumerates the duties an agent owes a principal: loyalty (§ 8.01), no undisclosed material benefit (§ 8.02), no conflicts (§ 8.03), confidentiality (§ 8.05), reasonable care (§ 8.08), and good conduct (§ 8.10). Section 1.01 defines an agent as "a person who agrees to act on behalf of another," and the comments contemplate human or corporate agents.
Whether a software agent qualifies as a "person" with these duties is unsettled. No appellate court has reached the question on Project Deal-style facts. The pragmatic reading is that the *principal*, the human or corporate entity that deployed the agent, owes the duties, and the agent is an instrumentality.
The second-order issue: if the agent is an instrumentality, then breach of duty by the agent is breach by the principal. That collapses the fiduciary analysis into product-liability and supervisory-negligence frames. The third-order issue: when both sides of a transaction deploy agents from the same vendor running the same model, the "independent counterparty" assumption underlying contract formation is structurally undermined.
For firms drafting engagement letters, the defensible posture is to treat the principal as bearing all agent-side duties, document the supervision architecture (per the agent supervision rules deep-dive), and disclose the deployment surface to all transaction counterparties.
Dual representation: the consent flow doesn't exist
Restatement (Third) of Agency § 8.06 lets an agent represent both principals in a transaction only with informed consent of both, and only if the agent can fulfill duties to each. For human agents, the consent flow is documented: written disclosure, opportunity to consult independent counsel, signed acknowledgment.
For software agents, the consent flow has no analog. The principal granted the agent budget authority. Did the principal know the counterparty's agent ran the same model? Did the principal consent to that? Most users haven't read the model card.
The practical short-term answer is disclosure clauses in engagement letters and transaction documents. The principal acknowledges the agent's deployment surface, model vendor, and the possibility that the counterparty deploys the same model. The principal consents to transact under those conditions, with specified escalation triggers if the agent flags counterparty model identity.
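No consent flow exists for software agents today, so the mechanics above are necessarily speculative. One way to imagine how a disclosure clause might be operationalized is a recorded acknowledgment that the agent checks before it transacts; every class, field, and function name below is hypothetical, a sketch of the structure rather than any vendor's actual feature:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DeploymentDisclosure:
    """What the engagement letter discloses to the principal (hypothetical fields)."""
    model_vendor: str
    model_id: str
    deployment_surface: str              # e.g. "enterprise API"
    counterparty_may_run_same_model: bool

@dataclass
class ConsentRecord:
    """The principal's signed acknowledgment of the disclosure."""
    principal: str
    disclosure: DeploymentDisclosure
    acknowledged_at: Optional[datetime] = None

    def acknowledge(self) -> None:
        # Timestamped consent, analogous to the signed acknowledgment
        # in a human-agent dual-representation waiver.
        self.acknowledged_at = datetime.now(timezone.utc)

def agent_may_transact(consent: ConsentRecord) -> bool:
    """The agent refuses to act until informed consent is on record."""
    return consent.acknowledged_at is not None
```

The design choice worth noting: consent is a precondition enforced in code, not a clause the agent could overlook mid-negotiation.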
The second-order question: even with consent, can two instances of the same model fulfill independent fiduciary duties to opposing principals? The architectural priors, training data, and reasoning patterns are shared. The concept of "independent representation" gets philosophically thin when both representatives are outputs of the same weights conditioned on different prompts.
Algorithmic collusion: when shared model weights look like agreement
Sherman Act § 1 prohibits agreements in restraint of trade. Cartel cases require proof of agreement: explicit collusion, parallel conduct plus "plus factors," conspiratorial communication. The textbook case is human agents on phone calls.
Agent-to-agent transactions raise a structurally different question: when two Claude instances negotiate against each other, are they "agreeing" if they reach predictable outcomes because they share model architecture? The FTC has flagged algorithmic collusion concerns in prior guidance. No Section 1 case has reached agent-to-agent facts.
The second-order issue: prosecutors looking at agent-mediated price discovery in concentrated markets have a decision to make. Do they treat shared model weights as a "tool of agreement," requiring disclosure and remedies? Do they treat it as parallel conduct without agreement, requiring more? Do they create a new category? The framework gap creates enforcement uncertainty for principals deploying agents.
The third-order issue: corporate counsel will start asking model vendors for warranties on counterparty-detection: flagging when the counterparty agent runs the same model and routing the transaction to human review. None of the current Claude tiers offer that as a documented feature. It will be a procurement requirement within 18 months.
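No current tier documents such a warranty, so any implementation is conjecture. If a vendor did ship counterparty detection, the core guard might reduce to a routing check like this (class, field, and return values are illustrative, not any product's API):

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    """Minimal hypothetical record of an agent-to-agent deal."""
    deal_id: str
    own_model_id: str
    counterparty_model_id: str

def route(tx: Transaction) -> str:
    """Flag same-model counterparties and send the deal to human review;
    everything else proceeds on the automated path."""
    if tx.counterparty_model_id == tx.own_model_id:
        # The independent-counterparty assumption fails; escalate.
        return "human_review"
    return "automated"
```

The hard part in practice would not be this check but the detection itself: neither agent has a verified channel for learning the counterparty's model identity, which is precisely what a procurement warranty would have to cover.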
Loyalty in conflict: when the agent's optimization differs from the principal's interest
Restatement § 8.01 imposes a duty of loyalty: the agent must act for the principal's benefit, not its own. For software agents, "the agent's own benefit" is a strange concept. The agent has no economic interest. But the model has training objectives, and those objectives can diverge from the principal's interest in subtle ways.
Claude is trained for helpfulness, harmlessness, and honesty. In a Project Deal-style negotiation, those training objectives might cause the agent to disclose information the principal would prefer to withhold: counterparty walk-away point inferences, internal cost basis, strategic preferences. The model's calibration profile (see the Vortex coverage of Opus 4.7's calibration improvements) makes it less likely to confidently bluff or misrepresent.
For a principal whose negotiation strategy depends on selective disclosure, that's a loyalty mismatch. The agent isn't disloyal in the human sense; it's optimizing for its training objectives, not the principal's transactional interest.
The second-order solution: principals need to specify the agent's negotiation posture explicitly in the deployment configuration. Authority envelopes should include disclosure rules, walk-away thresholds, and tone guidance. The third-order solution: model vendors will ship "negotiation modes" with documented disclosure profiles within 12 months. That becomes a procurement decision for principals and a competence requirement for supervising counsel.
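What an "authority envelope" contains is not standardized anywhere; as a sketch under that caveat, the disclosure rules and walk-away threshold described above might be encoded like this (all names and parameters are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class AuthorityEnvelope:
    """Explicit negotiation posture the principal configures (hypothetical)."""
    walk_away_price: float                           # seller-side floor
    disclosable: set = field(default_factory=set)    # topics the agent may reveal
    tone: str = "firm"

    def may_disclose(self, topic: str) -> bool:
        # Allowlist, not denylist: silence is the default posture,
        # which counters the training-objective drift toward candor.
        return topic in self.disclosable

    def within_authority(self, offer: float) -> bool:
        """A seller-side agent may accept only offers at or above the floor."""
        return offer >= self.walk_away_price
```

The allowlist design is the point: rather than trusting the model to withhold the right things, the principal enumerates what may be disclosed, and everything else is out of bounds by default.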
Confidentiality and the Heppner gap
Restatement § 8.05 imposes a duty of confidentiality: the agent must not disclose principal information without authorization. For software agents, the duty is structurally compromised by the lack of attorney-client privilege over agent communications, as established in *United States v. Heppner* (SDNY, Feb 17, 2026, Judge Rakoff).
The Heppner court held that exchanges between a defendant and consumer Claude were not privileged (read the full Heppner explainer). The reasoning extends to Project Deal flows. Every prompt the principal feeds the agent, every response the agent generates, every negotiation log between agents: all are documentary records subject to discovery in any future litigation.
The principal's confidential transaction information is therefore in a record that future plaintiffs, regulators, or counterparties can subpoena. That's a confidentiality posture different from human-agent representation, where attorney-client privilege provides a shield.
The pragmatic move: enterprise deployment surfaces (Claude Team or Enterprise, AWS Bedrock, Microsoft Foundry) carry stronger data-handling commitments than consumer Claude, but enterprise alone doesn't create privilege. The supervising attorney's engagement letter should specify retention, access protocols, and disclosure obligations to mitigate exposure. See the Heppner-meets-Project-Deal privilege analysis for the documentation architecture.
The Bottom Line: My take is that software agents don't fit Restatement § 1.01's definition of a "person," but the duties have to flow somewhere, and the cleanest answer is that the principal bears them. Dual representation with two instances of the same model fails the independent-counterparty assumption. Loyalty mismatches arise from training-objective drift. Confidentiality is structurally compromised by the Heppner gap. Firms drafting engagement letters need to address each layer explicitly, not by analogy to human-agent doctrine.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
