The Canadian Bar Association (CBA) published its guide on the "Ethics of AI for Legal Practitioners" — positioning Canada's legal profession as more cautious on AI adoption than the US but more structured than the UK. The guide addresses confidentiality, competence, and supervision obligations specific to AI tools, giving Canadian lawyers a clearer ethical framework than many of their international counterparts have.
But here's the complication: legal regulation in Canada is provincial, not national. The CBA guide is influential but not binding. Each provincial law society sets its own rules, and their approaches to AI range from progressive to nearly silent. For US firms with cross-border practices — and there are many of them, concentrated in the border states — understanding these variations isn't optional. The ethical obligations that apply to your Canadian matters may be stricter than what you're used to at home.
What the CBA Guide Actually Says
The CBA's guide addresses AI through the lens of existing ethical obligations, similar in structure to ABA Formal Opinion 512 but with a more conservative tone. Key positions include:
Competence includes technological literacy. Lawyers must understand the capabilities and limitations of AI tools before using them. The CBA is explicit that "I didn't know AI could hallucinate" is not a defense to a competence complaint.
Confidentiality requires informed assessment. Before entering client information into any AI tool, lawyers must assess where data goes, how it's stored, who has access, and whether the tool uses input data for training. The CBA treats this as a threshold question — if you can't answer it, you can't use the tool for client work.
Supervision obligations extend to AI. Lawyers who supervise others must ensure that those under their supervision use AI competently. This explicitly includes paralegals, law clerks, and articling students.
Client communication about AI use is encouraged, particularly when AI tools are used in ways that materially affect the representation.
Provincial Law Society Variations
The real regulatory power in Canada sits with the provincial law societies, and their approaches differ significantly.
Law Society of Ontario (LSO): The largest provincial regulator has issued practice resources on AI but hasn't amended its Rules of Professional Conduct specifically for AI. The LSO's technology competence requirements, added in 2020, provide a framework but lack AI-specific detail.
Law Society of British Columbia (LSBC): Has been more proactive, publishing practice advisories that address AI tools directly. BC's approach emphasizes the confidentiality risks of cloud-based AI tools and requires lawyers to conduct due diligence on AI vendors.
Law Society of Alberta (LSA): Published guidance acknowledging AI tools while emphasizing that existing competence and confidentiality rules apply. Alberta's approach mirrors the UK's principles-based stance — less detailed than BC's advisories, but more substantive than Ontario's silence on AI-specific issues.
Barreau du Québec: Operating under the civil law tradition, Québec's approach has unique dimensions. The Barreau has addressed AI in the context of its Professional Code of Ethics, with particular emphasis on the lawyer's personal responsibility for all work product — a principle that takes on new significance when AI generates the first draft.
How Canada's Approach Compares to the US
The philosophical difference is significant. The US approach is fragmented but permissive — the ABA says use AI competently, individual courts add disclosure requirements, and the general stance is that AI tools are acceptable if properly supervised. The American framework focuses on how you use AI rather than whether you should.
Canada's approach is more cautious by default. The CBA guide emphasizes risk assessment before adoption, and several provincial law societies have taken positions that implicitly discourage AI use for certain sensitive tasks until the technology matures. The Canadian framework places a heavier burden on pre-use assessment — understanding the tool before deploying it, rather than learning through use.
This difference matters for cross-border practices. A US firm's AI workflow that passes muster under ABA Opinion 512 might not satisfy the more conservative requirements of certain Canadian law societies. Lawyers admitted in both jurisdictions need to apply the stricter standard to any matter touching Canadian interests.
Cross-Border Implications for US Firms
US firms handling Canadian matters face a layered compliance challenge. Even federal Canadian matters (immigration, intellectual property, competition law) are governed by the law society of the province where the lawyer is admitted. Provincial matters are governed by the law society of whichever province hosts the proceeding.
The practical risks concentrate in two areas. First, confidentiality standards may be stricter: some Canadian provincial rules impose stronger protections on client data than their US equivalents, which affects which AI tools you can use on Canadian matters. Second, disclosure expectations may differ: Canadian courts have been developing their own AI disclosure requirements, which may not align with US federal court standing orders.
Cross-border M&A and commercial work is the highest-volume intersection point. US firms routinely use AI tools for due diligence, contract analysis, and document review on Canadian transactions. If the Canadian counterparty or regulatory body has expectations about AI governance that your firm can't meet, it creates a problem that surfaces at the worst possible time.
What US Firms with Canadian Practices Should Do
Identify which provincial law societies govern your Canadian work. The obligations vary by province, and applying Ontario's approach to a matter in British Columbia (or vice versa) is itself a compliance failure.
Conduct a gap analysis between your current AI policies and Canadian requirements. If your firm has an AI governance policy built to ABA Opinion 512 standards, map it against the CBA guide and the relevant provincial requirements. The gaps are likely in pre-use assessment depth and confidentiality documentation.
Apply the stricter standard. When a matter touches both US and Canadian interests, apply whichever jurisdiction's AI ethics requirements are more restrictive. This is the same principle firms apply to conflict of interest rules in cross-border work.
Engage Canadian counsel on AI governance. If your firm has Canadian-admitted lawyers, they should be involved in AI policy development for cross-border matters. If you rely on local Canadian counsel, confirm their AI governance practices align with your firm's standards.
Monitor provincial developments. Canadian AI regulation is evolving faster at the provincial level than at the federal level. Subscribe to updates from the law societies governing your Canadian work.
The Bottom Line: Canada's legal profession is taking a more conservative approach to AI ethics than the US, with the CBA providing a national framework that provincial law societies are implementing at varying speeds. For US firms with cross-border practices, the practical implication is clear: your US-compliant AI workflow may not satisfy Canadian requirements. Identify which provincial rules apply to your Canadian work, conduct a gap analysis, and apply the stricter standard. The cost of getting this wrong on a cross-border matter is measured in both regulatory consequences and client relationships.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
