Your board is going to ask about AI governance. The question isn't if — it's whether you'll have a coherent answer when they do. In 2026, AI risk sits alongside cybersecurity and data privacy as a top-5 board concern, according to Deloitte's Annual Corporate Governance Survey.

Most GCs are flying blind here. They've deployed AI tools without a governance framework, or they've locked everything down so tight that the business routes around legal entirely. Neither approach survives a board-level conversation. What you need is a structured AI governance program that manages real risk while enabling the business to move — and a way to communicate it to directors who don't care about technical details.


What Your Board Actually Wants to Know

Strip away the buzzwords and board members care about five things:

1. What AI tools are we using, and what data are they touching? They want an inventory, not a narrative.
2. What's our exposure if something goes wrong? They want risk categories and likelihood estimates, not reassurance.
3. Who's accountable? They want names and reporting lines, not committees.
4. Are we compliant with applicable regulations? They want a gap analysis against the EU AI Act, state-level laws, and industry-specific rules.
5. How does this compare to our peers? They want benchmarking.

If you can answer these five questions clearly, you'll be the most prepared GC in the room.

Building the AI Governance Framework

A functional framework has four layers:

Layer 1: Inventory and Classification — every AI tool in use, categorized by risk level (low, moderate, high) based on what data it accesses and what decisions it influences.

Layer 2: Policies and Controls — acceptable use policies, data handling requirements, human oversight requirements, and vendor management standards.

Layer 3: Monitoring and Reporting — regular audits, incident tracking, and performance metrics.

Layer 4: Board Reporting — a quarterly dashboard that translates Layers 1-3 into the five questions above.

Most GCs try to build all four layers at once and end up with nothing usable. Start with Layer 1. You can't govern what you haven't inventoried.
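Layer 1 doesn't require specialized software; a structured record per tool is enough to start. As a minimal sketch (the field names and the tiering heuristic are illustrative assumptions, not a standard), the inventory and classification could look like this:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"

@dataclass
class AIToolRecord:
    """One row in the Layer 1 inventory (fields are illustrative)."""
    name: str
    vendor: str
    data_accessed: list[str]   # e.g. ["contracts", "employee PII"]
    decisions_influenced: str  # what the tool's output feeds into ("" if none)
    risk_tier: RiskTier

def classify(record: AIToolRecord) -> RiskTier:
    """Placeholder tiering rule: sensitive data plus decision influence
    means high risk; either one alone means moderate. A real program
    would use a richer rubric agreed with the business."""
    sensitive = {"pii", "employee pii", "health", "financial"}
    touches_sensitive = any(d.lower() in sensitive for d in record.data_accessed)
    if touches_sensitive and record.decisions_influenced:
        return RiskTier.HIGH
    if touches_sensitive or record.decisions_influenced:
        return RiskTier.MODERATE
    return RiskTier.LOW
```

Even this crude rule forces the two questions the layer exists to answer: what data does the tool touch, and what decisions does it influence.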

The Risk Reporting Model That Actually Works

Ditch the 40-page risk report. Board members scan, they don't study. Use a one-page AI risk dashboard with four quadrants:

- Tool Inventory: number of tools, risk tier distribution, new tools added this quarter
- Incidents and Near-Misses: count, severity, resolution status
- Compliance Status: regulatory requirements mapped to current state, with green/yellow/red indicators
- Strategic Metrics: cost savings from AI, efficiency gains, adoption rates

Update it quarterly. Send it 48 hours before the board meeting. Present it in 10 minutes. Leave 20 minutes for questions. This format has been adopted by several Fortune 500 legal departments and consistently gets positive board feedback.
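The four quadrants map onto a small data structure, which means the dashboard can be assembled from whatever your team already tracks. A minimal sketch (all keys and figures are hypothetical examples, not a template you must follow):

```python
def render_dashboard(q: dict) -> str:
    """Render the four-quadrant snapshot as plain text for the board pre-read."""
    lines = [
        f"Tools in use: {q['tool_count']} "
        f"(low {q['tiers']['low']} / moderate {q['tiers']['moderate']} / high {q['tiers']['high']}), "
        f"{q['new_this_quarter']} added this quarter",
        f"Incidents: {q['incident_count']} ({q['open_incidents']} unresolved)",
        "Compliance: " + ", ".join(f"{k}: {v}" for k, v in q["compliance"].items()),
        f"Strategic: ${q['cost_savings']:,} saved, {q['adoption_rate']:.0%} adoption",
    ]
    return "\n".join(lines)

# Hypothetical quarterly snapshot.
snapshot = {
    "tool_count": 14, "tiers": {"low": 8, "moderate": 4, "high": 2},
    "new_this_quarter": 3,
    "incident_count": 2, "open_incidents": 0,
    "compliance": {"EU AI Act": "yellow", "State privacy laws": "green"},
    "cost_savings": 500_000, "adoption_rate": 0.62,
}
print(render_dashboard(snapshot))
```

The point isn't the code; it's that if your dashboard can't be expressed in a structure this small, it isn't a one-pager yet.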

Vendor Management: Where Most Governance Frameworks Fail

Your governance framework is only as strong as your vendor oversight. Every AI tool your team uses processes your data through a third-party system. You need to know where the data goes, who can access it, whether it's used for model training, what happens at contract termination, and what the vendor's security posture is. Build a standard AI vendor assessment that covers these questions. Require it for every new tool. Review existing vendors annually. The biggest governance failures we've seen in corporate legal aren't from internal misuse — they're from vendors who changed their data practices without notifying customers. Your procurement process is your first line of defense.
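The two operational pieces here are the standard questionnaire and the annual review cadence. A sketch of both (the question wording and the one-year threshold are assumptions; adapt them to your own assessment):

```python
from datetime import date, timedelta

# The five baseline questions from the assessment above.
VENDOR_ASSESSMENT = [
    "Where is our data stored and processed (regions, subprocessors)?",
    "Who at the vendor can access our data?",
    "Is our data used to train the vendor's models?",
    "What happens to our data at contract termination?",
    "What is the vendor's security posture (certifications, audits)?",
]

def unanswered(answers: dict) -> list:
    """Return the questions still missing an answer; empty list means cleared."""
    return [q for q in VENDOR_ASSESSMENT if not answers.get(q, "").strip()]

def review_overdue(last_review: date, today: date) -> bool:
    """Flag vendors whose annual re-assessment is past due."""
    return today - last_review > timedelta(days=365)
```

Gating procurement on `unanswered(...) == []` is what turns the checklist from a document into a control.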

How to Present AI Strategy to Leadership Without Losing the Room

Lead with business impact, not technology. 'We reduced contract cycle time by 40% and avoided $500K in outside counsel spend' beats 'we deployed a large language model for contract analysis' every time. Frame AI governance as risk management that enables growth — not as a compliance burden. Show the cost of not governing (the Mata v. Avianca case cost a firm its reputation; data breaches at AI vendors have triggered regulatory investigations). Then show your framework as the thing that lets the company capture AI's upside while managing the downside. Close with what you need from the board: budget, executive sponsorship, and a mandate for cross-functional cooperation. Give them a decision to make, not just information to absorb.

The Bottom Line: Board-level AI governance isn't about having all the answers — it's about having a structured approach to the questions your directors will ask. Build the inventory first, establish policies second, create a reporting dashboard third, and present it as risk management that enables business value. The GCs who get this right become strategic advisors to the board. The ones who don't get replaced by someone who will.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.