On April 7, 2026, NIST released its AI RMF Profile for Trustworthy AI in Critical Infrastructure — the clearest signal yet that AI risk reporting is becoming a board-level compliance obligation, not a GC's discretionary briefing. Directors are asking questions about AI risk. If you don't have structured answers, someone else at the table will fill the vacuum.

The GCs who manage this well aren't dumbing down technical risk into corporate platitudes. They're using the NIST AI Risk Management Framework as a shared language that unifies conversations across risk, compliance, technology, and legal, and they're delivering quarterly reports that give directors enough context to govern without burying them in jargon.


Why Boards Are Asking About AI Risk Now

Three things changed in 2025 and 2026 to put AI on every board's agenda.

First, enforcement arrived. The EU AI Act's penalty structure (up to 35 million euros or 7% of global revenue, whichever is higher) turned AI governance from a best practice into a fiduciary obligation. Directors face personal liability questions if they fail to exercise adequate oversight.

Second, the first sanctions hit. In 2026, courts issued the first prosecutor sanction and the first circuit-court sanction for AI-related issues in litigation. Judge Rakoff's ruling in United States v. Heppner, holding that AI chatbot documents weren't protected by attorney-client privilege, created a new category of risk boards hadn't previously considered.

Third, AI adoption reached critical mass. With 87% of corporate legal departments now using AI tools, the question shifted from 'should we adopt AI?' to 'are we governing what we've already deployed?' Boards that haven't asked this question yet will, likely prompted by their D&O insurer, an activist investor, or a regulatory inquiry.

The NIST AI Risk Management Framework: Simplified for Board Reporting

The NIST AI RMF 1.0 is the gold standard for AI risk governance, and it's what most GCs should anchor their board reporting to. It organizes AI risk management into four functions:

Govern: establish policies, roles, and accountability structures for AI risk. Board-level question: 'Do we have a formal AI governance framework with named owners?'

Map: identify and categorize all AI systems in use, including their risk profiles. Board-level question: 'Do we have a complete inventory of AI systems and their risk classifications?' (A minimal inventory sketch follows this list.)

Measure: assess AI risks using defined metrics and testing. Board-level question: 'How are we quantifying AI risk, and what are the current risk levels?'

Manage: implement controls, monitor performance, and respond to incidents. Board-level question: 'What controls are in place, and how do we respond when something goes wrong?'

Primary ownership of NIST AI RMF adoption should sit with the General Counsel, CISO, or Chief Risk Officer, whoever can operationalize AI risk management as part of the broader enterprise risk strategy. The GC's advantage is the ability to interpret emerging regulatory requirements and assess legal exposure, which is exactly what directors need.
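To make the Map function concrete, here is a minimal Python sketch of the AI system inventory that would back the board-level questions above. The class and field names (AISystem, business_owner, risk_tier) are hypothetical illustrations, not a schema NIST prescribes:

```python
from dataclasses import dataclass, field
from collections import Counter

# Hypothetical record for the Map function's output. Field names are
# illustrative, not a NIST-mandated schema.
@dataclass
class AISystem:
    name: str
    business_owner: str          # named owner, per the Govern function
    risk_tier: str               # "high", "medium", or "low"
    regulations: list = field(default_factory=list)  # e.g. ["EU AI Act"]

def risk_breakdown(systems: list) -> Counter:
    """Roll the inventory up into the counts a board dashboard reports."""
    return Counter(s.risk_tier for s in systems)

inventory = [
    AISystem("contract-review-llm", "General Counsel", "high", ["EU AI Act"]),
    AISystem("support-ticket-triage", "CTO", "low"),
]
print(risk_breakdown(inventory))  # Counter({'high': 1, 'low': 1})
```

The point of the structure is the named owner on every record: it answers the Govern question and the Map question with the same artifact.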

The Quarterly Board Report Template

Keep it to three pages maximum. Directors receive hundreds of pages per board meeting; they won't read a 20-page AI risk briefing.

Page 1: AI Risk Dashboard. A single-page visual showing the number of AI systems deployed, the risk classification breakdown (high/medium/low using NIST categories), compliance status with active regulations (EU AI Act, Colorado AI Act, applicable state laws), and any open incidents or near-misses. Use red/yellow/green status indicators; boards are trained to read this format (a sketch for deriving these indicators follows the template).

Page 2: Key Developments and Actions. Three to five bullet points covering new regulations enacted or proposed since the last report, changes to your AI inventory (new deployments, decommissions), completed risk assessments and their findings, and remediation actions taken or in progress. Each bullet should include a 'so what': not just what happened, but what it means for the organization.

Page 3: Forward-Looking Risk Assessment. Upcoming regulatory deadlines (next six months), emerging risk areas identified through monitoring, budget implications of upcoming compliance requirements, and recommended board actions or approvals needed.

Deliver this report quarterly, aligned with regular board meetings. Supplement with ad-hoc briefings only for material developments: a new enforcement action in your industry, a significant regulatory change, or an internal incident.
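One way to keep the dashboard's red/yellow/green indicators consistent from quarter to quarter is to derive them from rules rather than assign them by hand. A minimal sketch, assuming illustrative thresholds (60% and 90% compliance coverage) that you would calibrate to your own ERM scoring:

```python
# Hypothetical cut-offs: the 60%/90% thresholds and the any-open-incident
# rule are illustrative assumptions, not NIST or regulatory values.
def rag_status(compliance_pct: float, open_incidents: int) -> str:
    """Map compliance coverage and open incidents to a red/yellow/green
    indicator for the page-1 dashboard."""
    if open_incidents > 0 or compliance_pct < 0.60:
        return "red"
    if compliance_pct < 0.90:
        return "yellow"
    return "green"

print(rag_status(0.78, 0))  # yellow: 78% coverage, no open incidents
print(rag_status(0.95, 1))  # red: an open incident escalates regardless
```

Whatever thresholds you choose, document them and keep them stable; a green that quietly means something different each quarter defeats the purpose of the format.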

The Five Questions Directors Will Ask (And How to Answer Them)

Prepare for these five questions at every board meeting where AI risk is on the agenda.

'What's our exposure?' Answer with specifics: 'We use 12 AI systems, 3 of which are classified as high-risk under the EU AI Act. Our maximum regulatory exposure is [X] based on revenue thresholds. Our current compliance posture covers [Y]% of requirements.' A sketch of the exposure calculation follows after the five questions.

'How do we compare to peers?' Reference industry benchmarking data: ACC surveys, Gartner assessments, or industry-specific compliance studies. Directors want to know if you're ahead, behind, or at parity.

'Who's accountable?' Name names. The GC owns governance, the CISO owns security, the CTO owns technical implementation. Each has defined responsibilities documented in the AI governance framework. Boards don't want committees; they want individuals with clear authority.

'What happens if something goes wrong?' Walk through your incident response plan in 60 seconds: who gets notified, what the escalation path is, how you communicate externally, and what the remediation process looks like.

'What do we need to approve?' Have a specific ask ready: budget for compliance tools, approval of a new AI governance policy, authorization for a risk assessment program. Directors want to act, not just listen.
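The exposure figure behind [X] is a straightforward calculation once global revenue is known. A minimal sketch, assuming the EU AI Act's top-tier penalty rule (the higher of 35 million euros or 7% of worldwide annual turnover) and a placeholder revenue figure:

```python
# EU AI Act top-tier administrative fines: up to EUR 35 million or 7% of
# worldwide annual turnover, whichever is higher.
FIXED_CAP_EUR = 35_000_000
TURNOVER_PCT = 0.07

def max_exposure_eur(global_revenue_eur: float) -> float:
    return max(FIXED_CAP_EUR, TURNOVER_PCT * global_revenue_eur)

# Placeholder revenue figure for illustration only.
print(f"EUR {max_exposure_eur(1_200_000_000):,.0f}")  # EUR 84,000,000
```

Lower violation tiers carry lower caps, so run the number for the tier that actually applies to your classified systems before it goes on page 1.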

Common Mistakes GCs Make in Board AI Risk Reporting

Mistake 1: Leading with technology. Directors don't need to understand how large language models work. They need to understand what risks AI creates for the organization and what you're doing about those risks. Skip the AI primer.

Mistake 2: Reporting only compliance status. Boards need forward-looking risk assessment, not just a checklist of regulations you've met. Tell them what's coming in the next 6-12 months and what resources you'll need.

Mistake 3: No incident reporting framework. If your first AI incident report to the board is also your first AI incident, you've failed at governance. Establish a reporting threshold and cadence before something goes wrong.

Mistake 4: Treating AI risk as separate from enterprise risk. AI risk should be integrated into your existing enterprise risk management framework, not siloed in a separate report. Use the same risk scoring methodology, the same escalation thresholds, and the same reporting format. Directors already understand ERM; leverage that familiarity.

Mistake 5: No quantification. 'AI poses a significant risk' means nothing. 'Our maximum regulatory exposure under the EU AI Act is 22 million euros based on our global revenue, and we've addressed 78% of compliance requirements' is actionable.

The Bottom Line: Board-level AI risk reporting isn't optional in 2026 — it's a governance obligation driven by the EU AI Act, Colorado AI Act, and mounting enforcement precedent. Use the NIST AI RMF as your framework. Deliver a three-page quarterly report: risk dashboard, key developments, and forward-looking assessment. Prepare for five specific director questions with quantified answers. Integrate AI risk into your existing enterprise risk management framework — don't create a silo. The GCs who own this conversation strengthen their position. The ones who avoid it lose it to the CTO or CISO.

AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.