Most firms skip the readiness assessment and jump straight to buying tools — then wonder why adoption stalls at 20%. AI readiness isn't about whether your firm can afford Harvey. It's about whether your infrastructure, culture, policies, and people are prepared to use AI effectively and safely.
This 20-question self-assessment takes 30 minutes and gives you a clear picture of where your firm stands. Score each question, calculate your total, and use the interpretation guide to determine your next steps. No vendor. No consulting engagement. Just honest answers.
Section 1: Infrastructure Readiness (Questions 1-5)
Question 1: Does your firm have enterprise-grade network security (firewall, VPN, endpoint protection)? Score: Yes = 5, Partial = 3, No = 0. AI tools require secure connectivity. Without baseline security, every AI interaction is a potential data leak.
Question 2: Can your IT team (or provider) configure SSO and access controls for new cloud applications? Score: Yes, in-house = 5, Yes, outsourced = 3, No = 0. Enterprise AI tools require SSO integration. If provisioning a new cloud app takes your firm months, AI deployment will stall.
Question 3: Does your firm have a documented data classification policy (what's confidential, what's restricted, what's public)? Score: Documented and enforced = 5, Documented but unenforced = 2, No = 0. AI requires knowing which data can go into which tools. Without classification, attorneys can't make informed decisions about what to enter into AI systems.
Question 4: Are your document management and practice management systems current and functional? Score: Modern, well-adopted DMS/PMS = 5, Dated but functional = 3, Chaotic or non-existent = 0. AI tools integrate with existing systems. If your DMS is a mess, AI integration will be a mess.
Question 5: Does your firm have bandwidth for technology projects beyond daily operations? Score: Dedicated legal tech/IT staff = 5, Shared IT with some bandwidth = 3, Zero bandwidth = 0. AI deployment requires project management, training, and ongoing support. If your entire IT capacity is consumed by keeping email running, AI adoption will fail.
Section 2: Policy and Governance Readiness (Questions 6-10)
Question 6: Does your firm have a written AI use policy? Score: Comprehensive and enforced = 5, Basic/draft = 3, No = 0. You need a policy before you deploy tools. Otherwise, attorneys will use AI with no guardrails.
Question 7: Does your firm have a process for vetting and approving new technology vendors? Score: Formal process with security review = 5, Informal review = 3, No process = 0. AI vendor selection requires evaluating data handling, security, and liability — not just features and pricing.
Question 8: Does your firm track which courts require AI disclosure in filings? Score: Maintained database/tracking system = 5, Informal awareness = 2, No tracking = 0. Litigators using AI must comply with disclosure requirements. Without tracking, you're one filing away from sanctions.
Question 9: Has your firm's engagement letter been updated to address AI use? Score: Updated with AI clauses = 5, Under review = 2, No AI provisions = 0. Client relationships need AI transparency. See our engagement letter clauses guide.
Question 10: Has your malpractice carrier been consulted about AI use? Score: Consulted, confirmed coverage = 5, Not yet consulted = 0. Some carriers have AI-related requirements or exclusions. Deploying AI without carrier awareness is an unforced coverage risk.
Section 3: People and Culture Readiness (Questions 11-15)
Question 11: What percentage of your attorneys have used any AI tool for legal work? Score: >70% = 5, 40-70% = 3, <40% = 1. Baseline AI familiarity predicts adoption success. Firms where most attorneys have experimented with AI deploy enterprise tools faster.
Question 12: Does your firm have a designated AI champion or committee? Score: Named committee with authority = 5, Informal champions = 3, Nobody owns it = 0. AI adoption without ownership dies. Someone needs to be responsible for selection, training, and governance.
Question 13: Is your firm's leadership (managing partner, executive committee) supportive of AI adoption? Score: Actively supportive, using AI themselves = 5, Supportive in principle = 3, Skeptical or resistant = 0. Top-down support determines adoption velocity. If leadership doesn't use AI, nobody will.
Question 14: Does your firm invest in attorney training beyond mandatory CLE? Score: Regular technology training programs = 5, Occasional ad hoc training = 2, CLE only = 0. AI adoption requires training investment. Firms that don't train on existing technology won't successfully train on AI.
Question 15: How does your firm respond to technology failures or mistakes? Score: Learning-oriented (investigate, improve) = 5, Blame-oriented (find fault, punish) = 1. AI tools will produce errors. Firms that punish mistakes drive AI use underground. Firms that learn from mistakes improve their AI practices.
Section 4: Strategic Readiness (Questions 16-20)
Question 16: Has your firm identified specific workflows that AI could improve? Score: Specific workflows mapped = 5, General sense = 2, No analysis = 0. Successful AI deployment targets specific pain points, not vague 'innovation.'
Question 17: Does your firm measure attorney productivity and efficiency? Score: Tracked with data = 5, Anecdotal awareness = 2, Not measured = 0. If you can't measure productivity before AI, you can't measure AI's impact. Without measurement, you can't justify continued investment.
Question 18: Is your firm prepared to adjust billing practices as AI improves efficiency? Score: Already discussing value-based billing = 5, Aware of the issue = 2, No consideration = 0. AI disrupts hourly billing by reducing time on tasks. Firms that don't adapt billing models will lose revenue as AI makes them faster.
Question 19: Does your firm have a competitive intelligence function that tracks what peer firms are doing with AI? Score: Active tracking = 5, Occasional awareness = 2, No tracking = 0. Understanding competitor AI adoption informs your urgency and strategy.
Question 20: Has your firm allocated budget for AI tools and implementation? Score: Specific budget allocated = 5, Can allocate if needed = 3, No budget = 0. AI deployment costs money — tools, training, support. Firms that budget proactively deploy faster and more successfully than firms that treat AI as an unfunded mandate.
Scoring Interpretation and Next Steps
80-100 points: AI Ready. Your firm has the infrastructure, policies, people, and strategy to deploy AI successfully. Start with your highest-pain workflow, run a 60-day pilot with 2-3 tools, and move to firm-wide deployment based on results. Your challenge is tool selection and vendor negotiation, not readiness.
60-79 points: Mostly Ready. You have a solid foundation with gaps to address. Identify the sections where you scored lowest and prioritize those before large-scale deployment. Common gaps: AI policy, vendor vetting process, and training infrastructure. Address these in 60-90 days, then launch pilots.
40-59 points: Foundation Needed. You're not ready for enterprise AI deployment, but you can get ready in 3-6 months. Priorities: write an AI policy, designate an AI champion, consult your malpractice carrier, and start individual attorney experimentation with Claude Pro. Build the foundation before spending on enterprise tools.
Below 40 points: Start From Scratch. Your firm needs fundamental technology and governance improvements before AI is viable. Focus on: basic IT security, practice management systems, data classification, and leadership buy-in. AI deployment without these basics will fail or create unmanaged risk. Timeline: 6-12 months of foundational work before AI pilots.
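If you'd rather tally results in a script or spreadsheet than by hand, here is a minimal scoring sketch in Python. The tier thresholds and section groupings come straight from this assessment; the answers dictionary format and the function names are illustrative assumptions, not part of any tool.

```python
# Minimal sketch of the readiness assessment scoring described above.
# Assumption: answers maps question number (1-20) to its point value (0-5).

SECTIONS = {
    "Infrastructure": range(1, 6),           # Questions 1-5
    "Policy and Governance": range(6, 11),   # Questions 6-10
    "People and Culture": range(11, 16),     # Questions 11-15
    "Strategic": range(16, 21),              # Questions 16-20
}

def interpret(total: int) -> str:
    """Map a total score (0-100) to the readiness tiers defined above."""
    if total >= 80:
        return "AI Ready"
    if total >= 60:
        return "Mostly Ready"
    if total >= 40:
        return "Foundation Needed"
    return "Start From Scratch"

def score(answers: dict[int, int]) -> None:
    total = sum(answers.get(q, 0) for q in range(1, 21))
    print(f"Total: {total}/100 -> {interpret(total)}")
    # Per-section subtotals (max 25 each) show which gaps to prioritize.
    for name, questions in SECTIONS.items():
        subtotal = sum(answers.get(q, 0) for q in questions)
        print(f"  {name}: {subtotal}/25")

# Hypothetical firm: solid infrastructure, weak policy and governance.
score({1: 5, 2: 3, 3: 2, 4: 3, 5: 3,        # Infrastructure
       6: 0, 7: 3, 8: 0, 9: 0, 10: 0,       # Policy and Governance
       11: 3, 12: 3, 13: 3, 14: 2, 15: 5,   # People and Culture
       16: 2, 17: 2, 18: 2, 19: 0, 20: 3})  # Strategic
```

Running the example prints a total of 44/100 (Foundation Needed) and flags Policy and Governance, at 3/25, as the weakest section — exactly the kind of gap analysis the interpretation guide above asks for.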
Regardless of score: every firm should have individual attorneys experimenting with Claude Pro ($20/month) right now. You learn AI readiness by using AI, not by planning to use it.
The Bottom Line: Twenty questions, thirty minutes, and you know exactly where your firm stands on AI readiness. Most firms score between 40 and 65, landing somewhere between Foundation Needed and Mostly Ready, with specific gaps to address. The scoring isn't the point. The point is identifying the 3-5 gaps between where you are and where you need to be. Fix those gaps, then deploy. The firms that skip the assessment waste money on tools their people can't use.
AI-Assisted Research. This piece was researched and written with AI assistance, then reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
