Bing AI Performance is a free dashboard that shows which queries inside Microsoft Copilot trigger citations of your domain. Microsoft rolled it out to verified Bing Webmaster Tools users in 2025; as of April 2026, almost no law firm has opened it. aivortex.io's first-party data over the last 30 days: 2,100+ Copilot citations, with "Harvey AI legal" as the top grounding query and Spellbook and Everlaw queries following. Microsoft 365 Copilot reaches 90%+ of US law firms via the existing M365 install base, so the channel where lawyers research vendors, policy questions, and case law is increasingly Copilot, not Google. This guide is the screenshot-equivalent walkthrough of what the dashboard shows, how to read it, and what to do with the data.
What Bing AI Performance is — and why it doesn't appear in regular Bing search reports
Bing Webmaster Tools has shown regular Bing search performance for years — impressions, clicks, average position, query lists. The Bing AI Performance panel is a separate report inside the same tool. It shows performance specifically inside Microsoft Copilot, ChatGPT (where it pulls from Bing), and other AI surfaces that ground their responses in Bing's index.
The key metrics:
- Citation count. How many times the AI surface cited your domain in a response. This differs from impressions in regular search: the AI doesn't return a SERP, it returns an answer with citations. A citation is the AI explicitly attributing a fact to your domain.
- Click-through count. How often users click from an AI response citation through to your domain. Lower volume than citations, because most users read the AI response and don't click, but the clicks that do happen are typically high-intent.
- Top grounding queries. The user prompts that triggered citations of your domain. This is the gold: the questions your domain is recognized as authoritative for, by name.
- Top cited pages. The specific URLs on your domain that got cited. This tells you which content is doing the work.
The panel is free, refreshes daily, and requires only verification of the domain in Bing Webmaster Tools (the same verification that's been required for Bing search analytics for years). Most firms verified their domain years ago and haven't logged in since.
How to access the dashboard — the operational steps
The walkthrough assumes a verified domain in Bing Webmaster Tools. If your firm hasn't verified, that's a 30-minute task done once via DNS record, HTML file, or Microsoft Clarity integration.
Necessary steps:
- Sign in to Bing Webmaster Tools with the Microsoft account associated with the domain verification.
- Select the verified domain from the property list.
- In the left navigation, locate the AI Performance section (Microsoft has moved this around; as of April 2026 it sits under Performance reports).
- Open the dashboard view: 30-day default, with a date range selector for 7-day, 90-day, and custom windows.
- The default report shows total citations, total clicks, and a citation source breakdown (Copilot vs ChatGPT vs Edge Copilot vs others).
- Drill into Top Queries to see the prompts that triggered citations.
- Drill into Top Pages to see the URLs on your domain that got cited.
- Export to CSV for analysis (the export button is in the upper right of each report).
The panel updates daily for most properties. Some new properties show a 7-14 day delay before the first data appears. For verified domains with an existing Bing presence, data is typically available immediately.
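Once exported, the CSV can be summarized with a short script. A minimal sketch in Python, assuming hypothetical column headers `Query`, `Citations`, and `Clicks` and made-up sample rows; the actual export headers may differ:

```python
import csv
from io import StringIO

# Illustrative rows shaped like a Bing AI Performance CSV export.
# Column names and values are assumptions, not real export data.
SAMPLE = """Query,Citations,Clicks
Harvey AI legal,640,31
Spellbook contract review,410,18
Everlaw discovery,305,12
"""

def summarize(csv_text):
    """Return rows sorted by citation count, plus the overall click-through rate."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    for r in rows:
        r["Citations"] = int(r["Citations"])
        r["Clicks"] = int(r["Clicks"])
    rows.sort(key=lambda r: r["Citations"], reverse=True)
    total_citations = sum(r["Citations"] for r in rows)
    total_clicks = sum(r["Clicks"] for r in rows)
    ctr = total_clicks / total_citations if total_citations else 0.0
    return rows, ctr

rows, ctr = summarize(SAMPLE)
print(rows[0]["Query"], f"{ctr:.1%}")
```

Swap `SAMPLE` for the contents of the exported file; the same loop gives you the top-query ranking and the citation-to-click conversion rate in one pass.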
Reading the dashboard — what 2,100 citations actually means
aivortex.io's last 30 days as the worked example:
- 2,100+ total Copilot citations
- Top grounding query: "Harvey AI legal"
- Following queries (in order): "Spellbook contract review," "Everlaw discovery," "AI disclosure rules federal court," "legal AI vendor comparison"
- Top cited URL: the Harvey AI vendor analysis page
- Click-through count: under 100 in the same window
The absolute citation number matters less than the structure of the data. 2,100 citations across 30 days is roughly 70 citations/day — meaning Copilot surfaces aivortex.io as part of a response 70 times every day. Most of those don't generate a click, because Copilot users read the answer and move on. But the citation itself is the AEO event: the AI surface recognizes the domain as a credible source for a specific question category.
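The arithmetic behind that read, using the worked-example figures above (the clicks number is "under 100," so it is treated here as an upper bound):

```python
# Top-line figures from the worked example above.
citations, days = 2100, 30
clicks = 100  # "under 100" in the same window, so an upper bound

per_day = citations / days        # how often Copilot surfaces the domain daily
click_rate = clicks / citations   # upper bound on citation-to-visit conversion

print(per_day, f"{click_rate:.1%}")
```

Roughly 70 citation events per day against a sub-5% click rate is the shape of the AEO channel: the citation, not the visit, is the unit of visibility.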
The second-order read: the queries are vendor names plus a niche modifier. "Harvey AI legal" — not just "Harvey AI." Copilot is sophisticated enough to distinguish vendor research for a legal-industry user from generic vendor research. Vortex's content matched the legal-industry framing.
The third-order read: a firm whose practice area depth isn't represented in Bing's index doesn't appear in this data. The dashboard is a mirror — it reflects what your content already does. Firms that publish thin content show thin Bing AI Performance data. Firms that publish FAQ-first vertical content with clean schema show citations against vendor names, policy questions, and case law queries. The Microsoft Copilot citations how-to-rank guide covers the production side; this dashboard guide covers the measurement side.
What to do with the data — the four action loops
Bing AI Performance data is operationally useful in four loops:
Loop 1: Content gap discovery. The Top Queries report shows which questions your existing content already answers. The questions that don't appear — but should, given your practice — are content gaps. A firm with a heavy white-collar defense practice that doesn't show citations against "DOJ AI policy" or "corporate compliance AI" has a publishing gap, not a Bing gap.
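One way to operationalize the gap check: keep a list of topics the practice should own and diff it against the queries the dashboard actually reports. A sketch with illustrative, made-up query strings (neither set comes from real export data):

```python
# Topics the practice should own vs queries the dashboard actually shows.
# Both sets are illustrative assumptions for the white-collar example above.
target_topics = {
    "DOJ AI policy",
    "corporate compliance AI",
    "AI disclosure rules federal court",
}
observed_queries = {
    "Harvey AI legal",
    "AI disclosure rules federal court",
}

# Topics with no citation footprint yet: the publishing gap, not a Bing gap.
gaps = sorted(target_topics - observed_queries)
print(gaps)
```

In practice, `observed_queries` would be loaded from the Top Queries CSV export, and `gaps` becomes the content-calendar input.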
Loop 2: Vendor visibility audit. When a vendor name appears as a top grounding query, your domain is positioned in the vendor research conversation. That's a citation flywheel — the more your content gets cited for "Vendor X," the more Copilot defaults to your domain for related questions. The opposite is also true: if your domain doesn't show in citations against the vendors you actually use, your firm's perspective on those vendors is invisible to your own attorneys when they prompt inside Copilot.
Loop 3: Practice area authority signals. Citations against case names, regulation numbers, or doctrine queries ("Heppner ruling AI privilege," "Federal Rules of Evidence AI," "state bar AI ethics opinion") signal that your domain is recognized as an authority in those areas. Firms can use this to validate their own E-E-A-T positioning and to identify content categories worth doubling down on.
Loop 4: Competitive intelligence. Aggregate top-citation domains for the queries your firm cares about. The pattern of who Copilot cites — competitors, vendors, journalism outlets, government sources — tells you the authority structure of your practice area inside Copilot. The Copilot vs Google channel analysis covers the comparative read across search surfaces.
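A simple way to run that aggregation: prompt Copilot for each target query, record the domains it cites, and tally citation share. A sketch with made-up (query, domain) observations; none of these pairs come from real dashboard or Copilot output:

```python
from collections import Counter

# Hypothetical citation log: (query, cited_domain) pairs collected by
# prompting Copilot for the queries the firm cares about.
observations = [
    ("Harvey AI legal", "aivortex.io"),
    ("Harvey AI legal", "competitorfirm.com"),
    ("Harvey AI legal", "aivortex.io"),
    ("legal AI vendor comparison", "techjournal.example"),
    ("legal AI vendor comparison", "aivortex.io"),
]

# Tally how often each domain appears across all target queries.
share = Counter(domain for _, domain in observations)
print(share.most_common(3))
```

The resulting ranking is the authority structure in miniature: which competitors, vendors, outlets, and government sources Copilot treats as the sources of record for your practice area.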
The audit gap — why most firms have no idea what Copilot says about them
Counter-narrative: most legal-industry conversation about AI visibility focuses on Google. Google's AI Overviews. Google's evolving SERP. Google's Search Console.
Copilot reaches 90%+ of law firms via the Microsoft 365 install base. Lawyers prompt Copilot inside Word while drafting, inside Outlook while emailing, inside Teams while in meetings. That query volume isn't measurable from Google's data because it never passes through Google. It's only measurable from Bing AI Performance.
And most firms haven't enabled the dashboard.
The operational consequence: a managing partner can ask their CMO "how visible are we in AI search?" and the CMO can pull a Google AI Overviews report — which captures one channel. The Copilot channel, which may dwarf the Google channel for any in-Word research, is invisible to the firm. Decisions get made on incomplete data.
Four types of firms find this most acute:
- Firms whose content strategy already publishes vendor analysis, policy guides, and case-law explainers: they're getting citations they can't see.
- Firms whose competitors are publishing this content: competitor citation share is invisible to them.
- Firms whose attorneys actively use Copilot: internal vendor research is happening on competitor content the firm doesn't know about.
- Firms whose clients use Copilot for vendor due diligence: RFP responses are being benchmarked against AI-cited content the firm hasn't seen.
The dashboard fix is free. The visibility fix takes a content-publishing program. The why most firms are invisible inside Copilot analysis covers the structural diagnosis.
Recommendations by firm size
Solo and small firms (2-10 attorneys). Open the dashboard this week. 30 minutes to verify, 30 minutes to review the first report. Set a monthly check on the calendar. Most solo firms will see thin data initially because their content footprint is small. The dashboard validates whether your existing content is doing AI-citation work; if it isn't, that's the publishing program signal.
Mid-size firms (10-50 attorneys). Open the dashboard, designate a content-marketing or KM owner for monthly review, and tie the data into the existing content calendar. When a top grounding query appears that the firm wants to own deeper, that's the next 3-5 articles to write. Pair the dashboard data with the Microsoft Copilot citations how-to-rank guide for the production-side workflow.
BigLaw and AmLaw 100. Open the dashboard, integrate it into the firm's marketing analytics stack, and run quarterly competitive intelligence reviews. Top grounding queries that competitor domains dominate are content-strategy signals — either build the depth or accept the AI visibility share. Compare against the Copilot vs Google channel analysis for cross-channel context. The dashboard is free; the action loop pays for itself the first time a competitor citation gap drives content investment that closes the gap.
The Bottom Line: Bing AI Performance is the free dashboard that turns Copilot's black-box citation behavior into measurable data. aivortex.io has the receipts: 2,100+ citations in 30 days, top grounding query "Harvey AI legal." If you're publishing vertical content and not measuring the AI channel, you're flying blind on the highest-leverage research surface most lawyers use daily.
AI-Assisted Research. This piece was researched and written with AI assistance, reviewed and edited by Manu Ayala. For deeper takes and the perspective behind the research, follow me on LinkedIn or email me directly.
