Every B2B company considering AI visibility consulting is also considering buying an AI visibility tool. Both are usually premature.
Before you need either, you need to know what the market is actually signaling about your company to AI systems. Most of that signal is in places you can check yourself — without a tool, without a consultant, and without a quarterly subscription.
This is the checklist I'd run before spending a dollar on either.
Five sections. Twenty-four checks. About ninety minutes end to end.
SECTION 01
Entity Identity and Naming
AI systems identify your company as an entity across sources. Inconsistency in how you're named — or missing entries where you should exist — produces hedged descriptions and diluted signal.
☐ Canonical name consistency across core platforms
How: Open your LinkedIn, Crunchbase, G2/Capterra, Wikipedia (if any), and your site footer. Confirm the company name is identical everywhere.
Why: Variant naming ("Acme" vs "Acme, Inc." vs "Acme Corp") fragments your entity signal and forces AI to guess which form is canonical.
☐ Wikipedia entry (if you qualify)
How: Search your company name at en.wikipedia.org. If no entry exists, review Wikipedia's notability criteria for organizations.
Why: Wikipedia is one of the highest-weight entity signals in AI training data. Many B2B companies qualify and have never pursued it.
☐ Google Knowledge Panel
How: Search your brand name on Google while signed out. Does a knowledge panel appear on the right side?
Why: A knowledge panel signals Google has resolved you as a distinct entity. Its absence usually indicates fragmented entity signals across the web.
☐ Unique founder and exec names searchable
How: Search each named founder/exec. Confirm their results are about them, not collisions with other people of the same name.
Why: Name collisions dilute the human-signal weight AI systems apply to your company's leadership.
☐ Claimed and active social profiles on core platforms
How: Confirm LinkedIn company page, X/Twitter, YouTube, and GitHub (if technical) are claimed, verified where possible, and active in the last 90 days.
Why: Dormant or unclaimed profiles fragment signal. Active and claimed profiles consolidate it.
SECTION 02
Structured Data Foundation
This is the machine-readable layer AI systems parse most directly. Clean, current schema produces confident descriptions. Fragmented or stale schema produces hedged ones.
☐ Organization schema on homepage
How: View source on your homepage and search for "Organization" inside the JSON-LD script blocks. Confirm one exists. (The sketch after the next check automates this lookup.)
Why: Organization schema is the anchor entity AI systems use to identify your company. Its absence is a foundational gap.
☐ No duplicate Organization entities
How: In the same view source, count how many Organization or LocalBusiness blocks appear. If there is more than one, your plugins or theme are fighting each other.
Why: Duplicate entities force AI to guess which one is canonical. Usually it hedges — which produces generic descriptions.
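If you'd rather not eyeball raw HTML, here's a minimal sketch that runs both of the checks above: it lists every JSON-LD entity type on a page and flags a missing or duplicated Organization block. It assumes the requests and beautifulsoup4 packages, and the URL is a placeholder.

```python
# Minimal sketch: list every JSON-LD entity type on a page and flag missing or
# duplicate Organization/LocalBusiness blocks. Assumes requests + beautifulsoup4.
import json

import requests
from bs4 import BeautifulSoup

URL = "https://www.example.com/"  # placeholder: your homepage

html = requests.get(URL, timeout=15).text
soup = BeautifulSoup(html, "html.parser")

types = []
for tag in soup.find_all("script", type="application/ld+json"):
    try:
        data = json.loads(tag.get_text() or "")
    except json.JSONDecodeError:
        continue  # a malformed block is itself a finding worth writing down
    # A block may hold a single entity, a list of entities, or an @graph container.
    if isinstance(data, dict):
        nodes = data.get("@graph", [data])
    elif isinstance(data, list):
        nodes = data
    else:
        nodes = []
    for node in nodes:
        if isinstance(node, dict) and "@type" in node:
            types.append(node["@type"])

print("JSON-LD entity types found:", types)
org_count = sum(1 for t in types if t in ("Organization", "LocalBusiness"))
if org_count == 0:
    print("Gap: no Organization schema on this page.")
elif org_count > 1:
    print(f"Gap: {org_count} Organization/LocalBusiness blocks -- likely competing plugins.")
else:
    print("One Organization block found.")
```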
☐ sameAs references to social profiles
How: Your Organization schema should include a sameAs array linking to your LinkedIn, X, Crunchbase, and Wikipedia URLs (a filled-in example follows the next check).
Why: sameAs is how you tell AI systems that all these identities resolve to the same entity. Most B2B sites either omit this or have stale links.
☐ knowsAbout reflects your actual category
How: Check the knowsAbout property in your Organization schema. Does it accurately describe what you currently do?
Why: Stale knowsAbout values often reference old categories or pivots. AI reads them as authoritative.
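To make the last two checks concrete, here is a sketch of what a filled-in Organization block can look like with sameAs and knowsAbout populated. Every name and URL below is a placeholder, not a recommendation; the script just prints the JSON-LD you'd place in a script tag.

```python
# Sketch of an Organization block with sameAs and knowsAbout filled in.
# Every name and URL below is a placeholder; swap in your real profiles.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Acme, Inc.",  # must match the canonical name you use everywhere else
    "url": "https://www.example.com/",
    "sameAs": [
        # the profiles this entity resolves to; only include links that exist and are current
        "https://www.linkedin.com/company/acme",
        "https://x.com/acme",
        "https://www.crunchbase.com/organization/acme",
        "https://en.wikipedia.org/wiki/Acme",
    ],
    "knowsAbout": [
        # your current category, not the one you pivoted away from
        "revenue operations software",
        "B2B sales forecasting",
    ],
}

# Paste the output into a <script type="application/ld+json"> tag on your homepage.
print(json.dumps(organization, indent=2))
```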
☐ llms.txt at yoursite.com/llms.txt
How: Open yoursite.com/llms.txt in a browser. Does a well-formed file exist?
Why: Not yet adopted by major platforms, but a low-cost forward-compatibility signal and a useful positioning exercise.
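If you want to script this check too (or run it across several properties), here is a minimal sketch assuming the requests package and a placeholder domain. The shape test follows the llms.txt proposal (a Markdown file that opens with an H1 title) rather than any platform requirement.

```python
# Minimal sketch of the llms.txt check: does the file exist, and does it open with
# a Markdown H1 title the way the llms.txt proposal describes? Assumes requests;
# the domain is a placeholder.
import requests

URL = "https://www.example.com/llms.txt"  # placeholder: your domain

resp = requests.get(URL, timeout=15)
if resp.status_code != 200:
    print(f"No llms.txt found (HTTP {resp.status_code}).")
else:
    lines = resp.text.strip().splitlines()
    first_line = lines[0] if lines else ""
    print("First line:", first_line)
    if first_line.startswith("# "):
        print("Looks well-formed: opens with a Markdown H1 title.")
    else:
        print("File exists but does not open with an H1; worth reviewing.")
```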
SECTION 03
Third-Party Presence
AI systems weight third-party coverage far more heavily than first-party content. Your own blog is advocacy; other people's coverage is evidence.
☐ Claimed and current review site profiles
How: Check G2, Capterra, TrustRadius (or vertical equivalent). Are profiles claimed? Is product info current? Are responses to reviews non-defensive?
Why: Review sites are heavily cited in AI descriptions. Unclaimed or neglected profiles produce outdated or hostile impressions.
☐ Review volume and recency
How: Count reviews posted in the last 90 days. If fewer than three per month, your review pipeline is thin.
Why: AI systems weight recent reviews more heavily. Thin recent activity signals a stale or declining product.
☐ Reddit presence in category-relevant subreddits
How: Search "[your company name] site:reddit.com". Are you mentioned organically? In which subreddits? What's the sentiment?
Why: Reddit discussion is a disproportionately strong predictor of AI category-query wins. Most B2B companies have no idea what's there.
☐ Trade press mentions in last 12 months
How: Search your company name in the 3-5 publications your buyers read. How many substantive mentions in the last 12 months?
Why: Trade press citations anchor you in the category AI associates you with. Silence signals you're not a category participant.
☐ Podcast appearances by founders or execs
How: Search your founders' names on podcast directories. How many category-relevant appearances in the last 12 months?
Why: Podcast transcripts are indexed, citable content. A founder with ten category appearances builds more AI signal than a hundred company blog posts.
☐ Analyst coverage (if applicable)
How: Check Gartner Peer Insights, Forrester Now Tech, IDC MarketScape, and vertical-specific analysts. Are you listed?
Why: For mid-market and enterprise B2B, analyst mentions are among the most authoritative sources AI systems cite.
SECTION 04
Human Signal
AI systems apply higher weight to named human contributors than to anonymous corporate voice. Most B2B companies underinvest in this layer.
☐ Founder or CEO active on LinkedIn
How: Check your founder's LinkedIn activity. Consistent, substantive posts over the last 12 months — or dormancy?
Why: Founder LinkedIn presence outperforms most content strategies as a predictor of AI-described credibility.
☐ Named human bylines on content
How: Open your blog. What percentage of posts have a named human author with a real LinkedIn profile vs. "Team" or "Staff" bylines?
Why: Named human bylines consolidate human signal. Corporate-anonymous bylines produce generic entity patterns.
☐ Consistent exec bios across platforms
How: Compare founder bios on LinkedIn, your About page, Crunchbase, and podcast appearances. Consistent? Current?
Why: Inconsistent bios fragment the human entity signal for your leadership.
☐ Published substantive writing by a company voice in the last 12 months
How: Count pieces authored by a real named human on behalf of the company in the last 12 months. Not social posts — substantive publications.
Why: This is the raw material AI systems use to build a sense of who your company is, in human voice. Silence here is a major entity gap.
SECTION 05
Manual AI Query Testing
Every previous section tests inputs. This section tests outputs. The inputs can look clean while the outputs reveal problems the inputs didn't predict.
☐ Brand-name query in four AIs
How: Open ChatGPT, Perplexity, Gemini, and Claude. Ask each: "What does [company] do and who is it for?"
Why: Tests whether you exist coherently in AI and whether the basic description is accurate. Tag each: ACCURATE / INACCURATE / ABSENT.
☐ Category-level query in four AIs
How: Ask each AI: "What are the best [your category] tools in 2026?" or "Who should I consider for [use case]?"
Why: Tests competitive positioning. Note which companies are named and where you fall. This is the query that predicts pipeline.
☐ Accuracy deep-dive in one AI
How: Pick the AI with the most substantive brand-name answer. Ask follow-ups: pricing, customers, pros/cons, competitors. Read critically.
Why: Tests specificity of AI's knowledge. Invented features or wrong customers are high-interest liabilities you can't detect without this check.
☐ Competitor-favored comparison query
How: Ask: "How does [you] compare to [top competitor]?" in each AI. Note which company is positioned more favorably.
Why: Tests competitive narrative. Consistently losing head-to-head comparisons in AI is diagnostic of a category-authority gap.
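To make the quarterly rerun recommended below repeatable, here is a sketch using the OpenAI Python client as one example. The hosted ChatGPT product and the API models are not identical, and Perplexity, Gemini, and Claude each need their own clients, so treat this as a logging harness, not a substitute for checking the consumer apps. The company, category, and model names are placeholders, and it expects an OPENAI_API_KEY in the environment.

```python
# Sketch of a repeatable rerun of the Section 05 queries, using the OpenAI Python
# client as one example. Company, category, and model name are placeholders; the
# hosted ChatGPT product and the API models are not identical, and the other
# assistants need their own clients. Requires OPENAI_API_KEY in the environment.
from datetime import date

from openai import OpenAI

COMPANY = "Acme"                  # placeholder
CATEGORY = "revenue operations"   # placeholder
QUERIES = [
    f"What does {COMPANY} do and who is it for?",
    f"What are the best {CATEGORY} tools in 2026?",
    f"How does {COMPANY} compare to its main competitors?",
]

client = OpenAI()
for query in QUERIES:
    resp = client.chat.completions.create(
        model="gpt-4o",  # pick whichever model you actually care about
        messages=[{"role": "user", "content": query}],
    )
    print(f"--- {date.today()} | {query}")
    print(resp.choices[0].message.content)
    print()
```

Save each run's output somewhere diffable; the quarter-over-quarter drift is the signal.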
What to Do With the Results
Most teams running this for the first time find gaps in all five sections.
Don't try to fix everything at once. Prioritize in this order:
First: Section 2 (structured data) and Section 1 (entity identity). These are cheap, technical fixes that compound fast. Inconsistent naming, duplicate Organization schemas, missing sameAs references — all repairable in days, not quarters.
Second: Section 3 (third-party presence). Claim review profiles. Get active on the two or three sources AI is citing in your category. This is medium-effort, high-signal work.
Third: Section 4 (human signal). The founder visibility investment is a year-long commitment. Start now anyway. It compounds more slowly but the asset it builds is the most durable on the list.
Fourth: Section 5 (AI query testing) becomes ongoing monitoring. Rerun these queries quarterly. They're the closest thing to a direct-observation metric for AI visibility outcomes.
The Point
Most teams spending on AI visibility tooling have never done this audit. They assume the tool will surface the same gaps. It won't — most of what's on this checklist isn't in any tool dashboard.
Ninety minutes of manual inspection will tell you more about your AI visibility than any dashboard, and it will tell you which gaps are cheap to close this quarter and which need the longer commitment.
Run the checklist honestly. Flag what's missing. Sort by the priority above.
Then decide whether you need a tool, a consultant, or just an afternoon of cleanup work.
For most B2B companies, it's the third one — at least to start.
