I've been building a GEO audit toolkit — tools that measure how visible a website is to AI search engines like ChatGPT, Perplexity, and Google's AI Overviews. Citability scoring, crawler access analysis, brand mention scanning, schema validation, llms.txt checks. The plan is to offer this as a service through Alpha Dog Agency.
The first real client was a plumbing contractor in Florida. Family-owned, second-generation, Licensed Master Plumber with 200+ five-star Google reviews and over 1,300 completed projects. The kind of business that should own their local market online.
They scored 44 out of 100.
What the audit found
Nine pages total. Service descriptions averaging fifteen words each. No structured data. No FAQ section, even though the navigation promised one. Six neighborhood landing pages that could have been strong local signals but were too thin to matter. An /ai-info page existed, so someone had already thought about AI visibility, but it was readable text with no machine-parseable structure behind it.
Crawler access was the bright spot: fully permissive robots.txt, everything open. But access without content is an open door to an empty room. AI systems could reach the site. They just had nothing worth citing once they got there.
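The crawler-access check itself is simple enough to sketch with Python's standard library. The user-agent list below is illustrative, not the toolkit's actual list, and the robots.txt body stands in for the client's fully permissive one:

```python
from urllib.robotparser import RobotFileParser

# AI crawlers worth checking; this list is illustrative, not exhaustive
AI_AGENTS = ["GPTBot", "PerplexityBot", "Google-Extended", "ClaudeBot"]

def crawler_access(robots_txt: str, url: str = "/") -> dict:
    """Return per-agent allow/deny results for a robots.txt body."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return {agent: parser.can_fetch(agent, url) for agent in AI_AGENTS}

# A fully permissive robots.txt, like the client's
permissive = "User-agent: *\nAllow: /\n"
print(crawler_access(permissive))
```

Every agent comes back allowed, which is exactly the "open door" the audit found.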
I built a schema package and the client implemented it — full Plumber/LocalBusiness JSON-LD with AggregateRating and six services, BreadcrumbList on every location page, WebSite schema, OG tags, Twitter Cards. Solid work, done through Framer's custom code injection.
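The shape of that package can be sketched as a Python dict serialized to JSON-LD. Every value here is a placeholder, not the client's real data, and the service names are invented for illustration:

```python
import json

# Sketch of a Plumber/LocalBusiness JSON-LD package; all values are
# placeholders, not the client's real data
plumber_schema = {
    "@context": "https://schema.org",
    "@type": "Plumber",
    "name": "Example Plumbing Co.",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "5.0",
        "reviewCount": "200",
    },
    "makesOffer": [
        {"@type": "Offer", "itemOffered": {"@type": "Service", "name": s}}
        for s in ["Drain cleaning", "Water heater repair", "Repiping",
                  "Leak detection", "Sewer repair", "Fixture installation"]
    ],
}

# The tag a custom code injection slot would carry into the <head>
tag = ('<script type="application/ld+json">'
       + json.dumps(plumber_schema) + "</script>")
print(tag[:60])
```

Plumber is a real schema.org type (a LocalBusiness subtype), so AI systems and Google both know what to do with it once they can actually see the script tag.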
Then the scanner called it a 5
I ran a second audit to measure the improvement. Schema scored 5 out of 100. The overall score dropped to 34. I'd just watched the client implement everything correctly, and the tools said it wasn't there.
So I pulled the raw HTML with curl. Three JSON-LD blocks on the homepage, all valid. BreadcrumbList and Plumber schema on every location page. Everything present, everything structured correctly.
The problem was the scanner, not the site. My audit tools convert HTML to markdown before analysis, and that conversion strips all script tags. JSON-LD lives inside script tags. Every check for schema saw a blank page because the tool was architecturally blind to structured data.
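The fix is to pull JSON-LD out of the raw HTML before any markdown conversion runs. A minimal sketch with the standard-library parser; the scanner's real pipeline is assumed, not shown:

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect JSON-LD blocks before markdown conversion strips them."""

    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.blocks.append(json.loads(data))

html = ('<html><head><script type="application/ld+json">'
        '{"@type": "Plumber", "name": "Example"}'
        '</script></head><body>Fifteen words of service copy.</body></html>')

extractor = JSONLDExtractor()
extractor.feed(html)
print(extractor.blocks)  # schema survives, even if scripts are stripped later
```

Run this pass on the raw HTML first, then hand the page to the markdown converter; the structured-data check scores what was actually shipped instead of a blank page.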
Corrected score: 44/100. Schema went from 5 to 62. Still not great overall, but the story changed from "everything is broken" to "the technical foundation is solid — the gap is content depth."
The real problem was worse
Then I searched for the business on Google. Direct brand search. Their name, exactly as written.
Page eight.
A business with 200+ five-star reviews, a verified Google Business Profile, and valid schema markup — buried behind directories, aggregators, and other people's content. For their own name.
This reframed everything. I'd been measuring how well the site performs for AI search engines. But the site wasn't performing for regular Google search either. Schema and llms.txt and the /ai-info page are forward-looking investments. They don't matter if Google can't surface the domain at all.
A second domain — the business's full legal name as a URL — might be fragmenting the brand signal. The site is built on Framer, which handles pre-rendering well, so it's probably not a rendering issue. Could be an indexing problem, a domain authority gap, or something else entirely. I connected Google Search Console at the end of the session. That data will tell the real story.
What the first audit taught me
Always verify structured data with raw HTML. Markdown-based scanners can't see JSON-LD. This will bite anyone who relies on content-extraction tools to evaluate schema.
An AI visibility audit that doesn't check basic Google visibility first is measuring the penthouse while the foundation is cracked. I need a traditional search check at the top of the audit, before anything AI-specific.
And the broader lesson: you build a service by doing the work, not by designing the service. Every theory about GEO auditing I had before this client was incomplete. The scanner blindness, the priority reframing, the realization that content depth matters more than technical fixes — all of it came from running a real audit on a real site with a real business owner waiting for results. The methodology got sharper in two sessions than it would have in a month of building tools in isolation.