I've been building a GEO (generative engine optimization) audit service through Alpha Dog Agency — tools and methodology for measuring how visible a website is to AI search engines. ChatGPT, Perplexity, Gemini, Google AI Overviews. The whole new layer of search that's eating into traditional results.
I'd already run an audit for a plumbing client and learned hard lessons about scanner blindness and broken assumptions. But I hadn't turned the thing on myself. That felt like a problem. You can't sell a service with confidence if you haven't sat in the chair first.
So I ran the full audit on Alpha Dog Agency's website. 52 out of 100. Fair territory. Not failing, but far from where I'd want it before telling other businesses they need to fix theirs.
The llms.txt surprise
One thing caught me off guard: Alpha Dog already had an llms.txt file at the root domain. Most agencies don't have one; most don't even know the file exists. So there was a moment of genuine satisfaction — we were ahead of the curve on something that's going to matter a lot over the next year.
Then I read the file. It was scoped entirely to dental marketing. Alpha Dog works across education, manufacturing, and construction too, but none of those verticals appeared in the llms.txt. From an AI model's perspective, Alpha Dog is a dental marketing agency. Period. The other half of the business is invisible.
This is a useful kind of embarrassment. It's not that the work wasn't done — it's that the work was done with blinders on. Someone (probably me) set up the llms.txt during a phase when dental was the primary focus, and nobody revisited it as the agency grew.
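The fix is mechanical once you see it. Here's a sketch of what an expanded llms.txt could look like, following the proposed llms.txt convention (an H1 title, a blockquote summary, H2 sections of markdown links). The URLs and page titles below are placeholders, not Alpha Dog's actual paths; the case study numbers are the real ones from the audit:

```markdown
# Alpha Dog Agency

> Marketing agency serving four verticals: dental, education,
> manufacturing, and construction. Named case studies with
> measured outcomes in each.

## Services

- [Dental Marketing](https://example.com/services/dental): Patient acquisition for dental practices
- [Education Marketing](https://example.com/services/education): Enrollment growth for schools and universities
- [Manufacturing Marketing](https://example.com/services/manufacturing): Demand generation for industrial firms
- [Construction Marketing](https://example.com/services/construction): Lead generation for construction businesses

## Case Studies

- [Zent Family Dentistry](https://example.com/case-studies/zent): 126% organic traffic growth
- [Bethel University](https://example.com/case-studies/bethel): 30% enrollment increase
- [Curtis Products](https://example.com/case-studies/curtis): 10% revenue growth
```

The point isn't the exact wording — it's that every vertical the agency actually serves shows up in the one file AI crawlers are most likely to read first.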
The brand authority gap
Brand mentions turned out to be the biggest weakness. Alpha Dog has a solid LinkedIn presence and shows up in twelve-plus business directories. That's the baseline. But the platforms AI models weight most heavily — Wikipedia, Reddit, YouTube — are completely empty. No mentions, no presence, no signal.
AI systems build their understanding of a brand the same way a person would if they had to research a company in thirty seconds: they look for it in the places they trust. If you're not on Wikipedia, Reddit, or YouTube, you barely exist to these models. Directories alone don't cut it.
There's also an identity problem. Another agency called "Alpha Dog Advertising" operates out of Lancaster, PA. Completely different company, different services, different market. But AI models don't always parse that distinction cleanly. When someone asks an AI about Alpha Dog and marketing, there's a real chance the model conflates the two entities. That kind of confusion erodes whatever brand signal you've built.
What was actually strong
The case studies. This is where Alpha Dog's real ammunition lives, and the audit confirmed it. Zent Family Dentistry: 126% organic traffic growth. Bethel University: 30% enrollment increase. Curtis Products: 10% revenue growth. These are specific, measurable outcomes attached to named clients. Exactly the kind of content AI models love to cite.
When a model is asked "what marketing agency gets results in education?" and the only agency with a named client and a specific percentage sits in its training data, that agency gets cited. Alpha Dog has those numbers. They're the strongest citable assets on the site.
The flip side: multiple service pages came back empty or nearly empty during the crawl. I'm fairly sure this is a Framer JS rendering issue — the content exists in the page builder but doesn't survive the crawl because it loads client-side. Same class of problem I hit with the plumbing audit, where schema in script tags was invisible to the markdown converter. Different manifestation, same root cause: what the browser sees and what the crawler sees are two different things.
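A minimal version of that workaround can live in the crawler itself: before trusting a raw HTTP fetch, check whether the static HTML actually carries visible text, and only fall back to a slower headless-browser render when it doesn't. Here's a sketch of the detection half using only the Python standard library; the 200-character threshold is an arbitrary assumption, not a measured cutoff:

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text from HTML, skipping script/style contents."""

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0  # >0 while inside <script> or <style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())


def needs_js_render(raw_html: str, min_chars: int = 200) -> bool:
    """Heuristic: if the static HTML carries almost no visible text,
    the page is probably client-side rendered (Framer, React, etc.)
    and should be re-fetched with a headless browser before auditing."""
    parser = TextExtractor()
    parser.feed(raw_html)
    return len(" ".join(parser.parts)) < min_chars
```

An empty-shell page like `<body><div id="root"></div></body>` trips the heuristic; a page with a few paragraphs of real copy passes. That one check would have flagged the empty Framer service pages instead of silently scoring them as thin content.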
Why this matters for the service
Every real audit I run — whether it's the plumbing client, Alpha Dog's own site, or the next prospect — does two things at once. It sharpens the methodology, and it produces a deliverable that doubles as a case study. The plumbing audit taught me that my scanner was blind to structured data. This audit taught me that llms.txt files rot if you don't revisit them, that brand mentions are a bigger gap than most agencies realize, and that Framer rendering issues will keep showing up until I build a workaround into the crawler.
There's something clarifying about auditing your own work. The findings aren't abstract. They're personally uncomfortable. I know exactly why the llms.txt is scoped to dental — I wrote it during a sprint where dental was all we were selling. I know why the brand mentions are thin — we've been heads-down on client work instead of building the agency's own public presence. Every gap in the report maps to a decision I made or a thing I deprioritized.
The fix list is concrete now. Expand the llms.txt to cover all four verticals. Build presence on Reddit and YouTube. Publish something — anything — that a Wikipedia editor could eventually reference. Resolve the entity confusion with clearer schema and consistent naming. And fix the crawler to handle Framer's client-side rendering so the next audit sees what's actually there.
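On the entity-confusion item, the concrete lever is Organization schema that pins down the name, the verticals, and the official profiles so models have something unambiguous to anchor on. A sketch of the JSON-LD, which would sit in a script tag of type application/ld+json; the URLs here are placeholders, and sameAs should list Alpha Dog's real profiles (adding a postal address would further separate it from the Lancaster, PA company):

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Alpha Dog Agency",
  "alternateName": "Alpha Dog",
  "description": "Marketing agency serving dental, education, manufacturing, and construction clients.",
  "url": "https://example.com",
  "sameAs": [
    "https://www.linkedin.com/company/example"
  ],
  "knowsAbout": [
    "dental marketing",
    "education marketing",
    "manufacturing marketing",
    "construction marketing"
  ]
}
```

Consistent naming everywhere — site, directories, social profiles — does the rest; the schema only works if every other signal repeats the same entity.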
The score is 52. The target is 80. The gap between those two numbers is the roadmap.