
Building Entity Authority

Understanding how AI systems make use of entities, and how entities generate traffic.

Search Engine Land published a piece by Benu Aggarwal on Monday arguing that entity authority is the foundation of AI search visibility. I couldn’t agree more with the core thesis of this piece. We believe that brands that connect their content to well-defined, machine-readable entities will maintain visibility as search evolves. We’ve watched this play out across more than twenty publisher sites over the past two years.

But the article frames entity authority as an enterprise discipline. It calls for semantic audits, governance mandates, entity ownership roles, and five-step implementation playbooks. That’s a reasonable framework if you’re a large enterprise or a well-funded SaaS startup with a dedicated SEO team, but most publishers simply don’t have the budget or bandwidth for it.

Our customers are typically 5-to-20-person editorial teams publishing under deadline pressure. They want to build authority, and while governance of an entity or topical authority strategy does matter, the delivery mechanism has to match the reality of how publishers work.

That’s why we built an API and companion plugins for WordPress and Drupal that let publishers add and manage entities right in their CMS. By providing post-by-post and site-wide controls for using or ignoring the entities our software automatically detects in their content, our clients can put a governance system in place with a few clicks. We’ve also created reporting that lets clients easily review newly added entities and see the overall prevalence of entities across their site.

How Entity Markup Actually Flows Through Google’s Systems

The article introduces a useful concept it calls the “comprehension budget.” The idea is that AI systems burn expensive GPU cycles trying to understand your content, and structured data reduces that cost. It’s a compelling metaphor, but we think it describes the wrong mechanism.

Here’s a quick analogy. The Library of Congress Classification system worked because it was standardized. The Library of Congress printed and sold catalogue cards so that every library in the country could classify books the same way. When you walked into any library and looked for a book on American history, you knew where to find it because the classification was universal.

Google’s Knowledge Graph serves the same function for the web. It’s a standardized classification system for concepts, people, organizations, and things. When you connect your content to the Knowledge Graph through Schema.org markup, you’re essentially filing your content under the right call numbers. But instead of a single linear classification, it’s multi-dimensional and interconnected. A single article might be classified under multiple people, organizations, and concepts simultaneously, and Google understands the relationships between those things.
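To make the "multiple call numbers" idea concrete, here is a minimal sketch of what such markup might look like as JSON-LD. The headline is invented, and the entity list is illustrative; Q76 and Q30 are the actual Wikidata identifiers for Barack Obama and the United States.

```python
import json

# Illustrative only: one article filed under a person and a country at once.
# Q76 (Barack Obama) and Q30 (United States) are real Wikidata identifiers;
# the headline is a made-up example.
article_markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The Post-Presidency Policy Agenda",
    "about": [
        {
            "@type": "Person",
            "name": "Barack Obama",
            "sameAs": "https://www.wikidata.org/wiki/Q76",
        },
        {
            "@type": "Country",
            "name": "United States",
            "sameAs": "https://www.wikidata.org/wiki/Q30",
        },
    ],
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
json_ld = json.dumps(article_markup, indent=2)
print(json_ld)
```

Each `about` entry is one "call number": the article is simultaneously filed under a person and a country, and Google already knows how those two entities relate.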

But this isn’t subsidizing AI inference costs. It falls at a different point in the process. When you add entity markup to your content, Google crawls it and uses that markup to understand what your page is about at the indexation layer. That understanding improves how and where your page ranks. When an LLM needs context to answer a query, it retrieves pages from search results. The LLM reads your text, not your schema. But your schema is the reason your page was retrieved in the first place.

Schema makes indexation cheaper for Google. You’re telling it: these are the most important topics covered in this piece, so index it around these topics. As long as Google can trust that declaration, you’ve made its job easier. That trust is where the real work happens.

What Actually Matters in Entity Markup

The article also claims that deeply nested relationships tracing your “business lineage” hierarchically are important.

There may be some value in doing this the way a plugin like Yoast SEO does out of the box: making clear to Google how the site publisher relates to the website, the website to the article, and the article to the author.

But nested relationships aren’t necessary when it comes to the content of the articles themselves. Google already maintains the Knowledge Graph. It already knows how Barack Obama relates to the United States presidency, how the Federalist Society relates to constitutional law, how carbon capture relates to climate policy. You don’t need to recreate those relationships in miniature on your website. You just need to declare which entities are present in your content and point to the Knowledge Graph so Google knows exactly which entities you mean.

It’s that “pointing” through use of the sameAs property that gives Google the big assist. When you link an entity mention to its Wikidata entry or Knowledge Graph identifier, you eliminate ambiguity. Google doesn’t have to guess whether you mean the city of Paris or the figure from Greek mythology. Again, this disambiguation happens at the indexation layer, improving how your content is classified and ranked.
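Here is a small sketch of what that disambiguation looks like in markup terms. Q90 is the real Wikidata identifier for the city of Paris; the identifier for the mythological figure is left as a placeholder rather than guessed.

```python
# Two mentions share the surface string "Paris" but point at different
# entities. Q90 is the real Wikidata ID for the French capital; the second
# sameAs value is a placeholder, not a real identifier.
mentions = [
    {
        "@type": "Place",
        "name": "Paris",
        "sameAs": "https://www.wikidata.org/wiki/Q90",
    },
    {
        "@type": "Person",
        "name": "Paris",
        "sameAs": "https://www.wikidata.org/wiki/<id-for-Paris-of-Troy>",
    },
]
```

The names are identical; the `sameAs` URLs are not. That difference is the entire disambiguation signal, and it costs Google nothing to read.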

Using a system like TopicalBoost also solves the problem of consistency. Every publisher we’ve worked with has the same issue: inconsistent tagging. One editor tags a post “Obama.” Another uses “President Obama.” A third writes “Barack Obama.” These are all the same entity, but without standardization they fragment your topical authority across three separate tags.

By referencing the Knowledge Graph, you canonicalize tags using what is essentially the mother of all tag lists, the card catalogue to end all card catalogues. Every editor, regardless of how they think about an entity, maps to the same canonical identifier.

This is exactly the kind of governance the Search Engine Land article calls for. But it doesn’t happen through documentation and ownership roles. It happens through software that embeds the right decisions into the editorial workflow. When an editor is writing a post and TopicalBoost’s NLP identifies “Obama” as an entity, it maps to the same Knowledge Graph entry no matter what. The governance is built into the system.

Missing: Internal Linking

The Search Engine Land article doesn’t mention internal linking. For publishers, this is a significant omission.

Schema is the classification. It tells Google what entities your content covers. But internal links are the architecture. They build topical hubs and redistribute authority your site has already earned.

When a publisher connects every article about a given entity to that entity’s topic archive page through internal links, two things happen:

  • PageRank flows from older, high-authority articles to the topic page and then out to newer content.
  • Google sees a clear structural signal: this site covers this topic systematically, not incidentally.
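A minimal sketch of that architecture, under an assumed data model (article URLs tagged with Wikidata entity IDs) and an assumed /topics/ URL scheme for topic archive pages:

```python
# Assumed data model: each article URL is tagged with the Wikidata IDs of
# the entities it covers. URLs and tagging are illustrative; Q76 is Barack
# Obama and Q30 is the United States.
articles = {
    "/2024/us-policy-outlook": ["Q30"],
    "/2023/obama-legacy": ["Q76", "Q30"],
}

def topic_hub(entity_id: str) -> str:
    # Assumed URL scheme for topic archive pages.
    return f"/topics/{entity_id.lower()}"

def article_links(articles: dict[str, list[str]]) -> dict[str, list[str]]:
    """Each article links out to the hub page for every entity it covers."""
    return {url: sorted(topic_hub(e) for e in ids) for url, ids in articles.items()}

def hub_members(articles: dict[str, list[str]]) -> dict[str, list[str]]:
    """Each hub page links back to every article that covers its entity."""
    hubs: dict[str, list[str]] = {}
    for url, ids in articles.items():
        for e in ids:
            hubs.setdefault(topic_hub(e), []).append(url)
    return hubs
```

The hub page becomes the junction: authority accumulated by older articles flows into the hub and back out to newer pieces, and the dense link structure itself is the "systematic coverage" signal.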

We know this because we’ve tested it. We’ve implemented schema-only solutions for clients who needed more time to build out the templates for their topic archive pages. Schema alone provides some lift in visibility and traffic. But adding internal linking to topic pages is what really pushes rankings for those topics through the roof. The combination is where the compounding happens.

Google Discover: Where Entity Authority Shines

The article focuses on AI chat platforms like ChatGPT, Perplexity, and Google’s AI Overviews. These matter, and they’ll matter more over time, but for publishers right now, the clearest signal that entity authority is working is Google Discover.

Discover recommends your content to users based on their interests and your topical relevance. It’s not keyword matching. It’s not a response to a query. It’s Google saying: this publisher has demonstrated authority on topics this user cares about, so surface this content proactively. That’s the entity authority thesis in action, measurable today.

The article’s omission of Discover likely reflects the author’s experience with SaaS and eCommerce, where Discover isn’t a major channel. For publishers, Discover traffic can be transformative.

In the past year, the Foundation for Defense of Democracies saw their overall Google Discover traffic increase by 8,000%. A single article generated over 350,000 sessions.

Illinois Policy saw a 104% increase in Google Discover traffic in 60 days. Washington Policy Center saw an 1,100% increase. Reason nearly quadrupled its Discover traffic.

While we’re seeing AI visibility metrics improve across the board for our customers, in our experience AI visibility is downstream of search visibility, and channels like Discover are generating 10x or 20x the traffic of AI chatbots. So, for now, our recipe for success is: improve your entity authority, improve your rankings, and AI visibility follows.

The Shift Is Happening Now

The Search Engine Land article is right about the direction. Entity authority is a foundational piece of SEO and GEO success going forward. The publishers who build that authority now are building a competitive advantage that will only grow as AI search and content discovery matures.

Where we differ is on implementation. A five-step enterprise implementation playbook is a great thing to have if you have the managerial bandwidth to maintain it. But whether you do or not, the practical constraints of publishing on deadline demand a system that:

  • identifies entities at the point of content creation,
  • connects them to the Knowledge Graph, and
  • builds an internal linking architecture that turns topical coverage into topical authority.

That’s what we built TopicalBoost to do.
