Internal Linking Confusion

Stop linking backwards and start forwarding your link authority.

Most publishers think they do internal linking. Many do, but rarely the kind that moves traffic.

Here’s what typically happens: a writer publishes a new article and links to an older piece that provides context. Maybe it’s an earlier story in a developing situation, or a backgrounder that fills the reader in. That’s good editorial practice. It helps readers and adds credibility.

But this sort of linking does almost nothing for SEO.

It’s backward-facing: every new article points to older content, so link authority flows back in time to pages that are now old news. Meanwhile, new content launches with no authority behind it at all.

Why Categories and Tags Don’t Fix This

When I point this out, the first response is usually: “But we have categories and tags.”

Categories are organizational tools. They sort content into sections like “1st Amendment Law” or “US-China Policy.” That’s useful for readers who want to browse a topic area. But people don’t search for broad categories. They search for “Free Speech Coalition v. Paxton” or “Phase One trade deal.” They’re looking for specific people, organizations, court cases, policies, and concepts—what Google calls entities.

Tags get closer to being entities. A publisher might tag articles with “Marco Rubio” or “Food and Drug Administration.” But in practice, tags are used inconsistently. One editor tags an article, another doesn’t. The same concept gets three different tag names. And even when a tag matches a Google Knowledge Graph entry exactly, it carries no structured data. There’s nothing telling Google that your tag page about the FDA is connected to the FDA entity in the Knowledge Graph. It’s just a page with a word on it.

Why You Can’t Fix This Manually

We learned this the hard way with The Federalist Society. Their site has detailed commentary on Supreme Court cases, and we wanted every one of those pieces to link to a dedicated case page. We did this manually—with the help of a lot of SQL commands.

Even with developer tricks up our sleeves, reviewing thousands of publications for every possible variation of case names and adding the appropriate links was exhausting. It was unsustainable, and that was with a fixed list of only a few hundred cases.

Then there was a deeper problem we couldn’t solve even with more hours of manual work: timing.

When you add a link to an article after it’s already been published and indexed, Google has to re-crawl that page to see the change. That might take hours or even days. By then, the news cycle has passed. You’re always a step behind.

The solution is a fully automated, article-to-topic link graph: what we call the “two-hop” solution.

When TopicalBoost processes an article, it identifies the entities mentioned—people, organizations, places, policies, concepts—using a natural language processing (NLP) and large language model (LLM) hybrid system. It sorts those entities into three tiers of relevance based on Schema.org standards, then creates links from the article to a dedicated topic archive page for each entity at the moment of publish.

This creates a two-hop link graph. Every article you’ve ever published about the FDA now points to an FDA topic page. That topic page displays your latest FDA coverage at the top. Decades of accumulated backlinks and PageRank from your archive flow through the topic page and forward to your newest content.
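The structure of that two-hop graph can be sketched in a few lines. This is an illustration only: the article slugs, the `/topics/` URL pattern, and the assumption that newer articles sort first by their dated paths are all hypothetical, not TopicalBoost’s actual implementation.

```python
from collections import defaultdict

# Hypothetical archive: each article URL lists the entities it covers.
articles = {
    "/2008/fda-tobacco-ruling": ["FDA"],
    "/2015/fda-labeling-rules": ["FDA"],
    "/2026/fda-ai-guidance": ["FDA"],  # the newest piece
}

# Hop 1: every article links to the topic page for each of its entities.
topic_inbound = defaultdict(list)
for url, entities in articles.items():
    for entity in entities:
        topic_inbound[f"/topics/{entity.lower()}"].append(url)

# Hop 2: the topic page links out to the newest coverage first, so
# authority accumulated by the whole archive reaches new articles.
def topic_outbound(topic_url):
    # Dated paths sort newest-first in reverse lexicographic order.
    return sorted(topic_inbound[topic_url], reverse=True)

print(topic_outbound("/topics/fda"))
```

Every old FDA article contributes a link into the hub (hop one), and the hub immediately passes that authority to whatever is newest (hop two).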

The topic page itself carries Schema.org structured data that connects it to the FDA entity in Google’s Knowledge Graph. Google doesn’t just see a page with the words “Food and Drug Administration” on it. It understands that this is your hub for FDA coverage, connected to a real-world entity that Google already knows about.
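For readers unfamiliar with structured data, here is the general shape of such markup, built as a Python dict and serialized to JSON-LD. The page URL and type choice are illustrative; the Wikipedia URL is a real, stable public identifier for the FDA, which is the kind of `sameAs` reference that disambiguates a page to a known entity.

```python
import json

# Illustrative JSON-LD for an entity topic page. Field values are
# hypothetical except the Wikipedia URL, a public identifier that
# ties the page to the real-world FDA entity.
topic_page_jsonld = {
    "@context": "https://schema.org",
    "@type": "CollectionPage",
    "name": "Food and Drug Administration",
    "url": "https://example.com/topics/fda/",
    "about": {
        "@type": "GovernmentOrganization",
        "name": "Food and Drug Administration",
        "sameAs": [
            "https://en.wikipedia.org/wiki/Food_and_Drug_Administration"
        ],
    },
}

print(json.dumps(topic_page_jsonld, indent=2))
```

The `sameAs` link is what turns “a page with the words Food and Drug Administration on it” into a page a crawler can connect to an entity it already knows.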

To avoid adding dozens of topics to every article, we use a three-tier system based on Schema.org standards. Each article gets one mainEntity (the primary subject), up to four about topics (directly discussed), and up to twenty mentions (referenced in passing). We use saliency—a measure of relevance—to keep links meaningful. Fewer topics per article, more link equity passed per topic.
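The tiering logic amounts to ranking entities by saliency and cutting at the caps described above. This is a minimal sketch under stated assumptions: the scores, entity names, and the `assign_tiers` helper are all hypothetical, not TopicalBoost’s actual scoring model.

```python
# Sketch: sort entities by saliency (a 0-1 relevance score) and slot
# them into the three Schema.org tiers: 1 mainEntity, up to 4 about,
# up to 20 mentions. Scores here are invented for illustration.
def assign_tiers(scored_entities, about_cap=4, mentions_cap=20):
    ranked = sorted(scored_entities.items(), key=lambda kv: kv[1], reverse=True)
    names = [name for name, _ in ranked]
    return {
        "mainEntity": names[0],
        "about": names[1 : 1 + about_cap],
        "mentions": names[1 + about_cap : 1 + about_cap + mentions_cap],
    }

scores = {"FDA": 0.92, "Marco Rubio": 0.40, "Washington, D.C.": 0.15}
print(assign_tiers(scores))
```

Anything ranked below the mentions cap simply gets no link, which is the point: fewer topics per article, more link equity passed per topic.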

And because this happens automatically at the moment of publication, Google sees the full internal linking context the first time it crawls the page. No delays. No re-crawling necessary.

The Results

The Foundation for Defense of Democracies has roughly 20,000 articles spanning two decades of national security analysis. Their archive includes backlinks from back when Google servers were held together with velcro. Before TopicalBoost, all of that authority was distributed across broad category pages. After restructuring internal links around entities like “Hamas,” “IRGC,” and “Hezbollah,” they’ve seen roughly 50% organic traffic growth per year, more Top Stories placements, and Google Discover traffic they’d never had before. Last year, a single blog post generated 350,000 sessions from Google Discover.

Illinois Policy Institute has about 9,000 articles. We activated TopicalBoost on September 8, 2025. In the first 60 days, organic traffic grew 62%. At six months, sustained organic growth held at 37% above baseline, and Google Discover traffic tripled. Average search position improved from 10.5 to 7.0. Top-3 keyword positions increased 26%.

The most telling part of the Illinois Policy data is the compounding pattern. Early on, Discover spikes were driven by a single article—one piece generating 98% of a day’s Discover clicks. By month six, Google was surfacing four or five articles simultaneously across different topics. That’s what topical authority looks like in data. Google developed enough confidence in the site’s coverage to recommend multiple articles at once.

When customers implement TopicalBoost, results typically appear within a couple of weeks. Rankings move first, then organic traffic follows, then Discover traffic spikes as Google tests the new topical signals.

The Enterprise Precedent

This approach isn’t experimental. The largest publishers in the world already do it.

The Washington Post built an internal system called Taxonomy Manager, developed by a dedicated engineering team, to solve exactly this problem—organizing content by entities and topics rather than broad sections. The Guardian and Bloomberg use similar NLP and LLM-powered auto-tagging systems.

IBM and the Reuters Institute predict that generative AI will handle back-end newsroom tasks like tagging, categorizing, adding metadata, and SEO suggestions—freeing journalists for higher-judgment work. Major outlets like the New York Times already use AI for production-level tasks.

As the Reuters Institute put it, generative AI works best for “language tasks that don’t require introducing new information not already present in the document.” That’s exactly what entity detection and internal linking are—no new content, just better structure around the content you already produce.

The difference is that enterprise publishers spend years and significant engineering resources building these systems in-house. TopicalBoost delivers the same capability as a WordPress plugin on a monthly subscription.

The future that IBM is predicting is already here. It’s just not evenly distributed yet.
