Google is requiring JavaScript to be turned on in order to view search results. This technical change reveals a deeper truth about the relationship between AI and search: LLMs fundamentally depend on search engines to function effectively.
While much of the hype around this change to Google has focused on SEO tools, that is really missing the point. Yes, the HTML-only version of Google has been used to capture things like keyword rankings, ad placement, and which features show up on the results page, such as top stories, merchant listings, or knowledge panels.
AI Companies are the Real Target
But this JavaScript requirement doesn’t just block or increase the costs of SEO Tools like SimilarWeb, SERanking, or ZipTie.dev. Requiring JavaScript also hinders AI companies from using Google to inform the answers their LLMs generate or to improve their own search engines.
Patrick Hathaway, the co-founder and CEO of Sitebulb, pointed this out on LinkedIn:
The timing just makes too much sense. We know that almost all LLMs are not executing JavaScript, so in the short term this should mean they will struggle to access Google search results. In the long term, presumably it will just make the task more expensive and therefore less viable.
Google do not want LLMs accessing their search results or their AI Overviews (and which queries trigger them) – that’s the reason for this outage. Keyword tracking tools are just collateral damage.
Pierluigi Vinciguerra over at the great SubStack “The Web Scraping Club” concurs with Hathaway:
Of course, this move mainly targets generative AI companies. Google is trying to keep them out of its courtyard by making it more difficult for them to suggest the right sources for their answers to users’ prompts.
More Evidence that Search Will Live On
This reinforces Tallest Tree’s operating theory of how search will work going forward.
LLMs will not replace search engines or render them obsolete.
LLMs need search to select which documents they should consider when answering a user query and to verify the accuracy of their answers.
Google Search Systems
This is true simply because LLMs don't capture the same information as these three key Google search systems:
PageRank
Google uses PageRank to assess webpage authority/importance. PageRank is a link graph and weighting system consisting of hundreds of billions of webpages and the connections made between them.
LLMs have no such concept within their training data. We know that LLMs struggle to make use of even very basic structured data, such as simple data tables, so they certainly can't incorporate a wildly complex link graph.
Instead, LLMs have to be fed the results that those graphs produce in the form of a list of documents.
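To make the idea concrete, here is a minimal sketch of the PageRank-style computation, using the standard power-iteration formulation with a damping factor. The four-page graph is a made-up example; Google's real graph spans hundreds of billions of pages.

```python
# Toy sketch of the PageRank idea: rank pages using only the link graph.
# The graph below is hypothetical and tiny; the algorithm's shape is the point.

def pagerank(links, damping=0.85, iters=50):
    """Power iteration over a dict mapping page -> list of outbound links."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:
                # Dangling page: spread its rank evenly across all pages.
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                # A page passes its rank to the pages it links to.
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

graph = {
    "home": ["docs", "blog"],
    "docs": ["home"],
    "blog": ["home", "docs"],
    "orphan": ["home"],   # nothing links here, so it ranks lowest
}
ranks = pagerank(graph)
```

Note that the ranking falls out of the graph's structure alone: "home" scores highest because three pages link to it, while "orphan" scores lowest because nothing links to it.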
Document Embeddings
Vector embeddings are a way of representing language using numbers. Both LLMs and Google Search use vector embeddings, but they’re used in very different ways.
LLMs use embeddings to attach meaning to "tokens," the words or word fragments that make up their vocabulary.

Google uses embeddings to encode a document's topical relevance.
In other words, LLMs are focused on encoding the meaning of language, whereas Google is focused on creating a fast, efficient, continuously updated, document retrieval system.
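The retrieval side of that distinction can be sketched in a few lines. The 4-dimensional vectors below are invented for illustration (real embeddings have hundreds or thousands of dimensions), but the mechanism, ranking documents by cosine similarity to a query vector, is the standard one.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: how closely they point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embedding vectors; in a real system these come from a model.
query_vec = [0.9, 0.1, 0.0, 0.2]            # e.g. the query "best hiking boots"
doc_vecs = {
    "boot-review":  [0.8, 0.2, 0.1, 0.3],   # topically close to the query
    "cake-recipe":  [0.0, 0.9, 0.8, 0.1],   # topically distant
}

# Retrieval: rank documents by similarity to the query vector.
ranked = sorted(doc_vecs, key=lambda d: cosine(query_vec, doc_vecs[d]), reverse=True)
```

The same math underlies both uses of embeddings; what differs is what gets embedded (tokens versus whole documents) and what the vectors are optimized for.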
Knowledge Graph
In addition to vector embeddings, Google also understands topics through its Knowledge Graph, a structured way of representing information as a network of entities and their relationships.
Imagine a huge map of interconnected circles. Each circle is a node representing a person, place, or thing (including abstract ideas or events). The connections between those nodes describe how they relate to each other.

Notably, the properties of each node and its connections are absolute. They aren't merely likely or probabilistically true; they are determined by the structure of the graph itself. This sort of system avoids the "hallucination" problem of LLMs.
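A minimal sketch of that structure: facts stored as (subject, relation, object) triples. The entities and relations below are hypothetical examples, but they show the deterministic property described above: a lookup either finds a stored fact or returns nothing, with no probability attached.

```python
# Toy knowledge graph: each fact is a (subject, relation, object) triple.
triples = {
    ("Ada Lovelace", "occupation", "mathematician"),
    ("Ada Lovelace", "collaborated_with", "Charles Babbage"),
    ("Charles Babbage", "designed", "Analytical Engine"),
}

def query(subject, relation):
    """Return every object linked to `subject` by `relation` -- exact match only."""
    return {o for s, r, o in triples if s == subject and r == relation}
```

Unlike an LLM, which would happily generate a plausible-sounding answer to any question, this structure can only return what it actually contains: asking for a relation that was never asserted yields an empty set, not a guess.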
Different “Ways of Knowing”
So it’s no wonder that AI companies are using Google results to improve the outputs of their LLMs or refine their own search engines. Google has access to other “ways of knowing” about the world at scale that most AI companies don’t.
This compensates for what LLMs can’t do:
- LLMs can’t assess the authority of a document like Google can, so they can’t determine the trustworthiness of a document on their own.
- LLMs are token-centered rather than document-centered, so they can’t determine a relevant source of information on their own.
- LLMs are probabilistic, rather than deterministic, so they produce results that “sound right” or have “truthiness” without necessarily having any fidelity to actual facts.
Search is Not Dead, Neither is SEO
So this change doesn’t show that SEO is dead. Neither are the tools—one of the big SEO tools compensated for this change in under 24 hours.
Instead, it shows that AI companies are desperate to get their hands on high-quality search data. So much so that they’ve just been scraping search data from Google results.
This really tips their hand. For LLMs to become useful tools, they must become part of larger systems that use algorithms like PageRank, vector embedding of documents, or knowledge graphs to ensure they are conveying authoritative, relevant, and accurate information.
AI companies know this, but now they’ve got to develop their own search technologies or at least pay a lot more to scrape that information from Google.