Your website might look perfect in a browser. Every product listing loads, every interactive element works, every piece of content appears exactly where it should. But when GPTBot, ClaudeBot, or PerplexityBot crawl that same page, they see almost nothing — just an empty shell with a loading spinner that never resolves.
This is the JavaScript rendering gap, and it is one of the most common reasons businesses are invisible to AI search engines despite having strong content and solid traditional SEO. Vercel's analysis of over 569 million GPTBot requests found zero evidence of JavaScript execution. Not partial rendering. Not delayed rendering. Zero.
If your site depends on client-side JavaScript to display content, AI search engines literally cannot see it.
Key Takeaways
- Vercel's analysis of over 569 million GPTBot requests found zero evidence of JavaScript execution — AI crawlers see only the raw HTML your server sends.
- 69% of AI crawlers cannot execute JavaScript, meaning client-side rendered React, Vue, and Angular sites are invisible to most AI search engines.
- Googlebot renders JavaScript with headless Chromium, but GPTBot, ClaudeBot, and PerplexityBot do not — ranking well in Google does not mean AI search can see your content.
- Server-side rendering (SSR) is the most reliable fix, ensuring every AI crawler receives complete, content-rich HTML on every request.
- AI bots now account for roughly 40-50% of Googlebot-level crawl activity across the web, making this a business-critical visibility issue.
What AI Crawlers Actually See
When a human visits your website, their browser downloads the HTML, loads the JavaScript files, executes them, and renders the final page. A client-side rendered React or Vue application might serve an initial HTML file that looks like this:
```html
<html>
  <body>
    <div id="root"></div>
    <script src="/app.bundle.js"></script>
  </body>
</html>
```
The browser runs app.bundle.js, which populates <div id="root"> with your actual content — product information, articles, navigation, everything. Humans see the finished page. AI crawlers see the empty div.
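You can simulate the crawler's perspective with a rough sketch: drop script contents (they are never executed), strip the remaining markup, and keep whatever text survives. Applied to a shell like the one above, the result is empty. `extractVisibleText` is an illustrative helper, not any crawler's actual extraction pipeline:

```typescript
// Rough approximation of what a non-rendering crawler can extract from raw HTML:
// remove <script> bodies (never executed), strip tags, collapse whitespace.
function extractVisibleText(html: string): string {
  return html
    .replace(/<script[\s\S]*?<\/script>/gi, "") // script contents are never run
    .replace(/<[^>]+>/g, " ")                   // strip remaining markup
    .replace(/\s+/g, " ")
    .trim();
}

const clientRenderedShell = `
<html>
  <body>
    <div id="root"></div>
    <script src="/app.bundle.js"></script>
  </body>
</html>`;

console.log(JSON.stringify(extractVisibleText(clientRenderedShell)));
```

For the shell above, the extracted text is an empty string — there is simply nothing for the crawler to index.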
Google is the exception. Googlebot has a full rendering pipeline — it downloads the page, executes JavaScript using a headless Chromium browser, waits for the content to load, and indexes the rendered result. This is why your JavaScript-heavy site might rank perfectly in Google while being completely invisible to ChatGPT, Perplexity, and Claude.
The assumption that "if Google can index it, AI search can too" is wrong — and it is costing businesses citations every day.
The Data Behind the Rendering Gap
The evidence is consistent across multiple studies. SearchViu's analysis of AI crawlers in 2025 found that 69% of AI crawlers cannot execute JavaScript, meaning they miss dynamic content such as product listings, user-generated reviews, and real-time updates.
Here is where each major AI crawler stands:
Cannot render JavaScript:
- GPTBot (OpenAI/ChatGPT) — fetches JavaScript files about 11.5% of the time but never executes them
- ClaudeBot (Anthropic/Claude) — downloads JavaScript at a higher rate (23.8% of requests) but does not run it
- PerplexityBot — fetches HTML only, no JavaScript execution
- Bytespider (TikTok) — no rendering capability
Partial rendering through other means:
- Googlebot — full JavaScript rendering via headless Chromium
- Google Gemini — accesses Google's pre-rendered index rather than rendering JavaScript itself
- AppleBot — has a browser-based crawler that processes JavaScript, CSS, and Ajax requests
The scale of AI crawling makes this gap critical. Cloudflare's 2025 analysis showed that GPTBot surged from 5% to 30% of all AI crawler traffic between May 2024 and May 2025. AI bots now account for roughly 40 to 50% of Googlebot-level crawl activity across the web. That is an enormous volume of visits where your content simply does not exist.
Why This Matters More Than You Think
The rendering gap is not just a technical curiosity. It has direct business consequences.
AI engines cannot cite what they cannot read. When a user asks ChatGPT for product recommendations, service comparisons, or expert information, the AI retrieves content from its crawled sources. If your pages returned empty HTML shells when GPTBot visited, your content is not in that retrieval pool — regardless of how authoritative or relevant it is.
The gap is widening, not shrinking. AI crawler traffic grew 18% between May 2024 and May 2025, and AI-driven search queries now represent a growing share of how consumers discover businesses. If you understand how ChatGPT sources the web, you know that citation depends entirely on what the crawler could read at fetch time.
Your competitors with server-rendered sites have an automatic advantage. They did not need to do anything special — their content was simply available in the initial HTML response. Your client-side rendered content, no matter how superior, never entered the competition.

How to Check If Your Site Has This Problem
Before implementing fixes, confirm whether the rendering gap affects your site. There are three quick methods:
View source vs. inspect element. Right-click your page and select "View Page Source" — this shows the raw HTML that crawlers receive. Then right-click and select "Inspect" — this shows the rendered DOM after JavaScript executes. If View Source shows your content, you are fine. If it shows an empty container with script tags, AI crawlers cannot see your content.
Use curl to fetch your page. Run curl -s https://yoursite.com | head -100 in a terminal. The output is essentially what AI crawlers receive. If your headlines, paragraphs, and product data are there, your site is accessible. If you see only a JavaScript bundle reference, it is not.
Check your framework's rendering mode. If your site uses React (Create React App), Vue (default mode), or Angular without server-side rendering configured, it almost certainly serves empty HTML shells to crawlers. Next.js, Nuxt, and SvelteKit default to server-side rendering, but it is possible to override this — verify your actual configuration.
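These checks can be scripted. The sketch below is a heuristic, not a definitive test: it flags HTML that has almost no visible text but does reference a script bundle — the classic client-rendered signature. The commented usage assumes Node 18+ (global fetch) and uses "GPTBot" as the user-agent token OpenAI publishes for its crawler:

```typescript
// Heuristic: does server-sent HTML look like an empty client-rendered shell?
function looksClientRendered(html: string): boolean {
  const withoutScripts = html.replace(/<script[\s\S]*?<\/script>/gi, "");
  const visibleText = withoutScripts
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
  // Almost no visible text plus at least one external script bundle.
  return visibleText.length < 50 && /<script[^>]+src=/i.test(html);
}

// Usage sketch (network call, not run here):
// const res = await fetch("https://yoursite.com", {
//   headers: { "User-Agent": "GPTBot" },
// });
// const verdict = looksClientRendered(await res.text());
// console.log(verdict ? "Likely invisible to AI crawlers" : "Content present in raw HTML");
```

The 50-character threshold is arbitrary — tune it for your pages; a boilerplate `<title>` alone should not count as visible content.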
For a comprehensive analysis of how AI engines currently perceive your site — including rendering, structured data, content structure, and citation testing — run a free AI visibility scan.
The Fix: Server-Side Rendering and Its Alternatives
The core principle is straightforward: everything that matters for AI visibility must be present in the initial HTML response, before any JavaScript executes. Here are the main approaches, from most to least comprehensive.
Server-Side Rendering (SSR)
SSR generates the full HTML on the server for every request. When GPTBot fetches your page, it receives complete, content-rich HTML — exactly what a browser would show after JavaScript renders.
Best for: Dynamic content that changes frequently — e-commerce product pages, news sites, personalised dashboards.
Frameworks: Next.js (React), Nuxt (Vue), SvelteKit (Svelte), Angular Universal.
SSR is the gold standard because it serves the same complete HTML to every crawler, every time. The performance cost of server-side rendering has dropped significantly with modern frameworks, and techniques like incremental static regeneration (ISR) in Next.js cache rendered pages and refresh them in the background, eliminating most of the per-request overhead.
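In Next.js Pages Router terms, SSR means exporting a per-request data function so the HTML arrives already populated. A minimal sketch of the data portion — `getProduct` is a hypothetical data-layer helper, stubbed here for illustration:

```typescript
// pages/products/[id].tsx (data portion) — runs on the server for every request,
// so crawlers like GPTBot receive HTML that already contains the product data.
type Product = { id: string; name: string; price: number };

// Hypothetical data-layer helper, stubbed for illustration.
async function getProduct(id: string): Promise<Product> {
  return { id, name: "Example Widget", price: 19.99 };
}

export async function getServerSideProps(context: { params: { id: string } }) {
  const product = await getProduct(context.params.id);
  // Next.js renders the page component with these props on the server,
  // then sends the populated HTML in the initial response.
  return { props: { product } };
}
```

The same idea applies in the App Router (async server components) and in Nuxt or SvelteKit load functions — the defining property is that the data fetch happens before the HTML leaves the server.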
Static Site Generation (SSG)
SSG pre-builds every page as static HTML at build time. Crawlers receive pre-rendered HTML with zero server processing delay.
Best for: Content that does not change per-request — blog posts, marketing pages, documentation, landing pages.
Trade-off: Rebuilds required for content updates. For sites with thousands of pages, build times can become significant.
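The SSG equivalent swaps the per-request function for build-time ones. A sketch in the same Pages Router vocabulary, again with a stubbed in-memory source standing in for a CMS:

```typescript
// pages/blog/[slug].tsx (data portion) — both functions run once at build time,
// producing static HTML files that any crawler can read with zero rendering.
const posts = [
  { slug: "rendering-gap", title: "The JavaScript Rendering Gap" },
  { slug: "ai-crawlers", title: "What AI Crawlers See" },
]; // stand-in for a CMS or filesystem content source

export async function getStaticPaths() {
  return {
    paths: posts.map((p) => ({ params: { slug: p.slug } })),
    fallback: false, // unknown slugs 404 rather than falling back to client rendering
  };
}

export async function getStaticProps({ params }: { params: { slug: string } }) {
  const post = posts.find((p) => p.slug === params.slug)!;
  return { props: { post } };
}
```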
Prerendering / Dynamic Rendering
Prerendering detects crawler user agents and serves them a pre-rendered HTML version while serving the standard JavaScript application to regular browsers.
Best for: Legacy JavaScript applications where migrating to SSR is impractical.
Caution: Google has deprecated dynamic rendering as a long-term solution. It works for AI crawlers today, but a proper SSR migration should be on your roadmap.
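Prerendering setups hinge on user-agent detection. A minimal sketch of the routing decision — the token list reflects the crawlers named above, and a real deployment should track each vendor's published user-agent strings rather than hardcoding them:

```typescript
// Crawler tokens drawn from the vendors' published user-agent strings.
const AI_CRAWLER_TOKENS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Bytespider"];

function isAICrawler(userAgent: string): boolean {
  return AI_CRAWLER_TOKENS.some((token) => userAgent.includes(token));
}

// Routing sketch: serve the prerendered snapshot to crawlers, the SPA to browsers.
function selectResponse(userAgent: string): "prerendered-html" | "spa-shell" {
  return isAICrawler(userAgent) ? "prerendered-html" : "spa-shell";
}
```

Because the crawler and browser paths serve different payloads, keep the prerendered snapshot in sync with the live app — stale snapshots are a common failure mode of this approach.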
Hybrid Approaches
Most modern frameworks support mixing rendering strategies per route. Your marketing pages and blog can be statically generated. Your product pages can use SSR. Your internal dashboard can remain client-side rendered since crawlers do not need to access it.
This is the pragmatic approach: apply server rendering where AI visibility matters, keep client rendering where it does not.
Beyond Rendering: What Else AI Crawlers Need
Fixing the rendering gap gets your content in front of AI crawlers. But being visible is not the same as being cited. Once AI engines can read your pages, the quality signals that drive AI search engine optimisation take over:
- Structured data reinforces what your page is about in machine-readable format. Our schema markup guide covers the implementation details.
- Content chunking ensures AI engines can extract specific passages to cite. Well-structured sections with clear headings outperform monolithic walls of text.
- Authority signals — domain reputation, backlinks, entity presence in knowledge bases — determine whether AI engines trust your content enough to cite it.
- Technical SEO foundations like clean URL structures, proper robots.txt configuration, and fast load times all influence AI search visibility beyond just rendering.
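As a concrete instance of the structured-data point: JSON-LD embedded in the server-rendered HTML is readable by non-rendering crawlers precisely because it ships in the initial response. A sketch that builds a schema.org Article payload for a `<script type="application/ld+json">` tag — all field values are placeholders:

```typescript
// Build a schema.org Article JSON-LD payload for inclusion in server-rendered
// HTML via <script type="application/ld+json">. Values are placeholders.
const articleJsonLd = {
  "@context": "https://schema.org",
  "@type": "Article",
  headline: "The JavaScript Rendering Gap",
  datePublished: "2025-01-01",
  author: { "@type": "Organization", name: "Example Co" },
};

const jsonLdTag =
  `<script type="application/ld+json">${JSON.stringify(articleJsonLd)}</script>`;
```

The key is where this tag ends up: injected client-side it inherits the rendering gap; emitted by the server it is visible to every crawler.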
The rendering fix is the prerequisite. Without it, none of these other optimisations matter — AI crawlers cannot evaluate content they cannot see.
The Emerging AI Discovery Layer
The relationship between websites and AI systems is evolving beyond simple crawling. Two emerging standards are worth watching:
llms.txt is a proposed protocol (similar to robots.txt) that provides AI systems with a structured overview of your site's content, purpose, and key pages. It gives AI crawlers a roadmap rather than forcing them to discover your site structure through brute-force crawling.
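The proposal (still a draft, so treat the details as subject to change) uses plain Markdown served at /llms.txt: an H1 site name, a short blockquote summary, then H2 sections linking to key pages. A hypothetical example:

```markdown
# Example Co

> Example Co sells modular widgets and publishes maintenance guides for them.

## Products

- [Widget catalogue](https://example.com/products): full product range with specifications

## Docs

- [Maintenance guides](https://example.com/guides): step-by-step care instructions
```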
Structured feeds and APIs designed specifically for AI consumption are beginning to appear. Rather than relying on crawlers to parse HTML, some sites now expose machine-readable content endpoints that AI systems can query directly.
These are supplements to server-side rendering, not replacements. The HTML your crawlers receive remains the primary channel — but providing additional AI-friendly access points can improve how completely and accurately AI engines understand your content.
What to Do Next
Start with the diagnostic: check whether your site's raw HTML contains your actual content. If it does not, prioritise the rendering fix above all other AI visibility work — it is the single highest-impact change you can make.
If your site already serves content in its initial HTML, focus on the quality layer: structured data, content structure, and the full AI visibility checklist.
Frequently Asked Questions
How do I check if AI crawlers can see my website content?
Right-click your page and select "View Page Source" to see the raw HTML that crawlers receive. If your content (headlines, paragraphs, product data) appears in the source, AI crawlers can read it. If you only see an empty container with script tags, your content is invisible. You can also run curl -s https://yoursite.com in a terminal to see exactly what AI crawlers receive.
Does Google ranking guarantee AI search visibility?
No. Googlebot has a full JavaScript rendering pipeline using headless Chromium, but GPTBot, ClaudeBot, and PerplexityBot do not execute JavaScript at all. A JavaScript-heavy site can rank perfectly in Google while being completely invisible to ChatGPT, Perplexity, and Claude. The assumption that "if Google can index it, AI search can too" is incorrect.
What is the best rendering approach for AI visibility?
Server-side rendering (SSR) is the gold standard because it serves complete HTML to every crawler on every request. Frameworks like Next.js (React), Nuxt (Vue), and SvelteKit (Svelte) support SSR natively. For content that doesn't change per-request — blog posts, marketing pages, documentation — static site generation (SSG) is equally effective and faster to serve.
Can prerendering solve the JavaScript visibility problem for AI crawlers?
Prerendering (detecting crawler user agents and serving pre-rendered HTML) works for AI crawlers today, but Google has deprecated dynamic rendering as a long-term solution. It's a viable short-term fix for legacy JavaScript applications where migrating to SSR is impractical, but a proper SSR migration should be on your roadmap.
For a complete picture of how AI engines currently see your website — including rendering analysis, live citation testing across nine AI platforms, and actionable recommendations — run a free AI visibility scan or explore SwingIntel's AI Readiness Audit.