
Semantic Search and NLWeb: How AI Agents Query Your Website

SwingIntel · AI Search Intelligence · 17 min read

Every AI search engine has already left keyword matching behind. ChatGPT, Perplexity, Gemini, Claude, Google AI Overview — none of them look for exact phrase matches anymore. They retrieve content by meaning. That is semantic search, and it is the retrieval mechanism underneath the entire AI search ecosystem.

NLWeb is the next layer. It is an open protocol — built by Microsoft, authored by the same person who created RSS and Schema.org — that turns a website into a live, queryable endpoint for AI agents. Semantic search is how AI finds answers. NLWeb is how your website serves answers back, on your terms, from your own structured data.

Businesses that understand both layers will be discoverable to AI agents. Businesses that do not will stay invisible to the fastest-growing search channel on the web. Here is how both work, how they connect, and what to do about it.

Key Takeaways

  • Semantic search is the vector-based retrieval mechanism behind every major AI platform — ChatGPT, Perplexity, Gemini, Claude, Google AI Overview, and the rest — and it ranks content by meaning rather than keyword density.
  • NLWeb is Microsoft's open protocol that turns websites into natural language endpoints AI agents can query directly, positioned as "the HTML of the agentic web."
  • Every NLWeb instance automatically functions as a Model Context Protocol (MCP) server, making it immediately accessible to the entire ecosystem of AI agents built on MCP.
  • Schema.org structured data is the shared foundation — both semantic retrieval and NLWeb depend on clean, interconnected markup to understand what your business is.
  • Early adopters on NLWeb include Eventbrite, Shopify, Tripadvisor, O'Reilly Media, Common Sense Media, Hearst, and Chicago Public Media, with Yoast building WordPress integration via Schema Aggregation.

How Semantic Search Actually Works

Traditional keyword search operates on a simple principle: match the words in a query to the words in a document. Search "affordable accounting software for freelancers" and the engine looks for pages containing those exact terms. Pages with different wording — "budget-friendly bookkeeping tools for independent contractors" — might not appear at all, even if they answer the same question.

Semantic search replaces this with meaning-based retrieval. Three core technologies work together to make it happen.

Vector embeddings convert text into mathematical representations — coordinates in a high-dimensional space where similar meanings cluster together. The query "affordable accounting software" and the document about "budget-friendly bookkeeping tools" end up near each other in that space because they mean the same thing, even though they share no keywords.
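The clustering idea can be sketched in a few lines. This is a toy illustration, not a real embedding model: the three-dimensional vectors below stand in for real embeddings, which typically have hundreds or thousands of dimensions and are produced by a trained model, and the example phrases attached to them are invented for the demonstration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors standing in for real, model-generated embeddings.
query     = [0.9, 0.1, 0.2]   # "affordable accounting software"
doc_match = [0.8, 0.2, 0.3]   # "budget-friendly bookkeeping tools"
doc_other = [0.1, 0.9, 0.1]   # an unrelated page

print(cosine_similarity(query, doc_match))  # close to 1.0: same meaning
print(cosine_similarity(query, doc_other))  # much lower: different meaning
```

Retrieval then reduces to ranking documents by their similarity score against the query vector, which is what vector databases do at scale.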

Natural language processing (NLP) parses the structure and intent of the query. It identifies that "affordable" is a price modifier, "accounting software" is the product category, and "for freelancers" is the audience constraint. This parsing lets the system weight different elements of the query appropriately.

Knowledge graphs add a layer of entity understanding. Google's Knowledge Graph knows that "freelancers" and "independent contractors" refer to the same group. It knows QuickBooks is accounting software and that Xero is a competitor. These entity relationships connect queries to relevant content even when the surface-level language differs completely.

The combined result is a retrieval system that finds content by what it means rather than what it literally says. According to Google Cloud's semantic search documentation, this approach lets search engines deliver results that match user intent even when query and content share no vocabulary.

Semantic Search vs Keyword Search: 5 Key Differences

The distinction is not just technical — it changes how content needs to be written, structured, and optimised.

Intent vs exact match. Keyword search finds documents containing specific terms. Semantic search finds documents that answer the user's question, regardless of terminology. A keyword search for "best laptop for design" requires pages to include those words. Semantic search understands the user wants a high-performance portable computer with a colour-accurate display and strong GPU — and retrieves pages describing exactly that, even in different words.

Context awareness. The word "apple" means something different in "apple pie recipe" and "Apple stock price." Keyword search treats both identically. Semantic search uses surrounding context to disambiguate, delivering fruit-related results for the first and financial data for the second.

Synonym and concept handling. Keyword search struggles with synonyms. Semantic search handles them natively because vector representations capture meaning directly. "Car," "automobile," and "vehicle" occupy similar positions in the embedding space.

Query complexity. Keyword search works well for simple, direct queries ("weather London"). Semantic search excels with complex, conversational queries ("what should I wear in London this weekend") because it can decompose intent — weather conditions, clothing recommendations, location, time frame — and match against content that addresses the combined meaning.

Ranking signals. In keyword search, ranking depends heavily on keyword density, backlink authority, and exact-match optimisation. In semantic search, content clarity, structured data, and topical authority carry significantly more weight because the system evaluates meaning rather than word frequency.


How AI Search Engines Use Semantic Retrieval

Every major AI search platform — ChatGPT, Perplexity, Gemini, Claude, Google AI Overview, Grok, DeepSeek, Microsoft Copilot, and Meta AI — uses semantic search as its primary retrieval mechanism. When a user asks Perplexity "what's the best way to improve my website's visibility to AI agents," the platform does not look for pages containing those exact words. It semantically interprets the query, then retrieves and synthesises content from sources that address that intent.

This is how retrieval-augmented generation (RAG) works under the hood. Before an AI model generates an answer, a semantic retrieval layer finds the most relevant source content. The quality of that semantic match directly determines which sources get cited in the final response. If the retrieval layer cannot parse your content, no amount of brand strength or backlink authority will put you in the citation.

Three implications follow for businesses competing in AI search.

Content written for keyword matching may be invisible. A page optimised for the exact phrase "AI SEO services" might not surface for the semantic query "how do I make my website show up in ChatGPT answers" — even though both queries seek the same thing. AI search engines evaluate meaning, not keywords.

Structured data becomes a competitive advantage. Schema.org markup, clear heading hierarchies, and well-defined entities give semantic retrieval systems machine-readable signals about what content means. This structured layer separates content AI agents can parse efficiently from content they skip over.

Factual specificity gets rewarded. Semantic search systems rank content higher when it contains concrete, verifiable facts. "Our audit covers 24 checks across structured data, content clarity, and technical signals" is semantically richer and more retrievable than "we run a comprehensive audit." Specificity gives the system more dimensional data to match against queries.

From Retrieval to Protocol: What Is NLWeb

Semantic search explains how AI finds your content. NLWeb addresses the other side of the exchange: how your website serves content back to AI agents in a structured, controlled, real-time way.

NLWeb is an open protocol that turns any website into a natural language endpoint. Instead of waiting for AI crawlers to scrape pages and hope the models interpret content correctly, NLWeb lets your site answer questions directly — AI agent asks a natural language query, your site responds with a structured Schema.org JSON answer drawn straight from your own data.

The protocol was conceived by R.V. Guha, a Technical Fellow and Corporate Vice President at Microsoft. Guha's track record is hard to overstate: he created RSS, RDF, and Schema.org — three standards that fundamentally shaped how data is shared and structured across the web. Microsoft introduced NLWeb in 2025 with a bold framing: NLWeb is to the agentic web what HTML is to HTTP. Just as HTML gave the web a universal document format, NLWeb aims to give the AI web a universal query-and-response format.

The project is fully open source, with reference implementations in both Python and .NET 9 and over 6,000 stars on GitHub. It is MIT-licensed. Twelve organisations are already collaborating on the protocol, including Eventbrite, Shopify, Tripadvisor, O'Reilly Media, and Chicago Public Media. These are not experiments — production systems are being built on NLWeb today.


How NLWeb Works Under the Hood

The technical flow is straightforward. NLWeb crawls your site and extracts Schema.org JSON-LD markup. That structured data is loaded into a vector database, which represents content as mathematical vectors rather than keywords — the same semantic retrieval model covered earlier, now running on your own data. When a user or AI agent sends a natural language query to the site's /ask endpoint, NLWeb combines vector search results with an LLM to generate a contextual, Schema.org-formatted JSON response.

Under the hood, NLWeb consists of five modules:

  • AskAgent — the central query processor that handles natural language questions against your Schema.org data
  • AgentFinder — a discovery service that helps AI agents locate NLWeb instances across the web
  • DataFinder — translates natural language queries into structured database requests for enterprise systems
  • ModelRouter — intelligently selects which LLM to use based on cost and quality thresholds
  • NLWebScorer — neural ranking models that evaluate search result relevance

The protocol supports three query modes: List (return matching items), Summarise (condense results), and Generate (create new content from the data). It works with every major LLM — OpenAI, Anthropic, Gemini, DeepSeek — and multiple vector databases, including Qdrant, Elasticsearch, PostgreSQL, and Azure AI Search. It runs on any operating system.
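A query to an NLWeb site can be sketched as a simple URL construction. This is a hedged sketch: the parameter names ("query", "mode") and the lowercase mode values follow the pattern described above, but the exact names should be checked against the current NLWeb specification, and example.com is a hypothetical NLWeb-enabled site.

```python
from urllib.parse import urlencode

def build_ask_url(base_url, query, mode="list"):
    """Construct a request URL for an NLWeb /ask endpoint.

    Parameter names are assumptions based on the protocol description;
    verify them against the NLWeb spec before relying on this.
    """
    params = urlencode({"query": query, "mode": mode})
    return f"{base_url.rstrip('/')}/ask?{params}"

url = build_ask_url(
    "https://example.com",  # hypothetical NLWeb-enabled site
    "family-friendly restaurants in Barcelona with outdoor seating",
    mode="summarize",
)
print(url)
```

A real agent would issue an HTTP GET against that URL and parse the Schema.org-formatted JSON in the response body.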

What makes NLWeb practical is that it leverages data most websites already have. If your site publishes Schema.org markup on product pages, recipe listings, event schedules, or business profiles, NLWeb can ingest and serve that data conversationally. You do not need to restructure your site or build a custom API. The protocol meets you where you are — provided your structured data is clean.
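The "data you already have" is JSON-LD of the following shape. A minimal sketch, with an invented product and prices, showing the kind of Schema.org markup NLWeb can ingest; real markup should describe your actual catalogue.

```python
import json

# A minimal Schema.org Product entity. All values are illustrative.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Freelancer Bookkeeping Suite",
    "description": "Affordable accounting software for independent contractors.",
    "offers": {
        "@type": "Offer",
        "price": "12.00",
        "priceCurrency": "GBP",
    },
}

# Serialise and embed as a JSON-LD script tag in the page head.
jsonld = json.dumps(product, indent=2)
print(f'<script type="application/ld+json">\n{jsonld}\n</script>')
```

If this markup is already on your product pages for rich results, NLWeb reuses it unchanged.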

NLWeb, MCP, and the Agentic Web Protocol Stack


The detail that makes NLWeb strategically important rather than just technically interesting: every NLWeb instance automatically functions as a Model Context Protocol (MCP) server.

MCP, created by Anthropic, has become the universal standard for connecting AI applications to external tools and data sources. It reached 97 million monthly SDK downloads within its first year, with adoption from OpenAI, Google, and Microsoft. By making every NLWeb site an MCP server, Microsoft ensures that any website running NLWeb becomes immediately accessible to the entire ecosystem of MCP-compatible AI agents. When a customer asks ChatGPT, Claude, or any MCP-compatible assistant a question your website can answer, NLWeb provides the standardised pathway for that agent to query your content directly — not through a search engine intermediary, but from your own data.

NLWeb sits inside a wider protocol stack. Four protocols are collectively defining how AI agents interact with the web, and they are being adopted at unprecedented speed:

  1. MCP (Model Context Protocol) — Anthropic's transport layer for connecting AI to tools and data. 97 million monthly SDK downloads. Universal platform adoption in 12 months.
  2. A2A (Agent-to-Agent) — Google's protocol for agents from different vendors to discover and collaborate with each other. From 50 to 150+ organisations in three months.
  3. NLWeb — Microsoft's protocol for making websites conversationally queryable. Major publisher adoption at launch.
  4. AGENTS.md — Standardised guidance files for AI coding agents. 60,000+ open-source projects adopted within months.

These protocols are coordinated through the Linux Foundation's Agentic AI Foundation, whose platinum members include AWS, Anthropic, Google, Microsoft, and OpenAI. This is not speculative — the companies building the AI agents are simultaneously building the protocols those agents will use.


NLWeb vs llms.txt: Different Layers, Complementary Roles

If you have been following the llms.txt conversation, NLWeb is not a replacement. They address different layers of the same challenge.

  • What it does: NLWeb is a dynamic conversational endpoint; llms.txt is a static Markdown file.
  • Data format: NLWeb serves Schema.org JSON-LD; llms.txt is Markdown with links.
  • Interaction model: AI agents query an NLWeb site in real time; they read an llms.txt file at crawl time.
  • Content control: NLWeb responses are grounded in your structured data; llms.txt offers a curated table of contents.
  • Adoption: NLWeb has connectors for all major LLM platforms; llms.txt has 844,000+ implementations but no confirmed LLM ranking signal.

The two protocols are complementary. An llms.txt file helps AI crawlers understand your site structure at crawl time. NLWeb enables AI agents to query your content in real time. One is a signpost. The other is a conversation. A well-optimised site in 2026 will likely do both — publish an llms.txt file for AI training crawlers, implement Schema.org markup for semantic retrieval and NLWeb consumption, and expose an NLWeb endpoint for real-time agent queries.

Who's Already Building on NLWeb

Microsoft launched NLWeb with twelve early adopters spanning publishing, commerce, and technology. The concrete use cases already in production tell you what the protocol is good for:

  • Shopify — product catalogue queries via natural language
  • Tripadvisor — conversational restaurant and hotel discovery. A query like "family-friendly restaurants in Barcelona with outdoor seating" returns structured Schema.org data rather than a list of links
  • Eventbrite — event discovery through conversational queries
  • O'Reilly Media — technical content accessible to AI agents
  • Common Sense Media — media reviews queryable by parents and AI assistants
  • Hearst — publishing content exposed as queryable endpoints
  • Chicago Public Media — archival and current content surfaced via structured queries

On the implementation side, Yoast announced Schema Aggregation in March 2026 — a feature that organises WordPress sites' structured data specifically to reduce the technical effort required to build NLWeb integration. This gives the millions of WordPress sites running Yoast a direct on-ramp.

Why Structured Data Is Now the Entry Ticket

Both layers — semantic retrieval and NLWeb — depend on the same foundation: clean, interconnected Schema.org markup. Semantic search systems use it as a machine-readable signal about what your content means. NLWeb consumes it directly to build the queryable index that backs the /ask endpoint. Without structured data, neither layer can represent your business.

This changes the calculus for every business investing in AI visibility. Content with proper Schema.org markup has a 2.5x higher chance of appearing in AI-generated answers — and that is for semantic retrieval alone, before NLWeb even enters the picture. Schema.org was already important for rich results in traditional search and for AI citation likelihood. NLWeb raises the stakes further. Your Schema.org implementation is no longer just a visibility tactic. It is the infrastructure that determines whether AI agents can interact with your website at all.

As Search Engine Land puts it: "Robust, entity-first schema optimization is no longer just a way to win a rich result; it is the fundamental barrier to entry for the agentic web."

Connecting this to practical execution: the entity-first approach that drives AI visibility is exactly what NLWeb needs. Make sure your brand is a well-defined entity in your structured data, with clear relationships to your products, services, locations, and people.

What Your Business Should Do Now

You do not need to deploy NLWeb today. You do need to prepare the foundation both semantic retrieval and NLWeb require — which, not coincidentally, is the same foundation that improves your AI visibility across every platform right now.

  1. Audit your Schema.org completeness and entity relationships. NLWeb consumes Schema.org markup. Semantic retrieval ranks content better when it is machine-parseable. If your structured data is incomplete, disconnected, or inaccurate, both layers fail. Audit your JSON-LD for entity relationships — do your Product, Organisation, LocalBusiness, and FAQPage types properly reference each other?

  2. Think entities, not pages. Write content around clear entity definitions. Your brand is an entity. Your products are entities. Your locations, people, and services are entities. Both semantic retrieval and NLWeb extract meaning from how these entities relate to each other — not from keyword density.

  3. Server-render your content. NLWeb crawls your site to extract structured data. AI crawlers do the same for semantic retrieval. If your content is rendered client-side with JavaScript, neither layer can access it. Server-side rendering is non-negotiable.

  4. Monitor the NLWeb ecosystem. Yoast's Schema Aggregation is the first major integration tool. If you run WordPress, evaluate it. If you run a custom stack, watch the NLWeb GitHub repository for integration guides and community connectors as they appear.

  5. Measure your current AI visibility baseline. Before optimising for any new protocol, know where you stand. The SwingIntel free AI readiness scan delivers a preview of the intelligence we gather — it takes 30 seconds and no signup, and shows you how AI agents currently perceive your brand. For the complete picture, the SwingIntel AI Readiness Audit delivers expert research across 9 AI platforms — queried on your behalf, with findings and a strategic roadmap delivered directly.
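The entity-relationship audit in step 1 can be partially automated. A rough sketch under stated assumptions: the two JSON-LD blocks, the entity names, and the @id URLs below are all hypothetical, and the checker only handles the simple case where a cross-reference is a bare {"@id": ...} object.

```python
# Two hypothetical JSON-LD blocks from the same site. The Organization
# points at its Product via @id, the cross-reference an audit looks for.
blocks = [
    {
        "@context": "https://schema.org",
        "@type": "Organization",
        "@id": "https://example.com/#org",
        "name": "Example Ltd",
        "makesOffer": {"@id": "https://example.com/#suite"},
    },
    {
        "@context": "https://schema.org",
        "@type": "Product",
        "@id": "https://example.com/#suite",
        "name": "Freelancer Bookkeeping Suite",
    },
]

def dangling_references(blocks):
    """Return @id values that are referenced but never defined on the site."""
    defined = {b.get("@id") for b in blocks}
    referenced = set()
    for block in blocks:
        for value in block.values():
            # A bare {"@id": ...} object is a reference to another entity.
            if isinstance(value, dict) and set(value) == {"@id"}:
                referenced.add(value["@id"])
    return referenced - defined

print(dangling_references(blocks))  # empty set: every reference resolves
```

An empty result means the entities link up; any @id that comes back is a relationship your markup claims but never defines.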

Frequently Asked Questions

How does semantic search differ from keyword search?

Keyword search matches the exact words in a query to words in a document. Semantic search interprets the meaning behind the query and retrieves content that addresses that meaning, even when different vocabulary is used. A page about "budget car insurance" can surface for "cheap auto coverage" in semantic search, while keyword search would require an exact term match.

What is NLWeb in simple terms?

NLWeb is an open protocol from Microsoft that lets websites answer natural language questions from both AI agents and human users. Instead of AI crawlers scraping your pages and guessing what the content means, NLWeb lets your site serve structured, authoritative answers directly through a standardised /ask endpoint. Every NLWeb instance also acts as an MCP server, so any MCP-compatible AI agent can query it.

How is NLWeb different from llms.txt?

llms.txt is a static Markdown file that AI crawlers read passively during training or retrieval. NLWeb is an interactive protocol — AI agents send natural language queries, and your site responds with real-time, structured answers drawn from your Schema.org data. They are complementary, not competing: llms.txt helps AI systems learn about your site, while NLWeb lets them query it live.

Do I need Schema.org markup for NLWeb to work?

Yes. NLWeb is designed to consume the structured data your site already publishes, primarily Schema.org JSON-LD markup. It can also work with RSS feeds and JSONL data, but robust Schema.org markup gives NLWeb the richest data to work with. If your site lacks structured data, implementing it benefits both traditional search rankings and NLWeb readiness simultaneously — and it is the same foundation semantic retrieval needs.

How can I optimise my website for semantic search and NLWeb?

Focus on content clarity, structured data, and factual specificity. Write in natural language that directly answers likely questions. Implement JSON-LD schema markup for your key entities — organisation, products, services, reviews, FAQs. Include verifiable facts rather than vague claims. Structure content with clear headings that signal each section's topic. Server-render every page so AI crawlers can actually read the content. These signals help both semantic retrieval and NLWeb queries — they are the same foundation.

The agentic web is being built on two layers: semantic understanding at the retrieval level, structured protocols at the interaction level. Strong brands will be findable at both. The investment is the same either way — better structured data, clearer entities, server-rendered content — and the businesses that treat their websites as queryable knowledge graphs rather than collections of pages will be the ones AI agents recommend.

Tags: ai-search, ai-visibility, structured-data, semantic-search, ai-discoverability

