In September 2024, Jeremy Howard, co-founder of fast.ai, proposed a new standard: llms.txt, a file intended to introduce websites to AI systems in a structured form. The idea is plausible, and the discussion since then has been lively. Around 844,000 websites have since implemented an llms.txt. At the same time, Google's John Mueller has publicly stated that no AI system currently uses llms.txt. What is true? And is it worth doing anyway?
llms.txt is a plain text file placed in the root directory of a website at /llms.txt. It contains a structured, AI-readable summary of the website: what does the site offer? Which sections are relevant for AI systems? Where is the most important content? The proposal is conceptually modelled on robots.txt, a standard that all search engines know and respect.

The basic idea: instead of AI crawlers having to work through millions of sub-pages, the AI gets a kind of business card. 'Here is who we are, what we do, and what is relevant for you.' This is meant to improve the efficiency of indexing and ensure that AI systems find the actually important content.

Adoption is respectable for such a young standard: according to Cloudflare data, around 844,000 domains had implemented an llms.txt as of early 2025, a share of approximately 10% of all analysed domains. This corresponds roughly to the adoption of Schema.org in its first two years after introduction. It is a beginning, not a mass phenomenon.

An important difference from robots.txt: robots.txt is technically binding, in the sense that crawlers which respect the protocol follow its instructions. llms.txt, by contrast, is an offer, not a protocol. No specification obliges AI providers to read the file or take its recommendations into account.
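The proposal specifies a Markdown structure: an H1 heading with the site name, a blockquote summary, and H2 sections containing annotated links. A minimal sketch for a hypothetical shop (all names, URLs, and descriptions invented for illustration):

```markdown
# Example Shop

> Example Shop sells refurbished office furniture in Germany.
> The sections below list the pages most relevant to AI systems.

## Products

- [Desks](https://example.com/desks): Height-adjustable desks with specs and prices
- [Chairs](https://example.com/chairs): Ergonomic office chairs, warranty details

## Optional

- [About us](https://example.com/about): Company background and contact details
```

The "Optional" section is part of the proposed convention: it marks content an AI system may skip when context space is limited.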
Here is the honest assessment, without sugarcoating.

What llms.txt does NOT currently do: it does not influence AI search answers. John Mueller clarified on the SearchLiaison platform in spring 2025: 'No AI system currently uses llms.txt.' This refers to Google's systems: Google does not process llms.txt as a ranking signal for AI search answers. OpenAI's GPTBot likewise does not follow llms.txt in the sense that the file directly controls crawl behaviour or ranking decisions. llms.txt is not a replacement for robots.txt, not a Schema.org equivalent, and not an 'AI sitemap' that automatically leads to AI answers.

What llms.txt can currently do: it serves as a structured reference point for AI systems that explicitly look for machine-readable summaries. Some smaller AI crawlers and multi-agent frameworks, such as AutoGPT-based systems and specialised research AIs, actively read llms.txt. For these systems, which are increasingly used for business research and automated decision processes, llms.txt can be a genuine advantage. It is also a future investment: the standard is young, and if llms.txt develops the way robots.txt or Schema.org did, from niche proposal to de facto standard, early adopters will benefit. This is a bet, however, not a certainty.

Bottom line: llms.txt is not a reason to deprioritise other measures. It is a sensible add-on with reasonable implementation effort and positive expected value, but not a driver of measurable visibility improvements on the major AI platforms in 2026.
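Crawl access itself is a separate question from llms.txt and is controlled in robots.txt. A minimal configuration that explicitly admits the major AI crawlers might look like the following; the user-agent tokens GPTBot, ClaudeBot, and PerplexityBot are the ones published by OpenAI, Anthropic, and Perplexity, while the disallowed path is invented for illustration:

```text
# robots.txt - explicitly admit the major AI crawlers.
# "Allow: /" is the default, but stating it makes the policy visible.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

# All other crawlers: example of a restricted area (hypothetical path)
User-agent: *
Disallow: /internal/
```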
If llms.txt is the new experiment, then JSON-LD is the proven standard. JSON-LD (JavaScript Object Notation for Linked Data) in combination with Schema.org vocabulary is the format that Google, Microsoft/Bing, Apple Siri, and most AI systems actively process. It is not a proposal; it has been established practice for over ten years.

For practice, a clear priority order emerges:

1. Implement complete JSON-LD: Product, LocalBusiness, Organization, FAQ, Review, BreadcrumbList. This has an immediate effect on Google Rich Results, AI search answers, and all AI systems that evaluate structured data.
2. Allow GPTBot, ClaudeBot, and PerplexityBot in robots.txt: what is blocked cannot be indexed.
3. Provide llms.txt as a future-proof signal, with realistic expectations.
4. Offer a machine-readable product or service feed; for platforms like Perplexity, this is more relevant than llms.txt.

A concrete effort comparison: a complete JSON-LD setup with ten schema types costs a developer team three to five days; llms.txt costs one to two hours. Both should be done, but in that order.

Transparency note: Beconova automatically generates machine-readable data feeds from your products and services, including JSON-LD feeds that AI crawlers can consume directly. llms.txt can be added optionally; we recommend it, but we do not promise specific results beyond the current adoption level of the standard.
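Of the schema types named above, Organization is the smallest useful starting point. A sketch with invented values, deployed inside a <script type="application/ld+json"> element in the page head:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Shop",
  "url": "https://example.com",
  "logo": "https://example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-shop"
  ]
}
```

The other types in the list (Product, FAQ, LocalBusiness, and so on) follow the same pattern: one JSON-LD object per entity, using the property names defined in the Schema.org vocabulary.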
llms.txt is a good standard with modest current impact. It is worthwhile as a low-cost measure that may become significantly more relevant in two to three years. Those who implement llms.txt are doing nothing wrong, but those who believe it alone secures their AI visibility are mistaken. JSON-LD, robots.txt configuration, and complete structured data have considerably more leverage in 2026. Do both, but prioritise correctly.
Marvin Malessa
Founder, Beconova
Founded Beconova in Germany in 2025 to help shops and service businesses become visible in AI search engines. Writes about GEO, AI visibility, and the future of search.