CloudEval publishes two public machine-readable context files on the main product domain. They are part of the supported developer and agent experience.

Documentation Index
Fetch the complete documentation index at: https://docs.cloudeval.ai/llms.txt
Use this file to discover all available pages before exploring further.
These files are public and supported. They are meant to help agents and developer tools route themselves quickly, not replace capability checks or the main docs.
Available files
`llms.txt`: Start here when you need a short product summary, core links, and the high-level CLI surface.
`llms-full.txt`: Use this when you need detailed workflows, command guidance, trust boundaries, and limitations.
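An agent consuming these files typically needs to turn the index into structured links before routing. The sketch below parses an llms.txt-style markdown index (H1 title, H2 sections, `- [name](url)` entries, per the common llms.txt convention); the sample content is illustrative, not CloudEval's actual file.

```python
import re

def parse_llms_txt(text: str) -> dict:
    """Parse an llms.txt-style markdown index into a title plus
    per-section link lists. Assumes the common llms.txt layout:
    an H1 title, then H2 sections containing '- [name](url)' entries."""
    title = None
    sections: dict[str, list[tuple[str, str]]] = {}
    current = None
    link = re.compile(r"-\s*\[([^\]]+)\]\(([^)]+)\)")
    for line in text.splitlines():
        if line.startswith("# ") and title is None:
            title = line[2:].strip()
        elif line.startswith("## "):
            current = line[3:].strip()
            sections[current] = []
        elif current is not None:
            m = link.match(line.strip())
            if m:
                sections[current].append((m.group(1), m.group(2)))
    return {"title": title, "sections": sections}

# Illustrative sample only; fetch the real file from the canonical URL.
sample = """# CloudEval
## Docs
- [CLI reference](https://cloudeval.ai/docs/cli): command surface
"""
index = parse_llms_txt(sample)
print(index["title"])                   # CloudEval
print(index["sections"]["Docs"][0][1])  # https://cloudeval.ai/docs/cli
```

This keeps discovery deterministic: the agent walks the parsed sections rather than scraping rendered pages.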
Use the canonical `llms.txt` and `llms-full.txt` URLs in automation.

What these files cover
The two files are designed to help developers and agents understand CloudEval without scraping the UI. They cover:
- what CloudEval does
- current provider and IaC support
- diagram and workspace review concepts
- the CLI command surface
- terminal UI versus automation guidance
- profile-aware setup and config
- diagnostics such as `status` and `doctor`
- searchable local session history and session resume workflows
- MCP client setup patterns, server usage, focused toolsets, resources, and prompts
- headless diagram image download commands and auth boundaries
- common workflows
- real import examples for local ARM JSON and GitHub template URLs
- GitHub URL project creation and redirect behavior
- visual review concepts for architecture maps, dependency views, reports, issue lists, CLI handoff, and MCP setup
- agent usage rules
- data grounding expectations
- limitations and freshness guidance
- public URLs such as the sitemap and search index
How they are discovered
CloudEval publishes these files as public URLs on the product site:
- https://cloudeval.ai/llms.txt
- https://cloudeval.ai/llms-full.txt
- https://cloudeval.ai/search-index.json
- https://cloudeval.ai/sitemap.xml
- https://cloudeval.ai/robots.txt
The sitemap lists `llms.txt` and `llms-full.txt`, and the robots file points agents at both endpoints directly.
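Because the sitemap lists both files, an agent can discover them without hardcoding paths. A minimal sketch, assuming a standard sitemaps.org XML payload (the sample below is illustrative, not CloudEval's real sitemap):

```python
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def llms_urls_from_sitemap(xml_text: str) -> list[str]:
    """Extract the llms endpoints from a sitemap.xml payload."""
    root = ET.fromstring(xml_text)
    locs = [el.text.strip() for el in root.findall(".//sm:loc", NS) if el.text]
    return [u for u in locs if "llms" in u]

# Illustrative sitemap fragment; fetch the real one from the public URL.
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://cloudeval.ai/llms.txt</loc></url>
  <url><loc>https://cloudeval.ai/llms-full.txt</loc></url>
  <url><loc>https://cloudeval.ai/pricing</loc></url>
</urlset>"""

print(llms_urls_from_sitemap(sample))
# ['https://cloudeval.ai/llms.txt', 'https://cloudeval.ai/llms-full.txt']
```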
When to use each file
Start with `llms.txt` when you want:
- a short product summary
- the main public URLs
- the current high-level CLI surface
- a quick pointer to the detailed file
Use `llms-full.txt` when you want:
- full CLI examples
- terminal UI guidance
- profile, diagnostics, sessions, and MCP details
- MCP setup patterns for Codex, Cursor, Claude Code, VS Code, generic JSON-configured clients, and Ollama-powered agent hosts
- auth and automation guidance
- ASK versus AGENT usage expectations
- grounding and trust model details
- current limitations and support boundaries
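The two lists above amount to a routing rule an agent can apply mechanically. A hedged sketch; the keyword tags are hypothetical labels for the needs listed above, not an API CloudEval exposes:

```python
def pick_context_file(needs: set[str]) -> str:
    """Route to llms.txt or llms-full.txt based on what the caller needs.
    The tag names are illustrative shorthand for the lists in the docs."""
    deep = {"cli-examples", "terminal-ui", "mcp", "auth",
            "sessions", "grounding", "limitations"}
    return "llms-full.txt" if needs & deep else "llms.txt"

print(pick_context_file({"summary", "urls"}))  # llms.txt
print(pick_context_file({"mcp", "sessions"}))  # llms-full.txt
```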
What these files do not replace
These files are helpful context, but they are not the only source of truth.
- Use `cloudeval capabilities --format json` before relying on a command in automation.
- Use Create a project from a GitHub URL for the browser-first URL import flow.
- Use MCP client setup when connecting Codex, Cursor, Claude Code, VS Code, or another MCP-compatible client.
- Use Headless diagram image downloads when an agent needs architecture or dependency image bytes directly.
- Prefer current app behavior over an older copied example.
- Do not assume provider parity just because a provider name appears in the UI or a file.
- Treat the `Last updated` field as a hint, not a guarantee that every example matches the current release.
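The capability check above can be wired into automation as a simple gate. A sketch under an assumed payload shape (a top-level `"commands"` list); the real schema of `cloudeval capabilities --format json` may differ, so verify against actual output:

```python
import json

def command_supported(capabilities_json: str, command: str) -> bool:
    """Gate a command behind the parsed `cloudeval capabilities
    --format json` output before using it in automation.
    The payload shape here is an assumption, not the documented schema."""
    payload = json.loads(capabilities_json)
    return command in payload.get("commands", [])

# Hypothetical payload for illustration only.
sample = '{"commands": ["status", "doctor", "import"]}'
print(command_supported(sample, "doctor"))  # True
print(command_supported(sample, "export"))  # False
```

Failing closed (returning `False` when the key is missing) matches the guidance to prefer current app behavior over copied examples.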
Public boundaries
Thellms files are public documentation. They should describe what exists today, call out limitations honestly, and avoid exposing private operational detail.
That means:
- no speculative product claims
- no hidden APIs presented as public surface area
- no unsupported provider promises
