
Documentation Index

Fetch the complete documentation index at: https://docs.cloudeval.ai/llms.txt

Use this file to discover all available pages before exploring further.

This page is for developers building scripts, internal tools, or agents on top of CloudEval.

Start with safe defaults

Use these defaults unless you have a good reason not to:
  • cloudeval capabilities --format json
  • cloudeval doctor --format json
  • cloudeval doctor --mcp --format json before MCP client setup
  • --format json
  • --non-interactive
  • --profile <name>
  • --print-url --no-open
  • stored cloudeval login or cloudeval login --headless credentials, or --machine when service-principal credentials are configured
  • --output <file> when the result must be persisted
These defaults reduce ambiguity and make the CLI easier to compose with other systems.
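As a rough baseline, a single automation call that composes these defaults might look like the sketch below, assuming a profile named automation was already created with cloudeval setup and the result should be persisted:
cloudeval status \
  --profile automation \
  --format json \
  --non-interactive \
  --output status.json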

Stdout and stderr contract

For machine-readable commands:
  • stdout is the data channel
  • stderr is for prompts, warnings, auth flow text, and browser-open messages
Do not parse terminal UI output. If you need automation, use explicit subcommands such as setup, config, doctor, status, models, sessions, projects, reports, ask, connections, billing, or open.
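For example, a script can keep the two channels separate with ordinary redirection; the file names here are only placeholders:
# doctor.json receives only the JSON envelope; doctor.log receives warnings and auth flow text
cloudeval doctor --format json --non-interactive > doctor.json 2> doctor.log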

Profiles for agents

Use named profiles when multiple agents, environments, or workspaces share the same host. A profile can hold default backend URL, frontend URL, project, model, and output preferences.
cloudeval setup \
  --non-interactive \
  --profile codex \
  --project <project-id> \
  --model gpt-5-nano \
  --format json
Then pass the same profile to automation commands:
cloudeval ask "Summarize project risk" \
  --profile codex \
  --format json \
  --non-interactive
Explicit flags still override profile defaults, so scripts can pin a project or model for one run without changing the stored profile.
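As a sketch, and assuming ask accepts the same --project override shown for setup, one run can pin a different project while the stored codex profile supplies the rest:
cloudeval ask "Summarize project risk" \
  --profile codex \
  --project <project-id> \
  --format json \
  --non-interactive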

MCP server for agents

Use cloudeval mcp serve when your agent framework already supports MCP and you want CloudEval as a live tool server instead of a shell command wrapper. Check local MCP discovery first:
cloudeval mcp status --format json
cloudeval doctor --mcp --format json
Example client configuration:
{
  "mcpServers": {
    "cloudeval": {
      "command": "cloudeval",
      "args": ["mcp", "serve"]
    }
  }
}
CloudEval can also generate setup guidance for common clients:
cloudeval mcp setup codex --dry-run
cloudeval mcp setup claude --dry-run
cloudeval mcp setup cursor --dry-run
cloudeval mcp setup generic --dry-run --toolset readonly --format json
Use generic for MCP-compatible clients that expect an mcpServers JSON entry. For Ollama-powered agents, point the MCP host that Ollama launches at the generated CloudEval stdio entry. Use focused toolsets when an agent only needs part of the CloudEval surface:
cloudeval mcp serve --toolset readonly
cloudeval mcp serve --toolset projects
cloudeval mcp serve --toolset reports
cloudeval mcp serve --toolset billing
Important rules:
  • The server uses stdio.
  • Authenticate with stored cloudeval login credentials, stored cloudeval login --headless credentials, or --machine.
  • Run login before starting mcp serve; stdin is reserved for MCP protocol messages.
  • Treat MCP tool results as the same CloudEval data contract you would expect from the CLI: stable envelopes, returned IDs, and explicit errors.
  • Prefer focused MCP toolsets for assistants that should only inspect projects, reports, billing, or read-only data.
  • MCP clients that support resources and prompts can discover CloudEval capabilities, project context, billing summaries, latest reports, and review-oriented prompt templates.
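Putting a few of these rules together, a minimal launch sequence is sketched below: authenticate first so stdin stays free for protocol messages, then start a read-only server.
# authenticate before starting the server; use cloudeval login --headless on remote hosts
cloudeval login
cloudeval mcp serve --toolset readonly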

Stable JSON envelope

CloudEval uses a stable JSON envelope for machine-readable success and error responses:
{
  "ok": true,
  "command": "projects create",
  "data": {},
  "frontendUrl": "https://cloudeval.ai/app/projects/..."
}
{
  "ok": false,
  "command": "reports run",
  "error": { "message": "Authentication required" }
}
Some commands also include fields such as:
  • warnings
  • filesWritten
  • traceId
When you use ndjson output, arrays are emitted as one JSON object per line instead of a single wrapped array payload.
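A minimal sketch of consuming the envelope from a shell script, assuming jq is available; the .ok, .data, and .error.message paths come from the envelope shown above:
result="$(cloudeval capabilities --format json --non-interactive)"
if [ "$(printf '%s' "$result" | jq -r '.ok')" = "true" ]; then
  printf '%s' "$result" | jq '.data'
else
  printf '%s' "$result" | jq -r '.error.message' >&2
fi
For ndjson output, read one line at a time and apply the same check to each object.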

Ask mode vs agent mode

CloudEval supports two practical usage patterns:
  • ASK mode is best for one grounded answer, usually through cloudeval ask.
  • AGENT mode is best for multi-step workflows that may inspect projects, run reports, open deeplinks, or create CloudEval artifacts when explicitly requested.
Important guardrails:
  • ASK flows should stay read-first and should not silently create or change CloudEval artifacts.
  • AGENT workflows can be broader, but they still need explicit intent before taking write actions.
  • Do not claim CloudEval mutates customer cloud infrastructure unless a separately verified feature explicitly supports that behavior.

Session continuity for agents

Successful ask runs create local, profile-scoped session history. Use it when an agent needs to find or continue recent CloudEval work on the same machine.
cloudeval sessions search "cost review" --profile codex --format json
cloudeval sessions rename <thread-id> "Cost review" --profile codex --format json
cloudeval chat --resume "Cost review" --profile codex
cloudeval ask "Continue the same investigation" --thread <thread-id> --profile codex --format json --non-interactive
Rules for session use:
  • Session history is local to the machine and scoped by profile.
  • Use sessions search before assuming a thread ID.
  • Use sessions rename to make important threads easy to find later.
  • Use ask --thread only when a one-shot follow-up should stay attached to an existing conversation.
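A hedged sketch of that pattern follows; jq is assumed to be installed, and the .data[0].threadId path is an assumption about the sessions search envelope that should be verified against real output before relying on it:
# the jq path below is hypothetical; confirm the field name from actual output
thread_id="$(cloudeval sessions search "cost review" --profile codex --format json | jq -r '.data[0].threadId')"
cloudeval ask "Continue the same investigation" \
  --thread "$thread_id" \
  --profile codex \
  --format json \
  --non-interactive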

Grounding model

CloudEval answers are expected to be grounded in the data the product actually has access to. That can include:
  • project metadata
  • connection metadata
  • ARM template content, including templates derived from Bicep
  • resource graph and diagram relationships
  • saved cost reports
  • saved architecture or Well-Architected reports
  • report history and trend data where available
  • pricing and product metadata
  • chat thread history
  • local CLI session history for one-shot ask runs
If the evidence is missing, the right behavior is to say what is missing and suggest the next useful command.

Authentication and permissions

  • cloudeval login uses a browser-based login flow.
  • cloudeval login --headless uses a device-code flow for headless sessions.
  • Browser-based CLI login is restricted to loopback redirect targets on the local machine.
  • Use cloudeval login --headless for SSH, containers, or remote terminals.
  • Use stored login state or --machine when you run cloudeval mcp serve.
  • Always use IDs returned by CloudEval responses. Do not guess project, report, connection, or thread IDs.
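For example, a remote or containerized agent can complete the device-code flow once and then confirm its environment before doing any real work; this is only a sketch of the order of operations:
cloudeval login --headless
cloudeval doctor --format json --non-interactive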

Practical limits

  • Azure is the primary supported provider today.
  • ARM JSON is the strongest current IaC path.
  • AWS and GCP should not be treated as full-parity live sync or reporting paths unless current capabilities confirm it.
  • Diagram freshness depends on the latest successful import or sync.
  • Cost outputs can be estimates, not final billing truth.
  • Architecture and security findings are evaluations, not compliance attestations.
  • Some browser workflows are still easier or only available in the web app.
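If provider parity matters to your integration, check it against live capabilities instead of assuming it. The .data.providers path below is hypothetical; replace it with whatever field the real capabilities envelope exposes:
cloudeval capabilities --format json --non-interactive | jq '.data.providers'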

Verification before production use

Before shipping a new integration:
  1. Run cloudeval capabilities --format json.
  2. Run cloudeval doctor --format json for the profile or environment you will use.
  3. Run cloudeval doctor --mcp --format json if an MCP client is part of the workflow.
  4. Test the exact commands you plan to automate with --format json --non-interactive.
  5. Confirm the target project, report, connection, or thread IDs come from CloudEval output.
  6. Check that your workflow handles auth-required, backend-unavailable, and not-found failures cleanly.
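A sketch of steps 1 through 3 as a single preflight script, assuming jq is installed and relying on the .ok and .error.message fields from the stable envelope; treat it as a starting point, not a definitive implementation:
#!/usr/bin/env bash
set -uo pipefail

check() {
  # Run a CloudEval command and fail fast if the envelope reports ok=false.
  out="$("$@")" || true
  if [ "$(printf '%s' "$out" | jq -r '.ok')" != "true" ]; then
    printf '%s' "$out" | jq -r '.error.message // "unknown error"' >&2
    exit 1
  fi
}

check cloudeval capabilities --format json --non-interactive
check cloudeval doctor --format json --non-interactive
check cloudeval doctor --mcp --format json --non-interactive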

Next step

Use llms.txt and llms-full.txt for the public context files, or the CLI command reference for the exact command groups and flags.