CloudEval chat works best when you scope your request and ask for a concrete output format.

Why it matters

Good prompts shorten analysis loops and reduce noisy, generic responses.

Quick example

Objective: security review
Scope: internet-facing resources
Output: top 5 risks with one-line remediations
Expected output: A ranked list of risks tied to resources in your project context.

Step-by-step instructions

  1. Start broad: ask for an architecture summary.
  2. Narrow scope: choose one subsystem or risk area.
  3. Request a fixed output format.
  4. Follow up with implementation-level questions.
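The objective + scope + output pattern behind these steps can be sketched as a small prompt-composition helper. This is a minimal sketch; the `build_prompt` function and its field names are illustrative, not part of CloudEval:

```python
def build_prompt(objective: str, scope: str, output: str) -> str:
    """Compose a single-message prompt in the objective + scope + output pattern."""
    return f"Objective: {objective}; Scope: {scope}; Output: {output}"

# Step 1: start broad.
broad = build_prompt("architecture summary", "whole project", "one-paragraph overview")

# Steps 2-3: narrow the scope and fix the output format.
narrow = build_prompt(
    "security review",
    "internet-facing resources",
    "top 5 risks with one-line remediations",
)

print(narrow)
# → Objective: security review; Scope: internet-facing resources; Output: top 5 risks with one-line remediations
```

The narrowed prompt is exactly what you would pass to cloudeval ask; step 4's follow-ups can then reference the same thread without restating scope.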

Code examples

cloudeval ask "Objective: cost review; Scope: database resources; Output: table with likely savings" --project <project_id>
cloudeval ask "List security findings with severity and next action" --project <project_id> --json

Expected output

  • Answers reference project entities.
  • Follow-ups reuse earlier context in the same thread/workflow.
  • --json mode returns structured results for automation.
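Because --json returns structured results, a short script can filter them in an automation step. The schema below (a findings list with severity and next_action fields) is an assumption for illustration, not CloudEval's documented output format:

```python
import json

# Hypothetical `cloudeval ask ... --json` output; the field names
# here are assumptions, not CloudEval's documented schema.
raw = """
{
  "findings": [
    {"resource": "db-prod", "severity": "high", "next_action": "restrict ingress"},
    {"resource": "cache-01", "severity": "low", "next_action": "rotate credentials"}
  ]
}
"""

data = json.loads(raw)

# Keep only high-severity findings, e.g. for a paging workflow.
high = [f for f in data["findings"] if f["severity"] == "high"]
for f in high:
    print(f"{f['resource']}: {f['next_action']}")
# → db-prod: restrict ingress
```

In practice you would read the JSON from the command's stdout rather than a literal string; the filtering step is the same.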

Common mistakes

  • Asking “analyze everything” without scope.
  • Forgetting to specify desired format.
  • Using chat as a replacement for policy sign-off.

Tips / best practices

  • Include objective + scope + output in one message.
  • Ask for confidence or uncertainty when needed.
  • Save high-signal prompts in team playbooks.
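A team playbook can be as simple as a shared file of named prompts. The structure below is only a sketch reusing prompts from this page; CloudEval does not prescribe a playbook format:

```json
{
  "security-top5": "Objective: security review; Scope: internet-facing resources; Output: top 5 risks with one-line remediations",
  "cost-db": "Objective: cost review; Scope: database resources; Output: table with likely savings"
}
```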
Last modified on March 5, 2026