CloudEval chat works best when you scope your request and ask for a concrete output format.
Why it matters
Good prompts shorten analysis loops and reduce noisy, generic responses.
Quick example
Objective: security review
Scope: internet-facing resources
Output: top 5 risks with one-line remediations
Expected output:
A ranked list of risks tied to resources in your project context.
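The objective/scope/output pattern above can also be assembled programmatically before sending it to chat. The helper below is a hypothetical sketch, not part of CloudEval:

```python
def build_prompt(objective: str, scope: str, output: str) -> str:
    """Compose a structured CloudEval chat prompt (hypothetical helper)."""
    return f"Objective: {objective}; Scope: {scope}; Output: {output}"

prompt = build_prompt(
    "security review",
    "internet-facing resources",
    "top 5 risks with one-line remediations",
)
print(prompt)
```

Keeping the three parts in one string makes prompts easy to store and reuse across team members.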
Step-by-step instructions
- Start broad: ask for an architecture summary.
- Narrow scope: choose one subsystem or risk area.
- Request a fixed output format.
- Follow up with implementation-level questions.
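As an illustration, the broad-to-narrow steps above might translate into a prompt sequence like the following. The wording is hypothetical; adapt it to your project:

```python
# Hypothetical prompt sequence following the broad-to-narrow workflow.
workflow = [
    "Summarize the architecture of this project.",                    # start broad
    "Focus on the networking subsystem and its internet-facing resources.",  # narrow scope
    "Output: table of resource, risk, one-line remediation.",         # fixed format
    "For the top risk, what config change would fix it?",             # implementation-level follow-up
]
for step, prompt in enumerate(workflow, start=1):
    print(f"Step {step}: {prompt}")
```

Sending these in one thread lets each follow-up reuse the context established by the earlier steps.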
Code examples
cloudeval ask "Objective: cost review; Scope: database resources; Output: table with likely savings" --project <project_id>
cloudeval ask "List security findings with severity and next action" --project <project_id> --json
Expected output
- Answers reference project entities.
- Follow-ups reuse earlier context in the same thread/workflow.
- --json mode returns structured results for automation.
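A minimal sketch of consuming --json results in automation. The payload shape shown here (a "findings" list with "severity", "resource", and "next_action" fields) is an assumption for illustration; check the actual schema returned by your CloudEval version:

```python
import json

# Hypothetical --json payload; the real CloudEval schema may differ.
raw = """
{
  "findings": [
    {"severity": "high", "resource": "db-prod", "next_action": "restrict ingress"},
    {"severity": "low", "resource": "bucket-logs", "next_action": "enable versioning"}
  ]
}
"""

findings = json.loads(raw)["findings"]
high = [f for f in findings if f["severity"] == "high"]
print(f"{len(high)} high-severity finding(s)")
```

Structured output like this can feed ticketing systems or CI gates without scraping free-form text.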
Common mistakes
- Asking “analyze everything” without scope.
- Forgetting to specify the desired output format.
- Using chat as a replacement for policy sign-off.
Tips / best practices
- Include objective + scope + output in one message.
- Ask for confidence or uncertainty when needed.
- Save high-signal prompts in team playbooks.
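One lightweight way to keep a team playbook is a named map of high-signal prompts, as in this hypothetical sketch (the entries reuse the prompts from the examples above):

```python
# Minimal sketch of a team playbook: named, reusable high-signal prompts.
PLAYBOOK = {
    "cost-review": (
        "Objective: cost review; Scope: database resources; "
        "Output: table with likely savings"
    ),
    "security-top5": (
        "Objective: security review; Scope: internet-facing resources; "
        "Output: top 5 risks with one-line remediations"
    ),
}

def get_prompt(name: str) -> str:
    """Look up a saved prompt by its playbook name."""
    return PLAYBOOK[name]
```

Version-controlling this file alongside your project keeps prompts reviewable and shared.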
Last modified on March 5, 2026