Workflows
Workflow design is where prompting becomes an operating system instead of a bag of isolated prompts. This section covers sequencing, review boundaries, escalation logic, and reusable runbooks.
Core paths
- Operator runbooks: Design prompt flows that align with ownership, approvals, and repeatable team operations.
- Human review and approval workflows: Map approval logic by risk so teams do not recreate the old queue with more software in the middle.
- Do AI agents need human approval in production? Use this page when the team needs a direct rule for separating approval-gated actions from low-risk work that should move faster.
- When should an AI agent escalate to a human? Use this page when the team needs a direct escalation rule based on authority, consequence, evidence quality, and human ownership.
- Approval systems for coding agents: Use this page when engineering teams need a risk-based approval model for repo read, write, merge, and deploy boundaries.
- Read-only vs write-enabled coding agents: Use this page when the team needs a cleaner line between exploratory coding agents and agents that can make actual repository changes.
- Deep research briefs: Use this page when long-running research systems are producing bigger reports instead of better ones.
- Deep research source quality: Use this page when deep research quality is drifting because source quality and citation discipline are still implicit.
- Policy as code for coding-agent permissions: Use this page when coding-agent governance needs explicit permission tiers instead of reviewer intuition.
- PR checks and merge gates for coding agents: Use this page when the repository needs stronger checks before coding-agent changes can move toward merge.
- Approval latency and risk budgets: Use this page when approval systems are becoming so slow or so weak that coding-agent value starts collapsing.
- Deep research runtime budgets: Use this page when deep research runs are getting slower, more expensive, and harder to justify.
- Human escalation thresholds for deep research: Use this page when the team needs a clearer rule for when deep research should stop and hand work to a human.
- Use cases: Return to the business problem when workflow design starts drifting into tool-first complexity.
- Tooling: Choose the stack for prompt versioning, observability, and controlled rollout.
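The approval and escalation pages above share one underlying pattern: map each agent action to a risk tier, then gate anything that is not clearly low risk. A minimal sketch of that pattern follows; the action names, tier labels, and mapping are illustrative assumptions, not the taxonomy defined on any specific page.

```python
# Sketch of a risk-tiered approval gate for agent actions.
# Action names and tier assignments below are illustrative only.
from enum import Enum

class Risk(Enum):
    LOW = "low"        # read-only or trivially reversible work
    MEDIUM = "medium"  # writes that stay behind review
    HIGH = "high"      # merge, deploy, or customer-visible effects

# Illustrative policy table: action -> risk tier.
ACTION_RISK = {
    "repo.read": Risk.LOW,
    "repo.write_branch": Risk.MEDIUM,
    "repo.merge": Risk.HIGH,
    "deploy.production": Risk.HIGH,
}

def approval_required(action: str) -> bool:
    """Low-risk work moves without a human; everything else is gated."""
    # Unknown actions default to HIGH so new capabilities are gated
    # until someone classifies them.
    risk = ACTION_RISK.get(action, Risk.HIGH)
    return risk is not Risk.LOW
```

The useful property is the default: an action missing from the policy table is treated as high risk, so the gate fails closed rather than open.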
What good workflow pages should answer
- What starts the flow and what data does the system receive?
- Which steps are deterministic, which are model-driven, and where does a human intervene?
- What output is produced, where is it stored, and how is it verified?
- What happens when confidence is low, sources disagree, or policy rules are triggered?
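One way to keep those four questions honest is to make the answers explicit fields in the workflow definition itself. The sketch below assumes a hand-rolled schema; the field names and the example triage flow are illustrative, not any orchestration framework's API.

```python
# Sketch: a workflow definition that answers the four questions in one place.
# All field names and the example flow are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    kind: str                             # "deterministic", "model", or "human"
    on_low_confidence: str = "escalate"   # policy when confidence is low

@dataclass
class Workflow:
    trigger: str            # what starts the flow
    inputs: list[str]       # what data the system receives
    steps: list[Step]       # deterministic / model / human boundaries
    output_store: str       # where the output is stored
    verification: str       # how the output is verified

triage = Workflow(
    trigger="new support ticket",
    inputs=["ticket body", "customer tier"],
    steps=[
        Step("classify", kind="model"),
        Step("draft reply", kind="model", on_low_confidence="escalate"),
        Step("approve", kind="human"),
        Step("send", kind="deterministic"),
    ],
    output_store="ticket thread",
    verification="human approval before send",
)
```

Writing the flow down this way makes gaps visible: a model-driven step with no low-confidence policy, or an output with no named verification step, fails review before it fails in production.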