Validation
AgentLed validates workflows in two layers: structural checks that run before a workflow is published, and runtime data checks that gate individual steps during execution.
Publish-time Validation
Before a workflow can be published, the validation engine checks the entire pipeline structure. It runs automatically when you click Publish, or on demand via the validate_workflow MCP tool.
Schema & Required Fields
Every step must have a valid id and type. Required inputs on app actions must be wired. Unknown fields are flagged as warnings.
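The split between errors and warnings can be sketched as follows. This is a minimal illustration, not the engine's real API — the names `checkStepSchema` and `KNOWN_FIELDS` are invented for the example:

```typescript
// Hypothetical sketch of the schema pass: a missing id or type is an
// error; a field outside the known set is only a warning.
type StepConfig = Record<string, unknown>;

const KNOWN_FIELDS = new Set(["id", "type", "inputs", "entryConditions"]);

function checkStepSchema(step: StepConfig) {
  const errors: string[] = [];
  const warnings: string[] = [];
  if (typeof step.id !== "string" || step.id === "") errors.push("missing id");
  if (typeof step.type !== "string") errors.push("missing type");
  for (const key of Object.keys(step)) {
    if (!KNOWN_FIELDS.has(key)) warnings.push(`unknown field "${key}"`);
  }
  return { errors, warnings };
}
```

A step with an unrecognized field would pass with a warning, while a step with no `id` would fail outright.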
Template Variable References
Every {{steps.stepId.field}} reference is checked: the step ID must exist and precede the referencing step in execution order.
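In essence, the check walks steps in execution order and rejects any reference to a step that hasn't been seen yet. A minimal sketch, with an invented `checkTemplateRefs` helper (not the engine's actual implementation):

```typescript
// Hypothetical sketch: a {{steps.stepId.field}} reference is valid only
// if stepId was declared by an earlier step in execution order.
type Step = { id: string; inputs?: Record<string, string> };

const TEMPLATE_REF = /\{\{steps\.([A-Za-z0-9_]+)\.[A-Za-z0-9_.]+\}\}/g;

function checkTemplateRefs(steps: Step[]): string[] {
  const errors: string[] = [];
  const seen = new Set<string>();
  for (const step of steps) {
    for (const value of Object.values(step.inputs ?? {})) {
      for (const match of value.matchAll(TEMPLATE_REF)) {
        const refId = match[1];
        if (!seen.has(refId)) {
          errors.push(`${step.id}: references "${refId}", which does not exist or runs later`);
        }
      }
    }
    seen.add(step.id);
  }
  return errors;
}
```

Forward references fail here even if the step ID exists, because the referenced value would not yet be populated when the referencing step runs.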
App Configuration
The referenced action ID must exist in the app registry. Required inputs must be present. Credentials are not validated at publish time — they are checked at execution.
Knowledge Graph Links
If a step references a Knowledge List by key, the list must exist in the workspace. Referenced fields must be in the list schema.
AI Model Availability
The model specified in AI steps must be supported, the provider must be available, and the model tier must be accessible under the workspace plan.
Runtime Data Validation
Data validation during execution is done through entry conditions on steps. These act as gates: if the data doesn't meet the criteria, the step skips, stops, or waits.
// Gate: only proceed if score is in valid range
entryConditions: {
  onCriteriaFail: "stop",
  criteria: [
    { variable: "{{steps.score.value}}", operator: ">=", value: 0 },
    { variable: "{{steps.score.value}}", operator: "<=", value: 100 }
  ]
}
// Gate: only proceed if email field is present
entryConditions: {
  onCriteriaFail: "skip",
  criteria: [
    { variable: "{{steps.enrich.email}}", operator: "isNotNull" }
  ]
}

See Loops & Conditionals for the full list of operators and onCriteriaFail options.
AI-Based Validation
For validation logic that can't be expressed as a simple comparison — “is this email body professional?”, “does this summary actually address the question?” — use an AI step to judge the output.
// AI validation step
{
  type: "aiAction",
  id: "quality_check",
  prompt: "Review this email draft for tone and relevance:
    Draft: {{steps.draft.body}}
    ICP: {{steps.icp.definition}}
    Return: { passed: boolean, reason: string }",
  responseStructure: {
    passed: "boolean",
    reason: "string"
  }
}
// Gate downstream steps on AI judgement
entryConditions: {
  onCriteriaFail: "stop",
  criteria: [
    { variable: "{{steps.quality_check.passed}}", operator: "==", value: true }
  ]
}

Fail Behaviors
| onCriteriaFail | When to use |
|---|---|
| skip | Optional step — data quality issue is acceptable for this item, continue the workflow. |
| stop | Hard requirement — bad data should halt the entire execution for investigation. |
| wait | Conditional timing — proceed only after an external condition is met (e.g., a field is populated by another system). |
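The skip and stop behaviors are shown above; a wait gate holds the step until the criteria pass. A sketch in the same style, where the step and field names (`crm_sync`, `owner`) are illustrative, not real identifiers:

```javascript
// Gate: hold this step until another system populates the owner field
entryConditions: {
  onCriteriaFail: "wait",
  criteria: [
    { variable: "{{steps.crm_sync.owner}}", operator: "isNotNull" }
  ]
}
```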
validate_workflow MCP Tool
Run publish-time validation without publishing, from any connected MCP client:
validate_workflow({ workflowId: "wf_investor_scoring" })
// Returns: { valid: boolean, errors: [...], warnings: [...] }

Errors block publishing. Warnings are advisory — the workflow can be published but may behave unexpectedly.
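A client consuming that result would typically gate publishing on the errors array alone. A minimal sketch — `canPublish` is an invented helper, and the result shape is assumed from the return value shown above:

```typescript
// Hypothetical handling of a validate_workflow result: errors block
// publishing, warnings are surfaced but do not block.
type ValidationResult = {
  valid: boolean;
  errors: string[];
  warnings: string[];
};

function canPublish(result: ValidationResult): boolean {
  for (const w of result.warnings) {
    console.warn(`validation warning: ${w}`);
  }
  return result.valid && result.errors.length === 0;
}
```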
Next Steps
- Loops & Conditionals — Full entry conditions reference
- Auto-Fix — Automatically recover from step failures
- Human-in-the-Loop — Gate irreversible actions behind human review
