Evaluator
Score content based on custom criteria using AI.
The Evaluator node uses AI to score content on a scale of 0-100 based on custom criteria you define. It's ideal for quality assessment, relevance scoring, and content evaluation.
Overview
Use the Evaluator node when you need to:
- Score content quality or relevance
- Assess how well content meets specific criteria
- Rank or prioritize items based on scores
- Filter content based on quality thresholds
Configuration
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | The AI model to use for evaluation. |
| criteria | string | Yes | The scoring criteria. Be specific about what constitutes high and low scores. |
| includeJustification | boolean | No | When enabled, returns an explanation for the score. Default: false. |
Loop Mode
The Evaluator node supports Loop Mode for batch evaluation:
| Field | Type | Default | Description |
|---|---|---|---|
loopMode | boolean | false | Enable to evaluate each item in an array input separately. |
maxIterations | number | 100 | Maximum number of iterations when loop mode is enabled. |
concurrency | number | 1 | Number of parallel evaluations. |
onError | string | "stop" | Error handling: "stop" or "continue". |
Inputs
The Evaluator node accepts inputs from connected upstream nodes. All connected inputs are automatically concatenated and evaluated together.
Output
| Variable | Type | Description |
|---|---|---|
| score | number | A score from 0-100 based on the criteria. |
| justification | string | (Optional) Explanation for the score. Only present when includeJustification is enabled. |
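Because justification is only present when includeJustification is enabled, anything consuming the output should treat it as optional. A minimal Python sketch, assuming the node's output arrives as a plain dict:

```python
def summarize_evaluation(output: dict) -> str:
    """Format an Evaluator result, tolerating a missing justification."""
    score = output["score"]  # always present, 0-100
    # Only present when includeJustification is enabled:
    justification = output.get("justification")
    if justification:
        return f"{score}/100 - {justification}"
    return f"{score}/100"

print(summarize_evaluation({"score": 85, "justification": "Clear and relevant."}))
print(summarize_evaluation({"score": 42}))
```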
Example Use Cases
Content Quality Assessment
Criteria:
Score based on:
- Clarity (0-30): Is the content easy to understand?
- Relevance (0-40): Does it address the topic appropriately?
- Accuracy (0-30): Is the information correct and well-supported?
Input:
Machine learning is a subset of artificial intelligence that enables systems to learn from data...
Output:
{
"score": 85,
"justification": "The content is clear and well-written (28/30), highly relevant to the topic (38/40), and accurate with good explanations (19/30)."
}
Email Response Quality
Criteria:
Evaluate the email response on:
- Professionalism (0-25): Appropriate tone and language
- Completeness (0-35): Addresses all questions asked
- Helpfulness (0-25): Provides actionable information
- Clarity (0-15): Easy to understand and follow
Lead Scoring
Criteria:
Score the lead based on:
- Intent signals (0-40): Keywords indicating purchase intent
- Company fit (0-30): Company size, industry, and relevance
- Engagement level (0-30): Interaction history and responsiveness
Resume Screening
Criteria:
Evaluate the resume for a Software Engineer position:
- Technical skills match (0-35): Relevant programming languages and technologies
- Experience level (0-30): Years and quality of relevant experience
- Education (0-15): Relevant degrees and certifications
- Communication (0-20): Clarity and presentation of information
Best Practices
- Define clear point allocations: Break down your criteria into components with specific point values that add up to 100.
- Be specific about thresholds: Describe what constitutes a high score (80+), medium score (50-79), and low score (0-49) for your use case.
- Include examples in criteria: When possible, provide examples of what would score high or low.
- Enable justification for transparency: The justification helps you understand and validate the scoring logic.
- Use consistent criteria: For comparable scores across different inputs, keep the criteria consistent.
- Calibrate with test data: Run several examples through the evaluator to ensure scores align with your expectations.
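The calibration step can be as simple as a script that runs labeled examples through the evaluator and flags scores that drift too far from expectations. A sketch in Python, where `evaluate` is a hypothetical stand-in for a call to the Evaluator node, not part of the product API:

```python
def check_calibration(evaluate, labeled_examples, tolerance=10):
    """Compare evaluator scores against expected scores for labeled test data.

    labeled_examples: list of (text, expected_score) pairs.
    Returns the examples whose score deviates by more than `tolerance` points.
    """
    drifted = []
    for text, expected in labeled_examples:
        actual = evaluate(text)  # stand-in for invoking the Evaluator node
        if abs(actual - expected) > tolerance:
            drifted.append((text, expected, actual))
    return drifted

# Example with a fake evaluator that always returns 70:
examples = [("good article", 85), ("weak draft", 65)]
print(check_calibration(lambda text: 70, examples))  # → [('good article', 85, 70)]
```

If many examples drift in the same direction, tighten the criteria (point allocations, threshold descriptions) rather than adjusting expectations.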
Combining with Other Nodes
The Evaluator node works well with:
- Router: Use the score to route content (e.g., scores > 80 go to "approved", < 50 go to "review")
- Classifier: Combine evaluation with classification for multi-dimensional analysis
- Extract Data: Evaluate extracted information quality
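The Router pairing above amounts to simple threshold logic on the score output. A Python sketch of that rule, where the branch names mirror the example and the middle branch is an assumed fall-through, not a fixed API:

```python
def route_by_score(score: int) -> str:
    """Map an Evaluator score (0-100) onto the routing branches from the example."""
    if score > 80:
        return "approved"
    if score < 50:
        return "review"
    return "default"  # assumed branch for scores 50-80

print(route_by_score(92))  # → approved
print(route_by_score(30))  # → review
```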