
Evaluator

Score content based on custom criteria using AI.


The Evaluator node uses AI to score content on a scale of 0-100 based on custom criteria you define. It's ideal for quality assessment, relevance scoring, and content evaluation.

Overview

Use the Evaluator node when you need to:

  • Score content quality or relevance
  • Assess how well content meets specific criteria
  • Rank or prioritize items based on scores
  • Filter content based on quality thresholds

Configuration

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| model | string | Yes | The AI model to use for evaluation. |
| criteria | string | Yes | The scoring criteria. Be specific about what constitutes high and low scores. |
| includeJustification | boolean | No | When enabled, returns an explanation for the score. Default: false. |
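Expressed as a plain data structure, a complete configuration might look like the following. The field names come from the table above, but the dict shape, the model id, and the criteria text are placeholders, not a documented schema:

```python
# Illustrative Evaluator configuration; field names follow the table above,
# but the overall shape and the model id are placeholders, not a documented schema.
evaluator_config = {
    "model": "your-model-id",          # required: model used for scoring
    "criteria": (
        "Score based on:\n"
        "- Clarity (0-50): Is the content easy to understand?\n"
        "- Relevance (0-50): Does it address the topic appropriately?"
    ),                                 # required: point values sum to 100
    "includeJustification": True,      # optional: defaults to False
}
```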

Loop Mode

The Evaluator node supports Loop Mode for batch evaluation:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| loopMode | boolean | false | Enable to evaluate each item in an array input separately. |
| maxIterations | number | 100 | Maximum number of iterations when loop mode is enabled. |
| concurrency | number | 1 | Number of parallel evaluations. |
| onError | string | "stop" | Error handling: "stop" or "continue". |
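The loop-mode fields can be read as the parameters of a bounded, concurrent map over the input array. A minimal sketch of those semantics, where the evaluate function is a stand-in for the node's AI call (this is an interpretation of the table above, not the node's actual implementation):

```python
from concurrent.futures import ThreadPoolExecutor

def run_loop(items, evaluate, max_iterations=100, concurrency=1, on_error="stop"):
    """Evaluate each array item separately, mirroring the loop-mode fields above."""
    items = items[:max_iterations]        # maxIterations caps how many items run
    results = []
    # concurrency = number of parallel evaluations
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [(item, pool.submit(evaluate, item)) for item in items]
        for item, future in futures:
            try:
                results.append(future.result())
            except Exception as exc:
                if on_error == "stop":    # "stop": abort the whole batch
                    raise
                # "continue": record the failure and keep going
                results.append({"item": item, "error": str(exc)})
    return results
```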

Inputs

The Evaluator node accepts inputs from connected upstream nodes. All connected inputs are automatically concatenated and evaluated together.

Output

| Variable | Type | Description |
| --- | --- | --- |
| score | number | A score from 0-100 based on the criteria. |
| justification | string | (Optional) Explanation for the score. Only present when includeJustification is enabled. |
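A downstream step can branch or filter on these variables directly. Here is a hedged sketch of a quality-threshold filter over a batch of Evaluator outputs; the threshold value is an example, not a default:

```python
def filter_by_score(results, threshold=80):
    """Keep only outputs whose score meets the threshold.

    Each result is a dict shaped like the output table above:
    {"score": <0-100>, "justification": <optional str>}.
    """
    kept = []
    for result in results:
        if result["score"] >= threshold:  # score is always present
            kept.append(result)
    return kept

outputs = [
    {"score": 85, "justification": "Clear and relevant."},
    {"score": 42},  # justification omitted when includeJustification is disabled
]
high_quality = filter_by_score(outputs)  # keeps only the 85-score item
```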

Example Use Cases

Content Quality Assessment

Criteria:

Score based on:
- Clarity (0-30): Is the content easy to understand?
- Relevance (0-40): Does it address the topic appropriately?
- Accuracy (0-30): Is the information correct and well-supported?

Input:

Machine learning is a subset of artificial intelligence that enables systems to learn from data...

Output:

{
  "score": 85,
  "justification": "The content is clear and well-written (28/30), highly relevant to the topic (38/40), and accurate with good explanations (19/30)."
}

Email Response Quality

Criteria:

Evaluate the email response on:
- Professionalism (0-25): Appropriate tone and language
- Completeness (0-35): Addresses all questions asked
- Helpfulness (0-25): Provides actionable information
- Clarity (0-15): Easy to understand and follow

Lead Scoring

Criteria:

Score the lead based on:
- Intent signals (0-40): Keywords indicating purchase intent
- Company fit (0-30): Company size, industry, and relevance
- Engagement level (0-30): Interaction history and responsiveness

Resume Screening

Criteria:

Evaluate the resume for a Software Engineer position:
- Technical skills match (0-35): Relevant programming languages and technologies
- Experience level (0-30): Years and quality of relevant experience
- Education (0-15): Relevant degrees and certifications
- Communication (0-20): Clarity and presentation of information

Best Practices

  1. Define clear point allocations: Break down your criteria into components with specific point values that add up to 100.

  2. Be specific about thresholds: Describe what constitutes a high score (80+), medium score (50-79), and low score (0-49) for your use case.

  3. Include examples in criteria: When possible, provide examples of what would score high or low.

  4. Enable justification for transparency: The justification helps you understand and validate the scoring logic.

  5. Use consistent criteria: For comparable scores across different inputs, keep the criteria consistent.

  6. Calibrate with test data: Run several examples through the evaluator to ensure scores align with your expectations.
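Practice 6 (calibration) can be made concrete with a small labeled set: run each example through the evaluator and measure how far the scores drift from your expectations. A minimal sketch, assuming you can invoke the node and read its score output (the evaluate function here is a stand-in, not an API of the product):

```python
def calibration_error(examples, evaluate):
    """Mean absolute difference between expected and actual scores.

    `examples` is a list of (input_text, expected_score) pairs;
    `evaluate` is a stand-in for running the Evaluator node and
    returning its output dict.
    """
    diffs = []
    for text, expected in examples:
        actual = evaluate(text)["score"]
        diffs.append(abs(actual - expected))
    return sum(diffs) / len(diffs)
```

If the error is large, tighten the point allocations or add scoring examples to the criteria before relying on the node in production.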

Combining with Other Nodes

The Evaluator node works well with:

  • Router: Use the score to route content (e.g., scores > 80 go to "approved", < 50 go to "review")
  • Classifier: Combine evaluation with classification for multi-dimensional analysis
  • Extract Data: Evaluate extracted information quality
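The Router combination in the first bullet reduces to simple threshold branching on the score. A sketch of that routing logic, using the example thresholds above (the route names and the middle "default" band are illustrative, not fixed behavior):

```python
def route_by_score(score):
    """Map an Evaluator score to a route, per the example thresholds above."""
    if score > 80:
        return "approved"  # high quality: pass through
    if score < 50:
        return "review"    # low quality: flag for human review
    return "default"       # middle band: neither branch matched
```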
