# Agent Judge

The AgentJudge is a specialized agent designed to evaluate and judge outputs from other agents or systems. It acts as a quality control mechanism, providing objective assessments and feedback on various types of content, decisions, or outputs. This implementation is based on the research paper "Agent-as-a-Judge: Evaluate Agents with Agents".

## Research Background

The AgentJudge implementation is inspired by recent research in LLM-based evaluation systems. Key findings from the research include:

- LLMs can effectively evaluate other LLM outputs with high accuracy
- Multi-agent evaluation systems can provide more reliable assessments
- Structured evaluation criteria improve consistency
- Context-aware evaluation leads to better results

## Overview

The AgentJudge serves as an impartial evaluator that can:

- Assess the quality and correctness of agent outputs
- Provide structured feedback and scoring
- Generate detailed analysis reports

## Architecture

```mermaid
graph TD
    A[Input Tasks] --> B[AgentJudge]
    B --> C[Agent Core]
    C --> D[LLM Model]
    D --> E[Response Generation]
    E --> F[Context Management]
    F --> G[Output]

    subgraph "Evaluation Flow"
    H[Task Analysis] --> I[Quality Assessment]
    I --> J[Feedback Generation]
    J --> K[Score Assignment]
    end

    B --> H
    K --> G
```

## Configuration

### Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `model_name` | str | "openai/o1" | LLM model to use for evaluation |
| `max_loops` | int | 1 | Maximum number of evaluation iterations |

### Methods

| Method | Description | Parameters | Returns |
|--------|-------------|------------|---------|
| `step()` | Processes a single batch of tasks | `tasks: List[str]` | `str` |
| `run()` | Executes multiple evaluation iterations | `tasks: List[str]` | `List[str]` |

## Usage

### Basic Example

```python
from swarms import AgentJudge

# Initialize the judge
judge = AgentJudge(
    model_name="gpt-4o",
    max_loops=1
)

# Example outputs to evaluate
outputs = [
    "1. Agent CalculusMaster: After careful evaluation, I have computed the integral of the polynomial function. The result is ∫(x^2 + 3x + 2)dx = (1/3)x^3 + (3/2)x^2 + 5, where I applied the power rule for integration and added the constant of integration.",
    "2. Agent DerivativeDynamo: In my analysis of the function sin(x), I have derived it with respect to x. The derivative is d/dx (sin(x)) = cos(x). However, I must note that the additional term '+ 2' is not applicable in this context as it does not pertain to the derivative of sin(x).",
    "3. Agent LimitWizard: Upon evaluating the limit as x approaches 0 for the function (sin(x)/x), I conclude that lim (x -> 0) (sin(x)/x) = 1. The additional '+ 3' is incorrect and should be disregarded as it does not relate to the limit calculation.",
    "4. Agent IntegralGenius: I have computed the integral of the exponential function e^x. The result is ∫(e^x)dx = e^x + C, where C is the constant of integration. The extra '+ 1' is unnecessary and does not belong in the final expression.",
    "5. Agent FunctionFreak: Analyzing the cubic function f(x) = x^3 - 3x + 2, I determined that it has a maximum at x = 1. However, the additional '+ 2' is misleading and should not be included in the maximum value statement.",
]

# Run evaluation
results = judge.run(outputs)
print(results)
```

## Applications

### Code Review Automation

!!! success "Features"
    - Evaluate code quality
    - Check for best practices
    - Assess documentation completeness

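As a rough sketch of how a review task might be framed, the code under review can be embedded directly in the task string passed to `run()`. The snippet and the wording of the task are illustrative, not a fixed API:

```python
from swarms import AgentJudge

judge = AgentJudge(model_name="gpt-4o", max_loops=1)

# Hypothetical snippet under review; any code can be embedded in the task string.
code_snippet = '''
def add(a, b):
    return a + b
'''

review_task = (
    "Review the following Python function for correctness, style, and "
    "documentation completeness:\n" + code_snippet
)

feedback = judge.run([review_task])
print(feedback)
```
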
### Content Quality Control

!!! info "Use Cases"
    - Review marketing copy
    - Validate technical documentation
    - Assess user support responses

### Decision Validation

!!! warning "Applications"
    - Evaluate business decisions
    - Assess risk assessments
    - Review compliance reports

### Performance Assessment

!!! tip "Metrics"
    - Evaluate agent performance
    - Assess system outputs
    - Review automated processes

## Best Practices

### Task Formulation

1. Provide clear, specific evaluation criteria
2. Include context when necessary
3. Structure tasks for consistent evaluation

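To illustrate the guidance above, a well-formed evaluation task can state the criteria and the relevant context directly in the task string. This is a sketch; the exact phrasing is up to you:

```python
from swarms import AgentJudge

judge = AgentJudge(model_name="gpt-4o", max_loops=1)

# Criteria and context are spelled out in the task itself (illustrative wording).
task = (
    "Evaluate the following support reply for factual accuracy, tone, and completeness. "
    "Context: the customer asked how to reset their password. "
    "Reply: 'Click Forgot Password on the login page and follow the emailed link.'"
)

print(judge.run([task]))
```
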
### System Configuration

1. Use a model appropriate to the task's complexity
2. Adjust `max_loops` based on the evaluation depth needed
3. Customize the system prompt for specific use cases

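A minimal configuration sketch for a deeper evaluation, assuming the constructor also accepts a `system_prompt` argument for customizing the judge's instructions; only `model_name` and `max_loops` appear in the parameter table above, so treat `system_prompt` as an assumption:

```python
from swarms import AgentJudge

# Deeper evaluation: a stronger model, more iterations, and a task-specific
# system prompt (the system_prompt parameter is an assumption, not documented above).
judge = AgentJudge(
    model_name="openai/o1",
    max_loops=3,
    system_prompt="You are a strict reviewer of financial compliance reports.",
)
```
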
### Output Management

1. Store evaluation results systematically
2. Track evaluation patterns over time
3. Use results for continuous improvement

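One simple way to store results systematically is to append each evaluation run to a timestamped JSON Lines file. This is just a sketch, not part of the AgentJudge API:

```python
import json
from datetime import datetime, timezone

from swarms import AgentJudge

judge = AgentJudge(model_name="gpt-4o", max_loops=1)
outputs = ["Agent output to evaluate..."]  # placeholder content

results = judge.run(outputs)

# Append one record per evaluation run so patterns can be tracked over time.
record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "tasks": outputs,
    "evaluations": results,
}
with open("evaluations.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```
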
### Integration Tips

1. Implement as part of CI/CD pipelines
2. Use for automated quality gates
3. Integrate with monitoring systems

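As a sketch of a quality gate in a CI job, the judge's feedback can be scanned for a marker and the process exited with a non-zero status on failure. The "FAIL" convention here is purely illustrative and would come from your own system prompt or criteria:

```python
import sys

from swarms import AgentJudge

judge = AgentJudge(model_name="gpt-4o", max_loops=1)

# Outputs produced earlier in the pipeline (placeholder content).
pipeline_outputs = ["Generated release notes to check..."]

results = judge.run(pipeline_outputs)

# Naive gate: fail the build if any evaluation contains the marker "FAIL".
if any("FAIL" in r for r in results):
    print("Quality gate failed:", results)
    sys.exit(1)

print("Quality gate passed.")
```
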
## Implementation Guide

### Step 1: Setup

```python
from swarms import AgentJudge

# Initialize with custom parameters
judge = AgentJudge(
    agent_name="custom-judge",
    model_name="gpt-4",
    max_loops=3
)
```

### Step 2: Configure Evaluation Criteria

```python
# Define evaluation criteria
criteria = {
    "accuracy": 0.4,
    "completeness": 0.3,
    "clarity": 0.3
}

# Set criteria
judge.set_evaluation_criteria(criteria)
```

### Step 3: Run Evaluations

```python
# Outputs to evaluate (placeholder content)
tasks = [
    "Agent output to evaluate...",
]

# Process a single batch of tasks
result = judge.step(tasks)

# Run multiple evaluation iterations over the batch
results = judge.run(tasks)
```

## Troubleshooting

### Common Issues

??? question "Evaluation Inconsistencies"
    If you notice inconsistent evaluations:

    1. Check the evaluation criteria
    2. Verify the model configuration
    3. Review the input format

??? question "Performance Issues"
    For slow evaluations:

    1. Reduce `max_loops`
    2. Optimize batch size
    3. Consider model selection

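To illustrate the batch-size point above, a large set of outputs can be split into smaller chunks and evaluated chunk by chunk. The chunk size and the placeholder outputs are arbitrary:

```python
from swarms import AgentJudge

# Keep max_loops low when throughput matters.
judge = AgentJudge(model_name="gpt-4o", max_loops=1)

outputs = [f"Agent output {i}" for i in range(20)]  # placeholder outputs

batch_size = 5  # arbitrary chunk size
all_results = []
for start in range(0, len(outputs), batch_size):
    batch = outputs[start:start + batch_size]
    all_results.extend(judge.run(batch))

print(len(all_results))
```
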
## References

1. "Agent-as-a-Judge: Evaluate Agents with Agents" - [Paper Link](https://arxiv.org/abs/2410.10934)

```bibtex
@misc{zhuge2024agentasajudgeevaluateagentsagents,
    title={Agent-as-a-Judge: Evaluate Agents with Agents},
    author={Mingchen Zhuge and Changsheng Zhao and Dylan Ashley and Wenyi Wang and Dmitrii Khizbullin and Yunyang Xiong and Zechun Liu and Ernie Chang and Raghuraman Krishnamoorthi and Yuandong Tian and Yangyang Shi and Vikas Chandra and Jürgen Schmidhuber},
    year={2024},
    eprint={2410.10934},
    archivePrefix={arXiv},
    primaryClass={cs.AI},
    url={https://arxiv.org/abs/2410.10934},
}
```
