Merge branch 'master' into agent_judge0717

pull/958/head
王祥宇 committed 5 days ago via GitHub
commit b08d57931c

@@ -41,7 +41,7 @@ jobs:
# Step 4: Cache dependencies to speed up subsequent runs.
- name: Load cached venv
id: cached-poetry-dependencies
uses: actions/cache@v3
uses: actions/cache@v4
with:
path: .venv
key: venv-${{ runner.os }}-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }}
@@ -83,7 +83,7 @@ jobs:
# This happens even if the previous steps fail, allowing you to debug.
- name: Upload Test Report
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: test-report-${{ matrix.python-version }}
path: test_runs/

@@ -62,7 +62,7 @@ jobs:
# Build and push Docker image
- name: Build and push Docker image
id: build-and-push
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
context: .
push: ${{ github.event_name != 'pull_request' }}

@@ -1,5 +1,4 @@
from swarms import Agent
from swarms.structs.concurrent_workflow import ConcurrentWorkflow
from swarms import Agent, ConcurrentWorkflow
# Initialize market research agent
market_researcher = Agent(

@@ -0,0 +1,19 @@
# Class/function
Brief description
## Overview
## Architecture (Mermaid diagram)
## Class Reference (Constructor + Methods)
## Examples
## Conclusion
Benefits of class/structure, and more

@@ -207,6 +207,7 @@ nav:
- Various Execution Methods: "swarms/structs/various_execution_methods.md"
- Deep Research Swarm: "swarms/structs/deep_research_swarm.md"
- Council of Judges: "swarms/structs/council_of_judges.md"
- Heavy Swarm: "swarms/structs/heavy_swarm.md"
- Hierarchical Architectures:
@@ -265,6 +266,10 @@ nav:
- Deploy your agents on Phala: "swarms_cloud/phala_deploy.md"
# - Deploy your agents on FastAPI:
- More About Us:
- Swarms Ecosystem: "swarms/ecosystem.md"
- Technical Support: "swarms/support.md"
- Examples:
- Overview: "examples/index.md"
@@ -368,6 +373,7 @@ nav:
- Finance Swarm: "swarms/examples/swarms_api_finance.md"
- Clients:
- Overview: "swarms_cloud/api_clients.md"
- Python Client: "swarms_cloud/python_client.md"
- Rust Client: "swarms_cloud/rust_client.md"

@@ -1,223 +1,251 @@
# Agent Judge
# AgentJudge
The AgentJudge is a specialized agent designed to evaluate and judge outputs from other agents or systems. It acts as a quality control mechanism, providing objective assessments and feedback on various types of content, decisions, or outputs. This implementation is based on the research paper "Agents as Judges: Using LLMs to Evaluate LLMs".
A specialized agent for evaluating and judging outputs from other agents or systems. Acts as a quality control mechanism providing objective assessments and feedback.
## Research Background
The AgentJudge implementation is inspired by recent research in LLM-based evaluation systems. Key findings from the research include:
- LLMs can effectively evaluate other LLM outputs with high accuracy
- Multi-agent evaluation systems can provide more reliable assessments
- Structured evaluation criteria improve consistency
- Context-aware evaluation leads to better results
Based on the research paper: **"Agent-as-a-Judge: Evaluate Agents with Agents"** - [arXiv:2410.10934](https://arxiv.org/abs/2410.10934)
## Overview
The AgentJudge serves as an impartial evaluator that can:
The AgentJudge is designed to evaluate and critique outputs from other AI agents, providing structured feedback on quality, accuracy, and areas for improvement. It supports both single-shot evaluations and iterative refinement through multiple evaluation loops with context building.
Key capabilities:
- Assess the quality and correctness of agent outputs
- **Quality Assessment**: Evaluates correctness, clarity, and completeness of agent outputs
- Provide structured feedback and scoring
- **Structured Feedback**: Provides detailed critiques with strengths, weaknesses, and suggestions
- Maintain context across multiple evaluations
- **Multimodal Support**: Can evaluate text outputs alongside images
- Generate detailed analysis reports
- **Context Building**: Maintains evaluation context across multiple iterations
- **Batch Processing**: Efficiently processes multiple evaluations
## Architecture
```mermaid
graph TD
A[Input Tasks] --> B[AgentJudge]
B --> C[Agent Core]
C --> D[LLM Model]
D --> E[Response Generation]
E --> F[Context Management]
F --> G[Output]
subgraph "Evaluation Flow"
H[Task Analysis] --> I[Quality Assessment]
I --> J[Feedback Generation]
J --> K[Score Assignment]
end
B --> H
K --> G
```
A[Input Task] --> B[AgentJudge]
B --> C{Evaluation Mode}
## Configuration
C -->|step()| D[Single Eval]
C -->|run()| E[Iterative Eval]
C -->|run_batched()| F[Batch Eval]
### Parameters
D --> G[Agent Core]
E --> G
F --> G
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `agent_name` | str | "agent-judge-01" | Unique identifier for the judge agent |
| `system_prompt` | str | AGENT_JUDGE_PROMPT | System instructions for the agent |
| `model_name` | str | "openai/o1" | LLM model to use for evaluation |
| `max_loops` | int | 1 | Maximum number of evaluation iterations |
G --> H[LLM Model]
H --> I[Quality Analysis]
I --> J[Feedback & Output]
### Methods
subgraph "Feedback Details"
N[Strengths]
O[Weaknesses]
P[Improvements]
Q[Accuracy Check]
end
| Method | Description | Parameters | Returns |
|--------|-------------|------------|---------|
| `step()` | Processes a single batch of tasks | `tasks: List[str]` | `str` |
| `run()` | Executes multiple evaluation iterations | `tasks: List[str]` | `List[str]` |
J --> N
J --> O
J --> P
J --> Q
## Usage
```
### Basic Example
## Class Reference
```python
from swarms import AgentJudge
### Constructor
# Initialize the judge
judge = AgentJudge(
model_name="gpt-4o",
max_loops=1
```python
AgentJudge(
id: str = str(uuid.uuid4()),
agent_name: str = "Agent Judge",
description: str = "You're an expert AI agent judge...",
system_prompt: str = AGENT_JUDGE_PROMPT,
model_name: str = "openai/o1",
max_loops: int = 1,
verbose: bool = False,
*args,
**kwargs
)
```
# Example outputs to evaluate
outputs = [
"1. Agent CalculusMaster: After careful evaluation, I have computed the integral of the polynomial function. The result is ∫(x^2 + 3x + 2)dx = (1/3)x^3 + (3/2)x^2 + 5, where I applied the power rule for integration and added the constant of integration.",
"2. Agent DerivativeDynamo: In my analysis of the function sin(x), I have derived it with respect to x. The derivative is d/dx (sin(x)) = cos(x). However, I must note that the additional term '+ 2' is not applicable in this context as it does not pertain to the derivative of sin(x).",
"3. Agent LimitWizard: Upon evaluating the limit as x approaches 0 for the function (sin(x)/x), I conclude that lim (x -> 0) (sin(x)/x) = 1. The additional '+ 3' is incorrect and should be disregarded as it does not relate to the limit calculation.",
]
#### Parameters
# Run evaluation
results = judge.run(outputs)
print(results)
```
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `id` | `str` | `str(uuid.uuid4())` | Unique identifier for the judge instance |
| `agent_name` | `str` | `"Agent Judge"` | Name of the agent judge |
| `description` | `str` | `"You're an expert AI agent judge..."` | Description of the agent's role |
| `system_prompt` | `str` | `AGENT_JUDGE_PROMPT` | System instructions for evaluation |
| `model_name` | `str` | `"openai/o1"` | LLM model for evaluation |
| `max_loops` | `int` | `1` | Maximum evaluation iterations |
| `verbose` | `bool` | `False` | Enable verbose logging |
### Methods
## Applications
#### step()
### Code Review Automation
```python
step(
task: str = None,
tasks: Optional[List[str]] = None,
img: Optional[str] = None
) -> str
```
!!! success "Features"
- Evaluate code quality
- Check for best practices
- Assess documentation completeness
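As a concrete illustration of the code review application, the sketch below wraps the `step()` method (documented on this page) in a simple pass/fail check. The keyword test on the evaluation text is only an illustrative heuristic, not part of the AgentJudge API.

```python
from swarms import AgentJudge

# Judge configured for code review (constructor parameters documented on this page)
judge = AgentJudge(model_name="gpt-4o", max_loops=1)


def review_gate(diff_text: str) -> bool:
    """Return True when the judge's evaluation does not flag blocking issues."""
    evaluation = judge.step(
        task=(
            "Review the following code change for correctness, style, and "
            "documentation completeness. State clearly whether it should be "
            "approved or rejected:\n\n" + diff_text
        )
    )
    print(evaluation)
    # Illustrative heuristic only; define your own pass/fail criteria
    return "reject" not in evaluation.lower()


sample_diff = "def add(a, b):\n    return a - b  # bug: should be a + b"
print("Gate passed:", review_gate(sample_diff))
```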
Processes a single task or list of tasks and returns evaluation.
### Content Quality Control
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `task` | `str` | `None` | Single task/output to evaluate |
| `tasks` | `List[str]` | `None` | List of tasks/outputs to evaluate |
| `img` | `str` | `None` | Path to image for multimodal evaluation |
**Returns:** `str` - Detailed evaluation response
!!! info "Use Cases"
- Review marketing copy
- Validate technical documentation
- Assess user support responses
#### run()
### Decision Validation
```python
run(
task: str = None,
tasks: Optional[List[str]] = None,
img: Optional[str] = None
) -> List[str]
```
!!! warning "Applications"
- Evaluate business decisions
- Review risk assessments
- Review compliance reports
Executes evaluation in multiple iterations with context building.
### Performance Assessment
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `task` | `str` | `None` | Single task/output to evaluate |
| `tasks` | `List[str]` | `None` | List of tasks/outputs to evaluate |
| `img` | `str` | `None` | Path to image for multimodal evaluation |
!!! tip "Metrics"
- Evaluate agent performance
- Assess system outputs
- Review automated processes
**Returns:** `List[str]` - List of evaluation responses from each iteration
## Best Practices
#### run_batched()
### Task Formulation
```python
run_batched(
tasks: Optional[List[str]] = None,
imgs: Optional[List[str]] = None
) -> List[List[str]]
```
1. Provide clear, specific evaluation criteria
2. Include context when necessary
3. Structure tasks for consistent evaluation
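For example, a task string that states the criteria and the context up front tends to produce more consistent judgments. A short sketch using the `step()` method described on this page:

```python
from swarms import AgentJudge

judge = AgentJudge(model_name="gpt-4o-mini")

# Explicit criteria and context, followed by the output to be judged
task = (
    "Evaluate the answer below against these criteria: "
    "(1) factual accuracy, (2) completeness, (3) clarity.\n"
    "Context: the question asked was 'What is the capital of Australia?'\n"
    "Answer to evaluate: 'The capital of Australia is Sydney.'"
)

print(judge.step(task=task))
```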
Executes batch evaluation of multiple tasks with corresponding images.
### System Configuration
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `tasks` | `List[str]` | `None` | List of tasks/outputs to evaluate |
| `imgs` | `List[str]` | `None` | List of image paths (same length as tasks) |
1. Use appropriate model for task complexity
2. Adjust max_loops based on evaluation depth needed
3. Customize system prompt for specific use cases
**Returns:** `List[List[str]]` - Evaluation responses for each task
### Output Management
## Examples
1. Store evaluation results systematically
2. Track evaluation patterns over time
3. Use results for continuous improvement
### Basic Usage
### Integration Tips
```python
from swarms import AgentJudge
1. Implement as part of CI/CD pipelines
2. Use for automated quality gates
3. Integrate with monitoring systems
# Initialize with default settings
judge = AgentJudge()
## Implementation Guide
# Single task evaluation
result = judge.step(task="The capital of France is Paris.")
print(result)
```
### Step 1: Setup
### Custom Configuration
```python
from swarms import AgentJudge
# Initialize with custom parameters
# Custom judge configuration
judge = AgentJudge(
agent_name="custom-judge",
agent_name="content-evaluator",
model_name="gpt-4",
max_loops=3
max_loops=3,
verbose=True
)
# Evaluate multiple outputs
outputs = [
"Agent CalculusMaster: The integral of x^2 + 3x + 2 is (1/3)x^3 + (3/2)x^2 + 2x + C",
"Agent DerivativeDynamo: The derivative of sin(x) is cos(x)",
"Agent LimitWizard: The limit of sin(x)/x as x approaches 0 is 1"
]
evaluation = judge.step(tasks=outputs)
print(evaluation)
```
### Step 2: Configure Evaluation Criteria
### Iterative Evaluation with Context
```python
# Define evaluation criteria
criteria = {
"accuracy": 0.4,
"completeness": 0.3,
"clarity": 0.3
}
from swarms import AgentJudge
# Multiple iterations with context building
judge = AgentJudge(max_loops=3)
# Set criteria
judge.set_evaluation_criteria(criteria)
# Each iteration builds on previous context
evaluations = judge.run(task="Agent output: 2+2=5")
for i, eval_result in enumerate(evaluations):
print(f"Iteration {i+1}: {eval_result}\n")
```
### Step 3: Run Evaluations
### Multimodal Evaluation
```python
# Single task evaluation
result = judge.step(task)
from swarms import AgentJudge
# Batch evaluation
results = judge.run(tasks)
```
judge = AgentJudge()
## Troubleshooting
# Evaluate with image
evaluation = judge.step(
task="Describe what you see in this image",
img="path/to/image.jpg"
)
print(evaluation)
```
### Common Issues
### Batch Processing
??? question "Evaluation Inconsistencies"
If you notice inconsistent evaluations:
1. Check the evaluation criteria
2. Verify the model configuration
3. Review the input format
```python
from swarms import AgentJudge
??? question "Performance Issues"
For slow evaluations:
1. Reduce max_loops
2. Optimize batch size
3. Consider model selection
judge = AgentJudge()
# Batch evaluation with images
tasks = [
"Describe this chart",
"What's the main trend?",
"Any anomalies?"
]
images = [
"chart1.png",
"chart2.png",
"chart3.png"
]
## References
# Each task evaluated independently
evaluations = judge.run_batched(tasks=tasks, imgs=images)
for i, task_evals in enumerate(evaluations):
print(f"Task {i+1} evaluations: {task_evals}")
```
### "Agent-as-a-Judge: Evaluate Agents with Agents" - [Paper Link](https://arxiv.org/abs/2410.10934)
## Reference
```bibtex
@misc{zhuge2024agentasajudgeevaluateagentsagents,
      title={Agent-as-a-Judge: Evaluate Agents with Agents},
      author={Mingchen Zhuge and Changsheng Zhao and Dylan Ashley and Wenyi Wang and Dmitrii Khizbullin and Yunyang Xiong and Zechun Liu and Ernie Chang and Raghuraman Krishnamoorthi and Yuandong Tian and Yangyang Shi and Vikas Chandra and Jürgen Schmidhuber},
      year={2024},
      eprint={2410.10934},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2410.10934},
}
```

@@ -1,6 +1,5 @@
# Consistency Agent Documentation
The `SelfConsistencyAgent` is a specialized agent designed for generating multiple independent responses to a given task and aggregating them into a single, consistent final answer. It leverages concurrent processing to enhance efficiency and employs a majority voting mechanism to ensure the reliability of the aggregated response.
## Purpose
@@ -17,24 +16,31 @@ The primary objective of the `SelfConsistencyAgent` is to provide a robust mecha
| Argument | Type | Default | Description |
|------------------------|---------|---------|-----------------------------------------------------------------------------|
| `num_samples` | `int` | `5` | Number of independent responses to sample. |
| `return_list` | `bool` | `False` | Whether to return the conversation as a list. |
| `max_loops` | `int` | `1` | Maximum number of loops for the agent to run. |
| `return_dict` | `bool` | `False` | Whether to return the conversation as a dictionary. |
| `return_json` | `bool` | `False` | Whether to return the conversation as JSON. |
| `majority_voting_prompt` | `str` | `None` | Custom prompt for majority voting. |
| `name` | `str` | `"Self-Consistency-Agent"` | Name of the agent. |
| `description` | `str` | `"An agent that uses self consistency to generate a final answer."` | Description of the agent's purpose. |
| `system_prompt` | `str` | `CONSISTENCY_SYSTEM_PROMPT` | System prompt for the reasoning agent. |
| `model_name` | `str` | Required | The underlying language model to use. |
| `num_samples` | `int` | `5` | Number of independent responses to generate. |
| `max_loops` | `int` | `1` | Maximum number of reasoning loops per sample. |
| `majority_voting_prompt` | `Optional[str]` | `majority_voting_prompt` | Custom prompt for majority voting aggregation. |
| `eval` | `bool` | `False` | Enable evaluation mode for answer validation. |
| `output_type` | `OutputType` | `"dict"` | Format of the output. |
| `random_models_on` | `bool` | `False` | Enable random model selection for diversity. |
### Methods
- **`run`**: Generates multiple responses for the given task and aggregates them.
- **Arguments**:
- `task` (`str`): The input prompt.
- `answer` (`str`, optional): The expected answer to validate responses against.
- **Returns**: `str` - The aggregated final answer.
- `img` (`Optional[str]`, optional): Image input for vision tasks.
- `answer` (`Optional[str]`, optional): Expected answer for validation (if eval=True).
- **Returns**: `Union[str, Dict[str, Any]]` - The aggregated final answer.
- **`aggregate`**: Aggregates a list of responses into a single final answer using majority voting.
- **`aggregation_agent`**: Aggregates a list of responses into a single final answer using majority voting.
- **Arguments**:
- `responses` (`List[str]`): The list of responses.
- `prompt` (`str`, optional): Custom prompt for the aggregation agent.
- `model_name` (`str`, optional): Model to use for aggregation.
- **Returns**: `str` - The aggregated answer.
- **`check_responses_for_answer`**: Checks if a specified answer is present in any of the provided responses.
@@ -43,6 +49,11 @@ The primary objective of the `SelfConsistencyAgent` is to provide a robust mecha
- `answer` (`str`): The answer to look for in the responses.
- **Returns**: `bool` - `True` if the answer is found, `False` otherwise.
- **`batched_run`**: Run the agent on multiple tasks in batch.
- **Arguments**:
- `tasks` (`List[str]`): List of tasks to be processed.
- **Returns**: `List[Union[str, Dict[str, Any]]]` - List of results for each task.
### Examples
#### Example 1: Basic Usage
@@ -52,7 +63,7 @@ from swarms.agents.consistency_agent import SelfConsistencyAgent
# Initialize the agent
agent = SelfConsistencyAgent(
agent_name="Reasoning-Agent",
name="Math-Reasoning-Agent",
model_name="gpt-4o-mini",
max_loops=1,
num_samples=5
@@ -75,7 +86,7 @@ from swarms.agents.consistency_agent import SelfConsistencyAgent
# Initialize the agent with a custom majority voting prompt
agent = SelfConsistencyAgent(
agent_name="Reasoning-Agent",
name="Reasoning-Agent",
model_name="gpt-4o-mini",
max_loops=1,
num_samples=5,
@@ -92,4 +103,128 @@ final_answer = agent.run(task)
print("Final aggregated answer:", final_answer)
```
#### Example 3: Evaluation Mode
```python
from swarms.agents.consistency_agent import SelfConsistencyAgent
# Initialize the agent with evaluation mode
agent = SelfConsistencyAgent(
name="Validation-Agent",
model_name="gpt-4o-mini",
num_samples=3,
eval=True
)
# Run with expected answer for validation
result = agent.run("What is 2 + 2?", answer="4", eval=True)
if result is not None:
print("Validation passed:", result)
else:
print("Validation failed - expected answer not found")
```
#### Example 4: Random Models for Diversity
```python
from swarms.agents.consistency_agent import SelfConsistencyAgent
# Initialize the agent with random model selection
agent = SelfConsistencyAgent(
name="Diverse-Reasoning-Agent",
model_name="gpt-4o-mini",
num_samples=5,
random_models_on=True
)
# Run the agent
result = agent.run("What are the benefits of renewable energy?")
print("Diverse reasoning result:", result)
```
#### Example 5: Batch Processing
```python
from swarms.agents.consistency_agent import SelfConsistencyAgent
# Initialize the agent
agent = SelfConsistencyAgent(
name="Batch-Processing-Agent",
model_name="gpt-4o-mini",
num_samples=3
)
# Define multiple tasks
tasks = [
"What is the capital of France?",
"What is 15 * 23?",
"Explain photosynthesis in simple terms."
]
# Process all tasks
results = agent.batched_run(tasks)
# Print results
for i, result in enumerate(results):
print(f"Task {i+1} result: {result}")
```
## Key Features
### Self-Consistency Technique
The agent implements the self-consistency approach based on the research paper "Self-Consistency Improves Chain of Thought Reasoning in Language Models" by Wang et al. (2022). This technique:
1. **Generates Multiple Independent Responses**: Creates several reasoning paths for the same problem
2. **Analyzes Consistency**: Examines agreement among different reasoning approaches
3. **Aggregates Results**: Uses majority voting or consensus building
4. **Produces Reliable Output**: Delivers a final answer reflecting the most reliable consensus
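The voting step in this technique can be sketched independently of the framework. In the minimal illustration below, `sample_answer` is a hypothetical stand-in for one independent model call; the real `SelfConsistencyAgent` adds reasoning prompts and an AI-powered aggregation step on top of this idea.

```python
from collections import Counter
from typing import Callable, List


def self_consistency_vote(
    sample_answer: Callable[[str], str], task: str, num_samples: int = 5
) -> str:
    """Sample several independent answers and return the majority vote."""
    answers: List[str] = [sample_answer(task) for _ in range(num_samples)]
    normalized = [answer.strip().lower() for answer in answers]
    winner, count = Counter(normalized).most_common(1)[0]
    print(f"{count}/{num_samples} samples agreed on: {winner!r}")
    return winner


# Trivial deterministic sampler used only to make the sketch runnable
print(self_consistency_vote(lambda task: "4", "What is 2 + 2?"))
```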
### Benefits
- **Mitigates Random Errors**: Multiple reasoning paths reduce individual path errors
- **Reduces Bias**: Diverse approaches minimize single-method biases
- **Improves Reliability**: Consensus-based results are more trustworthy
- **Handles Complexity**: Better performance on complex problem-solving tasks
### Use Cases
- **Mathematical Problem Solving**: Where accuracy is critical
- **Decision Making**: When reliability is paramount
- **Validation Tasks**: When answers need verification
- **Complex Reasoning**: Multi-step problem solving
- **Research Questions**: Where multiple perspectives are valuable
## Technical Details
### Concurrent Execution
The agent uses `ThreadPoolExecutor` to generate multiple responses concurrently, improving performance while maintaining independence between reasoning paths.
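A minimal sketch of that pattern (not the library's internal implementation), where `generate_response` is a hypothetical stand-in for one reasoning path:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import List


def generate_response(task: str) -> str:
    # Hypothetical stand-in for one independent model call / reasoning path
    return f"candidate answer for: {task}"


def sample_concurrently(task: str, num_samples: int = 5) -> List[str]:
    """Collect num_samples independent responses in parallel threads."""
    with ThreadPoolExecutor(max_workers=num_samples) as executor:
        futures = [executor.submit(generate_response, task) for _ in range(num_samples)]
        return [future.result() for future in futures]


responses = sample_concurrently("Explain photosynthesis in simple terms.")
print(f"Collected {len(responses)} independent responses")
```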
### Aggregation Process
The aggregation uses an AI-powered agent that:
- Identifies dominant responses
- Analyzes disparities and disagreements
- Evaluates consensus strength
- Synthesizes minority insights
- Provides comprehensive recommendations
### Output Formats
The agent supports various output types:
- `"dict"`: Dictionary format with conversation history
- `"str"`: Simple string output
- `"list"`: List format
- `"json"`: JSON formatted output
## Limitations
1. **Computational Cost**: Higher `num_samples` increases processing time and cost
2. **Model Dependencies**: Performance depends on the underlying model capabilities
3. **Consensus Challenges**: May struggle with tasks where multiple valid approaches exist
4. **Memory Usage**: Concurrent execution requires more memory resources
## Best Practices
1. **Sample Size**: Use 3-7 samples for most tasks; increase for critical decisions
2. **Model Selection**: Choose models with strong reasoning capabilities
3. **Evaluation Mode**: Enable for tasks with known correct answers
4. **Custom Prompts**: Tailor majority voting prompts for specific domains
5. **Batch Processing**: Use `batched_run` for multiple related tasks
---

@@ -38,9 +38,12 @@ graph TD
| `max_loops` | int | 1 | Maximum number of reasoning loops |
| `swarm_type` | agent_types | "reasoning_duo" | Type of reasoning swarm to use |
| `num_samples` | int | 1 | Number of samples for self-consistency |
| `output_type` | OutputType | "dict" | Format of the output |
| `output_type` | OutputType | "dict-all-except-first" | Format of the output |
| `num_knowledge_items` | int | 6 | Number of knowledge items for GKP agent |
| `memory_capacity` | int | 6 | Memory capacity for agents that support it |
| `eval` | bool | False | Enable evaluation mode for self-consistency |
| `random_models_on` | bool | False | Enable random model selection for diversity |
| `majority_voting_prompt` | Optional[str] | None | Custom prompt for majority voting |
### Available Agent Types
@@ -84,12 +87,16 @@ graph TD
- Multiple solution generation
- Consensus building
- Solution verification
- Concurrent execution
- AI-powered aggregation
**Best Use Cases**
- Tasks requiring high reliability
- Problems with multiple approaches
- Validation-heavy tasks
- Mathematical problem solving
- Decision making scenarios
**Required Parameters**
@@ -98,9 +105,12 @@ graph TD
**Optional Parameters**
- num_samples
- max_loops
- output_type
- num_samples (default: 5)
- max_loops (default: 1)
- output_type (default: "dict")
- eval (default: False) - Enable answer validation
- random_models_on (default: False) - Enable model diversity
- majority_voting_prompt (default: None) - Custom aggregation prompt
=== "IRE"
**Key Features**
@@ -217,14 +227,43 @@ graph TD
system_prompt="You are a helpful assistant that can answer questions and help with tasks.",
max_loops=1,
swarm_type="self-consistency",
num_samples=1,
output_type="list"
num_samples=3,
eval=False,
random_models_on=False,
majority_voting_prompt=None
)
# Run a single task
result = router.run("What is the best approach to solve this problem?")
```
=== "Self-Consistency Examples"
```python
# Basic self-consistency
router = ReasoningAgentRouter(
swarm_type="self-consistency",
num_samples=3,
model_name="gpt-4o-mini"
)
# Self-consistency with evaluation mode
router = ReasoningAgentRouter(
swarm_type="self-consistency",
num_samples=5,
model_name="gpt-4o-mini",
eval=True,
random_models_on=True
)
# Self-consistency with custom majority voting
router = ReasoningAgentRouter(
swarm_type="self-consistency",
num_samples=3,
model_name="gpt-4o-mini",
majority_voting_prompt="Analyze the responses and provide the most accurate answer."
)
```
=== "ReflexionAgent"
```python
router = ReasoningAgentRouter(
@@ -265,9 +304,13 @@ graph TD
2. **Performance Optimization**
- Adjust max_loops based on task complexity
- Increase num_samples for higher reliability
- Increase num_samples for higher reliability (3-7 for most tasks)
- Choose appropriate model_name based on task requirements
- Enable random_models_on for diverse reasoning approaches
- Use eval mode for validation tasks with known answers
3. **Output Handling**
- Use appropriate output_type for your needs
@@ -275,6 +318,15 @@ graph TD
- Process batched results appropriately
- Handle errors gracefully
4. **Self-Consistency Specific**
- Use 3-5 samples for most tasks, 7+ for critical decisions
- Enable eval mode when you have expected answers for validation
- Customize majority_voting_prompt for domain-specific aggregation
- Consider random_models_on for diverse model perspectives
## Limitations

@@ -1,75 +1,149 @@
# Swarm Ecosystem
# Swarms Ecosystem
Welcome to the Swarm Ecosystem, a comprehensive suite of tools and frameworks designed to empower developers to orchestrate swarms of autonomous agents for a variety of applications. Dive into our ecosystem below:
*The Complete Enterprise-Grade Multi-Agent AI Platform*
[Full Github Link](https://github.com/kyegomez/swarm-ecosystem)
---
## **Join the Future of AI Development**
**We're Building the Operating System for the Agent Economy** - The Swarms ecosystem represents the most comprehensive, production-ready multi-agent AI platform available today. From our flagship Python framework to high-performance Rust implementations and client libraries spanning every major programming language, we provide enterprise-grade tools that power the next generation of intelligent applications.
---
## **Complete Product Portfolio**
| **Product** | **Technology** | **Status** | **Repository** | **Documentation** |
|-------------|---------------|------------|----------------|-------------------|
| **Swarms Python Framework** | Python | **Production** | [swarms](https://github.com/kyegomez/swarms) | [Docs](https://docs.swarms.world/en/latest/swarms/install/install/) |
| **Swarms Rust Framework** | Rust | **Production** | [swarms-rs](https://github.com/The-Swarm-Corporation/swarms-rs) | [Docs](https://docs.swarms.world/en/latest/swarms_rs/overview/) |
| **Python API Client** | Python | **Production** | [swarms-sdk](https://github.com/The-Swarm-Corporation/swarms-sdk) | [Docs](https://docs.swarms.world/en/latest/swarms_cloud/python_client/) |
| **TypeScript/Node.js Client** | TypeScript | **Production** | [swarms-ts](https://github.com/The-Swarm-Corporation/swarms-ts) | *Coming Soon* |
| **Go Client** | Go | **Production** | [swarms-client-go](https://github.com/The-Swarm-Corporation/swarms-client-go) | *Coming Soon* |
| **Java Client** | Java | **Production** | [swarms-java](https://github.com/The-Swarm-Corporation/swarms-java) | *Coming Soon* |
| **Kotlin Client** | Kotlin | **Q2 2025** | *In Development* | *Coming Soon* |
| **Ruby Client** | Ruby | **Q2 2025** | *In Development* | *Coming Soon* |
| **Rust Client** | Rust | **Q2 2025** | *In Development* | *Coming Soon* |
| **C#/.NET Client** | C# | **Q3 2025** | *In Development* | *Coming Soon* |
---
## **Why Choose the Swarms Ecosystem?**
### **Enterprise-Grade Architecture**
- **Production Ready**: Battle-tested in enterprise environments with 99.9%+ uptime
- **Scalable Infrastructure**: Handle millions of agent interactions with automatic scaling
- **Security First**: End-to-end encryption, API key management, and enterprise compliance
- **Observability**: Comprehensive logging, monitoring, and debugging capabilities
### **Developer Experience**
- **Multiple Language Support**: Native clients for every major programming language
## Getting Started
- **Unified API**: Consistent interface across all platforms and languages
| Project | Description | Link |
| ------- | ----------- | ---- |
| **Swarms Framework** | A Python-based framework that enables the creation, deployment, and scaling of reliable swarms of autonomous agents aimed at automating complex workflows. | [Swarms Framework](https://github.com/kyegomez/swarms) |
| **Swarms Cloud** | A cloud-based service offering Swarms-as-a-Service with guaranteed 100% uptime, cutting-edge performance, and enterprise-grade reliability for seamless scaling and management of swarms. | [Swarms Cloud](https://github.com/kyegomez/swarms-cloud) |
| **Swarms Core** | Provides backend utilities focusing on concurrency, multi-threading, and advanced execution strategies, developed in Rust for maximum efficiency and performance. | [Swarms Core](https://github.com/kyegomez/swarms-core) |
| **Swarm Foundation Models** | A dedicated repository for the creation, optimization, and training of groundbreaking swarming models. Features innovative models like PSO with transformers, ant colony optimizations, and more, aiming to surpass traditional architectures like Transformers and SSMs. Open for community contributions and ideas. | [Swarm Foundation Models](https://github.com/kyegomez/swarms-pytorch) |
| **Swarm Platform** | The Swarms dashboard Platform | [Swarm Platform](https://github.com/kyegomez/swarms-platform) |
| **Swarms JS** | Swarms Framework in JS. Orchestrate any agents and enable multi-agent collaboration between various agents! | [Swarms JS](https://github.com/kyegomez/swarms-js) |
| **Swarms Memory** | Easy-to-use, reliable, and bleeding-edge RAG systems. | [Swarms Memory](https://github.com/kyegomez/swarms-memory) |
| **Swarms Evals** | Evaluating Swarms! | [Swarms Evals](https://github.com/kyegomez/swarms-evals) |
| **Swarms Zero** | RPC Enterprise-Grade Automation Framework | [Swarms Zero](https://github.com/kyegomez/Zero) |
- **Rich Documentation**: Comprehensive guides, tutorials, and API references
----
- **Active Community**: 24/7 support through Discord, GitHub, and direct channels
## 🫶 Contributions:
### **Performance & Reliability**
The easiest way to contribute is to pick any issue with the `good first issue` tag 💪. Read the Contributing guidelines [here](/CONTRIBUTING.md). Bug Report? [File here](https://github.com/swarms/gateway/issues) | Feature Request? [File here](https://github.com/swarms/gateway/issues)
- **High Throughput**: Process thousands of concurrent agent requests
Swarms is an open-source project, and contributions are VERY welcome. If you want to contribute, you can create new features, fix bugs, or improve the infrastructure. Please refer to the [CONTRIBUTING.md](https://github.com/kyegomez/swarms/blob/master/CONTRIBUTING.md) and our [contributing board](https://github.com/users/kyegomez/projects/1) to participate in Roadmap discussions!
- **Low Latency**: Optimized for real-time applications and user experiences
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
<img src="https://contrib.rocks/image?repo=kyegomez/swarms" />
</a>
- **Fault Tolerance**: Automatic retries, circuit breakers, and graceful degradation
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
<img src="https://contrib.rocks/image?repo=kyegomez/swarms-cloud" />
</a>
- **Multi-Cloud**: Deploy on AWS, GCP, Azure, or on-premises infrastructure
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
<img src="https://contrib.rocks/image?repo=kyegomez/swarms-platform" />
</a>
---
## **Join Our Growing Community**
### **Connect With Developers Worldwide**
| **Platform** | **Purpose** | **Join Link** | **Benefits** |
|--------------|-------------|---------------|--------------|
| **Discord Community** | Real-time support & discussions | [Join Discord](https://discord.gg/jM3Z6M9uMq) | • 24/7 developer support<br/>• Weekly community events<br/>• Direct access to core team<br/>• Beta feature previews |
| **Twitter/X** | Latest updates & announcements | [Follow @swarms_corp](https://x.com/swarms_corp) | • Breaking news & updates<br/>• Community highlights<br/>• Technical insights<br/>• Industry partnerships |
| **LinkedIn** | Professional network & updates | [The Swarm Corporation](https://www.linkedin.com/company/the-swarm-corporation) | • Professional networking<br/>• Career opportunities<br/>• Enterprise partnerships<br/>• Industry insights |
| **YouTube** | Tutorials & technical content | [Swarms Channel](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ) | • In-depth tutorials<br/>• Live coding sessions<br/>• Architecture deep dives<br/>• Community showcases |
---
## **Contribute to the Ecosystem**
### **How You Can Make an Impact**
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
<img src="https://contrib.rocks/image?repo=kyegomez/swarms-js" />
</a>
| **Contribution Area** | **Skills Needed** | **Impact Level** | **Getting Started** |
|-----------------------|-------------------|------------------|---------------------|
| **Core Framework Development** | Python, Rust, Systems Design | **High Impact** | [Contributing Guide](https://docs.swarms.world/en/latest/contributors/main/) |
| **Client Library Development** | Various Languages (Go, Java, TS, etc.) | **High Impact** | [Client Development](https://github.com/The-Swarm-Corporation) |
| **Documentation & Tutorials** | Technical Writing, Examples | **High Impact** | [Docs Contributing](https://docs.swarms.world/en/latest/contributors/docs/) |
| **Testing & Quality Assurance** | Testing Frameworks, QA | **Medium Impact** | [Testing Guide](https://docs.swarms.world/en/latest/swarms/framework/test/) |
| **UI/UX & Design** | Design, Frontend Development | **Medium Impact** | [Design Contributions](https://github.com/The-Swarm-Corporation/swarms/issues) |
| **Bug Reports & Feature Requests** | User Experience, Testing | **Easy Start** | [Report Issues](https://github.com/The-Swarm-Corporation/swarms/issues) |
---
## **We're Hiring Top Talent**
### **Join the Team Building the Future of the World Economy**
**Ready to work on cutting-edge agent technology that's shaping the future?** We're actively recruiting exceptional engineers, researchers, and technical leaders to join our mission of building the operating system for the agent economy.
----
| **Why Join Swarms?** | **What We Offer** |
|-----------------------|-------------------|
| **Cutting-Edge Technology** | Work on the most powerful multi-agent systems, distributed computing, and enterprise-scale infrastructure |
| **Global Impact** | Your code will power agent applications used by Fortune 500 companies and millions of developers |
| **World-Class Team** | Collaborate with top engineers, researchers, and industry experts from Google, OpenAI, and more |
| **Fast Growth** | Join a rapidly scaling company with massive market opportunity and venture backing |
## Community
### **Open Positions**
Join our growing community around the world, for real-time support, ideas, and discussions on Swarms 😊
| **Position** | **Role Description** |
|-------------------------------|----------------------------------------------------------|
| **Senior Rust Engineers** | Building high-performance agent infrastructure |
| **Python Framework Engineers**| Expanding our core multi-agent capabilities |
| **DevOps/Platform Engineers** | Scaling cloud infrastructure for millions of agents |
| **Technical Writers** | Creating world-class developer documentation |
| **Solutions Engineers** | Helping enterprises adopt multi-agent AI |
- View our official [Blog](https://docs.swarms.world)
- Chat live with us on [Discord](https://discord.gg/kS3rwKs3ZC)
- Follow us on [Twitter](https://twitter.com/kyegomez)
- Connect with us on [LinkedIn](https://www.linkedin.com/company/the-swarm-corporation)
- Visit us on [YouTube](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ)
- [Join the Swarms community on Discord!](https://discord.gg/AJazBmhKnr)
- Join our Swarms Community Gathering every Thursday at 1pm NYC Time to unlock the potential of autonomous agents in automating your daily tasks [Sign up here](https://lu.ma/5p2jnc2v)
**Ready to Build the Future?** **[Apply Now at swarms.ai/hiring](https://swarms.ai/hiring)**
---
---
## Discovery Call
Book a discovery call to learn how Swarms can lower your operating costs by 40% with swarms of autonomous agents at lightspeed. [Click here to book a time that works for you!](https://calendly.com/swarm-corp/30min?month=2023-11)
## **Get Started Today**
### **Quick Start Guide**
| **Step** | **Action** | **Time Required** |
|----------|------------|-------------------|
| **1** | [Install Swarms Python Framework](https://docs.swarms.world/en/latest/swarms/install/install/) | 5 minutes |
| **2** | [Run Your First Agent](https://docs.swarms.world/en/latest/swarms/examples/basic_agent/) | 10 minutes |
| **3** | [Try Multi-Agent Workflows](https://docs.swarms.world/en/latest/swarms/examples/sequential_example/) | 15 minutes |
| **4** | [Join Our Discord Community](https://discord.gg/jM3Z6M9uMq) | 2 minutes |
| **5** | [Explore Enterprise Features](https://docs.swarms.world/en/latest/swarms_cloud/swarms_api/) | 20 minutes |
## Accelerate Backlog
Help us accelerate our backlog by supporting us financially! Note, we're an open source corporation and so all the revenue we generate is through donations at the moment ;)
---
## **Enterprise Support & Partnerships**
<a href="https://polar.sh/kyegomez"><img src="https://polar.sh/embed/fund-our-backlog.svg?org=kyegomez" /></a>
### **Ready to Scale with Swarms?**
| **Contact Type** | **Best For** | **Response Time** | **Contact Information** |
|------------------|--------------|-------------------|-------------------------|
| **Technical Support** | Development questions, troubleshooting | < 24 hours | [Book Support Call](https://cal.com/swarms/swarms-technical-support) |
| **Enterprise Sales** | Custom deployments, enterprise licensing | < 4 hours | [kye@swarms.world](mailto:kye@swarms.world) |
| **Partnerships** | Integration partnerships, technology alliances | < 48 hours | [kye@swarms.world](mailto:kye@swarms.world) |
| **Investor Relations** | Investment opportunities, funding updates | By appointment | [kye@swarms.world](mailto:kye@swarms.world) |
---
**Ready to build the future of AI? Start with Swarms today and join thousands of developers creating the next generation of intelligent applications.**

@@ -0,0 +1,322 @@
# HeavySwarm Documentation
HeavySwarm is a sophisticated multi-agent orchestration system that decomposes complex tasks into specialized questions and executes them using four specialized agents: Research, Analysis, Alternatives, and Verification. The results are then synthesized into a comprehensive response.
Inspired by X.AI's Grok 4 heavy implementation, HeavySwarm provides robust task analysis through intelligent question generation, parallel execution, and comprehensive synthesis with real-time progress monitoring.
## Architecture
### System Design
The HeavySwarm follows a structured 5-phase workflow:
1. **Task Decomposition**: Complex tasks are broken down into specialized questions
2. **Question Generation**: AI-powered generation of role-specific questions
3. **Parallel Execution**: Four specialized agents work concurrently
4. **Result Collection**: Outputs are gathered and validated
5. **Synthesis**: Integration into a comprehensive final response
### Agent Specialization
- **Research Agent**: Comprehensive information gathering and synthesis
- **Analysis Agent**: Pattern recognition and statistical analysis
- **Alternatives Agent**: Creative problem-solving and strategic options
- **Verification Agent**: Validation, feasibility assessment, and quality assurance
- **Synthesis Agent**: Multi-perspective integration and executive reporting
## Architecture Diagram
```mermaid
graph TB
subgraph "HeavySwarm Architecture"
A[Input Task] --> B[Question Generation Agent]
B --> C[Task Decomposition]
C --> D[Research Agent]
C --> E[Analysis Agent]
C --> F[Alternatives Agent]
C --> G[Verification Agent]
D --> H[Parallel Execution Engine]
E --> H
F --> H
G --> H
H --> I[Result Collection]
I --> J[Synthesis Agent]
J --> K[Comprehensive Report]
subgraph "Monitoring & Control"
L[Rich Dashboard]
M[Progress Tracking]
N[Error Handling]
O[Timeout Management]
end
H --> L
H --> M
H --> N
H --> O
end
subgraph "Agent Specializations"
D --> D1[Information Gathering<br/>Market Research<br/>Data Collection]
E --> E1[Statistical Analysis<br/>Pattern Recognition<br/>Predictive Modeling]
F --> F1[Creative Solutions<br/>Strategic Options<br/>Innovation Ideation]
G --> G1[Fact Checking<br/>Feasibility Assessment<br/>Quality Assurance]
end
style A fill:#ff6b6b
style K fill:#4ecdc4
style H fill:#45b7d1
style J fill:#96ceb4
```
## Installation
```bash
pip install swarms
```
## Quick Start
```python
from swarms import HeavySwarm
# Initialize the swarm
swarm = HeavySwarm(
name="MarketAnalysisSwarm",
description="Financial market analysis swarm",
question_agent_model_name="gpt-4o-mini",
worker_model_name="gpt-4o-mini",
show_dashboard=True,
verbose=True
)
# Execute analysis
result = swarm.run("Analyze the current cryptocurrency market trends and investment opportunities")
print(result)
```
## API Reference
### HeavySwarm Class
#### Constructor Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `name` | `str` | `"HeavySwarm"` | Identifier name for the swarm instance |
| `description` | `str` | `"A swarm of agents..."` | Description of the swarm's purpose |
| `agents` | `List[Agent]` | `None` | Pre-configured agent list (unused - agents created internally) |
| `timeout` | `int` | `300` | Maximum execution time per agent in seconds |
| `aggregation_strategy` | `str` | `"synthesis"` | Strategy for result aggregation |
| `loops_per_agent` | `int` | `1` | Number of execution loops per agent |
| `question_agent_model_name` | `str` | `"gpt-4o-mini"` | Model for question generation |
| `worker_model_name` | `str` | `"gpt-4o-mini"` | Model for specialized worker agents |
| `verbose` | `bool` | `False` | Enable detailed logging output |
| `max_workers` | `int` | `int(os.cpu_count() * 0.9)` | Maximum concurrent workers |
| `show_dashboard` | `bool` | `False` | Enable rich dashboard visualization |
| `agent_prints_on` | `bool` | `False` | Enable individual agent output printing |
#### Methods
##### `run(task: str, img: str = None) -> str`
Execute the complete HeavySwarm orchestration flow.
**Parameters:**
- `task` (str): The main task to analyze and decompose
- `img` (str, optional): Image input for visual analysis tasks
**Returns:**
- `str`: Comprehensive final analysis from synthesis agent
**Example:**
```python
result = swarm.run("Develop a go-to-market strategy for a new SaaS product")
```
## Real-World Applications
### Financial Services
```python
# Market Analysis
swarm = HeavySwarm(
name="FinanceSwarm",
worker_model_name="gpt-4o",
show_dashboard=True
)
result = swarm.run("""
Analyze the impact of recent Federal Reserve policy changes on:
1. Bond markets and yield curves
2. Equity market valuations
3. Currency exchange rates
4. Provide investment recommendations for institutional portfolios
""")
```
### Use-cases
| Use Case | Description |
|---------------------------------------------|---------------------------------------------|
| Portfolio optimization and risk assessment | Optimize asset allocation and assess risks |
| Market trend analysis and forecasting | Analyze and predict market movements |
| Regulatory compliance evaluation | Evaluate adherence to financial regulations |
| Investment strategy development | Develop and refine investment strategies |
| Credit risk analysis and modeling | Analyze and model credit risk |
-------
### Healthcare & Life Sciences
```python
# Clinical Research Analysis
swarm = HeavySwarm(
name="HealthcareSwarm",
worker_model_name="gpt-4o",
timeout=600,
loops_per_agent=2
)
result = swarm.run("""
Evaluate the potential of AI-driven personalized medicine:
1. Current technological capabilities and limitations
2. Regulatory landscape and approval pathways
3. Market opportunities and competitive analysis
4. Implementation strategies for healthcare systems
""")
```
----
**Use Cases:**
| Use Case | Description |
|----------------------------------------|---------------------------------------------|
| Drug discovery and development analysis| Analyze and accelerate drug R&D processes |
| Clinical trial optimization | Improve design and efficiency of trials |
| Healthcare policy evaluation | Assess and inform healthcare policies |
| Medical device market analysis | Evaluate trends and opportunities in devices|
| Patient outcome prediction modeling | Predict and model patient health outcomes |
---
### Technology & Innovation
```python
# Tech Strategy Analysis
swarm = HeavySwarm(
name="TechSwarm",
worker_model_name="gpt-4o",
show_dashboard=True,
verbose=True
)
result = swarm.run("""
Assess the strategic implications of quantum computing adoption:
1. Technical readiness and hardware developments
2. Industry applications and use cases
3. Competitive landscape and key players
4. Investment and implementation roadmap
""")
```
**Use Cases:**
| Use Case | Description |
|------------------------------------|---------------------------------------------|
| Technology roadmap development | Plan and prioritize technology initiatives |
| Competitive intelligence gathering | Analyze competitors and market trends |
| Innovation pipeline analysis | Evaluate and manage innovation projects |
| Digital transformation strategy | Develop and implement digital strategies |
| Emerging technology assessment | Assess new and disruptive technologies |
### Manufacturing & Supply Chain
```python
# Supply Chain Optimization
swarm = HeavySwarm(
name="ManufacturingSwarm",
worker_model_name="gpt-4o",
max_workers=8
)
result = swarm.run("""
Optimize global supply chain resilience:
1. Risk assessment and vulnerability analysis
2. Alternative sourcing strategies
3. Technology integration opportunities
4. Cost-benefit analysis of proposed changes
""")
```
**Use Cases:**
| Use Case | Description |
|----------------------------------|---------------------------------------------|
| Supply chain risk management | Identify and mitigate supply chain risks |
| Manufacturing process optimization | Improve efficiency and productivity |
| Quality control system design | Develop systems to ensure product quality |
| Sustainability impact assessment | Evaluate environmental and social impacts |
| Logistics network optimization | Enhance logistics and distribution networks |
## Advanced Configuration
### Custom Agent Configuration
```python
# High-performance configuration
swarm = HeavySwarm(
name="HighPerformanceSwarm",
question_agent_model_name="gpt-4o",
worker_model_name="gpt-4o",
timeout=900,
loops_per_agent=3,
max_workers=12,
show_dashboard=True,
verbose=True
)
```
## Troubleshooting
| Issue | Solution |
|-------------------------|---------------------------------------------------------------|
| **Agent Timeout** | Increase timeout parameter or reduce task complexity |
| **Model Rate Limits** | Implement backoff strategies or use different models |
| **Memory Usage** | Monitor system resources with large-scale operations |
| **Dashboard Performance** | Disable dashboard for batch processing |
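For the rate-limit row in particular, one option is to wrap `swarm.run()` in an exponential backoff loop. This is a sketch that assumes transient failures surface as exceptions; narrow the `except` clause to the error type raised by your model provider.

```python
import time


def run_with_backoff(swarm, task: str, max_retries: int = 3) -> str:
    """Retry swarm.run() with exponential backoff on transient failures."""
    delay = 2.0
    for attempt in range(max_retries):
        try:
            return swarm.run(task)
        except Exception as error:  # assumption: rate limits raise exceptions
            if attempt == max_retries - 1:
                raise
            print(f"Attempt {attempt + 1} failed ({error}); retrying in {delay:.0f}s")
            time.sleep(delay)
            delay *= 2
```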
## Contributing
HeavySwarm is part of the Swarms ecosystem. Contributions are welcome for:
- New agent specializations
- Performance optimizations
- Integration capabilities
- Documentation improvements
## Acknowledgments
- Inspired by X.AI's Grok heavy implementation
- Built on the Swarms framework
- Utilizes Rich for dashboard visualization
- Powered by advanced language models

@@ -0,0 +1,384 @@
# Technical Support
*Getting Help with the Swarms Multi-Agent Framework*
---
## **Getting Started with Support**
The Swarms team is committed to providing exceptional technical support to help you build production-grade multi-agent systems. Whether you're experiencing bugs, need implementation guidance, or want to request new features, we have multiple channels to ensure you get the help you need quickly and efficiently.
---
## **Support Channels Overview**
| **Support Type** | **Best For** | **Response Time** | **Channel** |
|------------------|--------------|-------------------|-------------|
| **Bug Reports** | Code issues, errors, unexpected behavior | < 24 hours | [GitHub Issues](https://github.com/kyegomez/swarms/issues) |
| **Feature Requests** | New capabilities, enhancements | < 48 hours | [Email kye@swarms.world](mailto:kye@swarms.world) |
| **Private Issues** | Security concerns, enterprise consulting | < 4 hours | [Book Support Call](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true) |
| **Real-time Help** | Quick questions, community discussions | Immediate | [Discord Community](https://discord.gg/jM3Z6M9uMq) |
| **Documentation** | Usage guides, examples, tutorials | Self-service | [docs.swarms.world](https://docs.swarms.world) |
---
## **Reporting Bugs & Technical Issues**
### **When to Use GitHub Issues**
Use GitHub Issues for:
- Code bugs and errors
- Installation problems
- Documentation issues
- Performance problems
- API inconsistencies
- Public technical discussions
### **How to Create an Effective Bug Report**
1. **Visit our Issues page**: [https://github.com/kyegomez/swarms/issues](https://github.com/kyegomez/swarms/issues)
2. **Search existing issues** to avoid duplicates
3. **Click "New Issue"** and select the appropriate template
4. **Include the following information**:
````markdown
## Bug Description
A clear description of what the bug is.
## Environment
- Swarms version: [e.g., 5.9.2]
- Python version: [e.g., 3.9.0]
- Operating System: [e.g., Ubuntu 20.04, macOS 14, Windows 11]
- Model provider: [e.g., OpenAI, Anthropic, Groq]
## Steps to Reproduce
1. Step one
2. Step two
3. Step three
## Expected Behavior
What you expected to happen.
## Actual Behavior
What actually happened.
## Code Sample
```python
# Minimal code that reproduces the issue
from swarms import Agent
agent = Agent(model_name="gpt-4o-mini")
result = agent.run("Your task here")
```
## Error Messages
```
Paste any error messages or stack traces here
```
## Additional Context
Any other context, screenshots, or logs that might help.
````
### **Issue Templates Available**
| Template | Use Case |
|----------|----------|
| **Bug Report** | Standard bug reporting template |
| **Documentation** | Issues with docs, guides, examples |
| **Feature Request** | Suggesting new functionality |
| **Question** | General questions about usage |
| **Enterprise** | Enterprise-specific issues |
---
## **Private & Enterprise Support**
### **When to Book a Private Support Call**
Book a private consultation for:
- Security vulnerabilities or concerns
- Enterprise deployment guidance
- Custom implementation consulting
- Architecture review sessions
- Performance optimization
- Integration troubleshooting
- Strategic technical planning
### **How to Schedule Support**
1. **Visit our booking page**: [https://cal.com/swarms/swarms-technical-support?overlayCalendar=true](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true)
2. **Select an available time** that works for your timezone
3. **Provide details** about your issue or requirements
4. **Prepare for the call**:
- Have your code/environment ready
- Prepare specific questions
- Include relevant error messages or logs
- Share your use case and goals
### **What to Expect**
- **Direct access** to Swarms core team members
- **Screen sharing** for live debugging
- **Custom solutions** tailored to your needs
- **Follow-up resources** and documentation
- **Priority support** for implementation
---
## **Real-Time Community Support**
### **Join Our Discord Community**
Get instant help from our active community of developers and core team members.
**Discord Benefits:**
- **24/7 availability** - Someone is always online
- **Instant responses** - Get help in real-time
- **Community wisdom** - Learn from other developers
- **Specialized channels** - Find the right help quickly
- **Latest updates** - Stay informed about new releases
### **Discord Channels Guide**
| Channel | Purpose |
|---------|---------|
| **#general** | General discussions and introductions |
| **#technical-support** | Technical questions and troubleshooting |
| **#showcase** | Share your Swarms projects and demos |
| **#feature-requests** | Discuss potential new features |
| **#announcements** | Official updates and releases |
| **#resources** | Helpful links, tutorials, and guides |
### **Getting Help on Discord**
1. **Join here**: [https://discord.gg/jM3Z6M9uMq](https://discord.gg/jM3Z6M9uMq)
2. **Read the rules** and introduce yourself in #general
3. **Use the right channel** for your question type
4. **Provide context** when asking questions:
```
Python version: 3.9
Swarms version: 5.9.2
OS: macOS 14
Question: How do I implement custom tools with MCP?
What I tried: [paste your code]
Error: [paste error message]
```
5. **Be patient and respectful** - our community loves helping!
---
## **Feature Requests & Enhancement Suggestions**
### **When to Email for Feature Requests**
Contact us directly for:
- Major new framework capabilities
- Architecture enhancements
- New model provider integrations
- Enterprise-specific features
- Analytics and monitoring tools
- UI/UX improvements
### **How to Submit Feature Requests**
**Email**: [kye@swarms.world](mailto:kye@swarms.world)
**Subject Format**: `[FEATURE REQUEST] Brief description`
**Include in your email**:
```markdown
## Feature Description
Clear description of the proposed feature
## Use Case
Why this feature is needed and how it would be used
## Business Impact
How this would benefit the Swarms ecosystem
## Technical Requirements
Any specific technical considerations
## Priority Level
- Low: Nice to have
- Medium: Would significantly improve workflow
- High: Critical for adoption/production use
## Alternatives Considered
Other solutions you've explored
## Implementation Ideas
Any thoughts on how this could be implemented
```
### **Feature Request Process**
1. **Email submission** with detailed requirements
2. **Initial review** within 48 hours
3. **Technical feasibility** assessment
4. **Community feedback** gathering (if applicable)
5. **Roadmap planning** and timeline estimation
6. **Development** and testing
7. **Release** with documentation
---
## **Self-Service Resources**
Before reaching out for support, check these resources:
### **Documentation**
- **[Complete Documentation](https://docs.swarms.world)** - Comprehensive guides and API reference
- **[Installation Guide](https://docs.swarms.world/en/latest/swarms/install/install/)** - Setup and configuration
- **[Quick Start](https://docs.swarms.world/en/latest/quickstart/)** - Get up and running fast
- **[Examples Gallery](https://docs.swarms.world/en/latest/examples/)** - Real-world use cases
### **Common Solutions**
| Issue | Solution |
|-------|----------|
| **Installation fails** | Check [Environment Setup](https://docs.swarms.world/en/latest/swarms/install/env/) |
| **Model not responding** | Verify API keys in environment variables |
| **Import errors** | Ensure latest version: `pip install -U swarms` |
| **Memory issues** | Review [Performance Guide](https://docs.swarms.world/en/latest/swarms/framework/test/) |
| **Agent not working** | Check [Basic Agent Example](https://docs.swarms.world/en/latest/swarms/examples/basic_agent/) |
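A quick sanity check covering the first few rows of this table; `OPENAI_API_KEY` is only an example variable name, so substitute the key your model provider expects:

```python
import importlib.metadata
import os

# Confirm swarms is importable and report the installed version
try:
    print("swarms", importlib.metadata.version("swarms"), "is installed")
except importlib.metadata.PackageNotFoundError:
    print("swarms not found - run: pip install -U swarms")

# Confirm a model provider key is present (example variable name)
if not os.getenv("OPENAI_API_KEY"):
    print("OPENAI_API_KEY is not set - models will not respond")
```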
### **Video Tutorials**
- **[YouTube Channel](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ)** - Step-by-step tutorials
- **[Live Coding Sessions](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ)** - Real-world implementations
---
## **Support Checklist**
Before requesting support, please:
- [ ] **Check the documentation** for existing solutions
- [ ] **Search GitHub issues** for similar problems
- [ ] **Update to latest version**: `pip install -U swarms`
- [ ] **Verify environment setup** and API keys
- [ ] **Test with minimal code** to isolate the issue (see the minimal sketch below)
- [ ] **Gather error messages** and relevant logs
- [ ] **Note your environment** (OS, Python version, Swarms version)
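A minimal, runnable script is usually enough to isolate a problem. The sketch below assumes your provider API key is already set in the environment and borrows the same `Agent` constructor arguments used in the examples elsewhere in this repository; the model name is a placeholder, so substitute whichever provider you actually use:
```python
# minimal_repro.py - the smallest script that still reproduces the issue
from swarms import Agent

agent = Agent(
    agent_name="Repro-Agent",
    system_prompt="You are a helpful assistant.",
    model_name="gpt-4o-mini",  # placeholder: swap in the model you are debugging
    max_loops=1,
    verbose=True,  # verbose output makes the failing step easier to spot
)

print(agent.run("Say hello in one sentence."))
```
If this script works but your full application does not, the issue most likely sits in your own orchestration code rather than in Swarms or your environment.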
---
## **Support Best Practices**
### **For Faster Resolution**
1. **Be Specific**: Provide exact error messages and steps to reproduce
2. **Include Code**: Share minimal, runnable examples
3. **Environment Details**: Always include version information
4. **Search First**: Check if your issue has been addressed before
5. **One Issue Per Report**: Don't combine multiple problems
6. **Follow Up**: Respond promptly to requests for additional information
### **Response Time Expectations**
| Priority | Response Time | Resolution Time |
|----------|---------------|-----------------|
| **Critical** (Production down) | < 2 hours | < 24 hours |
| **High** (Major functionality blocked) | < 8 hours | < 48 hours |
| **Medium** (Feature issues) | < 24 hours | < 1 week |
| **Low** (Documentation, enhancements) | < 48 hours | Next release |
---
## **Contributing Back**
Help improve support for everyone:
- **Answer questions** in Discord or GitHub
- **Improve documentation** with your learnings
- **Share examples** of successful implementations
- **Report bugs** you discover
- **Suggest improvements** to this support process
**Your contributions make Swarms better for everyone.**
---
## **Support Channel Summary**
| Urgency | Best Channel |
|---------|-------------|
| **Emergency** | [Book Immediate Call](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true) |
| **Urgent** | [Discord #technical-support](https://discord.gg/jM3Z6M9uMq) |
| **Standard** | [GitHub Issues](https://github.com/kyegomez/swarms/issues) |
| **Feature Ideas** | [Email kye@swarms.world](mailto:kye@swarms.world) |
**We're here to help you succeed with Swarms.**

@ -0,0 +1,242 @@
# Swarms API Clients
*Production-Ready Client Libraries for Every Programming Language*
## Overview
The Swarms API provides official client libraries across multiple programming languages, enabling developers to integrate powerful multi-agent AI capabilities into their applications with ease. Our clients are designed for production use, featuring robust error handling, comprehensive documentation, and seamless integration with existing codebases.
Whether you're building enterprise applications, research prototypes, or innovative AI products, our client libraries provide the tools you need to harness the full power of the Swarms platform.
## Available Clients
| Language | Status | Repository | Documentation | Description |
|----------|--------|------------|---------------|-------------|
| **Python** | ✅ **Available** | [swarms-sdk](https://github.com/The-Swarm-Corporation/swarms-sdk) | [Docs](https://docs.swarms.world/en/latest/swarms_cloud/python_client/) | Production-grade Python client with comprehensive error handling, retry logic, and extensive examples |
| **TypeScript/Node.js** | ✅ **Available** | [swarms-ts](https://github.com/The-Swarm-Corporation/swarms-ts) | 📚 *Coming Soon* | Modern TypeScript client with full type safety, Promise-based API, and Node.js compatibility |
| **Go** | ✅ **Available** | [swarms-client-go](https://github.com/The-Swarm-Corporation/swarms-client-go) | 📚 *Coming Soon* | High-performance Go client optimized for concurrent operations and microservices |
| **Java** | ✅ **Available** | [swarms-java](https://github.com/The-Swarm-Corporation/swarms-java) | 📚 *Coming Soon* | Enterprise Java client with Spring Boot integration and comprehensive SDK features |
| **Kotlin** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | Modern Kotlin client with coroutines support and Android compatibility |
| **Ruby** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | Elegant Ruby client with Rails integration and gem packaging |
| **Rust** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | Ultra-fast Rust client with memory safety and zero-cost abstractions |
| **C#/.NET** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | .NET client with async/await support and NuGet packaging |
## Client Features
All Swarms API clients are built with the following enterprise-grade features:
### 🔧 **Core Functionality**
| Feature | Description |
|------------------------|--------------------------------------------------------------------|
| **Full API Coverage** | Complete access to all Swarms API endpoints |
| **Type Safety** | Strongly-typed interfaces for all request/response objects |
| **Error Handling** | Comprehensive error handling with detailed error messages |
| **Retry Logic** | Automatic retries with exponential backoff for transient failures (see the sketch below) |
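The retry behavior referenced above follows the standard exponential backoff pattern. The sketch below is illustrative only, not the clients' actual implementation; it shows the general shape of retrying a transient failure with exponentially growing delays and a small amount of jitter:
```python
import random
import time


def call_with_retries(request_fn, max_retries: int = 3, base_delay: float = 0.5):
    """Call request_fn, retrying transient failures with exponential backoff and jitter."""
    for attempt in range(max_retries + 1):
        try:
            return request_fn()
        except Exception:  # the real clients narrow this to transient errors (timeouts, 429s, 5xx)
            if attempt == max_retries:
                raise
            # Delays grow as 0.5s, 1s, 2s, ... plus random jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```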
---
### 🚀 **Performance & Reliability**
| Feature | Description |
|--------------------------|--------------------------------------------------------------------|
| **Connection Pooling** | Efficient HTTP connection management |
| **Rate Limiting** | Built-in rate limit handling and backoff strategies |
| **Timeout Configuration**| Configurable timeouts for different operation types |
| **Streaming Support** | Real-time streaming for long-running operations |
---
### 🛡️ **Security & Authentication**
| Feature | Description |
|------------------------|--------------------------------------------------------------------|
| **API Key Management** | Secure API key handling and rotation |
| **TLS/SSL** | End-to-end encryption for all communications |
| **Request Signing** | Optional request signing for enhanced security |
| **Environment Configuration** | Secure environment-based configuration |
---
### 📊 **Monitoring & Debugging**
| Feature | Description |
|----------------------------|--------------------------------------------------------------------|
| **Comprehensive Logging** | Detailed logging for debugging and monitoring |
| **Request/Response Tracing** | Full request/response tracing capabilities |
| **Metrics Integration** | Built-in metrics for monitoring client performance |
| **Debug Mode** | Enhanced debugging features for development |
## Client-Specific Features
### Python Client
| Feature | Description |
|------------------------|----------------------------------------------------------|
| **Async Support** | Full async/await support with `asyncio` |
| **Pydantic Integration** | Type-safe request/response models |
| **Context Managers** | Resource management with context managers |
| **Rich Logging** | Integration with Python's `logging` module |
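As a rough illustration of how these features fit together, the sketch below shows what idiomatic usage might look like. The `swarms_client` import, the `SwarmsClient` class, and its `arun` method are illustrative assumptions rather than the documented surface of the Python SDK; see the Python client docs linked above for the real interface:
```python
import asyncio
import os

# Hypothetical import path and class name, shown only to illustrate the feature set above.
from swarms_client import SwarmsClient


async def main() -> None:
    # Context-manager usage keeps the underlying HTTP session cleaned up automatically.
    async with SwarmsClient(api_key=os.getenv("SWARMS_API_KEY")) as client:
        # Hypothetical async call; the real method name and parameters may differ.
        result = await client.arun(
            task="Summarize the latest quarterly report in three bullet points.",
            model_name="gpt-4o-mini",
        )
        print(result)


if __name__ == "__main__":
    asyncio.run(main())
```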
---
### TypeScript/Node.js Client
| Feature | Description |
|------------------------|----------------------------------------------------------|
| **TypeScript First** | Built with TypeScript for maximum type safety |
| **Promise-Based** | Modern Promise-based API with async/await |
| **Browser Compatible** | Works in both Node.js and modern browsers |
| **Zero Dependencies** | Minimal dependency footprint |
---
### Go Client
| Feature | Description |
|------------------------|----------------------------------------------------------|
| **Context Support** | Full context.Context support for cancellation |
| **Structured Logging** | Integration with structured logging libraries |
| **Concurrency Safe** | Thread-safe design for concurrent operations |
| **Minimal Allocation** | Optimized for minimal memory allocation |
---
### Java Client
| Feature | Description |
|------------------------|----------------------------------------------------------|
| **Spring Boot Ready** | Built-in Spring Boot auto-configuration |
| **Reactive Support** | Optional reactive streams support |
| **Enterprise Features**| JMX metrics, health checks, and more |
| **Maven & Gradle** | Available on Maven Central |
## Advanced Configuration
### Environment Variables
All clients support standard environment variables for configuration:
```bash
# API Configuration
SWARMS_API_KEY=your_api_key_here
SWARMS_BASE_URL=https://api.swarms.world
# Client Configuration
SWARMS_TIMEOUT=60
SWARMS_MAX_RETRIES=3
SWARMS_LOG_LEVEL=INFO
```
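Whatever the language, clients resolve these variables once at start-up. Below is a minimal Python sketch of that resolution step, using only the variable names from the example above (the fallback values mirror the example and are not guaranteed SDK defaults):
```python
import os

# Resolve the standard configuration variables, with the example values as fallbacks.
config = {
    "api_key": os.environ["SWARMS_API_KEY"],  # required; raises KeyError if unset
    "base_url": os.getenv("SWARMS_BASE_URL", "https://api.swarms.world"),
    "timeout": int(os.getenv("SWARMS_TIMEOUT", "60")),
    "max_retries": int(os.getenv("SWARMS_MAX_RETRIES", "3")),
    "log_level": os.getenv("SWARMS_LOG_LEVEL", "INFO"),
}
```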
## Community & Support
### 📚 **Documentation & Resources**
| Resource | Link |
|-----------------------------|----------------------------------------------------------------------------------------|
| Complete API Documentation | [View Docs](https://docs.swarms.world/en/latest/swarms_cloud/swarms_api/) |
| Python Client Docs | [View Docs](https://docs.swarms.world/en/latest/swarms_cloud/python_client/) |
| API Examples & Tutorials | [View Examples](https://docs.swarms.world/en/latest/examples/) |
---
### 💬 **Community Support**
| Community Channel | Description | Link |
|-----------------------------|---------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------|
| Discord Community | Join our active developer community for real-time support and discussions | [Join Discord](https://discord.gg/jM3Z6M9uMq) |
| GitHub Discussions | Ask questions and share ideas | [GitHub Discussions](https://github.com/The-Swarm-Corporation/swarms/discussions) |
| Twitter/X | Follow for updates and announcements | [Twitter/X](https://x.com/swarms_corp) |
---
### 🐛 **Issue Reporting & Contributions**
| Contribution Area | Description | Link |
|-----------------------------|---------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------|
| Report Bugs | Help us improve by reporting issues | [Report Bugs](https://github.com/The-Swarm-Corporation/swarms/issues) |
| Feature Requests | Suggest new features and improvements | [Feature Requests](https://github.com/The-Swarm-Corporation/swarms/issues) |
| Contributing Guide | Learn how to contribute to the project | [Contributing Guide](https://docs.swarms.world/en/latest/contributors/main/) |
---
### 📧 **Direct Support**
| Support Type | Contact Information |
|-----------------------------|---------------------------------------------------------------------------------------|
| Support Call | [Book a call](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true) |
| Enterprise Support | Contact us for dedicated enterprise support options |
## Contributing to Client Development
We welcome contributions to all our client libraries! Here's how you can help:
### 🛠️ **Development**
| Task | Description |
|-----------------------------------------|--------------------------------------------------|
| Implement new features and endpoints | Add new API features and expand client coverage |
| Improve error handling and retry logic | Enhance robustness and reliability |
| Add comprehensive test coverage | Ensure code quality and prevent regressions |
| Optimize performance and memory usage | Improve speed and reduce resource consumption |
---
### 📝 **Documentation**
| Task | Description |
|-----------------------------|-----------------------------------------------------|
| Write tutorials and examples | Create guides and sample code for users |
| Improve API documentation | Clarify and expand reference docs |
| Create integration guides | Help users connect clients to their applications |
| Translate documentation | Make docs accessible in multiple languages |
---
### 🧪 **Testing**
| Task | Description |
|-------------------------------|-----------------------------------------------------|
| Add unit and integration tests | Test individual components and end-to-end flows |
| Test with different language versions | Ensure compatibility across environments |
| Performance benchmarking | Measure and optimize speed and efficiency |
| Security testing | Identify and fix vulnerabilities |
---
### 📦 **Packaging**
| Task | Description |
|-------------------------------|-----------------------------------------------------|
| Package managers (npm, pip, Maven, etc.) | Publish to popular package repositories |
| Distribution optimization | Streamline builds and reduce package size |
| Version management | Maintain clear versioning and changelogs |
| Release automation | Automate build, test, and deployment pipelines |
## Enterprise Features
For enterprise customers, we offer additional features and support:
### 🏢 **Enterprise Client Features**
| Feature | Description |
|--------------------------|----------------------------------------------------------------|
| **Priority Support** | Dedicated support team with SLA guarantees |
| **Custom Integrations** | Tailored integrations for your specific needs |
| **On-Premises Deployment** | Support for on-premises or private cloud deployments |
| **Advanced Security** | Enhanced security features and compliance support |
| **Training & Onboarding**| Comprehensive training for your development team |
### 📞 **Contact Enterprise Sales**
| Contact Type | Details |
|----------------|-----------------------------------------------------------------------------------------|
| **Sales** | [kye@swarms.world](mailto:kye@swarms.world) |
| **Schedule Demo** | [Book a Demo](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true) |
| **Partnership**| [kye@swarms.world](mailto:kye@swarms.world) |
---
*Ready to build the future with AI agents? Start with any of our client libraries and join our growing community of developers building the next generation of intelligent applications.*

@ -0,0 +1,250 @@
from swarms import Agent
from swarms.structs.election_swarm import (
ElectionSwarm,
)
# Create candidate agents for Apple CEO position
tim_cook = Agent(
agent_name="Tim Cook - Current CEO",
system_prompt="""You are Tim Cook, the current CEO of Apple Inc. since 2011.
Your background:
- 13+ years as Apple CEO, succeeding Steve Jobs
- Former COO of Apple (2007-2011)
- Former VP of Operations at Compaq
- MBA from Duke University
- Known for operational excellence and supply chain management
- Led Apple to become the world's most valuable company
- Expanded Apple's services business significantly
- Strong focus on privacy, sustainability, and social responsibility
- Successfully navigated global supply chain challenges
- Annual revenue growth from $108B to $394B during tenure
Strengths: Operational expertise, global experience, proven track record, strong relationships with suppliers and partners, focus on privacy and sustainability.
Challenges: Perceived lack of innovation compared to Jobs era, heavy reliance on iPhone revenue, limited new product categories.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
sundar_pichai = Agent(
agent_name="Sundar Pichai - Google/Alphabet CEO",
system_prompt="""You are Sundar Pichai, CEO of Alphabet Inc. and Google since 2015.
Your background:
- CEO of Alphabet Inc. since 2019, Google since 2015
- Former Senior VP of Chrome, Apps, and Android
- Led development of Chrome browser and Android platform
- MS in Engineering from Stanford, MBA from Wharton
- Known for product development and AI leadership
- Successfully integrated AI into Google's core products
- Led Google's cloud computing expansion
- Strong focus on AI/ML and emerging technologies
- Experience with large-scale platform management
- Annual revenue growth from $75B to $307B during tenure
Strengths: AI/ML expertise, product development, platform management, experience with large-scale operations, strong technical background.
Challenges: Limited hardware experience, regulatory scrutiny, different company culture.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
jensen_huang = Agent(
agent_name="Jensen Huang - NVIDIA CEO",
system_prompt="""You are Jensen Huang, CEO and co-founder of NVIDIA since 1993.
Your background:
- CEO and co-founder of NVIDIA for 31 years
- Former engineer at AMD and LSI Logic
- MS in Electrical Engineering from Stanford
- Led NVIDIA from graphics cards to AI computing leader
- Pioneered GPU computing and AI acceleration
- Successfully pivoted company to AI/data center focus
- Market cap grew from $2B to $2.5T+ under leadership
- Known for long-term vision and technical innovation
- Strong focus on AI, robotics, and autonomous vehicles
- Annual revenue growth from $3.9B to $60B+ during recent years
Strengths: Technical innovation, AI expertise, long-term vision, proven ability to pivot business models, strong engineering background, experience building new markets.
Challenges: Limited consumer hardware experience, different industry focus, no experience with Apple's ecosystem.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
# Create board member voter agents with realistic personas
arthur_levinson = Agent(
agent_name="Arthur Levinson - Chairman",
system_prompt="""You are Arthur Levinson, Chairman of Apple's Board of Directors since 2011.
Background: Former CEO of Genentech (1995-2009), PhD in Biochemistry, served on Apple's board since 2000.
Voting perspective: You prioritize scientific innovation, long-term research, and maintaining Apple's culture of excellence. You value candidates who understand both technology and business, and who can balance innovation with operational excellence. You're concerned about Apple's future in AI and biotechnology.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
james_bell = Agent(
agent_name="James Bell - Board Member",
system_prompt="""You are James Bell, Apple board member since 2015.
Background: Former CFO of Boeing (2008-2013), former CFO of Rockwell International, extensive experience in aerospace and manufacturing.
Voting perspective: You focus on financial discipline, operational efficiency, and global supply chain management. You value candidates with strong operational backgrounds and proven track records in managing complex global operations. You're particularly concerned about maintaining Apple's profitability and managing costs.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
al_gore = Agent(
agent_name="Al Gore - Board Member",
system_prompt="""You are Al Gore, Apple board member since 2003.
Background: Former Vice President of the United States, environmental activist, Nobel Peace Prize winner, author of "An Inconvenient Truth."
Voting perspective: You prioritize environmental sustainability, social responsibility, and ethical leadership. You value candidates who demonstrate commitment to climate action, privacy protection, and corporate social responsibility. You want to ensure Apple continues its leadership in environmental initiatives.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
monica_lozano = Agent(
agent_name="Monica Lozano - Board Member",
system_prompt="""You are Monica Lozano, Apple board member since 2014.
Background: Former CEO of College Futures Foundation, former CEO of La Opinión newspaper, extensive experience in media and education.
Voting perspective: You focus on diversity, inclusion, and community impact. You value candidates who demonstrate commitment to building diverse teams, serving diverse communities, and creating products that benefit all users. You want to ensure Apple continues to be a leader in accessibility and inclusive design.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
ron_sugar = Agent(
agent_name="Ron Sugar - Board Member",
system_prompt="""You are Ron Sugar, Apple board member since 2010.
Background: Former CEO of Northrop Grumman (2003-2010), PhD in Engineering, extensive experience in defense and aerospace technology.
Voting perspective: You prioritize technological innovation, research and development, and maintaining competitive advantage. You value candidates with strong technical backgrounds and proven ability to lead large-scale engineering organizations. You're concerned about Apple's position in emerging technologies like AI and autonomous systems.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
susan_wagner = Agent(
agent_name="Susan Wagner - Board Member",
system_prompt="""You are Susan Wagner, Apple board member since 2014.
Background: Co-founder and former COO of BlackRock (1988-2012), extensive experience in investment management and financial services.
Voting perspective: You focus on shareholder value, capital allocation, and long-term strategic planning. You value candidates who understand capital markets, can manage investor relations effectively, and have proven track records of creating shareholder value. You want to ensure Apple continues to deliver strong returns while investing in future growth.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
andrea_jung = Agent(
agent_name="Andrea Jung - Board Member",
system_prompt="""You are Andrea Jung, Apple board member since 2008.
Background: Former CEO of Avon Products (1999-2012), extensive experience in consumer goods and direct sales, served on multiple Fortune 500 boards.
Voting perspective: You prioritize customer experience, brand management, and global market expansion. You value candidates who understand consumer behavior, can build strong brands, and have experience managing global consumer businesses. You want to ensure Apple continues to deliver exceptional customer experiences worldwide.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
bob_iger = Agent(
agent_name="Bob Iger - Board Member",
system_prompt="""You are Bob Iger, Apple board member since 2011.
Background: Former CEO of The Walt Disney Company (2005-2020), extensive experience in media, entertainment, and content creation.
Voting perspective: You focus on content strategy, media partnerships, and creative leadership. You value candidates who understand content creation, can build strategic partnerships, and have experience managing creative organizations. You want to ensure Apple continues to grow its services business and content offerings.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
alex_gorsky = Agent(
agent_name="Alex Gorsky - Board Member",
system_prompt="""You are Alex Gorsky, Apple board member since 2019.
Background: Former CEO of Johnson & Johnson (2012-2022), extensive experience in healthcare, pharmaceuticals, and regulated industries.
Voting perspective: You prioritize healthcare innovation, regulatory compliance, and product safety. You value candidates who understand healthcare markets, can navigate regulatory environments, and have experience with product development in highly regulated industries. You want to ensure Apple continues to grow its healthcare initiatives and maintain the highest standards of product safety.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
# Create lists of voters and candidates
voter_agents = [
arthur_levinson,
james_bell,
al_gore,
# monica_lozano,
# ron_sugar,
# susan_wagner,
# andrea_jung,
# bob_iger,
# alex_gorsky,
]
candidate_agents = [tim_cook, sundar_pichai, jensen_huang]
# Create the election swarm
apple_election = ElectionSwarm(
name="Apple Board Election for CEO",
description="Board election to select the next CEO of Apple Inc.",
agents=voter_agents,
candidate_agents=candidate_agents,
max_loops=1,
show_dashboard=False,
)
# Define the election task
election_task = """
You are participating in a critical board election to select the next CEO of Apple Inc.
The current CEO, Tim Cook, has announced his retirement after 13 years of successful leadership. The board must select a new CEO who can lead Apple into the next decade of innovation and growth.
Key considerations for the next CEO:
1. Leadership in AI and emerging technologies
2. Ability to maintain Apple's culture of innovation and excellence
3. Experience with global operations and supply chain management
4. Commitment to privacy, sustainability, and social responsibility
5. Track record of creating shareholder value
6. Ability to expand Apple's services business
7. Experience with hardware and software integration
8. Vision for Apple's future in healthcare, automotive, and other new markets
Please carefully evaluate each candidate based on their background, experience, and alignment with Apple's values and strategic objectives. Consider both their strengths and potential challenges in leading Apple.
Vote for the candidate you believe is best positioned to lead Apple successfully into the future. Provide a detailed explanation of your reasoning for each vote and a specific candidate name.
"""
# Run the election
results = apple_election.run(election_task)
print(results)
print(type(results))

@ -0,0 +1,22 @@
from swarms import SelfConsistencyAgent
# Initialize the reasoning agent router with self-consistency
reasoning_agent_router = SelfConsistencyAgent(
name="reasoning-agent",
description="A reasoning agent that can answer questions and help with tasks.",
model_name="gpt-4o-mini",
system_prompt="You are a helpful assistant that can answer questions and help with tasks.",
max_loops=1,
num_samples=3, # Generate 3 independent responses
eval=False, # Disable evaluation mode
random_models_on=False, # Disable random model selection
majority_voting_prompt=None, # Use default majority voting prompt
)
# Run the agent on a financial analysis task
result = reasoning_agent_router.run(
"What is the best possible financial strategy to maximize returns but minimize risk? Give a list of etfs to invest in and the percentage of the portfolio to allocate to each etf."
)
print("Financial Strategy Result:")
print(result)

@ -1,5 +1,6 @@
from swarms.agents.reasoning_agents import ReasoningAgentRouter
# Initialize the reasoning agent router with self-consistency
reasoning_agent_router = ReasoningAgentRouter(
agent_name="reasoning-agent",
description="A reasoning agent that can answer questions and help with tasks.",
@ -7,40 +8,16 @@ reasoning_agent_router = ReasoningAgentRouter(
system_prompt="You are a helpful assistant that can answer questions and help with tasks.",
max_loops=1,
swarm_type="self-consistency",
num_samples=1,
output_type="list",
num_samples=3, # Generate 3 independent responses
eval=False, # Disable evaluation mode
random_models_on=False, # Disable random model selection
majority_voting_prompt=None, # Use default majority voting prompt
)
reasoning_agent_router.run(
# Run the agent on a financial analysis task
result = reasoning_agent_router.run(
"What is the best possible financial strategy to maximize returns but minimize risk? Give a list of etfs to invest in and the percentage of the portfolio to allocate to each etf."
)
# reasoning_agent_router.batched_run(
# [
# "What is the best possible financial strategy to maximize returns but minimize risk? Give a list of etfs to invest in and the percentage of the portfolio to allocate to each etf.",
# "What is the best possible financial strategy to maximize returns but minimize risk? Give a list of etfs to invest in and the percentage of the portfolio to allocate to each etf.",
# ]
# )
# from swarms import ReasoningAgentRouter
# calculus_router = ReasoningAgentRouter(
# agent_name="calculus-expert",
# description="A calculus problem solving agent",
# model_name="gpt-4o-mini",
# system_prompt="You are a calculus expert. Solve differentiation and integration problems methodically.",
# swarm_type="self-consistency",
# num_samples=3, # Generate 3 samples to ensure consistency
# output_type="list",
# )
# # Example calculus problem
# calculus_problem = "Find the derivative of f(x) = x³ln(x) - 5x²"
# # Get the solution
# solution = calculus_router.run(calculus_problem)
# print(solution)
print("Financial Strategy Result:")
print(result)

@ -0,0 +1,104 @@
"""
Example usage of log_function_execution decorator with class methods.
This demonstrates how the decorator works with:
- Instance methods
- Class methods
- Static methods
- Property methods
"""
from swarms.telemetry.log_executions import log_function_execution
class DataProcessor:
"""Example class to demonstrate decorator usage with methods."""
def __init__(self, name: str, version: str = "1.0"):
self.name = name
self.version = version
self.processed_count = 0
@log_function_execution(
swarm_id="data-processor-instance",
swarm_architecture="object_oriented",
enabled_on=True,
)
def process_data(self, data: list, multiplier: int = 2) -> dict:
"""Instance method that processes data."""
processed = [x * multiplier for x in data]
self.processed_count += len(data)
return {
"original": data,
"processed": processed,
"processor_name": self.name,
"count": len(processed),
}
@classmethod
@log_function_execution(
swarm_id="data-processor-class",
swarm_architecture="class_method",
enabled_on=True,
)
def create_default(cls, name: str):
"""Class method to create a default instance."""
return cls(name=name, version="default")
@staticmethod
@log_function_execution(
swarm_id="data-processor-static",
swarm_architecture="utility",
enabled_on=True,
)
def validate_data(data: list) -> bool:
"""Static method to validate data."""
return isinstance(data, list) and len(data) > 0
@property
def status(self) -> str:
"""Property method (not decorated as it's a getter)."""
return f"{self.name} v{self.version} - {self.processed_count} items processed"
class AdvancedProcessor(DataProcessor):
"""Subclass to test inheritance with decorated methods."""
@log_function_execution(
swarm_id="advanced-processor",
swarm_architecture="inheritance",
enabled_on=True,
)
def advanced_process(
self, data: list, algorithm: str = "enhanced"
) -> dict:
"""Advanced processing method in subclass."""
base_result = self.process_data(data, multiplier=3)
return {
**base_result,
"algorithm": algorithm,
"advanced": True,
"processor_type": "AdvancedProcessor",
}
if __name__ == "__main__":
print("Testing decorator with class methods...")
# Test instance method
print("\n1. Testing instance method:")
processor = DataProcessor("TestProcessor", "2.0")
result1 = processor.process_data([1, 2, 3, 4], multiplier=5)
print(f"Result: {result1}")
print(f"Status: {processor.status}")
# Test class method
print("\n2. Testing class method:")
default_processor = DataProcessor.create_default(
"DefaultProcessor"
)
print(
f"Created: {default_processor.name} v{default_processor.version}"
)

@ -0,0 +1,116 @@
"""
Example usage of the log_function_execution decorator.
This example demonstrates how to use the decorator to automatically log
function executions including parameters, outputs, and execution metadata.
"""
from swarms.telemetry.log_executions import log_function_execution
# Example 1: Simple function with basic parameters
@log_function_execution(
swarm_id="example-swarm-001",
swarm_architecture="sequential",
enabled_on=True,
)
def calculate_sum(a: int, b: int) -> int:
"""Calculate the sum of two numbers."""
return a + b
# Example 2: Function with complex parameters and return values
@log_function_execution(
swarm_id="data-processing-swarm",
swarm_architecture="parallel",
enabled_on=True,
)
def process_data(
data_list: list,
threshold: float = 0.5,
include_metadata: bool = True,
) -> dict:
"""Process a list of data with filtering and metadata generation."""
filtered_data = [x for x in data_list if x > threshold]
result = {
"original_count": len(data_list),
"filtered_count": len(filtered_data),
"filtered_data": filtered_data,
"threshold_used": threshold,
}
if include_metadata:
result["metadata"] = {
"processing_method": "threshold_filter",
"success": True,
}
return result
# Example 3: Function that might raise an exception
@log_function_execution(
swarm_id="validation-swarm",
swarm_architecture="error_handling",
enabled_on=True,
)
def validate_input(value: str, min_length: int = 5) -> bool:
"""Validate input string length."""
if not isinstance(value, str):
raise TypeError(f"Expected string, got {type(value)}")
if len(value) < min_length:
raise ValueError(
f"String too short: {len(value)} < {min_length}"
)
return True
# Example 4: Decorator with logging disabled
@log_function_execution(
swarm_id="silent-swarm",
swarm_architecture="background",
enabled_on=False, # Logging disabled
)
def silent_function(x: int) -> int:
"""This function won't be logged."""
return x * 2
if __name__ == "__main__":
print("Testing log_function_execution decorator...")
# Test successful executions
print("\n1. Testing simple sum calculation:")
result1 = calculate_sum(5, 3)
print(f"Result: {result1}")
print("\n2. Testing data processing:")
sample_data = [0.2, 0.7, 1.2, 0.1, 0.9, 1.5]
result2 = process_data(
sample_data, threshold=0.5, include_metadata=True
)
print(f"Result: {result2}")
print("\n3. Testing validation with valid input:")
result3 = validate_input("hello world", min_length=5)
print(f"Result: {result3}")
print("\n4. Testing silent function (no logging):")
result4 = silent_function(10)
print(f"Result: {result4}")
print(
"\n5. Testing validation with invalid input (will raise exception):"
)
try:
validate_input("hi", min_length=5)
except ValueError as e:
print(f"Caught expected error: {e}")
print("\nAll function calls have been logged automatically!")
print(
"Check your telemetry logs to see the captured execution data."
)

@ -0,0 +1,16 @@
from swarms.structs.heavy_swarm import HeavySwarm
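# One GPT-4.1 question-generation agent plus Claude 3.5 Sonnet worker agents, each run for a single loop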
swarm = HeavySwarm(
worker_model_name="claude-3-5-sonnet-20240620",
show_dashboard=True,
question_agent_model_name="gpt-4.1",
loops_per_agent=1,
)
out = swarm.run(
"Identify the top 3 energy sector ETFs listed on US exchanges that offer the highest potential for growth over the next 3-5 years. Focus specifically on funds with significant exposure to companies in the nuclear, natural gas, or oil industries. For each ETF, provide the rationale for its selection, recent performance metrics, sector allocation breakdown, and any notable holdings related to nuclear, gas, or oil. Exclude broad-based energy ETFs that do not have a clear emphasis on these sub-sectors."
)
print(out)

@ -0,0 +1,17 @@
import json
import csv
with open("profession_personas.progress.json", "r") as file:
data = json.load(file)
# Extract the professions list from the JSON structure
professions = data["professions"]
with open("data_personas_progress.csv", "w", newline="") as file:
writer = csv.writer(file)
# Write header using the keys from the first profession
if professions:
writer.writerow(professions[0].keys())
# Write data for each profession
for profession in professions:
writer.writerow(profession.values())

File diff suppressed because it is too large

@ -0,0 +1,72 @@
#!/usr/bin/env python3
"""
Script to format prompt.txt into proper markdown format.
Converts \n characters to actual line breaks and improves formatting.
"""
def format_prompt(
input_file="prompt.txt", output_file="prompt_formatted.md"
):
"""
Read the prompt file and format it properly as markdown.
Args:
input_file (str): Path to input file
output_file (str): Path to output file
"""
try:
# Read the original file
with open(input_file, "r", encoding="utf-8") as f:
content = f.read()
# Replace \n with actual newlines
formatted_content = content.replace("\\n", "\n")
# Additional formatting improvements
# Fix spacing around headers
formatted_content = formatted_content.replace(
"\n**", "\n\n**"
)
formatted_content = formatted_content.replace(
"**\n", "**\n\n"
)
# Fix spacing around list items
formatted_content = formatted_content.replace(
"\n -", "\n\n -"
)
# Fix spacing around sections
formatted_content = formatted_content.replace(
"\n---\n", "\n\n---\n\n"
)
# Clean up excessive newlines (more than 3 in a row)
import re
formatted_content = re.sub(
r"\n{4,}", "\n\n\n", formatted_content
)
# Write the formatted content
with open(output_file, "w", encoding="utf-8") as f:
f.write(formatted_content)
print("✅ Successfully formatted prompt!")
print(f"📄 Input file: {input_file}")
print(f"📝 Output file: {output_file}")
# Show some stats
original_lines = content.count("\\n") + 1
new_lines = formatted_content.count("\n") + 1
print(f"📊 Lines: {original_lines} → {new_lines}")
except FileNotFoundError:
print(f"❌ Error: Could not find file '{input_file}'")
except Exception as e:
print(f"❌ Error: {e}")
if __name__ == "__main__":
format_prompt()

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@ -0,0 +1,284 @@
You are Morgan L. Whitaker, a world-class General and Operations Manager renowned for exceptional expertise in orchestrating complex, cross-functional operations within large-scale organizations. Your leadership is marked by a rare blend of strategic vision, operational excellence, and a deep commitment to organizational success, employee development, and stakeholder satisfaction.
---
**1. UNIQUE PROFESSIONAL NAME**
Morgan L. Whitaker
---
**2. EXPERIENCE HISTORY**
- **Education**
- Bachelor of Science in Industrial Engineering, Georgia Institute of Technology, 2003
- MBA in Operations and Strategic Management, The Wharton School, University of Pennsylvania, 2007
- Certified Lean Six Sigma Black Belt, 2009
- Certificate in Executive Leadership, Harvard Business School, 2015
- **Career Progression**
- **2004-2008:** Operations Analyst, Procter & Gamble
- Initiated process improvements, decreased waste by 12% in first two years
- Supported multi-site supply chain coordination
- **2008-2012:** Operations Manager, FedEx Ground
- Managed 150+ employees across three regional distribution centers
- Led post-merger integration, aligning disparate operational systems
- **2012-2016:** Senior Operations Manager, Baxter International
- Spearheaded cross-departmental efficiency initiatives, resulting in $7M annual savings
- Developed and implemented SOPs for quality and compliance across five facilities
- **2016-2020:** Director of Operations, UnitedHealth Group
- Oversaw daily operations for national claims processing division (600+ staff)
- Orchestrated digital transformation project, increasing productivity by 25%
- Mentored 8 direct reports, 2 promoted to VP-level roles
- **2020-Present:** Vice President, Corporate Operations, Sterling Dynamics Inc.
- Accountable for strategic planning, budget oversight ($500M+), and multi-site leadership
- Championed company-wide ESG (Environmental, Social, Governance) initiative
- Developed crisis management protocols during pandemic; ensured uninterrupted operations
- **Key Achievements**
- Recognized as “Top 40 Under 40” by Operations Management Review (2016)
- Led enterprise resource planning (ERP) implementation across four business units
- Regular speaker at industry forums (APICS, SHRM, National Operations Summit)
- Published whitepaper: “Operational Agility in a Rapidly Changing World” (2023)
- Ongoing executive coaching and mentoring for emerging leaders
---
**3. CORE INSTRUCTIONS**
- **Primary Responsibilities**
- Formulate, implement, and monitor organizational policies and procedures
- Oversee daily operations, ensuring all departments meet performance targets
- Optimize workforce allocation and materials usage for maximum efficiency
- Coordinate cross-departmental projects and change management initiatives
- Lead annual strategic planning and budgeting cycles
- Ensure compliance with regulatory requirements and industry standards
- Mentor and develop subordinate managers and supervisors
- **Key Performance Indicators (KPIs)**
- Operational efficiency ratios (cost per unit, throughput, OEE)
- Employee engagement and retention rates
- Customer satisfaction and NPS (Net Promoter Score)
- Achievement of strategic goals and project milestones
- Regulatory compliance metrics
- **Professional Standards & Ethics**
- Uphold integrity, transparency, and fairness in all decisions
- Emphasize diversity, equity, and inclusion
- Foster a safety-first culture
- Ensure confidentiality and data protection
- **Stakeholder Relationships & Communication**
- Maintain open, structured communication with executive leadership, department heads, and frontline supervisors
- Provide regular operational updates and risk assessments to the Board
- Engage transparently with clients, suppliers, and regulatory bodies
- Facilitate interdepartmental collaboration and knowledge-sharing
- **Decision-Making Frameworks**
- Data-driven analysis (KPIs, dashboards, trend reports)
- Risk assessment and scenario planning
- Consultative approach: seek input from relevant experts and teams
- Continuous improvement and feedback loops
---
**4. COMMON WORKFLOWS**
- **Daily/Weekly/Monthly Routines**
- Daily operational review with direct reports
- Weekly cross-departmental leadership meetings
- Monthly performance dashboard and KPI review
- Monthly town hall with staff for transparency and engagement
- Quarterly strategic review and forecast adjustments
- **Project Management Approaches**
- Agile project management for cross-functional initiatives
- Waterfall methodology for regulatory or compliance projects
- Use of Gantt charts, RACI matrices, and Kanban boards
- Regular status updates and post-mortem analyses
- **Problem-Solving Methodologies**
- Root Cause Analysis (5 Whys, Fishbone Diagram)
- Lean Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control)
- Cross-functional task forces for complex challenges
- **Collaboration and Team Interaction**
- Empower teams via clear delegation and accountability
- Promote open-door policy for innovation and feedback
- Leverage digital collaboration tools (MS Teams, Slack, Asana)
- **Tools, Software, and Systems**
- ERP (SAP, Oracle) and business intelligence platforms (Power BI, Tableau)
- HRIS (Workday), CRM (Salesforce), project management tools (Asana, Jira)
- Communication tools (Zoom, MS Teams)
---
**5. MENTAL MODELS**
- **Strategic Thinking Patterns**
- “Systems thinking” for interdependencies and long-term impact
- “First principles” to challenge assumptions and innovate processes
- Scenario planning and “what-if” analysis for future-proofing
- **Risk Assessment and Management**
- Proactive identification, quantification, and mitigation of operational risks
- Regular risk audits and contingency planning
- Emphasize flexibility and agility in response frameworks
- **Innovation and Continuous Improvement**
- Kaizen mindset: relentless pursuit of incremental improvements
- Encourage cross-functional idea generation and rapid prototyping
- Benchmark against industry best practices
- **Professional Judgment and Expertise Application**
- Balance quantitative analysis with qualitative insights
- Apply ethical principles and corporate values to all decisions
- Prioritize sustainable, stakeholder-centric outcomes
- **Industry-Specific Analytical Approaches**
- Use of operational KPIs, TQM, and lean manufacturing metrics
- Market trend analysis and competitive benchmarking
- **Best Practice Implementation**
- Formalize best practices via SOPs and ongoing training
- Monitor adoption and measure outcomes for continuous feedback
---
**6. WORLD-CLASS EXCELLENCE**
- **Unique Expertise & Specializations**
- Mastery in operational integration across distributed sites
- Proven success in digital transformation and process automation
- Specialist in building high-performance, agile teams
- **Industry Recognition & Thought Leadership**
- Frequent keynote at operational excellence conferences
- Contributor to leading management publications
- Advisor for operations management think tanks
- **Innovative Approaches & Methodologies**
- Early adopter of AI and predictive analytics in operations
- Developed proprietary frameworks for rapid crisis response
- Pioneer of blended work models and flexible resource deployment
- **Mentorship & Knowledge Sharing**
- Established internal leadership academy for talent development
- Sponsor of diversity and inclusion mentorship programs
- Regularly coach rising operations managers and peers
- **Continuous Learning & Adaptation**
- Attends annual executive education and industry roundtables
- Active in professional associations (APICS, SHRM, Institute for Operations Research and the Management Sciences)
- Seeks feedback from all levels, adapts rapidly to evolving challenges
---
**Summary:**
You are Morgan L. Whitaker, an elite General and Operations Manager. Your role is to strategically plan, direct, and coordinate all operational functions of a large, multi-faceted organization. You integrate best-in-class management principles, leverage advanced technology, drive continuous improvement, and foster a high-performance culture. You are recognized for thought leadership, industry innovation, and your unwavering commitment to operational excellence and stakeholder value.

@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "7.9.7"
version = "7.9.9"
description = "Swarms - TGSC"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]
@ -86,7 +86,7 @@ swarms = "swarms.cli.main:main"
[tool.poetry.group.lint.dependencies]
black = ">=23.1,<26.0"
ruff = ">=0.5.1,<0.12.3"
ruff = ">=0.5.1,<0.12.4"
types-toml = "^0.10.8.1"
types-pytz = ">=2023.3,<2026.0"
types-chardet = "^5.0.4.6"

@ -0,0 +1,18 @@
from swarms import Agent
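# Streaming research agent backed by Moonshot's Kimi K2 instruct model served through Groq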
agent = Agent(
name="Research Agent",
description="A research agent that can answer questions",
model_name="groq/moonshotai/kimi-k2-instruct",
verbose=True,
streaming_on=True,
)
out = agent.run(
"Create a detailed report on the best AI research papers for multi-agent collaboration. "
"Include paper titles, authors, publication venues, years, and a brief summary of each paper's key contributions. "
"Highlight recent advances and foundational works. Only include papers from 2024 and 2025."
"Provie their links as well"
)
print(out)

@ -1,5 +1,7 @@
import traceback
from typing import List, Optional, Union, Dict
import uuid
from swarms.prompts.agent_judge_prompt import AGENT_JUDGE_PROMPT
@ -7,16 +9,20 @@ from swarms.structs.agent import Agent
from swarms.structs.conversation import Conversation
from swarms.utils.any_to_str import any_to_str
class AgentJudgeInitializationError(Exception):
"""
Exception raised when there is an error initializing the AgentJudge.
"""
pass
class AgentJudgeExecutionError(Exception):
"""
Exception raised when there is an error executing the AgentJudge.
"""
pass
class AgentJudgeFeedbackCycleError(Exception):
@ -28,9 +34,11 @@ class AgentJudgeFeedbackCycleError(Exception):
class AgentJudge:
"""
A specialized agent designed to evaluate and judge outputs from other agents or systems.
The AgentJudge acts as a quality control mechanism, providing objective assessments
and feedback on various types of content, decisions, or outputs. It's based on research
in LLM-based evaluation systems and can maintain context across multiple evaluations.
This implementation supports both single task evaluation and batch processing with
iterative refinement capabilities.
@ -43,6 +51,7 @@ class AgentJudge:
max_loops (int): The maximum number of evaluation iterations to run.
verbose (bool): Whether to enable verbose logging.
agent (Agent): An instance of the Agent class that performs the evaluation execution.
evaluation_criteria (Dict[str, float]): Dictionary of evaluation criteria and their weights.
Example:
@ -76,6 +85,7 @@ class AgentJudge:
Processes a single task or list of tasks and returns the agent's evaluation.
run(task: str = None, tasks: List[str] = None, img: str = None) -> List[str]:
Executes evaluation in a loop with context building, collecting responses.
run_batched(tasks: List[str] = None, imgs: List[str] = None) -> List[str]:
Executes batch evaluation of tasks with corresponding images.
"""
@ -89,7 +99,9 @@ class AgentJudge:
model_name: str = "openai/o1",
max_loops: int = 1,
verbose: bool = False,
evaluation_criteria: Optional[Dict[str, float]] = None,
*args,
**kwargs,
):
@ -100,6 +112,7 @@ class AgentJudge:
self.conversation = Conversation(time_enabled=False)
self.max_loops = max_loops
self.verbose = verbose
self.evaluation_criteria = evaluation_criteria or {}
# Enhance system prompt with evaluation criteria if provided
@ -110,10 +123,13 @@ class AgentJudge:
criteria_str += f"- {criterion}: weight = {weight}\n"
enhanced_prompt += criteria_str
self.agent = Agent(
agent_name=agent_name,
agent_description=description,
system_prompt=enhanced_prompt,
model_name=model_name,
max_loops=1,
*args,
@ -183,6 +199,7 @@ class AgentJudge:
) -> str:
"""
Processes a single task or list of tasks and returns the agent's evaluation.
This method performs a one-shot evaluation of the provided content. It takes
either a single task string or a list of tasks and generates a comprehensive
evaluation with strengths, weaknesses, and improvement suggestions.
@ -207,6 +224,7 @@ class AgentJudge:
# Single task evaluation
evaluation = judge.step(task="The answer is 42.")
# Multiple tasks evaluation
evaluation = judge.step(tasks=[
"Response 1: Paris is the capital of France",
@ -274,6 +292,7 @@ class AgentJudge:
):
"""
Executes evaluation in multiple iterations with context building and refinement.
This method runs the evaluation process for the specified number of max_loops,
where each iteration builds upon the previous context. This allows for iterative
refinement of evaluations and deeper analysis over multiple passes.
@ -363,11 +382,13 @@ class AgentJudge:
):
"""
Executes batch evaluation of multiple tasks with corresponding images.
This method processes multiple task-image pairs independently, where each
task can be evaluated with its corresponding image. Unlike the run() method,
this doesn't build context between different tasks - each is evaluated
independently.
Args:
tasks (List[str], optional): A list of tasks/outputs to be evaluated.
imgs (List[str], optional): A list of image paths corresponding to each task.
@ -378,6 +399,7 @@ class AgentJudge:
list contains the responses from all iterations (max_loops)
for that particular task.
Example:
```python
# Batch evaluation with images
@ -402,6 +424,7 @@ class AgentJudge:
])
```
Note:
- Each task is processed independently
- If imgs is provided, it must have the same length as tasks
@ -412,4 +435,6 @@ class AgentJudge:
for task, img in zip(tasks, imgs):
response = self.run(task=task, img=img)
responses.append(response)
return responses
return responses

@ -1,22 +1,71 @@
from collections import Counter
"""
Self-Consistency Agent Implementation
This module implements the SelfConsistencyAgent, a specialized agent that leverages the
self-consistency technique to improve reasoning reliability and accuracy. The agent generates
multiple independent responses to a given task and aggregates them into a single, consistent
final answer using majority voting and sophisticated aggregation techniques.
The self-consistency approach is based on the research paper:
"Self-Consistency Improves Chain of Thought Reasoning in Language Models"
by Wang et al. (2022) - https://arxiv.org/abs/2203.11171
Key Features:
- Concurrent generation of multiple independent responses
- Majority voting aggregation with detailed analysis
- Evaluation mode for answer validation
- Configurable output formats
- Thread-safe execution
Author: Swarms Team
License: MIT
"""
from concurrent.futures import ThreadPoolExecutor, as_completed
from typing import List
from typing import List, Optional, Union, Dict, Any
from loguru import logger
from swarms.structs.agent import Agent
from swarms.structs.conversation import Conversation
from swarms.structs.malt import majority_voting_prompt
from swarms.utils.output_types import OutputType
from swarms.utils.any_to_str import any_to_str
from swarms.utils.history_output_formatter import (
history_output_formatter,
)
# System prompt for the reasoning agent that generates individual responses
CONSISTENCY_SYSTEM_PROMPT = """
You are a reasoning agent designed for complex problem-solving and decision-making. Your objective is to provide clear and reliable responses through structured reasoning. Begin by thoroughly understanding the problem, rephrasing it for clarity, and identifying key components. Develop a logical plan that breaks the problem into manageable steps, detailing your approach and any assumptions made. Validate your information with reliable sources and assess the accuracy of your calculations. Explore multiple solutions, weighing their pros and cons, and maintain transparency by documenting your reasoning process, uncertainties, and biases. Summarize your findings in a concise final answer that reflects your thorough analysis, ensuring it is well-organized and accessible. Adapt your reasoning to the context of the problem, integrating new information as needed, and implement error-handling strategies to address any issues that arise. Finally, reflect on your reasoning process to identify areas for improvement and ensure consistency across all reasoning paths.
"""
# Detailed prompt for the majority voting aggregation agent
majority_voting_prompt = """
Engage in a comprehensive and exhaustive majority voting analysis of the following conversation, ensuring a deep and thoughtful examination of the responses provided by each agent. This analysis should not only summarize the responses but also critically engage with the content, context, and implications of each agent's input.
Please adhere to the following detailed guidelines:
1. **Identification of Dominant Responses:**
- Identify the most prevalent answer or recommendation across all agents. Provide a thorough rationale for its dominance, including an exploration of the factors that may have contributed to its acceptance among the agents. Discuss the context in which this consensus emerged and any relevant historical or theoretical frameworks that support this conclusion.
2. **Exploration of Disparities:**
- Delve into any significant disparities or contrasting viewpoints between agents. Explore the underlying reasons for these differences, considering aspects such as differing methodologies, assumptions, or interpretations of the task at hand. Analyze how these contrasting perspectives may reflect broader debates within the field and what implications they hold for the overall understanding of the topic.
3. **Consensus and Disagreement Analysis:**
- Highlight key areas of consensus and disagreement among the agents. Discuss the implications of these findings on the overall argument, including how consensus can strengthen certain claims while disagreement may indicate areas of uncertainty or contention. Provide examples from the conversation to illustrate these points and consider how they might influence future discussions or research directions.
4. **Critical Evaluation of Majority Opinion:**
- Critically evaluate the strength of the majority opinion, considering factors such as the reasoning behind it and its mathematical validity if applicable. Assess whether the majority opinion is well-supported by evidence and logical reasoning, and discuss any potential weaknesses or oversights that may undermine its credibility.
5. **Insights from Minority Viewpoints:**
- Note any unique insights from minority viewpoints, assessing their potential contributions to a more nuanced understanding of the topic. Discuss how these minority perspectives can enrich the conversation and provide alternative angles that may have been overlooked by the majority. Consider the value of dissent in academic discourse and how it can lead to more robust conclusions.
6. **Synthesis of Recommendations:**
- Provide a final synthesized recommendation based on the majority consensus, ensuring that it reflects a thorough consideration of all perspectives and is grounded in sound reasoning. This recommendation should not only summarize the majority view but also integrate insights from minority opinions, creating a comprehensive and balanced conclusion that acknowledges the complexity of the discussion.
Throughout your analysis, focus on uncovering clear patterns while being attentive to the subtleties and complexities inherent in the responses. Pay particular attention to the nuances of mathematical contexts where algorithmic thinking may be required, ensuring that your examination is both rigorous and accessible to a diverse audience.
"""
def aggregation_agent(
responses: List[str],
@ -24,7 +73,27 @@ def aggregation_agent(
model_name: str = "gpt-4o-mini",
) -> str:
"""
Aggregates a list of responses into a single final answer.
Aggregates a list of responses into a single final answer using an AI-powered aggregation agent.
This function creates a specialized agent that analyzes multiple responses and synthesizes
them into a coherent final answer. The aggregation process considers consensus, disagreements,
and minority viewpoints to produce a well-reasoned conclusion.
Args:
responses (List[str]): List of responses to be aggregated
prompt (str, optional): Custom prompt for the aggregation agent.
Defaults to the majority_voting_prompt.
model_name (str, optional): Model to use for aggregation.
Defaults to "gpt-4o-mini".
Returns:
str: The aggregated final answer
Example:
>>> responses = ["Answer A", "Answer B", "Answer A"]
>>> final_answer = aggregation_agent(responses)
>>> print(final_answer)
"Based on the majority consensus..."
"""
task = any_to_str(responses)
@ -41,69 +110,174 @@ def aggregation_agent(
return final_answer
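# Illustrative sketch of calling the aggregator directly (assumes a valid LLM
# API key is configured in the environment; the exact wording of the aggregated
# answer will vary between runs):
#
#     candidate_answers = ["The answer is 42", "42", "I believe it is 42"]
#     consensus = aggregation_agent(candidate_answers, model_name="gpt-4o-mini")
#     print(consensus)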
class SelfConsistencyAgent(Agent):
class SelfConsistencyAgent:
"""
A specialized agent that implements self-consistency for improved reasoning reliability.
The SelfConsistencyAgent generates multiple independent responses to a given task and
aggregates them into a single, consistent final answer. This approach is based on the
research paper "Self-Consistency Improves Chain of Thought Reasoning in Language Models"
by Wang et al. (2022).
Key Features:
- Concurrent generation of multiple independent responses
- Majority voting aggregation with detailed analysis
- Evaluation mode for answer validation
- Configurable output formats
- Thread-safe execution
The self-consistency technique works by:
1. Generating multiple independent reasoning paths for the same problem
2. Analyzing the consistency and agreement among these paths
3. Aggregating the results using majority voting or consensus building
4. Producing a final answer that reflects the most reliable consensus
This approach helps mitigate issues like:
- Random errors in individual reasoning paths
- Biases in single reasoning approaches
- Inconsistencies in complex problem-solving
Reference:
Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E. H., Narang, S., Chowdhery, A., & Zhou, D. (2022).
Self-Consistency Improves Chain of Thought Reasoning in Language Models. arXiv preprint arXiv:2203.11171.
https://arxiv.org/abs/2203.11171
Example:
>>> agent = SelfConsistencyAgent(
... name="Math-Reasoning-Agent",
... model_name="gpt-4o-mini",
... num_samples=5,
... max_loops=1
... )
>>> result = agent.run("What is the 40th prime number?")
>>> print(result)
"""
def __init__(
self,
name: str = "Self-Consistency-Agent",
description: str = "An agent that uses self consistency to generate a final answer.",
model_name: str = "gpt-4o-mini",
system_prompt: str = CONSISTENCY_SYSTEM_PROMPT,
num_samples: int = 5,
max_loops: int = 1,
majority_voting_prompt: str = None,
majority_voting_prompt: Optional[
str
] = majority_voting_prompt,
eval: bool = False,
output_type: OutputType = "dict",
random_models_on: bool = False,
*args,
**kwargs,
):
"""
Initializes the SelfConsistencyAgent.
Initialize the SelfConsistencyAgent.
Args:
num_samples (int): Number of independent responses to sample.
**kwargs: Other keyword arguments passed to the base Agent.
name (str, optional): Name of the agent. Defaults to "Self-Consistency-Agent".
description (str, optional): Description of the agent's purpose.
Defaults to "An agent that uses self consistency to generate a final answer.".
model_name (str, optional): The underlying language model to use.
Defaults to "gpt-4o-mini".
system_prompt (str, optional): System prompt for the reasoning agent.
Defaults to CONSISTENCY_SYSTEM_PROMPT.
num_samples (int, optional): Number of independent responses to generate.
Defaults to 5.
max_loops (int, optional): Maximum number of reasoning loops per sample.
Defaults to 1.
majority_voting_prompt (Optional[str], optional): Custom prompt for majority voting.
Defaults to None.
eval (bool, optional): Enable evaluation mode for answer validation.
Defaults to False.
output_type (OutputType, optional): Format of the output.
Defaults to "dict".
random_models_on (bool, optional): Enable random model selection for diversity.
Defaults to False.
**kwargs: Additional keyword arguments passed to the base Agent class.
Note:
The num_samples parameter determines how many independent reasoning paths
will be generated. Higher values generally lead to more reliable results
but increase computational cost and time.
"""
super().__init__(
name=name,
description=description,
**kwargs,
)
self.name = name
self.description = description
self.model_name = model_name
self.num_samples = num_samples
self.conversation = Conversation()
self.max_loops = max_loops
self.majority_voting_prompt = majority_voting_prompt
self.eval = eval
self.output_type = output_type
self.system_prompt = system_prompt
self.random_models_on = random_models_on
self.conversation = Conversation()
self.args = args
self.kwargs = kwargs
def run(
self, task: str, answer: str = None, *args, **kwargs
) -> str:
self,
task: str,
img: Optional[str] = None,
answer: Optional[str] = None,
*args,
**kwargs,
) -> Union[str, Dict[str, Any]]:
"""
Generates multiple responses for the given prompt and aggregates them concurrently.
Generate multiple responses for the given task and aggregate them concurrently.
This method implements the core self-consistency algorithm:
1. Generates multiple independent responses using concurrent execution
2. Optionally validates responses against a known answer (if eval=True)
3. Aggregates responses using an AI-powered aggregation agent
4. Returns the final result in the specified output format
Args:
task (str): The input prompt.
task (str): The input prompt or task to be solved
answer (Optional[str], optional): Expected answer for validation (if eval=True).
Defaults to None.
*args: Additional positional arguments passed to the base agent's run method
**kwargs: Additional keyword arguments passed to the base agent's run method
Returns:
str: The aggregated final answer.
Union[str, Dict[str, Any]]: The aggregated final answer in the specified format
Raises:
RuntimeError: If evaluation mode is enabled and the expected answer is not found
in any of the generated responses
Example:
>>> agent = SelfConsistencyAgent(num_samples=3)
>>> result = agent.run("What is 2 + 2?")
>>> print(result)
>>> # With evaluation mode
>>> result = agent.run("What is 2 + 2?", answer="4", eval=True)
"""
responses = []
logger.info(
f"Generating {self.num_samples} responses concurrently..."
)
self.conversation.add(role="User", content=task)
# Generate multiple independent responses concurrently
reasoning_agent = self._create_reasoning_agent()
with ThreadPoolExecutor() as executor:
futures = {
executor.submit(super().run, task, *args, **kwargs): i
executor.submit(
reasoning_agent.run,
task=task,
img=img,
*args,
**kwargs,
): i
for i in range(self.num_samples)
}
for future in as_completed(futures):
response = future.result()
responses.append(response)
self.conversation.add(role=self.agent_name, content=responses)
self.conversation.add(role=self.name, content=responses)
# Optional evaluation against known answer
if self.eval:
if answer is not None:
correct = self.check_responses_for_answer(
@ -116,9 +290,7 @@ class SelfConsistencyAgent(Agent):
)
return None
# Aggregation agent
# final_answer = self.aggregation_agent(responses)
# Aggregate responses using AI-powered aggregation
final_answer = aggregation_agent(responses)
self.conversation.add(
@ -129,39 +301,46 @@ class SelfConsistencyAgent(Agent):
self.conversation, self.output_type
)
def aggregate(self, responses: List[str]) -> str:
def _create_reasoning_agent(self) -> Agent:
"""
Aggregates a list of responses into a single final answer.
Here we use a simple majority vote (most common answer) as an example. Depending on
the task, you might need a more sophisticated aggregation (e.g., weighting, consensus reasoning, etc.).
Args:
responses (list of str): The list of responses.
Create a reasoning agent instance for generating individual responses.
Returns:
str: The aggregated answer.
Agent: A configured Agent instance for reasoning tasks
"""
# Count the frequency of each response.
counts = Counter(responses)
most_common, freq = counts.most_common(1)[0]
logger.info(
f"Aggregation complete. Most common response (appeared {freq} times):"
return Agent(
agent_name=self.name,
description=self.description,
model_name=self.model_name,
system_prompt=self.system_prompt,
max_loops=self.max_loops,
random_models_on=self.random_models_on,
output_type="str-all-except-first",
**self.kwargs,
)
return most_common
def check_responses_for_answer(
self, responses: List[str], answer: str
) -> bool:
"""
Checks if the specified answer is present in any of the provided responses.
Check if the specified answer is present in any of the provided responses.
This method performs a simple string matching to determine if the expected
answer appears in any of the generated responses. It's useful for validation
and evaluation purposes.
Args:
responses (List[str]): A list of responses to check.
answer (str): The answer to look for in the responses.
responses (List[str]): List of responses to check
answer (str): The answer to look for in the responses
Returns:
bool: True if the answer is found in any response, False otherwise.
bool: True if the answer is found in any response, False otherwise
Example:
>>> agent = SelfConsistencyAgent()
>>> responses = ["The answer is 42", "I think it's 42", "Not sure"]
>>> found = agent.check_responses_for_answer(responses, "42")
>>> print(found) # True
"""
for response in responses:
if answer in response:
@ -181,27 +360,30 @@ class SelfConsistencyAgent(Agent):
def batched_run(
self, tasks: List[str], *args, **kwargs
) -> List[str]:
) -> List[Union[str, Dict[str, Any]]]:
"""
Runs the agent in a batched manner.
Run the agent on multiple tasks in batch.
This method processes multiple tasks sequentially, applying the self-consistency
approach to each task independently. It's useful for processing large datasets
or multiple related problems.
Args:
tasks (List[str]): List of tasks to be processed
*args: Additional positional arguments passed to the run method
**kwargs: Additional keyword arguments passed to the run method
Returns:
List[Union[str, Dict[str, Any]]]: List of results for each task
Example:
>>> agent = SelfConsistencyAgent()
>>> tasks = ["What is 2+2?", "What is 3+3?", "What is 4+4?"]
>>> results = agent.batched_run(tasks)
>>> print(len(results)) # 3
"""
responses = []
for task in tasks:
response = self.run(task, *args, **kwargs)
responses.append(response)
return responses
# # Example usage:
# if __name__ == "__main__":
# agent = SelfConsistencyAgent(
# agent_name="Reasoning-Agent",
# model_name="gpt-4o-mini",
# max_loops=1,
# num_samples=5, # Number of samples for self consistency
# )
# prompt = "What is the 40th prime number?"
# final_answer = agent.run(prompt)
# print("\nFinal aggregated answer:")
# print(final_answer)

@ -1,3 +1,41 @@
"""
ReasoningAgentRouter: A flexible router for advanced reasoning agent swarms.
This module provides the ReasoningAgentRouter class, which enables dynamic selection and instantiation
of various advanced reasoning agent types (swarms) for complex problem-solving tasks. It supports
multiple reasoning strategies, including self-consistency, collaborative duo agents, iterative
reflection, knowledge prompting, and agent judging.
Key Features:
- Unified interface for multiple agent types (see `agent_types`)
- Caching of agent instances for efficiency and memory management
- Extensible factory-based architecture for easy addition of new agent types
- Batch and single-task execution
- Customizable agent configuration (model, prompt, memory, etc.)
Supported Agent Types:
- "reasoning-duo" / "reasoning-agent": Dual collaborative agent system
- "self-consistency" / "consistency-agent": Multiple independent solutions with consensus
- "ire" / "ire-agent": Iterative Reflective Expansion agent
- "ReflexionAgent": Reflexion agent with memory
- "GKPAgent": Generated Knowledge Prompting agent
- "AgentJudge": Agent judge for evaluation/critique
Example usage:
>>> router = ReasoningAgentRouter(swarm_type="self-consistency", num_samples=3)
>>> result = router.run("What is the capital of France?")
>>> print(result)
>>> # Batch mode
>>> results = router.batched_run(["2+2?", "3+3?"])
>>> print(results)
See also:
- docs/swarms/agents/reasoning_agent_router.md for detailed documentation and architecture diagrams.
- consistency_example.py for a usage example with SelfConsistencyAgent.
"""
from typing import (
List,
Literal,
@ -6,9 +44,9 @@ from typing import (
Any,
Tuple,
Hashable,
Optional,
)
from swarms.agents.consistency_agent import SelfConsistencyAgent
from swarms.agents.flexion_agent import ReflexionAgent
from swarms.agents.gkp_agent import GKPAgent
@ -19,7 +57,7 @@ from swarms.agents.reasoning_duo import ReasoningDuo
from swarms.utils.output_types import OutputType
from swarms.agents.agent_judge import AgentJudge
#: Supported agent type literals for ReasoningAgentRouter
agent_types = Literal[
"reasoning-duo",
"self-consistency",
@ -35,18 +73,30 @@ agent_types = Literal[
class ReasoningAgentRouter:
"""
A Reasoning Agent that can answer questions and assist with various tasks using different reasoning strategies.
Attributes:
agent_name (str): The name of the agent.
description (str): A brief description of the agent's capabilities.
model_name (str): The name of the model used for reasoning.
system_prompt (str): The prompt that guides the agent's reasoning process.
max_loops (int): The maximum number of loops for the reasoning process.
swarm_type (agent_types): The type of reasoning swarm to use (e.g., reasoning duo, self-consistency, IRE).
num_samples (int): The number of samples to generate for self-consistency agents.
output_type (OutputType): The format of the output (e.g., dict, list).
A router for advanced reasoning agent swarms.
The ReasoningAgentRouter enables dynamic selection, instantiation, and caching of various
reasoning agent types ("swarms") for flexible, robust, and scalable problem-solving.
Args:
agent_name (str): Name identifier for the agent instance.
description (str): Description of the agent's capabilities.
model_name (str): The underlying language model to use.
system_prompt (str): System prompt for the agent.
max_loops (int): Maximum number of reasoning loops.
swarm_type (agent_types): Type of reasoning swarm to use.
num_samples (int): Number of samples for self-consistency or iterations.
output_type (OutputType): Format of the output.
num_knowledge_items (int): Number of knowledge items for GKP agent.
memory_capacity (int): Memory capacity for agents that support it.
eval (bool): Enable evaluation mode for self-consistency.
random_models_on (bool): Enable random model selection for diversity.
majority_voting_prompt (Optional[str]): Custom prompt for majority voting.
Example:
>>> router = ReasoningAgentRouter(swarm_type="reasoning-duo")
>>> result = router.run("Explain quantum entanglement.")
>>> print(result)
"""
# Class variable to store cached agent instances
@ -59,12 +109,20 @@ class ReasoningAgentRouter:
model_name: str = "gpt-4o-mini",
system_prompt: str = "You are a helpful assistant that can answer questions and help with tasks.",
max_loops: int = 1,
swarm_type: agent_types = "reasoning_duo",
swarm_type: agent_types = "reasoning-duo",
num_samples: int = 1,
output_type: OutputType = "dict",
output_type: OutputType = "dict-all-except-first",
num_knowledge_items: int = 6,
memory_capacity: int = 6,
eval: bool = False,
random_models_on: bool = False,
majority_voting_prompt: Optional[str] = None,
):
"""
Initialize the ReasoningAgentRouter with the specified configuration.
See class docstring for parameter details.
"""
self.agent_name = agent_name
self.description = description
self.model_name = model_name
@ -75,14 +133,17 @@ class ReasoningAgentRouter:
self.output_type = output_type
self.num_knowledge_items = num_knowledge_items
self.memory_capacity = memory_capacity
self.eval = eval
self.random_models_on = random_models_on
self.majority_voting_prompt = majority_voting_prompt
# Added: Initialize the factory mapping dictionary
# Initialize the factory mapping dictionary
self._initialize_agent_factories()
def _initialize_agent_factories(self) -> None:
"""
Initialize the agent factory mapping dictionary, mapping various agent types to their respective creation functions.
This method replaces the original if-elif chain, making the code more maintainable and extensible.
"""
self.agent_factories: Dict[str, Callable[[], Any]] = {
@ -104,11 +165,11 @@ class ReasoningAgentRouter:
def _get_cache_key(self) -> Tuple[Hashable, ...]:
"""
Generate a unique key for cache lookup.
The key is based on all relevant configuration parameters of the agent.
The key is based on all relevant configuration parameters of the agent.
Returns:
Tuple[Hashable, ...]: A hashable tuple to serve as the cache key
Tuple[Hashable, ...]: A hashable tuple to serve as the cache key.
"""
return (
self.swarm_type,
@ -121,10 +182,18 @@ class ReasoningAgentRouter:
self.output_type,
self.num_knowledge_items,
self.memory_capacity,
self.eval,
self.random_models_on,
self.majority_voting_prompt,
)
def _create_reasoning_duo(self):
"""Create an agent instance for the ReasoningDuo type"""
"""
Create an agent instance for the ReasoningDuo type.
Returns:
ReasoningDuo: An instance of the ReasoningDuo agent.
"""
return ReasoningDuo(
agent_name=self.agent_name,
agent_description=self.description,
@ -134,19 +203,32 @@ class ReasoningAgentRouter:
)
def _create_consistency_agent(self):
"""Create an agent instance for the SelfConsistencyAgent type"""
"""
Create an agent instance for the SelfConsistencyAgent type.
Returns:
SelfConsistencyAgent: An instance of the SelfConsistencyAgent.
"""
return SelfConsistencyAgent(
agent_name=self.agent_name,
name=self.agent_name,
description=self.description,
model_name=self.model_name,
system_prompt=self.system_prompt,
max_loops=self.max_loops,
num_samples=self.num_samples,
output_type=self.output_type,
eval=self.eval,
random_models_on=self.random_models_on,
majority_voting_prompt=self.majority_voting_prompt,
)
def _create_ire_agent(self):
"""Create an agent instance for the IREAgent type"""
"""
Create an agent instance for the IREAgent type.
Returns:
IREAgent: An instance of the IterativeReflectiveExpansion agent.
"""
return IREAgent(
agent_name=self.agent_name,
description=self.description,
@ -158,7 +240,12 @@ class ReasoningAgentRouter:
)
def _create_agent_judge(self):
"""Create an agent instance for the AgentJudge type"""
"""
Create an agent instance for the AgentJudge type.
Returns:
AgentJudge: An instance of the AgentJudge agent.
"""
return AgentJudge(
agent_name=self.agent_name,
model_name=self.model_name,
@ -167,16 +254,27 @@ class ReasoningAgentRouter:
)
def _create_reflexion_agent(self):
"""Create an agent instance for the ReflexionAgent type"""
"""
Create an agent instance for the ReflexionAgent type.
Returns:
ReflexionAgent: An instance of the ReflexionAgent.
"""
return ReflexionAgent(
agent_name=self.agent_name,
system_prompt=self.system_prompt,
model_name=self.model_name,
max_loops=self.max_loops,
memory_capacity=self.memory_capacity,
)
def _create_gkp_agent(self):
"""Create an agent instance for the GKPAgent type"""
"""
Create an agent instance for the GKPAgent type.
Returns:
GKPAgent: An instance of the GKPAgent.
"""
return GKPAgent(
agent_name=self.agent_name,
model_name=self.model_name,
@ -186,13 +284,15 @@ class ReasoningAgentRouter:
def select_swarm(self):
"""
Select and initialize the appropriate reasoning swarm based on the specified swarm type.
Uses a caching mechanism to return a cached instance if an agent with the same configuration already exists.
Uses a caching mechanism to return a cached instance if an agent with the same configuration already exists.
Returns:
The selected reasoning swarm instance.
"""
Raises:
ValueError: If the specified swarm type is invalid.
"""
# Generate cache key
cache_key = self._get_cache_key()
@ -216,25 +316,25 @@ class ReasoningAgentRouter:
"""
Execute the reasoning process of the selected swarm on a given task.
Args:
task (str): The task or question to be processed by the reasoning agent.
*args: Additional positional arguments for the agent's run method.
**kwargs: Additional keyword arguments for the agent's run method.
Returns:
The result of the reasoning process.
The result of the reasoning process (format depends on agent and output_type).
"""
swarm = self.select_swarm()
return swarm.run(task=task)
return swarm.run(task=task, *args, **kwargs)
def batched_run(self, tasks: List[str], *args, **kwargs):
"""
Execute the reasoning process on a batch of tasks.
Args:
tasks (List[str]): The list of tasks to process.
*args: Additional positional arguments for the agent's run method.
**kwargs: Additional keyword arguments for the agent's run method.
Returns:
A list of reasoning process results for each task.
@ -248,6 +348,7 @@ class ReasoningAgentRouter:
def clear_cache(cls):
"""
Clear the agent instance cache.
Use this when you need to free memory or force the creation of new instances.
"""
cls._agent_cache.clear()
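# Illustrative sketch of the caching behaviour described above (assumes a valid
# LLM API key for any subsequent .run() calls; the swarm type string is one of
# the literals defined in `agent_types`):
#
#     router_a = ReasoningAgentRouter(swarm_type="self-consistency", num_samples=3)
#     router_b = ReasoningAgentRouter(swarm_type="self-consistency", num_samples=3)
#     # Identical configuration produces an identical cache key, so select_swarm()
#     # hands both routers the same cached SelfConsistencyAgent instance.
#     assert router_a.select_swarm() is router_b.select_swarm()
#     ReasoningAgentRouter.clear_cache()  # drop cached instances to free memory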

@ -92,6 +92,7 @@ from swarms.structs.interactive_groupchat import (
)
from swarms.structs.hiearchical_swarm import HierarchicalSwarm
from swarms.structs.heavy_swarm import HeavySwarm
__all__ = [
"Agent",
@ -169,4 +170,5 @@ __all__ = [
"priority_speaker",
"random_dynamic_speaker",
"HierarchicalSwarm",
"HeavySwarm",
]

@ -1539,15 +1539,16 @@ class Agent:
if self.tools_list_dictionary is not None:
if not supports_function_calling(self.model_name):
raise AgentInitializationError(
logger.warning(
f"The model '{self.model_name}' does not support function calling. Please use a model that supports function calling."
)
try:
if self.max_tokens > get_max_tokens(self.model_name):
raise AgentInitializationError(
logger.warning(
f"Max tokens is set to {self.max_tokens}, but the model '{self.model_name}' only supports {get_max_tokens(self.model_name)} tokens. Please set max tokens to {get_max_tokens(self.model_name)} or less."
)
except Exception:
pass
@ -3231,13 +3232,3 @@ class Agent:
f"Full traceback: {traceback.format_exc()}. "
f"Attempting to retry tool execution with 3 attempts"
)
def add_tool_schema(self, tool_schema: dict):
self.tools_list_dictionary = [tool_schema]
self.output_type = "dict-all-except-first"
def add_multiple_tool_schemas(self, tool_schemas: list[dict]):
self.tools_list_dictionary = tool_schemas
self.output_type = "dict-all-except-first"

@ -6,7 +6,6 @@ import threading
import uuid
from typing import (
TYPE_CHECKING,
Callable,
Dict,
List,
Optional,
@ -190,18 +189,16 @@ class Conversation(BaseStructure):
save_enabled: bool = False, # New parameter to control if saving is enabled
save_filepath: str = None,
load_filepath: str = None, # New parameter to specify which file to load from
tokenizer: Callable = None,
context_length: int = 8192,
rules: str = None,
custom_rules_prompt: str = None,
user: str = "User:",
user: str = "User",
save_as_yaml: bool = False,
save_as_json_bool: bool = False,
token_count: bool = True,
token_count: bool = False,
message_id_on: bool = False,
provider: providers = "in-memory",
backend: Optional[str] = None,
# Backend-specific parameters
supabase_url: Optional[str] = None,
supabase_key: Optional[str] = None,
redis_host: str = "localhost",
@ -210,7 +207,6 @@ class Conversation(BaseStructure):
redis_password: Optional[str] = None,
db_path: Optional[str] = None,
table_name: str = "conversations",
# Additional backend parameters
use_embedded_redis: bool = True,
persist_redis: bool = True,
auto_persist: bool = True,
@ -230,20 +226,7 @@ class Conversation(BaseStructure):
self.save_enabled = save_enabled
self.conversations_dir = conversations_dir
self.message_id_on = message_id_on
# Handle save filepath
if save_enabled and save_filepath:
self.save_filepath = save_filepath
elif save_enabled and conversations_dir:
self.save_filepath = os.path.join(
conversations_dir, f"{self.id}.json"
)
else:
self.save_filepath = None
self.load_filepath = load_filepath
self.conversation_history = []
self.tokenizer = tokenizer
self.context_length = context_length
self.rules = rules
self.custom_rules_prompt = custom_rules_prompt
@ -253,9 +236,40 @@ class Conversation(BaseStructure):
self.token_count = token_count
self.provider = provider # Keep for backwards compatibility
self.conversations_dir = conversations_dir
self.backend = backend
self.supabase_url = supabase_url
self.supabase_key = supabase_key
self.redis_host = redis_host
self.redis_port = redis_port
self.redis_db = redis_db
self.redis_password = redis_password
self.db_path = db_path
self.table_name = table_name
self.use_embedded_redis = use_embedded_redis
self.persist_redis = persist_redis
self.auto_persist = auto_persist
self.redis_data_dir = redis_data_dir
self.conversation_history = []
# Handle save filepath
if save_enabled and save_filepath:
self.save_filepath = save_filepath
elif save_enabled and conversations_dir:
self.save_filepath = os.path.join(
conversations_dir, f"{self.id}.json"
)
else:
self.save_filepath = None
# Support both 'provider' and 'backend' parameters for backwards compatibility
# 'backend' takes precedence if both are provided
self.backend_setup(backend, provider)
def backend_setup(
self, backend: str = None, provider: str = None
):
self.backend = backend or provider
self.backend_instance = None
@ -285,19 +299,18 @@ class Conversation(BaseStructure):
]:
try:
self._initialize_backend(
supabase_url=supabase_url,
supabase_key=supabase_key,
redis_host=redis_host,
redis_port=redis_port,
redis_db=redis_db,
redis_password=redis_password,
db_path=db_path,
table_name=table_name,
use_embedded_redis=use_embedded_redis,
persist_redis=persist_redis,
auto_persist=auto_persist,
redis_data_dir=redis_data_dir,
**kwargs,
supabase_url=self.supabase_url,
supabase_key=self.supabase_key,
redis_host=self.redis_host,
redis_port=self.redis_port,
redis_db=self.redis_db,
redis_password=self.redis_password,
db_path=self.db_path,
table_name=self.table_name,
use_embedded_redis=self.use_embedded_redis,
persist_redis=self.persist_redis,
auto_persist=self.auto_persist,
redis_data_dir=self.redis_data_dir,
)
except Exception as e:
logger.warning(
@ -324,7 +337,6 @@ class Conversation(BaseStructure):
"time_enabled": self.time_enabled,
"autosave": self.autosave,
"save_filepath": self.save_filepath,
"tokenizer": self.tokenizer,
"context_length": self.context_length,
"rules": self.rules,
"custom_rules_prompt": self.custom_rules_prompt,
@ -449,8 +461,8 @@ class Conversation(BaseStructure):
if self.custom_rules_prompt is not None:
self.add(self.user or "User", self.custom_rules_prompt)
if self.tokenizer is not None:
self.truncate_memory_with_tokenizer()
# if self.tokenizer is not None:
# self.truncate_memory_with_tokenizer()
def _autosave(self):
"""Automatically save the conversation if autosave is enabled."""
@ -1051,9 +1063,7 @@ class Conversation(BaseStructure):
for message in self.conversation_history:
role = message.get("role")
content = message.get("content")
tokens = self.tokenizer.count_tokens(
text=content
) # Count the number of tokens
tokens = count_tokens(content)
count = tokens # Assign the token count
total_tokens += count

@ -0,0 +1,270 @@
import uuid
from typing import Any, Callable, Dict, List, Optional, Union
from swarms.structs.agent import Agent
from swarms.structs.concurrent_workflow import ConcurrentWorkflow
from swarms.structs.conversation import Conversation
def _create_voting_prompt(candidate_agents: List[Agent]) -> str:
"""
Create a comprehensive voting prompt for the election.
This method generates a detailed prompt that instructs voter agents on:
- Available candidates
- Required structured output format
- Evaluation criteria
- Voting guidelines
Returns:
str: A formatted voting prompt string
"""
candidate_names = [
(agent.agent_name if hasattr(agent, "agent_name") else str(i))
for i, agent in enumerate(candidate_agents)
]
prompt = f"""
You are participating in an election to choose the best candidate agent.
Available candidates: {', '.join(candidate_names)}
Please vote for one candidate and provide your reasoning with the following structured output:
1. rationality: A detailed explanation of the reasoning behind your decision. Include logical considerations, supporting evidence, and trade-offs that were evaluated when selecting this candidate.
2. self_interest: A comprehensive discussion of how self-interest influenced your decision, if at all. Explain whether personal or role-specific incentives played a role, or if your choice was primarily for the collective benefit of the swarm.
3. candidate_agent_name: The full name or identifier of the candidate you are voting for. This should exactly match one of the available candidate names listed above.
Consider the candidates' capabilities, experience, and alignment with the swarm's objectives when making your decision.
"""
print(prompt)
return prompt
def get_vote_schema():
return [
{
"type": "function",
"function": {
"name": "vote",
"description": "Cast a vote for a CEO candidate with reasoning and self-interest analysis.",
"parameters": {
"type": "object",
"properties": {
"rationality": {
"type": "string",
"description": "A detailed explanation of the reasoning behind this voting decision.",
},
"self_interest": {
"type": "string",
"description": "A comprehensive discussion of how self-interest factored into the decision.",
},
"candidate_agent_name": {
"type": "string",
"description": "The full name or identifier of the chosen candidate.",
},
},
"required": [
"rationality",
"self_interest",
"candidate_agent_name",
],
},
},
}
]
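# A conforming tool call emitted by a voter agent therefore carries arguments
# shaped like the following (all values are placeholders for illustration):
#
#     {
#         "rationality": "Alice has the strongest operational track record ...",
#         "self_interest": "My analyst role biases me toward data-driven leaders ...",
#         "candidate_agent_name": "CEO-Alice",
#     }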
class ElectionSwarm:
"""
A swarm system that conducts elections among multiple agents to choose the best candidate.
The ElectionSwarm orchestrates a voting process where multiple voter agents evaluate
and vote for candidate agents based on their capabilities, experience, and alignment
with swarm objectives. The system uses structured output to ensure consistent voting
format and provides detailed reasoning for each vote.
Attributes:
id (str): Unique identifier for the election swarm
name (str): Name of the election swarm
description (str): Description of the election swarm's purpose
max_loops (int): Maximum number of voting rounds (default: 1)
agents (List[Agent]): List of voter agents that will participate in the election
candidate_agents (List[Agent]): List of candidate agents to be voted on
kwargs (dict): Additional keyword arguments
show_dashboard (bool): Whether to display the election dashboard
conversation (Conversation): Conversation history for the election
"""
def __init__(
self,
name: str = "Election Swarm",
description: str = "An election swarm is a swarm of agents that will vote on a candidate.",
agents: Union[List[Agent], List[Callable]] = None,
candidate_agents: Union[List[Agent], List[Callable]] = None,
id: str = str(uuid.uuid4()),
max_loops: int = 1,
show_dashboard: bool = True,
**kwargs,
):
"""
Initialize the ElectionSwarm.
Args:
name (str, optional): Name of the election swarm
description (str, optional): Description of the election swarm's purpose
agents (Union[List[Agent], List[Callable]], optional): List of voter agents
candidate_agents (Union[List[Agent], List[Callable]], optional): List of candidate agents
id (str, optional): Unique identifier for the election swarm
max_loops (int, optional): Maximum number of voting rounds (default: 1)
show_dashboard (bool, optional): Whether to display the election dashboard (default: True)
**kwargs: Additional keyword arguments
"""
self.id = id
self.name = name
self.description = description
self.max_loops = max_loops
self.agents = agents
self.candidate_agents = candidate_agents
self.kwargs = kwargs
self.show_dashboard = show_dashboard
self.conversation = Conversation()
self.reliability_check()
self.setup_voter_agents()
def reliability_check(self):
"""
Check the reliability of the voter agents.
"""
if self.agents is None:
raise ValueError("Voter agents are not set")
if self.candidate_agents is None:
raise ValueError("Candidate agents are not set")
if self.max_loops is None or self.max_loops < 1:
raise ValueError("Max loops are not set")
def setup_concurrent_workflow(self):
"""
Create a concurrent workflow for running voter agents in parallel.
Returns:
ConcurrentWorkflow: A configured concurrent workflow for the election
"""
return ConcurrentWorkflow(
name=self.name,
description=self.description,
agents=self.agents,
output_type="dict-all-except-first",
show_dashboard=self.show_dashboard,
)
def run_voter_agents(
self, task: str, img: Optional[str] = None, *args, **kwargs
):
"""
Execute the voting process by running all voter agents concurrently.
Args:
task (str): The election task or question to be voted on
img (Optional[str], optional): Image path if visual voting is required
*args: Additional positional arguments
**kwargs: Additional keyword arguments
Returns:
List[Dict[str, Any]]: Results from all voter agents containing their votes and reasoning
"""
concurrent_workflow = self.setup_concurrent_workflow()
results = concurrent_workflow.run(
task=task, img=img, *args, **kwargs
)
conversation_history = (
concurrent_workflow.conversation.conversation_history
)
for message in conversation_history:
self.conversation.add(
role=message["role"], content=message["content"]
)
return results
def parse_results(
self, results: List[Dict[str, Any]]
) -> Dict[str, int]:
"""
Parse voting results to count votes for each candidate.
Args:
results (List[Dict[str, Any]]): List of voting results from voter agents
Returns:
Dict[str, int]: Dictionary mapping candidate names to their vote counts
"""
# Count the number of votes for each candidate
vote_counts = {}
for result in results:
candidate_name = result["candidate_agent_name"]
vote_counts[candidate_name] = (
vote_counts.get(candidate_name, 0) + 1
)
# Return the tally so the caller can determine the winning candidate
return vote_counts
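# For example, three votes of the form
#     [{"candidate_agent_name": "CEO-Alice", ...},
#      {"candidate_agent_name": "CEO-Bob", ...},
#      {"candidate_agent_name": "CEO-Alice", ...}]
# are parsed into {"CEO-Alice": 2, "CEO-Bob": 1}.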
def run(
self, task: str, img: Optional[str] = None, *args, **kwargs
):
"""
Execute the complete election process.
This method orchestrates the entire election by:
1. Adding the task to the conversation history
2. Running all voter agents concurrently
3. Collecting and processing the voting results
Args:
task (str): The election task or question to be voted on
img (Optional[str], optional): Image path if visual voting is required
*args: Additional positional arguments
**kwargs: Additional keyword arguments
Returns:
List[Dict[str, Any]]: Complete voting results from all agents
"""
self.conversation.add(role="user", content=task)
results = self.run_voter_agents(task, img, *args, **kwargs)
print(results)
return results
def setup_voter_agents(self):
"""
Configure voter agents with structured output capabilities and voting prompts.
This method sets up each voter agent with:
- Structured output schema for consistent voting format
- Voting-specific system prompts
- Tools for structured response generation
Returns:
List[Agent]: Configured voter agents ready for the election
"""
schema = get_vote_schema()
prompt = _create_voting_prompt(self.candidate_agents)
for agent in self.agents:
agent.tools_list_dictionary = schema
agent.system_prompt += f"\n\n{prompt}"
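# Example usage sketch (assumes a configured LLM API key; the Agent arguments
# are the standard ones used elsewhere in this codebase, and all agent names
# are placeholders):
#
#     voters = [
#         Agent(agent_name=f"Voter-{i}", model_name="gpt-4o-mini", max_loops=1)
#         for i in range(3)
#     ]
#     candidates = [
#         Agent(agent_name="CEO-Alice", model_name="gpt-4o-mini", max_loops=1),
#         Agent(agent_name="CEO-Bob", model_name="gpt-4o-mini", max_loops=1),
#     ]
#     election = ElectionSwarm(
#         agents=voters,
#         candidate_agents=candidates,
#         show_dashboard=False,
#     )
#     results = election.run(
#         "Vote for the best CEO candidate for an early-stage fintech startup."
#     )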

File diff suppressed because it is too large

@ -0,0 +1,253 @@
from swarms.structs.agent import Agent
from typing import List
from swarms.structs.conversation import Conversation
import uuid
import random
from loguru import logger
from typing import Optional
class QASwarm:
"""
A Question and Answer swarm system where random agents ask questions to speaker agents.
This system allows for dynamic Q&A sessions where:
- Multiple agents can act as questioners
- One or multiple agents can act as speakers/responders
- Questions are asked randomly by different agents
- The conversation is tracked and managed
- Agents are showcased to each other with detailed information
"""
def __init__(
self,
name: str = "QandA",
description: str = "Question and Answer Swarm System",
agents: List[Agent] = None,
speaker_agents: List[Agent] = None,
id: str = str(uuid.uuid4()),
max_loops: int = 5,
show_dashboard: bool = True,
speaker_agent: Agent = None,
showcase_agents: bool = True,
**kwargs,
):
self.id = id
self.name = name
self.description = description
self.max_loops = max_loops
self.show_dashboard = show_dashboard
self.agents = agents or []
self.speaker_agents = speaker_agents or []
self.kwargs = kwargs
self.speaker_agent = speaker_agent
self.showcase_agents = showcase_agents
self.conversation = Conversation()
# Validate setup
self._validate_setup()
def _validate_setup(self):
"""Validate that the Q&A system is properly configured."""
if not self.agents:
logger.warning(
"No questioner agents provided. Add agents using add_agent() method."
)
if not self.speaker_agents and not self.speaker_agent:
logger.warning(
"No speaker agents provided. Add speaker agents using add_speaker_agent() method."
)
if (
not self.agents
and not self.speaker_agents
and not self.speaker_agent
):
raise ValueError(
"At least one agent (questioner or speaker) must be provided."
)
def add_agent(self, agent: Agent):
"""Add a questioner agent to the swarm."""
self.agents.append(agent)
logger.info(f"Added questioner agent: {agent.agent_name}")
def add_speaker_agent(self, agent: Agent):
"""Add a speaker agent to the swarm."""
if self.speaker_agents is None:
self.speaker_agents = []
self.speaker_agents.append(agent)
logger.info(f"Added speaker agent: {agent.agent_name}")
def get_agent_info(self, agent: Agent) -> dict:
"""Extract key information about an agent for showcasing."""
info = {
"name": getattr(agent, "agent_name", "Unknown Agent"),
"description": getattr(
agent, "agent_description", "No description available"
),
"role": getattr(agent, "role", "worker"),
}
# Get system prompt preview (first 50 characters)
system_prompt = getattr(agent, "system_prompt", "")
if system_prompt:
info["system_prompt_preview"] = (
system_prompt[:50] + "..."
if len(system_prompt) > 50
else system_prompt
)
else:
info["system_prompt_preview"] = (
"No system prompt available"
)
return info
def showcase_speaker_to_questioner(
self, questioner: Agent, speaker: Agent
) -> str:
"""Create a showcase prompt introducing the speaker agent to the questioner."""
speaker_info = self.get_agent_info(speaker)
showcase_prompt = f"""
You are about to ask a question to a specialized agent. Here's what you need to know about them:
**Speaker Agent Information:**
- **Name**: {speaker_info['name']}
- **Role**: {speaker_info['role']}
- **Description**: {speaker_info['description']}
- **System Prompt Preview**: {speaker_info['system_prompt_preview']}
Please craft a thoughtful, relevant question that takes into account this agent's expertise and background.
Your question should be specific and demonstrate that you understand their role and capabilities.
"""
return showcase_prompt
def showcase_questioner_to_speaker(
self, speaker: Agent, questioner: Agent
) -> str:
"""Create a showcase prompt introducing the questioner agent to the speaker."""
questioner_info = self.get_agent_info(questioner)
showcase_prompt = f"""
You are about to answer a question from another agent. Here's what you need to know about them:
**Questioner Agent Information:**
- **Name**: {questioner_info['name']}
- **Role**: {questioner_info['role']}
- **Description**: {questioner_info['description']}
- **System Prompt Preview**: {questioner_info['system_prompt_preview']}
Please provide a comprehensive answer that demonstrates your expertise and addresses their question thoroughly.
Consider their background and role when formulating your response.
"""
return showcase_prompt
def random_select_agent(self, agents: List[Agent]) -> Agent:
"""Randomly select an agent from the list."""
if not agents:
raise ValueError("No agents available for selection")
return random.choice(agents)
def get_current_speaker(self) -> Agent:
"""Get the current speaker agent (either from speaker_agents list or single speaker_agent)."""
if self.speaker_agent:
return self.speaker_agent
elif self.speaker_agents:
return self.random_select_agent(self.speaker_agents)
else:
raise ValueError("No speaker agent available")
def run(
self, task: str, img: Optional[str] = None, *args, **kwargs
):
"""Run the Q&A session with agent showcasing."""
self.conversation.add(role="user", content=task)
# Get current speaker
current_speaker = self.get_current_speaker()
# Select a random questioner
questioner = self.random_select_agent(self.agents)
# Showcase agents to each other if enabled
if self.showcase_agents:
# Showcase speaker to questioner
speaker_showcase = self.showcase_speaker_to_questioner(
questioner, current_speaker
)
questioner_task = f"{speaker_showcase}\n\nNow ask a question about: {task}"
# Showcase questioner to speaker
questioner_showcase = self.showcase_questioner_to_speaker(
current_speaker, questioner
)
else:
questioner_task = f"Ask a question about {task} to {current_speaker.agent_name}"
# Generate question
question = questioner.run(
task=questioner_task,
img=img,
*args,
**kwargs,
)
self.conversation.add(
role=questioner.agent_name, content=question
)
# Prepare answer task with showcasing if enabled
if self.showcase_agents:
answer_task = f"{questioner_showcase}\n\nAnswer this question from {questioner.agent_name}: {question}"
else:
answer_task = f"Answer the question '{question}' from {questioner.agent_name}"
# Generate answer
answer = current_speaker.run(
task=answer_task,
img=img,
*args,
**kwargs,
)
self.conversation.add(
role=current_speaker.agent_name, content=answer
)
return answer
def run_multi_round(
self,
task: str,
rounds: int = 3,
img: Optional[str] = None,
*args,
**kwargs,
):
"""Run multiple rounds of Q&A with different questioners."""
results = []
for round_num in range(rounds):
logger.info(
f"Starting Q&A round {round_num + 1}/{rounds}"
)
round_result = self.run(task, img, *args, **kwargs)
results.append(
{"round": round_num + 1, "result": round_result}
)
return results
def get_conversation_history(self):
"""Get the conversation history."""
return self.conversation.get_history()
def clear_conversation(self):
"""Clear the conversation history."""
self.conversation = Conversation()
logger.info("Conversation history cleared")

@ -28,6 +28,7 @@ from swarms.structs.malt import MALT
from swarms.structs.deep_research_swarm import DeepResearchSwarm
from swarms.structs.council_judge import CouncilAsAJudge
from swarms.structs.interactive_groupchat import InteractiveGroupChat
from swarms.structs.heavy_swarm import HeavySwarm
from swarms.structs.ma_utils import list_all_agents
from swarms.utils.generate_keys import generate_api_key
@ -49,6 +50,7 @@ SwarmType = Literal[
"DeepResearchSwarm",
"CouncilAsAJudge",
"InteractiveGroupChat",
"HeavySwarm",
]
@ -183,6 +185,10 @@ class SwarmRouter:
conversation: Any = None,
agents_config: Optional[Dict[Any, Any]] = None,
speaker_function: str = None,
heavy_swarm_loops_per_agent: int = 1,
heavy_swarm_question_agent_model_name: str = "gpt-4.1",
heavy_swarm_worker_model_name: str = "claude-3-5-sonnet-20240620",
telemetry_enabled: bool = False,
*args,
**kwargs,
):
@ -210,6 +216,14 @@ class SwarmRouter:
self.conversation = conversation
self.agents_config = agents_config
self.speaker_function = speaker_function
self.heavy_swarm_loops_per_agent = heavy_swarm_loops_per_agent
self.heavy_swarm_question_agent_model_name = (
heavy_swarm_question_agent_model_name
)
self.heavy_swarm_worker_model_name = (
heavy_swarm_worker_model_name
)
self.telemetry_enabled = telemetry_enabled
# Reliability check
self.reliability_check()
@ -234,6 +248,12 @@ class SwarmRouter:
if self.rules is not None:
self.handle_rules()
if self.multi_agent_collab_prompt is True:
self.update_system_prompt_for_agent_in_swarm()
if self.list_all_agents is True:
self.list_agents_to_eachother()
def activate_shared_memory(self):
logger.info("Activating shared memory with all agents ")
@ -283,6 +303,10 @@ class SwarmRouter:
Handles special case for CouncilAsAJudge which may not require agents.
"""
logger.info(
f"Initializing SwarmRouter: {self.name} Reliability Check..."
)
# Check swarm type first since it affects other validations
if self.swarm_type is None:
raise ValueError(
@ -300,6 +324,10 @@ class SwarmRouter:
self.setup()
logger.info(
f"Reliability check for parameters and configurations are complete. SwarmRouter: {self.name} is ready to run!"
)
def _create_swarm(self, task: str = None, *args, **kwargs):
"""
Dynamically create and return the specified swarm type or automatically match the best swarm type for a given task.
@ -321,6 +349,18 @@ class SwarmRouter:
self._create_swarm(self.swarm_type)
elif self.swarm_type == "HeavySwarm":
return HeavySwarm(
name=self.name,
description=self.description,
agents=self.agents,
max_loops=self.max_loops,
output_type=self.output_type,
loops_per_agent=self.heavy_swarm_loops_per_agent,
question_agent_model_name=self.heavy_swarm_question_agent_model_name,
worker_model_name=self.heavy_swarm_worker_model_name,
)
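# Illustrative configuration sketch for the HeavySwarm branch above (assumes a
# configured LLM API key; the model names mirror the defaults declared in
# __init__, and `worker_agents` is a placeholder list of Agent instances):
#
#     router = SwarmRouter(
#         name="research-router",
#         agents=worker_agents,
#         swarm_type="HeavySwarm",
#         heavy_swarm_loops_per_agent=1,
#         heavy_swarm_question_agent_model_name="gpt-4.1",
#         heavy_swarm_worker_model_name="claude-3-5-sonnet-20240620",
#     )
#     report = router.run("Produce a competitive analysis of the EV charging market.")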
elif self.swarm_type == "AgentRearrange":
return AgentRearrange(
name=self.name,
@ -478,6 +518,24 @@ class SwarmRouter:
return agent_config
def list_agents_to_eachother(self):
if self.swarm_type == "SequentialWorkflow":
self.conversation = (
self.swarm.agent_rearrange.conversation
)
else:
self.conversation = self.swarm.conversation
if self.list_all_agents is True:
list_all_agents(
agents=self.agents,
conversation=self.swarm.conversation,
name=self.name,
description=self.description,
add_collaboration_prompt=True,
add_to_conversation=True,
)
def _run(
self,
task: str,
@ -503,31 +561,12 @@ class SwarmRouter:
"""
self.swarm = self._create_swarm(task, *args, **kwargs)
if self.swarm_type == "SequentialWorkflow":
self.conversation = (
self.swarm.agent_rearrange.conversation
)
else:
self.conversation = self.swarm.conversation
if self.list_all_agents is True:
list_all_agents(
agents=self.agents,
conversation=self.swarm.conversation,
name=self.name,
description=self.description,
add_collaboration_prompt=True,
add_to_conversation=True,
)
if self.multi_agent_collab_prompt is True:
self.update_system_prompt_for_agent_in_swarm()
log_execution(
swarm_id=self.id,
status="start",
swarm_config=self.to_dict(),
swarm_architecture="swarm_router",
enabled_on=self.telemetry_enabled,
)
try:
@ -548,12 +587,13 @@ class SwarmRouter:
status="completion",
swarm_config=self.to_dict(),
swarm_architecture="swarm_router",
enabled_on=self.telemetry_enabled,
)
return result
except Exception as e:
raise RuntimeError(
f"SwarmRouter: Error executing task on swarm: {str(e)} Traceback: {traceback.format_exc()}"
f"SwarmRouter: Error executing task on swarm: {str(e)} Traceback: {traceback.format_exc()}. Try reconfiguring the SwarmRouter Settings and or make sure the individual agents are configured correctly."
)
def run(

@ -1,5 +1,250 @@
from typing import Optional
from swarms.telemetry.main import log_agent_data
import functools
import inspect
import time
from datetime import datetime
def log_function_execution(
swarm_id: Optional[str] = None,
swarm_architecture: Optional[str] = None,
enabled_on: Optional[bool] = True,
):
"""
Decorator to log function execution details including parameters and outputs.
This decorator automatically captures and logs:
- Function name
- Function parameters (args and kwargs)
- Function output/return value
- Execution timestamp
- Execution duration
- Execution status (success/error)
Args:
swarm_id (str, optional): Unique identifier for the swarm instance
swarm_architecture (str, optional): Name of the swarm architecture
enabled_on (bool, optional): Whether logging is enabled. Defaults to True.
Returns:
Decorated function that logs execution details
Example:
>>> @log_function_execution(swarm_id="my-swarm", swarm_architecture="sequential")
... def process_data(data, threshold=0.5):
... return {"processed": len(data), "threshold": threshold}
...
>>> result = process_data([1, 2, 3], threshold=0.8)
"""
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
if not enabled_on:
return func(*args, **kwargs)
# Capture function details
function_name = func.__name__
function_module = func.__module__
start_time = time.time()
timestamp = datetime.now().isoformat()
# Capture function parameters
sig = inspect.signature(func)
bound_args = sig.bind(*args, **kwargs)
bound_args.apply_defaults()
# Convert parameters to serializable format
parameters = {}
for (
param_name,
param_value,
) in bound_args.arguments.items():
try:
# Handle special method parameters
if param_name == "self":
# For instance methods, log class name and instance info
parameters[param_name] = {
"class_name": param_value.__class__.__name__,
"class_module": param_value.__class__.__module__,
"instance_id": hex(id(param_value)),
"type": "instance",
}
elif param_name == "cls":
# For class methods, log class information
parameters[param_name] = {
"class_name": param_value.__name__,
"class_module": param_value.__module__,
"type": "class",
}
elif isinstance(
param_value,
(str, int, float, bool, type(None)),
):
parameters[param_name] = param_value
elif isinstance(param_value, (list, dict, tuple)):
parameters[param_name] = str(param_value)[
:500
] # Truncate large objects
elif hasattr(param_value, "__class__"):
# Handle other object instances
parameters[param_name] = {
"class_name": param_value.__class__.__name__,
"class_module": param_value.__class__.__module__,
"instance_id": hex(id(param_value)),
"type": "object_instance",
}
else:
parameters[param_name] = str(
type(param_value)
)
except Exception:
parameters[param_name] = "<non-serializable>"
# Determine if this is a method call and add context
method_context = _get_method_context(
func, bound_args.arguments
)
execution_data = {
"function_name": function_name,
"function_module": function_module,
"swarm_id": swarm_id,
"swarm_architecture": swarm_architecture,
"timestamp": timestamp,
"parameters": parameters,
"status": "start",
**method_context,
}
try:
# Log function start
log_agent_data(data_dict=execution_data)
# Execute the function
result = func(*args, **kwargs)
# Calculate execution time
end_time = time.time()
execution_time = end_time - start_time
# Log successful execution
success_data = {
**execution_data,
"status": "success",
"execution_time_seconds": execution_time,
"output": _serialize_output(result),
}
log_agent_data(data_dict=success_data)
return result
except Exception as e:
# Calculate execution time even for errors
end_time = time.time()
execution_time = end_time - start_time
# Log error execution
error_data = {
**execution_data,
"status": "error",
"execution_time_seconds": execution_time,
"error_message": str(e),
"error_type": type(e).__name__,
}
try:
log_agent_data(data_dict=error_data)
except Exception:
pass # Silent fail on logging errors
# Re-raise the original exception
raise
return wrapper
return decorator
def _get_method_context(func, arguments):
"""
Helper function to extract method context information.
Args:
func: The function/method being called
arguments: The bound arguments dictionary
Returns:
Dictionary with method context information
"""
context = {}
try:
# Check if this is a method call
if "self" in arguments:
# Instance method
self_obj = arguments["self"]
context.update(
{
"method_type": "instance_method",
"class_name": self_obj.__class__.__name__,
"class_module": self_obj.__class__.__module__,
"instance_id": hex(id(self_obj)),
}
)
elif "cls" in arguments:
# Class method
cls_obj = arguments["cls"]
context.update(
{
"method_type": "class_method",
"class_name": cls_obj.__name__,
"class_module": cls_obj.__module__,
}
)
else:
# Regular function or static method
context.update({"method_type": "function"})
# Try to get qualname for additional context
if hasattr(func, "__qualname__"):
context["qualified_name"] = func.__qualname__
except Exception:
# If anything fails, just mark as unknown
context = {"method_type": "unknown"}
return context
def _serialize_output(output):
"""
Helper function to serialize function output for logging.
Args:
output: The function return value to serialize
Returns:
Serializable representation of the output
"""
try:
if output is None:
return None
elif isinstance(output, (str, int, float, bool)):
return output
elif isinstance(output, (list, dict, tuple)):
# Truncate large outputs to prevent log bloat
output_str = str(output)
return (
output_str[:1000] + "..."
if len(output_str) > 1000
else output_str
)
else:
return str(type(output))
except Exception:
return "<non-serializable-output>"
def log_execution(
@ -7,6 +252,7 @@ def log_execution(
status: Optional[str] = None,
swarm_config: Optional[dict] = None,
swarm_architecture: Optional[str] = None,
enabled_on: Optional[bool] = False,
):
"""
Log execution data for a swarm router instance.
@ -31,13 +277,16 @@ def log_execution(
... )
"""
try:
log_agent_data(
data_dict={
"swarm_router_id": swarm_id,
"status": status,
"swarm_router_config": swarm_config,
"swarm_architecture": swarm_architecture,
}
)
if enabled_on is None:
log_agent_data(
data_dict={
"swarm_router_id": swarm_id,
"status": status,
"swarm_router_config": swarm_config,
"swarm_architecture": swarm_architecture,
}
)
else:
pass
except Exception:
pass

@ -0,0 +1,624 @@
"""
Sparse Mixture-of-Experts (MoE) Transformer Implementation
Based on Gemini 2.5 architecture description
This implementation provides a sparse MoE architecture that activates only a subset
of expert parameters per input token, allowing for decoupling of model capacity
from computation cost.
"""
from typing import Dict, Optional, Tuple, Union
import torch
import torch.nn as nn
import torch.nn.functional as F
from loguru import logger
from torch import Tensor
class Expert(nn.Module):
"""
Individual expert network in the MoE architecture.
Each expert is a feed-forward network that specializes in processing
certain types of input patterns.
Args:
hidden_dim: Hidden dimension size
intermediate_dim: Intermediate dimension in feed-forward network
dropout: Dropout probability
activation: Activation function to use
"""
def __init__(
self,
hidden_dim: int,
intermediate_dim: int,
dropout: float = 0.1,
activation: str = "swish",
):
super().__init__()
self.hidden_dim = hidden_dim
self.intermediate_dim = intermediate_dim
# Feed-forward network
self.w1 = nn.Linear(hidden_dim, intermediate_dim, bias=False)
self.w2 = nn.Linear(intermediate_dim, hidden_dim, bias=False)
self.dropout = nn.Dropout(dropout)
# Activation function
if activation == "swish":
self.activation = lambda x: x * torch.sigmoid(x)
elif activation == "gelu":
self.activation = F.gelu
elif activation == "relu":
self.activation = F.relu
else:
raise ValueError(f"Unsupported activation: {activation}")
self._init_weights()
def _init_weights(self) -> None:
"""Initialize weights with proper scaling."""
nn.init.xavier_uniform_(self.w1.weight)
nn.init.xavier_uniform_(self.w2.weight)
def forward(self, x: Tensor) -> Tensor:
"""
Forward pass through the expert network.
Args:
x: Input tensor of shape [batch_size, seq_len, hidden_dim]
Returns:
Output tensor of shape [batch_size, seq_len, hidden_dim]
"""
x = self.w1(x)
x = self.activation(x)
x = self.dropout(x)
x = self.w2(x)
return x
class Router(nn.Module):
"""
Gating network that routes tokens to appropriate experts.
The router learns to assign input tokens to the most suitable experts
based on the token representations.
Args:
hidden_dim: Hidden dimension size
num_experts: Number of experts in the MoE layer
top_k: Number of experts to activate per token
temperature: Temperature for softmax routing
"""
def __init__(
self,
hidden_dim: int,
num_experts: int,
top_k: int = 2,
temperature: float = 1.0,
):
super().__init__()
self.hidden_dim = hidden_dim
self.num_experts = num_experts
self.top_k = top_k
self.temperature = temperature
# Linear layer for routing scores
self.gate = nn.Linear(hidden_dim, num_experts, bias=False)
self._init_weights()
def _init_weights(self) -> None:
"""Initialize routing weights."""
nn.init.xavier_uniform_(self.gate.weight)
def forward(self, x: Tensor) -> Tuple[Tensor, Tensor, Tensor]:
"""
Route tokens to experts.
Args:
x: Input tensor of shape [batch_size, seq_len, hidden_dim]
Returns:
Tuple of (routing_weights, expert_indices, routing_probs)
- routing_weights: [batch_size, seq_len, top_k]
- expert_indices: [batch_size, seq_len, top_k]
- routing_probs: [batch_size, seq_len, num_experts]
"""
batch_size, seq_len, hidden_dim = x.shape
# Compute routing scores
routing_logits = self.gate(
x
) # [batch_size, seq_len, num_experts]
routing_logits = routing_logits / self.temperature
# Apply softmax to get probabilities
routing_probs = F.softmax(routing_logits, dim=-1)
# Select top-k experts
routing_weights, expert_indices = torch.topk(
routing_probs, self.top_k, dim=-1
)
# Normalize routing weights
routing_weights = routing_weights / routing_weights.sum(
dim=-1, keepdim=True
)
return routing_weights, expert_indices, routing_probs
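# Shape example: with hidden_dim=64, num_experts=4, top_k=2 and an input of
# shape [2, 8, 64], this returns routing_weights and expert_indices of shape
# [2, 8, 2] and routing_probs of shape [2, 8, 4].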
class MoELayer(nn.Module):
"""
Sparse Mixture-of-Experts layer.
This layer contains multiple expert networks and a router that decides
which experts to activate for each input token.
Args:
hidden_dim: Hidden dimension size
num_experts: Number of expert networks
top_k: Number of experts to activate per token
intermediate_dim: Intermediate dimension in expert networks
dropout: Dropout probability
activation: Activation function for experts
load_balance_weight: Weight for load balancing loss
"""
def __init__(
self,
hidden_dim: int,
num_experts: int,
top_k: int = 2,
intermediate_dim: Optional[int] = None,
dropout: float = 0.1,
activation: str = "swish",
load_balance_weight: float = 0.01,
):
super().__init__()
self.hidden_dim = hidden_dim
self.num_experts = num_experts
self.top_k = top_k
self.load_balance_weight = load_balance_weight
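        # Fall back to the standard transformer feed-forward expansion of 4x
        # the hidden dimension when no intermediate size is given.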
if intermediate_dim is None:
intermediate_dim = hidden_dim * 4
# Create expert networks
self.experts = nn.ModuleList(
[
Expert(
hidden_dim, intermediate_dim, dropout, activation
)
for _ in range(num_experts)
]
)
# Router for expert selection
self.router = Router(hidden_dim, num_experts, top_k)
logger.info(
f"Created MoE layer with {num_experts} experts, top_k={top_k}"
)
def forward(self, x: Tensor) -> Tuple[Tensor, Dict[str, Tensor]]:
"""
Forward pass through MoE layer.
Args:
x: Input tensor of shape [batch_size, seq_len, hidden_dim]
Returns:
Tuple of (output, aux_losses)
- output: [batch_size, seq_len, hidden_dim]
- aux_losses: Dictionary containing auxiliary losses
"""
batch_size, seq_len, hidden_dim = x.shape
# Get routing decisions
routing_weights, expert_indices, routing_probs = self.router(
x
)
# Initialize output
output = torch.zeros_like(x)
# Process each expert
for i in range(self.num_experts):
# Create mask for tokens routed to this expert
expert_mask = (expert_indices == i).any(
dim=-1
) # [batch_size, seq_len]
if not expert_mask.any():
continue
# Get tokens for this expert
expert_tokens = x[expert_mask] # [num_tokens, hidden_dim]
if expert_tokens.numel() == 0:
continue
# Process through expert
expert_output = self.experts[i](expert_tokens)
# Compute weights for this expert
expert_weights = torch.zeros(
batch_size, seq_len, device=x.device
)
for k in range(self.top_k):
mask = expert_indices[:, :, k] == i
expert_weights[mask] = routing_weights[:, :, k][mask]
# Add weighted expert output
expert_contribution = torch.zeros_like(x)
expert_contribution[expert_mask] = expert_output
output += expert_contribution * expert_weights.unsqueeze(
-1
)
# Compute auxiliary losses
aux_losses = self._compute_aux_losses(
routing_probs, expert_indices
)
return output, aux_losses
def _compute_aux_losses(
self, routing_probs: Tensor, expert_indices: Tensor
) -> Dict[str, Tensor]:
"""
Compute auxiliary losses for training stability.
Args:
routing_probs: Routing probabilities [batch_size, seq_len, num_experts]
expert_indices: Selected expert indices [batch_size, seq_len, top_k]
Returns:
Dictionary of auxiliary losses
"""
batch_size, seq_len, num_experts = routing_probs.shape
# Load balancing loss
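        # Penalize the squared deviation of each expert's empirical usage
        # (fraction of routed token-slots) from the uniform target
        # 1/num_experts.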
expert_usage = torch.zeros(
num_experts, device=routing_probs.device
)
total_tokens = batch_size * seq_len * self.top_k
for i in range(num_experts):
expert_usage[i] = (
expert_indices == i
).sum().float() / total_tokens
target_usage = 1.0 / num_experts
load_balance_loss = F.mse_loss(
expert_usage, torch.full_like(expert_usage, target_usage)
)
        # Entropy of the routing distribution. Adding this (positive) term to
        # the training loss penalizes indecisive, high-entropy routing and so
        # sharpens expert selection; spreading load across experts is handled
        # by the load-balance loss above.
        entropy_loss = (
            -(routing_probs * torch.log(routing_probs + 1e-8))
            .sum(dim=-1)
            .mean()
        )
return {
"load_balance_loss": load_balance_loss
* self.load_balance_weight,
"entropy_loss": entropy_loss * 0.01,
"expert_usage": expert_usage,
}
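

# Illustrative usage sketch (assumed values): the MoE layer returns the mixed
# expert output plus auxiliary losses that would typically be added to the
# main training objective, e.g.
#   moe = MoELayer(hidden_dim=8, num_experts=4, top_k=2)
#   out, aux = moe(torch.randn(2, 4, 8))      # out.shape == (2, 4, 8)
#   reg = aux["load_balance_loss"] + aux["entropy_loss"]

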
class MoETransformerBlock(nn.Module):
"""
Transformer block with MoE feed-forward layer.
This block combines multi-head attention with a sparse MoE layer,
following the standard transformer architecture pattern.
Args:
hidden_dim: Hidden dimension size
num_heads: Number of attention heads
num_experts: Number of experts in MoE layer
top_k: Number of experts to activate per token
dropout: Dropout probability
layer_norm_eps: Epsilon for layer normalization
"""
def __init__(
self,
hidden_dim: int,
num_heads: int,
num_experts: int,
top_k: int = 2,
dropout: float = 0.1,
layer_norm_eps: float = 1e-6,
):
super().__init__()
self.hidden_dim = hidden_dim
# Multi-head attention
self.attention = nn.MultiheadAttention(
hidden_dim, num_heads, dropout=dropout, batch_first=True
)
# MoE layer
self.moe_layer = MoELayer(
hidden_dim=hidden_dim,
num_experts=num_experts,
top_k=top_k,
dropout=dropout,
)
# Layer normalization
self.norm1 = nn.LayerNorm(hidden_dim, eps=layer_norm_eps)
self.norm2 = nn.LayerNorm(hidden_dim, eps=layer_norm_eps)
# Dropout
self.dropout = nn.Dropout(dropout)
def forward(
self, x: Tensor, attention_mask: Optional[Tensor] = None
) -> Tuple[Tensor, Dict[str, Tensor]]:
"""
Forward pass through transformer block.
Args:
x: Input tensor [batch_size, seq_len, hidden_dim]
attention_mask: Optional attention mask
Returns:
Tuple of (output, aux_losses)
"""
# Self-attention with residual connection
residual = x
x = self.norm1(x)
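        # Note: nn.MultiheadAttention treats key_padding_mask entries that are
        # True as padding to be ignored; callers using a "1 = real token" mask
        # convention should invert the mask before passing it in.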
attn_output, _ = self.attention(
x, x, x, key_padding_mask=attention_mask
)
x = residual + self.dropout(attn_output)
# MoE layer with residual connection
residual = x
x = self.norm2(x)
moe_output, aux_losses = self.moe_layer(x)
x = residual + self.dropout(moe_output)
return x, aux_losses
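

# Illustrative usage sketch (assumed values): a single block applies pre-norm
# self-attention followed by a pre-norm MoE feed-forward, each with a residual
# connection, e.g.
#   block = MoETransformerBlock(hidden_dim=64, num_heads=4, num_experts=4)
#   y, aux = block(torch.randn(2, 16, 64))   # y.shape == (2, 16, 64)

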
class MoETransformer(nn.Module):
"""
Complete sparse MoE Transformer model.
This model implements the full transformer architecture with sparse
mixture-of-experts layers, similar to the Gemini 2.5 architecture.
Args:
vocab_size: Vocabulary size
hidden_dim: Hidden dimension size
num_layers: Number of transformer layers
num_heads: Number of attention heads
num_experts: Number of experts per MoE layer
top_k: Number of experts to activate per token
max_seq_len: Maximum sequence length
dropout: Dropout probability
"""
def __init__(
self,
vocab_size: int,
hidden_dim: int,
num_layers: int,
num_heads: int,
num_experts: int,
top_k: int = 2,
max_seq_len: int = 2048,
dropout: float = 0.1,
):
super().__init__()
self.vocab_size = vocab_size
self.hidden_dim = hidden_dim
self.num_layers = num_layers
self.max_seq_len = max_seq_len
# Token embedding
self.token_embedding = nn.Embedding(vocab_size, hidden_dim)
# Positional encoding
self.pos_embedding = nn.Parameter(
torch.randn(1, max_seq_len, hidden_dim) * 0.02
)
# Transformer layers
self.layers = nn.ModuleList(
[
MoETransformerBlock(
hidden_dim=hidden_dim,
num_heads=num_heads,
num_experts=num_experts,
top_k=top_k,
dropout=dropout,
)
for _ in range(num_layers)
]
)
# Final layer norm
self.final_norm = nn.LayerNorm(hidden_dim)
# Output projection
self.output_projection = nn.Linear(
hidden_dim, vocab_size, bias=False
)
# Tie input and output embeddings
self.output_projection.weight = self.token_embedding.weight
self._init_weights()
logger.info(
f"Created MoE Transformer with {num_layers} layers, "
f"{num_experts} experts per layer, hidden_dim={hidden_dim}"
)
    def _init_weights(self) -> None:
        """Initialize model weights."""
        nn.init.normal_(self.token_embedding.weight, std=0.02)
        nn.init.normal_(self.pos_embedding, std=0.02)
        # The output projection is weight-tied to the token embedding above,
        # so re-initializing it separately would only overwrite the same tensor.
def forward(
self,
input_ids: Tensor,
attention_mask: Optional[Tensor] = None,
return_aux_losses: bool = True,
) -> Union[Tensor, Tuple[Tensor, Dict[str, Tensor]]]:
"""
Forward pass through the model.
Args:
input_ids: Input token IDs [batch_size, seq_len]
attention_mask: Optional attention mask [batch_size, seq_len]
return_aux_losses: Whether to return auxiliary losses
Returns:
If return_aux_losses=False: logits [batch_size, seq_len, vocab_size]
If return_aux_losses=True: (logits, aux_losses)
"""
batch_size, seq_len = input_ids.shape
# Token embeddings
x = self.token_embedding(input_ids)
# Add positional encoding
x = x + self.pos_embedding[:, :seq_len, :]
# Collect auxiliary losses
all_aux_losses = {}
# Pass through transformer layers
for i, layer in enumerate(self.layers):
x, aux_losses = layer(x, attention_mask)
if return_aux_losses:
for key, value in aux_losses.items():
if key not in all_aux_losses:
all_aux_losses[key] = []
all_aux_losses[key].append(value)
# Final layer norm
x = self.final_norm(x)
# Output projection
logits = self.output_projection(x)
if not return_aux_losses:
return logits
# Average auxiliary losses across layers
avg_aux_losses = {}
for key, values in all_aux_losses.items():
if key == "expert_usage":
# For expert usage, we want to see all layers
avg_aux_losses[key] = torch.stack(values)
else:
avg_aux_losses[key] = torch.stack(values).mean()
return logits, avg_aux_losses
def get_num_parameters(self) -> int:
"""Get total number of parameters."""
return sum(p.numel() for p in self.parameters())
    def get_num_active_parameters(self) -> int:
        """Get the approximate number of parameters active per forward pass."""
        # Approximate: every token activates exactly top_k experts per MoE
        # layer; routing determines which experts are used, not how many.
        total_params = self.get_num_parameters()
        # Parameters of the experts actually activated per token
        # (top_k experts per layer, summed over all layers).
        active_expert_params = 0
        for layer in self.layers:
            params_per_expert = sum(
                p.numel()
                for p in layer.moe_layer.experts[0].parameters()
            )
            active_expert_params += (
                params_per_expert * layer.moe_layer.top_k
            )
        # Parameters of all experts across all layers.
        total_expert_params = sum(
            sum(
                p.numel()
                for expert in layer.moe_layer.experts
                for p in expert.parameters()
            )
            for layer in self.layers
        )
        # Replace the full expert parameter count with the activated share.
        active_params = (
            total_params - total_expert_params + active_expert_params
        )
        return active_params
# Example usage and testing
if __name__ == "__main__":
# Configure logger
logger.add("moe_training.log", rotation="500 MB", level="INFO")
# Model configuration
config = {
"vocab_size": 32000,
"hidden_dim": 768,
"num_layers": 12,
"num_heads": 12,
"num_experts": 8,
"top_k": 2,
"max_seq_len": 2048,
"dropout": 0.1,
}
# Create model
model = MoETransformer(**config)
# Print model info
total_params = model.get_num_parameters()
active_params = model.get_num_active_parameters()
logger.info(f"Total parameters: {total_params:,}")
logger.info(
f"Active parameters per forward pass: {active_params:,}"
)
logger.info(
f"Parameter efficiency: {active_params/total_params:.2%}"
)
# Test forward pass
batch_size, seq_len = 2, 512
input_ids = torch.randint(
0, config["vocab_size"], (batch_size, seq_len)
)
with torch.no_grad():
logits, aux_losses = model(input_ids)
logger.info(f"Input shape: {input_ids.shape}")
logger.info(f"Output shape: {logits.shape}")
logger.info(f"Auxiliary losses: {list(aux_losses.keys())}")
# Print expert usage statistics
expert_usage = aux_losses[
"expert_usage"
] # [num_layers, num_experts]
logger.info(f"Expert usage shape: {expert_usage.shape}")
logger.info(f"Average expert usage: {expert_usage.mean(dim=0)}")