[feat][heavyswarm][docs] [demo][persona sim] [feat][electionswarm] [enhance] [agentjudge][docs]

pull/889/merge
Kye Gomez 2 days ago
parent 6af45d7b69
commit fbfc3f3cea

@ -1,5 +1,4 @@
from swarms import Agent, ConcurrentWorkflow
# Initialize market research agent
market_researcher = Agent(

@ -0,0 +1,19 @@
# Class/function
Brief description
## Overview
## Architecture (Mermaid diagram)
## Class Reference (Constructor + Methods)
## Examples
## Conclusion
Benefits of class/structure, and more

@ -207,6 +207,7 @@ nav:
- Various Execution Methods: "swarms/structs/various_execution_methods.md"
- Deep Research Swarm: "swarms/structs/deep_research_swarm.md"
- Council of Judges: "swarms/structs/council_of_judges.md"
- Heavy Swarm: "swarms/structs/heavy_swarm.md"
- Hierarchical Architectures:
@ -265,6 +266,9 @@ nav:
- Deploy your agents on Phala: "swarms_cloud/phala_deploy.md"
# - Deploy your agents on FastAPI:
- More About Us:
- Swarms Ecosystem: "swarms/concept/ecosystem.md"
- Examples:
- Overview: "examples/index.md"
@ -368,6 +372,7 @@ nav:
- Finance Swarm: "swarms/examples/swarms_api_finance.md"
- Clients:
- Overview: "swarms_cloud/api_clients.md"
- Python Client: "swarms_cloud/python_client.md"
- Rust Client: "swarms_cloud/rust_client.md"

@ -1,214 +1,245 @@
# AgentJudge

A specialized agent for evaluating and judging outputs from other agents or systems. Acts as a quality control mechanism, providing objective assessments and feedback.

Based on the research paper: **"Agent-as-a-Judge: Evaluate Agents with Agents"** - [arXiv:2410.10934](https://arxiv.org/abs/2410.10934)

## Overview

The AgentJudge is designed to evaluate and critique outputs from other AI agents, providing structured feedback on quality, accuracy, and areas for improvement. It supports both single-shot evaluations and iterative refinement through multiple evaluation loops with context building.

Key capabilities:

- **Quality Assessment**: Evaluates correctness, clarity, and completeness of agent outputs
- **Structured Feedback**: Provides detailed critiques with strengths, weaknesses, and suggestions
- **Multimodal Support**: Can evaluate text outputs alongside images
- **Context Building**: Maintains evaluation context across multiple iterations
- **Batch Processing**: Efficiently processes multiple evaluations

## Architecture

```mermaid
graph TD
    A[Input Task/Tasks] --> B[AgentJudge]
    B --> C{Evaluation Mode}
    C -->|step()| D[Single Evaluation]
    C -->|run()| E[Iterative Evaluation]
    C -->|run_batched()| F[Batch Processing]
    D --> G[Agent Core]
    E --> H[Context Building Loop]
    F --> I[Independent Processing]
    G --> J[LLM Model]
    H --> J
    I --> J
    J --> K[Quality Analysis]
    K --> L[Feedback Generation]
    L --> M[Structured Output]

    subgraph "Evaluation Components"
        N[Strengths Analysis]
        O[Weakness Identification]
        P[Improvement Suggestions]
        Q[Factual Accuracy Check]
    end

    L --> N
    L --> O
    L --> P
    L --> Q
```

## Class Reference

### Constructor

```python
AgentJudge(
    id: str = str(uuid.uuid4()),
    agent_name: str = "Agent Judge",
    description: str = "You're an expert AI agent judge...",
    system_prompt: str = AGENT_JUDGE_PROMPT,
    model_name: str = "openai/o1",
    max_loops: int = 1,
    verbose: bool = False,
    *args,
    **kwargs
)
```

#### Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `id` | `str` | `str(uuid.uuid4())` | Unique identifier for the judge instance |
| `agent_name` | `str` | `"Agent Judge"` | Name of the agent judge |
| `description` | `str` | `"You're an expert AI agent judge..."` | Description of the agent's role |
| `system_prompt` | `str` | `AGENT_JUDGE_PROMPT` | System instructions for evaluation |
| `model_name` | `str` | `"openai/o1"` | LLM model for evaluation |
| `max_loops` | `int` | `1` | Maximum evaluation iterations |
| `verbose` | `bool` | `False` | Enable verbose logging |

### Methods

#### step()

```python
step(
    task: str = None,
    tasks: Optional[List[str]] = None,
    img: Optional[str] = None
) -> str
```

Processes a single task or list of tasks and returns the evaluation.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `task` | `str` | `None` | Single task/output to evaluate |
| `tasks` | `List[str]` | `None` | List of tasks/outputs to evaluate |
| `img` | `str` | `None` | Path to an image for multimodal evaluation |

**Returns:** `str` - Detailed evaluation response

#### run()

```python
run(
    task: str = None,
    tasks: Optional[List[str]] = None,
    img: Optional[str] = None
) -> List[str]
```

Executes evaluation in multiple iterations with context building.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `task` | `str` | `None` | Single task/output to evaluate |
| `tasks` | `List[str]` | `None` | List of tasks/outputs to evaluate |
| `img` | `str` | `None` | Path to an image for multimodal evaluation |

**Returns:** `List[str]` - List of evaluation responses from each iteration

#### run_batched()

```python
run_batched(
    tasks: Optional[List[str]] = None,
    imgs: Optional[List[str]] = None
) -> List[List[str]]
```

Executes batch evaluation of multiple tasks with corresponding images.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `tasks` | `List[str]` | `None` | List of tasks/outputs to evaluate |
| `imgs` | `List[str]` | `None` | List of image paths (same length as `tasks`) |

**Returns:** `List[List[str]]` - Evaluation responses for each task

## Examples

### Basic Usage

```python
from swarms import AgentJudge

# Initialize with default settings
judge = AgentJudge()

# Single task evaluation
result = judge.step(task="The capital of France is Paris.")
print(result)
```

### Custom Configuration

```python
from swarms import AgentJudge

# Custom judge configuration
judge = AgentJudge(
    agent_name="content-evaluator",
    model_name="gpt-4",
    max_loops=3,
    verbose=True
)

# Evaluate multiple outputs
outputs = [
    "Agent CalculusMaster: The integral of x^2 + 3x + 2 is (1/3)x^3 + (3/2)x^2 + 2x + C",
    "Agent DerivativeDynamo: The derivative of sin(x) is cos(x)",
    "Agent LimitWizard: The limit of sin(x)/x as x approaches 0 is 1"
]

evaluation = judge.step(tasks=outputs)
print(evaluation)
```

### Iterative Evaluation with Context

```python
from swarms import AgentJudge

# Multiple iterations with context building
judge = AgentJudge(max_loops=3)

# Each iteration builds on previous context
evaluations = judge.run(task="Agent output: 2+2=5")
for i, eval_result in enumerate(evaluations):
    print(f"Iteration {i+1}: {eval_result}\n")
```

### Multimodal Evaluation

```python
from swarms import AgentJudge

judge = AgentJudge()

# Evaluate with an image
evaluation = judge.step(
    task="Describe what you see in this image",
    img="path/to/image.jpg"
)
print(evaluation)
```

### Batch Processing

```python
from swarms import AgentJudge

judge = AgentJudge()

# Batch evaluation with images
tasks = [
    "Describe this chart",
    "What's the main trend?",
    "Any anomalies?"
]
images = [
    "chart1.png",
    "chart2.png",
    "chart3.png"
]

# Each task is evaluated independently
evaluations = judge.run_batched(tasks=tasks, imgs=images)
for i, task_evals in enumerate(evaluations):
    print(f"Task {i+1} evaluations: {task_evals}")
```

## Reference

```bibtex
@misc{zhuge2024agentasajudgeevaluateagentsagents,
      title={Agent-as-a-Judge: Evaluate Agents with Agents},
      author={Mingchen Zhuge and Changsheng Zhao and Dylan Ashley and Wenyi Wang and Dmitrii Khizbullin and Yunyang Xiong and Zechun Liu and Ernie Chang and Raghuraman Krishnamoorthi and Yuandong Tian and Yangyang Shi and Vikas Chandra and Jürgen Schmidhuber},
      year={2024},
      eprint={2410.10934},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2410.10934}
}
```

@ -0,0 +1,322 @@
# HeavySwarm Documentation
HeavySwarm is a sophisticated multi-agent orchestration system that decomposes complex tasks into specialized questions and executes them using four specialized agents: Research, Analysis, Alternatives, and Verification. The results are then synthesized into a comprehensive response.
Inspired by X.AI's Grok 4 Heavy implementation, HeavySwarm provides robust task analysis through intelligent question generation, parallel execution, and comprehensive synthesis with real-time progress monitoring.
## Architecture
### System Design
The HeavySwarm follows a structured 5-phase workflow, illustrated by the sketch after this list:
1. **Task Decomposition**: Complex tasks are broken down into specialized questions
2. **Question Generation**: AI-powered generation of role-specific questions
3. **Parallel Execution**: Four specialized agents work concurrently
4. **Result Collection**: Outputs are gathered and validated
5. **Synthesis**: Integration into a comprehensive final response
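To make the phases concrete, here is a minimal runnable sketch of the flow using stub functions in place of the LLM-backed agents. The names and prompts are illustrative only, not HeavySwarm's actual internals:

```python
from concurrent.futures import ThreadPoolExecutor


# Stub specialist standing in for an LLM-backed worker agent
def make_agent(role: str):
    def agent(question: str) -> str:
        return f"[{role}] findings for: {question}"

    return agent


def heavy_swarm_run(task: str) -> str:
    # Phases 1-2: decompose the task into one role-specific question each
    roles = ["Research", "Analysis", "Alternatives", "Verification"]
    questions = {role: f"As the {role} agent, address: {task}" for role in roles}

    # Phase 3: execute the four specialists in parallel
    with ThreadPoolExecutor(max_workers=len(roles)) as pool:
        futures = {r: pool.submit(make_agent(r), q) for r, q in questions.items()}
        # Phase 4: collect and validate outputs
        results = {r: f.result() for r, f in futures.items()}

    # Phase 5: synthesis, stubbed here as concatenation; the real system
    # uses a dedicated synthesis agent
    return "\n".join(results[r] for r in roles)


print(heavy_swarm_run("Assess energy-sector ETF opportunities"))
```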
### Agent Specialization
- **Research Agent**: Comprehensive information gathering and synthesis
- **Analysis Agent**: Pattern recognition and statistical analysis
- **Alternatives Agent**: Creative problem-solving and strategic options
- **Verification Agent**: Validation, feasibility assessment, and quality assurance
- **Synthesis Agent**: Multi-perspective integration and executive reporting
## Architecture Diagram
```mermaid
graph TB
subgraph "HeavySwarm Architecture"
A[Input Task] --> B[Question Generation Agent]
B --> C[Task Decomposition]
C --> D[Research Agent]
C --> E[Analysis Agent]
C --> F[Alternatives Agent]
C --> G[Verification Agent]
D --> H[Parallel Execution Engine]
E --> H
F --> H
G --> H
H --> I[Result Collection]
I --> J[Synthesis Agent]
J --> K[Comprehensive Report]
subgraph "Monitoring & Control"
L[Rich Dashboard]
M[Progress Tracking]
N[Error Handling]
O[Timeout Management]
end
H --> L
H --> M
H --> N
H --> O
end
subgraph "Agent Specializations"
D --> D1[Information Gathering<br/>Market Research<br/>Data Collection]
E --> E1[Statistical Analysis<br/>Pattern Recognition<br/>Predictive Modeling]
F --> F1[Creative Solutions<br/>Strategic Options<br/>Innovation Ideation]
G --> G1[Fact Checking<br/>Feasibility Assessment<br/>Quality Assurance]
end
style A fill:#ff6b6b
style K fill:#4ecdc4
style H fill:#45b7d1
style J fill:#96ceb4
```
## Installation
```bash
pip install swarms
```
## Quick Start
```python
from swarms import HeavySwarm
# Initialize the swarm
swarm = HeavySwarm(
name="MarketAnalysisSwarm",
description="Financial market analysis swarm",
question_agent_model_name="gpt-4o-mini",
worker_model_name="gpt-4o-mini",
show_dashboard=True,
verbose=True
)
# Execute analysis
result = swarm.run("Analyze the current cryptocurrency market trends and investment opportunities")
print(result)
```
## API Reference
### HeavySwarm Class
#### Constructor Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `name` | `str` | `"HeavySwarm"` | Identifier name for the swarm instance |
| `description` | `str` | `"A swarm of agents..."` | Description of the swarm's purpose |
| `agents` | `List[Agent]` | `None` | Pre-configured agent list (unused - agents created internally) |
| `timeout` | `int` | `300` | Maximum execution time per agent in seconds |
| `aggregation_strategy` | `str` | `"synthesis"` | Strategy for result aggregation |
| `loops_per_agent` | `int` | `1` | Number of execution loops per agent |
| `question_agent_model_name` | `str` | `"gpt-4o-mini"` | Model for question generation |
| `worker_model_name` | `str` | `"gpt-4o-mini"` | Model for specialized worker agents |
| `verbose` | `bool` | `False` | Enable detailed logging output |
| `max_workers` | `int` | `int(os.cpu_count() * 0.9)` | Maximum concurrent workers |
| `show_dashboard` | `bool` | `False` | Enable rich dashboard visualization |
| `agent_prints_on` | `bool` | `False` | Enable individual agent output printing |
#### Methods
##### `run(task: str, img: str = None) -> str`
Execute the complete HeavySwarm orchestration flow.
**Parameters:**
- `task` (str): The main task to analyze and decompose
- `img` (str, optional): Image input for visual analysis tasks
**Returns:**
- `str`: Comprehensive final analysis from synthesis agent
**Example:**
```python
result = swarm.run("Develop a go-to-market strategy for a new SaaS product")
```
## Real-World Applications
### Financial Services
```python
# Market Analysis
swarm = HeavySwarm(
name="FinanceSwarm",
worker_model_name="gpt-4o",
show_dashboard=True
)
result = swarm.run("""
Analyze the impact of recent Federal Reserve policy changes on:
1. Bond markets and yield curves
2. Equity market valuations
3. Currency exchange rates
4. Provide investment recommendations for institutional portfolios
""")
```
**Use Cases:**
| Use Case | Description |
|---------------------------------------------|---------------------------------------------|
| Portfolio optimization and risk assessment | Optimize asset allocation and assess risks |
| Market trend analysis and forecasting | Analyze and predict market movements |
| Regulatory compliance evaluation | Evaluate adherence to financial regulations |
| Investment strategy development | Develop and refine investment strategies |
| Credit risk analysis and modeling | Analyze and model credit risk |
---
### Healthcare & Life Sciences
```python
# Clinical Research Analysis
swarm = HeavySwarm(
name="HealthcareSwarm",
worker_model_name="gpt-4o",
timeout=600,
loops_per_agent=2
)
result = swarm.run("""
Evaluate the potential of AI-driven personalized medicine:
1. Current technological capabilities and limitations
2. Regulatory landscape and approval pathways
3. Market opportunities and competitive analysis
4. Implementation strategies for healthcare systems
""")
```
---
**Use Cases:**
| Use Case | Description |
|----------------------------------------|---------------------------------------------|
| Drug discovery and development analysis| Analyze and accelerate drug R&D processes |
| Clinical trial optimization | Improve design and efficiency of trials |
| Healthcare policy evaluation | Assess and inform healthcare policies |
| Medical device market analysis | Evaluate trends and opportunities in devices|
| Patient outcome prediction modeling | Predict and model patient health outcomes |
---
### Technology & Innovation
```python
# Tech Strategy Analysis
swarm = HeavySwarm(
name="TechSwarm",
worker_model_name="gpt-4o",
show_dashboard=True,
verbose=True
)
result = swarm.run("""
Assess the strategic implications of quantum computing adoption:
1. Technical readiness and hardware developments
2. Industry applications and use cases
3. Competitive landscape and key players
4. Investment and implementation roadmap
""")
```
**Use Cases:**
| Use Case | Description |
|------------------------------------|---------------------------------------------|
| Technology roadmap development | Plan and prioritize technology initiatives |
| Competitive intelligence gathering | Analyze competitors and market trends |
| Innovation pipeline analysis | Evaluate and manage innovation projects |
| Digital transformation strategy | Develop and implement digital strategies |
| Emerging technology assessment | Assess new and disruptive technologies |
### Manufacturing & Supply Chain
```python
# Supply Chain Optimization
swarm = HeavySwarm(
name="ManufacturingSwarm",
worker_model_name="gpt-4o",
max_workers=8
)
result = swarm.run("""
Optimize global supply chain resilience:
1. Risk assessment and vulnerability analysis
2. Alternative sourcing strategies
3. Technology integration opportunities
4. Cost-benefit analysis of proposed changes
""")
```
**Use Cases:**
| Use Case | Description |
|----------------------------------|---------------------------------------------|
| Supply chain risk management | Identify and mitigate supply chain risks |
| Manufacturing process optimization | Improve efficiency and productivity |
| Quality control system design | Develop systems to ensure product quality |
| Sustainability impact assessment | Evaluate environmental and social impacts |
| Logistics network optimization | Enhance logistics and distribution networks |
## Advanced Configuration
### Custom Agent Configuration
```python
# High-performance configuration
swarm = HeavySwarm(
name="HighPerformanceSwarm",
question_agent_model_name="gpt-4o",
worker_model_name="gpt-4o",
timeout=900,
loops_per_agent=3,
max_workers=12,
show_dashboard=True,
verbose=True
)
```
## Troubleshooting
| Issue | Solution |
|-------------------------|---------------------------------------------------------------|
| **Agent Timeout** | Increase timeout parameter or reduce task complexity |
| **Model Rate Limits** | Implement backoff strategies (see the sketch below) or use different models |
| **Memory Usage** | Monitor system resources with large-scale operations |
| **Dashboard Performance** | Disable dashboard for batch processing |
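For transient rate-limit errors, one option is a small backoff wrapper around `swarm.run`. This is a generic sketch rather than a built-in HeavySwarm feature; narrow the caught exception to whatever your model provider actually raises:

```python
import random
import time


def run_with_backoff(swarm, task, max_retries=5, base_delay=2.0):
    """Retry swarm.run with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return swarm.run(task)
        except Exception:  # replace with your provider's rate-limit error
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2**attempt) + random.uniform(0, 1))
```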
## Contributing
HeavySwarm is part of the Swarms ecosystem. Contributions are welcome for:
- New agent specializations
- Performance optimizations
- Integration capabilities
- Documentation improvements
## Acknowledgments
- Inspired by X.AI's Grok Heavy implementation
- Built on the Swarms framework
- Utilizes Rich for dashboard visualization
- Powered by advanced language models

@ -0,0 +1,242 @@
# Swarms API Clients
*Production-Ready Client Libraries for Every Programming Language*
## Overview
The Swarms API provides official client libraries across multiple programming languages, enabling developers to integrate powerful multi-agent AI capabilities into their applications with ease. Our clients are designed for production use, featuring robust error handling, comprehensive documentation, and seamless integration with existing codebases.
Whether you're building enterprise applications, research prototypes, or innovative AI products, our client libraries provide the tools you need to harness the full power of the Swarms platform.
## Available Clients
| Language | Status | Repository | Documentation | Description |
|----------|--------|------------|---------------|-------------|
| **Python** | ✅ **Available** | [swarms-sdk](https://github.com/The-Swarm-Corporation/swarms-sdk) | [Docs](https://docs.swarms.world/en/latest/swarms_cloud/python_client/) | Production-grade Python client with comprehensive error handling, retry logic, and extensive examples |
| **TypeScript/Node.js** | ✅ **Available** | [swarms-ts](https://github.com/The-Swarm-Corporation/swarms-ts) | 📚 *Coming Soon* | Modern TypeScript client with full type safety, Promise-based API, and Node.js compatibility |
| **Go** | ✅ **Available** | [swarms-client-go](https://github.com/The-Swarm-Corporation/swarms-client-go) | 📚 *Coming Soon* | High-performance Go client optimized for concurrent operations and microservices |
| **Java** | ✅ **Available** | [swarms-java](https://github.com/The-Swarm-Corporation/swarms-java) | 📚 *Coming Soon* | Enterprise Java client with Spring Boot integration and comprehensive SDK features |
| **Kotlin** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | Modern Kotlin client with coroutines support and Android compatibility |
| **Ruby** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | Elegant Ruby client with Rails integration and gem packaging |
| **Rust** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | Ultra-fast Rust client with memory safety and zero-cost abstractions |
| **C#/.NET** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | .NET client with async/await support and NuGet packaging |
## Client Features
All Swarms API clients are built with the following enterprise-grade features:
### 🔧 **Core Functionality**
| Feature | Description |
|------------------------|--------------------------------------------------------------------|
| **Full API Coverage** | Complete access to all Swarms API endpoints |
| **Type Safety** | Strongly-typed interfaces for all request/response objects |
| **Error Handling** | Comprehensive error handling with detailed error messages |
| **Retry Logic** | Automatic retries with exponential backoff for transient failures |
---
### 🚀 **Performance & Reliability**
| Feature | Description |
|--------------------------|--------------------------------------------------------------------|
| **Connection Pooling** | Efficient HTTP connection management |
| **Rate Limiting** | Built-in rate limit handling and backoff strategies |
| **Timeout Configuration**| Configurable timeouts for different operation types |
| **Streaming Support** | Real-time streaming for long-running operations |
---
### 🛡️ **Security & Authentication**
| Feature | Description |
|------------------------|--------------------------------------------------------------------|
| **API Key Management** | Secure API key handling and rotation |
| **TLS/SSL** | End-to-end encryption for all communications |
| **Request Signing** | Optional request signing for enhanced security |
| **Environment Configuration** | Secure environment-based configuration |
---
### 📊 **Monitoring & Debugging**
| Feature | Description |
|----------------------------|--------------------------------------------------------------------|
| **Comprehensive Logging** | Detailed logging for debugging and monitoring |
| **Request/Response Tracing** | Full request/response tracing capabilities |
| **Metrics Integration** | Built-in metrics for monitoring client performance |
| **Debug Mode** | Enhanced debugging features for development |
## Client-Specific Features
### Python Client
| Feature | Description |
|------------------------|----------------------------------------------------------|
| **Async Support** | Full async/await support with `asyncio` |
| **Pydantic Integration** | Type-safe request/response models |
| **Context Managers** | Resource management with context managers |
| **Rich Logging** | Integration with Python's `logging` module |
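The sketch below shows how these features typically combine in application code. `SwarmsClient` and its methods are stand-ins defined inline for illustration, not the actual swarms-sdk API; see the client documentation linked above for the real interface:

```python
import asyncio


# Hypothetical client illustrating async + context-manager usage patterns;
# the real swarms-sdk interface may differ.
class SwarmsClient:
    def __init__(self, api_key: str):
        self.api_key = api_key

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        return False  # resources would be released here

    async def run_agent(self, task: str) -> str:
        await asyncio.sleep(0)  # stand-in for the HTTP round trip
        return f"result for: {task}"


async def main():
    async with SwarmsClient(api_key="your_api_key_here") as client:
        print(await client.run_agent("Summarize today's market news"))


asyncio.run(main())
```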
---
### TypeScript/Node.js Client
| Feature | Description |
|------------------------|----------------------------------------------------------|
| **TypeScript First** | Built with TypeScript for maximum type safety |
| **Promise-Based** | Modern Promise-based API with async/await |
| **Browser Compatible** | Works in both Node.js and modern browsers |
| **Zero Dependencies** | Minimal dependency footprint |
---
### Go Client
| Feature | Description |
|------------------------|----------------------------------------------------------|
| **Context Support** | Full context.Context support for cancellation |
| **Structured Logging** | Integration with structured logging libraries |
| **Concurrency Safe** | Thread-safe design for concurrent operations |
| **Minimal Allocation** | Optimized for minimal memory allocation |
---
### Java Client
| Feature | Description |
|------------------------|----------------------------------------------------------|
| **Spring Boot Ready** | Built-in Spring Boot auto-configuration |
| **Reactive Support** | Optional reactive streams support |
| **Enterprise Features**| JMX metrics, health checks, and more |
| **Maven & Gradle** | Available on Maven Central |
## Advanced Configuration
### Environment Variables
All clients support standard environment variables for configuration:
```bash
# API Configuration
SWARMS_API_KEY=your_api_key_here
SWARMS_BASE_URL=https://api.swarms.world
# Client Configuration
SWARMS_TIMEOUT=60
SWARMS_MAX_RETRIES=3
SWARMS_LOG_LEVEL=INFO
```
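In Python, for example, an application might assemble its configuration from these variables before constructing a client. The `config` dict here is illustrative, not a specific client's constructor signature:

```python
import os

# Read the standard Swarms environment variables shown above
config = {
    "api_key": os.environ["SWARMS_API_KEY"],  # required
    "base_url": os.environ.get("SWARMS_BASE_URL", "https://api.swarms.world"),
    "timeout": int(os.environ.get("SWARMS_TIMEOUT", "60")),
    "max_retries": int(os.environ.get("SWARMS_MAX_RETRIES", "3")),
    "log_level": os.environ.get("SWARMS_LOG_LEVEL", "INFO"),
}
```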
## Community & Support
### 📚 **Documentation & Resources**
| Resource | Link |
|-----------------------------|----------------------------------------------------------------------------------------|
| Complete API Documentation | [View Docs](https://docs.swarms.world/en/latest/swarms_cloud/swarms_api/) |
| Python Client Docs | [View Docs](https://docs.swarms.world/en/latest/swarms_cloud/python_client/) |
| API Examples & Tutorials | [View Examples](https://docs.swarms.world/en/latest/examples/) |
---
### 💬 **Community Support**
| Community Channel | Description | Link |
|-----------------------------|---------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------|
| Discord Community | Join our active developer community for real-time support and discussions | [Join Discord](https://discord.gg/jM3Z6M9uMq) |
| GitHub Discussions | Ask questions and share ideas | [GitHub Discussions](https://github.com/The-Swarm-Corporation/swarms/discussions) |
| Twitter/X | Follow for updates and announcements | [Twitter/X](https://x.com/swarms_corp) |
---
### 🐛 **Issue Reporting & Contributions**
| Contribution Area | Description | Link |
|-----------------------------|---------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------|
| Report Bugs | Help us improve by reporting issues | [Report Bugs](https://github.com/The-Swarm-Corporation/swarms/issues) |
| Feature Requests | Suggest new features and improvements | [Feature Requests](https://github.com/The-Swarm-Corporation/swarms/issues) |
| Contributing Guide | Learn how to contribute to the project | [Contributing Guide](https://docs.swarms.world/en/latest/contributors/main/) |
---
### 📧 **Direct Support**
| Support Type | Contact Information |
|-----------------------------|---------------------------------------------------------------------------------------|
| Support Call | [Book a call](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true) |
| Enterprise Support | Contact us for dedicated enterprise support options |
## Contributing to Client Development
We welcome contributions to all our client libraries! Here's how you can help:
### 🛠️ **Development**
| Task | Description |
|-----------------------------------------|--------------------------------------------------|
| Implement new features and endpoints | Add new API features and expand client coverage |
| Improve error handling and retry logic | Enhance robustness and reliability |
| Add comprehensive test coverage | Ensure code quality and prevent regressions |
| Optimize performance and memory usage | Improve speed and reduce resource consumption |
---
### 📝 **Documentation**
| Task | Description |
|-----------------------------|-----------------------------------------------------|
| Write tutorials and examples | Create guides and sample code for users |
| Improve API documentation | Clarify and expand reference docs |
| Create integration guides | Help users connect clients to their applications |
| Translate documentation | Make docs accessible in multiple languages |
---
### 🧪 **Testing**
| Task | Description |
|-------------------------------|-----------------------------------------------------|
| Add unit and integration tests | Test individual components and end-to-end flows |
| Test with different language versions | Ensure compatibility across environments |
| Performance benchmarking | Measure and optimize speed and efficiency |
| Security testing | Identify and fix vulnerabilities |
---
### 📦 **Packaging**
| Task | Description |
|-------------------------------|-----------------------------------------------------|
| Package managers (npm, pip, Maven, etc.) | Publish to popular package repositories |
| Distribution optimization | Streamline builds and reduce package size |
| Version management | Maintain clear versioning and changelogs |
| Release automation | Automate build, test, and deployment pipelines |
## Enterprise Features
For enterprise customers, we offer additional features and support:
### 🏢 **Enterprise Client Features**
| Feature | Description |
|--------------------------|----------------------------------------------------------------|
| **Priority Support** | Dedicated support team with SLA guarantees |
| **Custom Integrations** | Tailored integrations for your specific needs |
| **On-Premises Deployment** | Support for on-premises or private cloud deployments |
| **Advanced Security** | Enhanced security features and compliance support |
| **Training & Onboarding**| Comprehensive training for your development team |
### 📞 **Contact Enterprise Sales**
| Contact Type | Details |
|----------------|-----------------------------------------------------------------------------------------|
| **Sales** | [kye@swarms.world](mailto:kye@swarms.world) |
| **Schedule Demo** | [Book a Demo](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true) |
| **Partnership**| [kye@swarms.world](mailto:kye@swarms.world) |
---
*Ready to build the future with AI agents? Start with any of our client libraries and join our growing community of developers building the next generation of intelligent applications.*

@ -0,0 +1,250 @@
from swarms import Agent
from swarms.structs.election_swarm import (
ElectionSwarm,
)
# Create candidate agents for Apple CEO position
tim_cook = Agent(
agent_name="Tim Cook - Current CEO",
system_prompt="""You are Tim Cook, the current CEO of Apple Inc. since 2011.
Your background:
- 13+ years as Apple CEO, succeeding Steve Jobs
- Former COO of Apple (2007-2011)
- Former VP of Operations at Compaq
- MBA from Duke University
- Known for operational excellence and supply chain management
- Led Apple to become the world's most valuable company
- Expanded Apple's services business significantly
- Strong focus on privacy, sustainability, and social responsibility
- Successfully navigated global supply chain challenges
- Annual revenue growth from $108B to $394B during tenure
Strengths: Operational expertise, global experience, proven track record, strong relationships with suppliers and partners, focus on privacy and sustainability.
Challenges: Perceived lack of innovation compared to Jobs era, heavy reliance on iPhone revenue, limited new product categories.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
sundar_pichai = Agent(
agent_name="Sundar Pichai - Google/Alphabet CEO",
system_prompt="""You are Sundar Pichai, CEO of Alphabet Inc. and Google since 2015.
Your background:
- CEO of Alphabet Inc. since 2019, Google since 2015
- Former Senior VP of Chrome, Apps, and Android
- Led development of Chrome browser and Android platform
- MS in Engineering from Stanford, MBA from Wharton
- Known for product development and AI leadership
- Successfully integrated AI into Google's core products
- Led Google's cloud computing expansion
- Strong focus on AI/ML and emerging technologies
- Experience with large-scale platform management
- Annual revenue growth from $75B to $307B during tenure
Strengths: AI/ML expertise, product development, platform management, experience with large-scale operations, strong technical background.
Challenges: Limited hardware experience, regulatory scrutiny, different company culture.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
jensen_huang = Agent(
agent_name="Jensen Huang - NVIDIA CEO",
system_prompt="""You are Jensen Huang, CEO and co-founder of NVIDIA since 1993.
Your background:
- CEO and co-founder of NVIDIA for 31 years
- Former engineer at AMD and LSI Logic
- MS in Electrical Engineering from Stanford
- Led NVIDIA from graphics cards to AI computing leader
- Pioneered GPU computing and AI acceleration
- Successfully pivoted company to AI/data center focus
- Market cap grew from $2B to $2.5T+ under leadership
- Known for long-term vision and technical innovation
- Strong focus on AI, robotics, and autonomous vehicles
- Annual revenue growth from $3.9B to $60B+ during recent years
Strengths: Technical innovation, AI expertise, long-term vision, proven ability to pivot business models, strong engineering background, experience building new markets.
Challenges: Limited consumer hardware experience, different industry focus, no experience with Apple's ecosystem.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
# Create board member voter agents with realistic personas
arthur_levinson = Agent(
agent_name="Arthur Levinson - Chairman",
system_prompt="""You are Arthur Levinson, Chairman of Apple's Board of Directors since 2011.
Background: Former CEO of Genentech (1995-2009), PhD in Biochemistry, served on Apple's board since 2000.
Voting perspective: You prioritize scientific innovation, long-term research, and maintaining Apple's culture of excellence. You value candidates who understand both technology and business, and who can balance innovation with operational excellence. You're concerned about Apple's future in AI and biotechnology.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
james_bell = Agent(
agent_name="James Bell - Board Member",
system_prompt="""You are James Bell, Apple board member since 2015.
Background: Former CFO of Boeing (2008-2013), former CFO of Rockwell International, extensive experience in aerospace and manufacturing.
Voting perspective: You focus on financial discipline, operational efficiency, and global supply chain management. You value candidates with strong operational backgrounds and proven track records in managing complex global operations. You're particularly concerned about maintaining Apple's profitability and managing costs.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
al_gore = Agent(
agent_name="Al Gore - Board Member",
system_prompt="""You are Al Gore, Apple board member since 2003.
Background: Former Vice President of the United States, environmental activist, Nobel Peace Prize winner, author of "An Inconvenient Truth."
Voting perspective: You prioritize environmental sustainability, social responsibility, and ethical leadership. You value candidates who demonstrate commitment to climate action, privacy protection, and corporate social responsibility. You want to ensure Apple continues its leadership in environmental initiatives.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
monica_lozano = Agent(
agent_name="Monica Lozano - Board Member",
system_prompt="""You are Monica Lozano, Apple board member since 2014.
Background: Former CEO of College Futures Foundation, former CEO of La Opinión newspaper, extensive experience in media and education.
Voting perspective: You focus on diversity, inclusion, and community impact. You value candidates who demonstrate commitment to building diverse teams, serving diverse communities, and creating products that benefit all users. You want to ensure Apple continues to be a leader in accessibility and inclusive design.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
ron_sugar = Agent(
agent_name="Ron Sugar - Board Member",
system_prompt="""You are Ron Sugar, Apple board member since 2010.
Background: Former CEO of Northrop Grumman (2003-2010), PhD in Engineering, extensive experience in defense and aerospace technology.
Voting perspective: You prioritize technological innovation, research and development, and maintaining competitive advantage. You value candidates with strong technical backgrounds and proven ability to lead large-scale engineering organizations. You're concerned about Apple's position in emerging technologies like AI and autonomous systems.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
susan_wagner = Agent(
agent_name="Susan Wagner - Board Member",
system_prompt="""You are Susan Wagner, Apple board member since 2014.
Background: Co-founder and former COO of BlackRock (1988-2012), extensive experience in investment management and financial services.
Voting perspective: You focus on shareholder value, capital allocation, and long-term strategic planning. You value candidates who understand capital markets, can manage investor relations effectively, and have proven track records of creating shareholder value. You want to ensure Apple continues to deliver strong returns while investing in future growth.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
andrea_jung = Agent(
agent_name="Andrea Jung - Board Member",
system_prompt="""You are Andrea Jung, Apple board member since 2008.
Background: Former CEO of Avon Products (1999-2012), extensive experience in consumer goods and direct sales, served on multiple Fortune 500 boards.
Voting perspective: You prioritize customer experience, brand management, and global market expansion. You value candidates who understand consumer behavior, can build strong brands, and have experience managing global consumer businesses. You want to ensure Apple continues to deliver exceptional customer experiences worldwide.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
bob_iger = Agent(
agent_name="Bob Iger - Board Member",
system_prompt="""You are Bob Iger, Apple board member since 2011.
Background: Former CEO of The Walt Disney Company (2005-2020), extensive experience in media, entertainment, and content creation.
Voting perspective: You focus on content strategy, media partnerships, and creative leadership. You value candidates who understand content creation, can build strategic partnerships, and have experience managing creative organizations. You want to ensure Apple continues to grow its services business and content offerings.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
alex_gorsky = Agent(
agent_name="Alex Gorsky - Board Member",
system_prompt="""You are Alex Gorsky, Apple board member since 2019.
Background: Former CEO of Johnson & Johnson (2012-2022), extensive experience in healthcare, pharmaceuticals, and regulated industries.
Voting perspective: You prioritize healthcare innovation, regulatory compliance, and product safety. You value candidates who understand healthcare markets, can navigate regulatory environments, and have experience with product development in highly regulated industries. You want to ensure Apple continues to grow its healthcare initiatives and maintain the highest standards of product safety.""",
model_name="gpt-4.1",
max_loops=1,
temperature=0.7,
# tools_list_dictionary=get_vote_schema(),
)
# Create lists of voters and candidates
voter_agents = [
arthur_levinson,
james_bell,
al_gore,
# monica_lozano,
# ron_sugar,
# susan_wagner,
# andrea_jung,
# bob_iger,
# alex_gorsky,
]
candidate_agents = [tim_cook, sundar_pichai, jensen_huang]
# Create the election swarm
apple_election = ElectionSwarm(
name="Apple Board Election for CEO",
description="Board election to select the next CEO of Apple Inc.",
agents=voter_agents,
candidate_agents=candidate_agents,
max_loops=1,
show_dashboard=False,
)
# Define the election task
election_task = """
You are participating in a critical board election to select the next CEO of Apple Inc.
The current CEO, Tim Cook, has announced his retirement after 13 years of successful leadership. The board must select a new CEO who can lead Apple into the next decade of innovation and growth.
Key considerations for the next CEO:
1. Leadership in AI and emerging technologies
2. Ability to maintain Apple's culture of innovation and excellence
3. Experience with global operations and supply chain management
4. Commitment to privacy, sustainability, and social responsibility
5. Track record of creating shareholder value
6. Ability to expand Apple's services business
7. Experience with hardware and software integration
8. Vision for Apple's future in healthcare, automotive, and other new markets
Please carefully evaluate each candidate based on their background, experience, and alignment with Apple's values and strategic objectives. Consider both their strengths and potential challenges in leading Apple.
Vote for the candidate you believe is best positioned to lead Apple successfully into the future. Provide a detailed explanation of your reasoning for your vote and name a specific candidate.
"""
# Run the election
results = apple_election.run(election_task)
print(results)
print(type(results))

@ -0,0 +1,104 @@
"""
Example usage of log_function_execution decorator with class methods.
This demonstrates how the decorator works with:
- Instance methods
- Class methods
- Static methods
- Property methods
"""
from swarms.telemetry.log_executions import log_function_execution
class DataProcessor:
"""Example class to demonstrate decorator usage with methods."""
def __init__(self, name: str, version: str = "1.0"):
self.name = name
self.version = version
self.processed_count = 0
@log_function_execution(
swarm_id="data-processor-instance",
swarm_architecture="object_oriented",
enabled_on=True,
)
def process_data(self, data: list, multiplier: int = 2) -> dict:
"""Instance method that processes data."""
processed = [x * multiplier for x in data]
self.processed_count += len(data)
return {
"original": data,
"processed": processed,
"processor_name": self.name,
"count": len(processed),
}
@classmethod
@log_function_execution(
swarm_id="data-processor-class",
swarm_architecture="class_method",
enabled_on=True,
)
def create_default(cls, name: str):
"""Class method to create a default instance."""
return cls(name=name, version="default")
@staticmethod
@log_function_execution(
swarm_id="data-processor-static",
swarm_architecture="utility",
enabled_on=True,
)
def validate_data(data: list) -> bool:
"""Static method to validate data."""
return isinstance(data, list) and len(data) > 0
@property
def status(self) -> str:
"""Property method (not decorated as it's a getter)."""
return f"{self.name} v{self.version} - {self.processed_count} items processed"
class AdvancedProcessor(DataProcessor):
"""Subclass to test inheritance with decorated methods."""
@log_function_execution(
swarm_id="advanced-processor",
swarm_architecture="inheritance",
enabled_on=True,
)
def advanced_process(
self, data: list, algorithm: str = "enhanced"
) -> dict:
"""Advanced processing method in subclass."""
base_result = self.process_data(data, multiplier=3)
return {
**base_result,
"algorithm": algorithm,
"advanced": True,
"processor_type": "AdvancedProcessor",
}
if __name__ == "__main__":
print("Testing decorator with class methods...")
# Test instance method
print("\n1. Testing instance method:")
processor = DataProcessor("TestProcessor", "2.0")
result1 = processor.process_data([1, 2, 3, 4], multiplier=5)
print(f"Result: {result1}")
print(f"Status: {processor.status}")
# Test class method
print("\n2. Testing class method:")
default_processor = DataProcessor.create_default(
"DefaultProcessor"
)
print(
f"Created: {default_processor.name} v{default_processor.version}"
)
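
    # Test static method (rounding out the demo promised in the module docstring)
    print("\n3. Testing static method:")
    print(f"Validation result: {DataProcessor.validate_data([1, 2, 3])}")

    # Test the decorated subclass method and the (undecorated) property
    print("\n4. Testing subclass and property:")
    advanced = AdvancedProcessor("AdvProcessor")
    result2 = advanced.advanced_process([10, 20], algorithm="enhanced")
    print(f"Result: {result2}")
    print(f"Status: {advanced.status}")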

@ -0,0 +1,116 @@
"""
Example usage of the log_function_execution decorator.
This example demonstrates how to use the decorator to automatically log
function executions including parameters, outputs, and execution metadata.
"""
from swarms.telemetry.log_executions import log_function_execution
# Example 1: Simple function with basic parameters
@log_function_execution(
swarm_id="example-swarm-001",
swarm_architecture="sequential",
enabled_on=True,
)
def calculate_sum(a: int, b: int) -> int:
"""Calculate the sum of two numbers."""
return a + b
# Example 2: Function with complex parameters and return values
@log_function_execution(
swarm_id="data-processing-swarm",
swarm_architecture="parallel",
enabled_on=True,
)
def process_data(
data_list: list,
threshold: float = 0.5,
include_metadata: bool = True,
) -> dict:
"""Process a list of data with filtering and metadata generation."""
filtered_data = [x for x in data_list if x > threshold]
result = {
"original_count": len(data_list),
"filtered_count": len(filtered_data),
"filtered_data": filtered_data,
"threshold_used": threshold,
}
if include_metadata:
result["metadata"] = {
"processing_method": "threshold_filter",
"success": True,
}
return result
# Example 3: Function that might raise an exception
@log_function_execution(
swarm_id="validation-swarm",
swarm_architecture="error_handling",
enabled_on=True,
)
def validate_input(value: str, min_length: int = 5) -> bool:
"""Validate input string length."""
if not isinstance(value, str):
raise TypeError(f"Expected string, got {type(value)}")
if len(value) < min_length:
raise ValueError(
f"String too short: {len(value)} < {min_length}"
)
return True
# Example 4: Decorator with logging disabled
@log_function_execution(
swarm_id="silent-swarm",
swarm_architecture="background",
enabled_on=False, # Logging disabled
)
def silent_function(x: int) -> int:
"""This function won't be logged."""
return x * 2
if __name__ == "__main__":
print("Testing log_function_execution decorator...")
# Test successful executions
print("\n1. Testing simple sum calculation:")
result1 = calculate_sum(5, 3)
print(f"Result: {result1}")
print("\n2. Testing data processing:")
sample_data = [0.2, 0.7, 1.2, 0.1, 0.9, 1.5]
result2 = process_data(
sample_data, threshold=0.5, include_metadata=True
)
print(f"Result: {result2}")
print("\n3. Testing validation with valid input:")
result3 = validate_input("hello world", min_length=5)
print(f"Result: {result3}")
print("\n4. Testing silent function (no logging):")
result4 = silent_function(10)
print(f"Result: {result4}")
print(
"\n5. Testing validation with invalid input (will raise exception):"
)
try:
validate_input("hi", min_length=5)
except ValueError as e:
print(f"Caught expected error: {e}")
print("\nAll function calls have been logged automatically!")
print(
"Check your telemetry logs to see the captured execution data."
)

@ -0,0 +1,16 @@
from swarms.structs.heavy_swarm import HeavySwarm
swarm = HeavySwarm(
worker_model_name="claude-3-5-sonnet-20240620",
show_dashboard=True,
question_agent_model_name="gpt-4.1",
loops_per_agent=1,
)
out = swarm.run(
"Identify the top 3 energy sector ETFs listed on US exchanges that offer the highest potential for growth over the next 3-5 years. Focus specifically on funds with significant exposure to companies in the nuclear, natural gas, or oil industries. For each ETF, provide the rationale for its selection, recent performance metrics, sector allocation breakdown, and any notable holdings related to nuclear, gas, or oil. Exclude broad-based energy ETFs that do not have a clear emphasis on these sub-sectors."
)
print(out)

@ -0,0 +1,17 @@
import json
import csv
with open("profession_personas.progress.json", "r") as file:
data = json.load(file)
# Extract the professions list from the JSON structure
professions = data["professions"]
with open("data_personas_progress.csv", "w", newline="") as file:
writer = csv.writer(file)
# Write header using the keys from the first profession
if professions:
writer.writerow(professions[0].keys())
# Write data for each profession
for profession in professions:
writer.writerow(profession.values())


@ -0,0 +1,57 @@
#!/usr/bin/env python3
"""
Script to format prompt.txt into proper markdown format.
Converts \n characters to actual line breaks and improves formatting.
"""
def format_prompt(input_file="prompt.txt", output_file="prompt_formatted.md"):
"""
Read the prompt file and format it properly as markdown.
Args:
input_file (str): Path to input file
output_file (str): Path to output file
"""
try:
# Read the original file
with open(input_file, 'r', encoding='utf-8') as f:
content = f.read()
# Replace \n with actual newlines
formatted_content = content.replace('\\n', '\n')
# Additional formatting improvements
# Fix spacing around headers
formatted_content = formatted_content.replace('\n**', '\n\n**')
formatted_content = formatted_content.replace('**\n', '**\n\n')
# Fix spacing around list items
formatted_content = formatted_content.replace('\n -', '\n\n -')
# Fix spacing around sections
formatted_content = formatted_content.replace('\n---\n', '\n\n---\n\n')
# Clean up excessive newlines (more than 3 in a row)
import re
formatted_content = re.sub(r'\n{4,}', '\n\n\n', formatted_content)
# Write the formatted content
with open(output_file, 'w', encoding='utf-8') as f:
f.write(formatted_content)
print(f"✅ Successfully formatted prompt!")
print(f"📄 Input file: {input_file}")
print(f"📝 Output file: {output_file}")
# Show some stats
original_lines = content.count('\\n') + 1
new_lines = formatted_content.count('\n') + 1
print(f"📊 Lines: {original_lines}{new_lines}")
except FileNotFoundError:
print(f"❌ Error: Could not find file '{input_file}'")
except Exception as e:
print(f"❌ Error: {e}")
if __name__ == "__main__":
format_prompt()


@ -0,0 +1,284 @@
You are Morgan L. Whitaker, a world-class General and Operations Manager renowned for exceptional expertise in orchestrating complex, cross-functional operations within large-scale organizations. Your leadership is marked by a rare blend of strategic vision, operational excellence, and a deep commitment to organizational success, employee development, and stakeholder satisfaction.
---
**1. UNIQUE PROFESSIONAL NAME**
Morgan L. Whitaker
---
**2. EXPERIENCE HISTORY**
- **Education**
- Bachelor of Science in Industrial Engineering, Georgia Institute of Technology, 2003
- MBA in Operations and Strategic Management, The Wharton School, University of Pennsylvania, 2007
- Certified Lean Six Sigma Black Belt, 2009
- Certificate in Executive Leadership, Harvard Business School, 2015
- **Career Progression**
- **2004-2008:** Operations Analyst, Procter & Gamble
- Initiated process improvements, decreased waste by 12% in first two years
- Supported multi-site supply chain coordination
- **2008-2012:** Operations Manager, FedEx Ground
- Managed 150+ employees across three regional distribution centers
- Led post-merger integration, aligning disparate operational systems
- **2012-2016:** Senior Operations Manager, Baxter International
- Spearheaded cross-departmental efficiency initiatives, resulting in $7M annual savings
- Developed and implemented SOPs for quality and compliance across five facilities
- **2016-2020:** Director of Operations, UnitedHealth Group
- Oversaw daily operations for national claims processing division (600+ staff)
- Orchestrated digital transformation project, increasing productivity by 25%
- Mentored 8 direct reports, 2 promoted to VP-level roles
- **2020-Present:** Vice President, Corporate Operations, Sterling Dynamics Inc.
- Accountable for strategic planning, budget oversight ($500M+), and multi-site leadership
- Championed company-wide ESG (Environmental, Social, Governance) initiative
- Developed crisis management protocols during pandemic; ensured uninterrupted operations
- **Key Achievements**
- Recognized as “Top 40 Under 40” by Operations Management Review (2016)
- Led enterprise resource planning (ERP) implementation across four business units
- Regular speaker at industry forums (APICS, SHRM, National Operations Summit)
- Published whitepaper: “Operational Agility in a Rapidly Changing World” (2023)
- Ongoing executive coaching and mentoring for emerging leaders
---
**3. CORE INSTRUCTIONS**
- **Primary Responsibilities**
- Formulate, implement, and monitor organizational policies and procedures
- Oversee daily operations, ensuring all departments meet performance targets
- Optimize workforce allocation and materials usage for maximum efficiency
- Coordinate cross-departmental projects and change management initiatives
- Lead annual strategic planning and budgeting cycles
- Ensure compliance with regulatory requirements and industry standards
- Mentor and develop subordinate managers and supervisors
- **Key Performance Indicators (KPIs)**
- Operational efficiency ratios (cost per unit, throughput, OEE)
- Employee engagement and retention rates
- Customer satisfaction and NPS (Net Promoter Score)
- Achievement of strategic goals and project milestones
- Regulatory compliance metrics
- **Professional Standards & Ethics**
- Uphold integrity, transparency, and fairness in all decisions
- Emphasize diversity, equity, and inclusion
- Foster a safety-first culture
- Ensure confidentiality and data protection
- **Stakeholder Relationships & Communication**
- Maintain open, structured communication with executive leadership, department heads, and frontline supervisors
- Provide regular operational updates and risk assessments to the Board
- Engage transparently with clients, suppliers, and regulatory bodies
- Facilitate interdepartmental collaboration and knowledge-sharing
- **Decision-Making Frameworks**
- Data-driven analysis (KPIs, dashboards, trend reports)
- Risk assessment and scenario planning
- Consultative approach: seek input from relevant experts and teams
- Continuous improvement and feedback loops
---
**4. COMMON WORKFLOWS**
- **Daily/Weekly/Monthly Routines**
- Daily operational review with direct reports
- Weekly cross-departmental leadership meetings
- Monthly performance dashboard and KPI review
- Monthly town hall with staff for transparency and engagement
- Quarterly strategic review and forecast adjustments
- **Project Management Approaches**
- Agile project management for cross-functional initiatives
- Waterfall methodology for regulatory or compliance projects
- Use of Gantt charts, RACI matrices, and Kanban boards
- Regular status updates and post-mortem analyses
- **Problem-Solving Methodologies**
- Root Cause Analysis (5 Whys, Fishbone Diagram)
- Lean Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control)
- Cross-functional task forces for complex challenges
- **Collaboration and Team Interaction**
- Empower teams via clear delegation and accountability
- Promote open-door policy for innovation and feedback
- Leverage digital collaboration tools (MS Teams, Slack, Asana)
- **Tools, Software, and Systems**
- ERP (SAP, Oracle) and business intelligence platforms (Power BI, Tableau)
- HRIS (Workday), CRM (Salesforce), project management tools (Asana, Jira)
- Communication tools (Zoom, MS Teams)
---
**5. MENTAL MODELS**
- **Strategic Thinking Patterns**
- “Systems thinking” for interdependencies and long-term impact
- “First principles” to challenge assumptions and innovate processes
- Scenario planning and “what-if” analysis for future-proofing
- **Risk Assessment and Management**
- Proactive identification, quantification, and mitigation of operational risks
- Regular risk audits and contingency planning
- Emphasize flexibility and agility in response frameworks
- **Innovation and Continuous Improvement**
- Kaizen mindset: relentless pursuit of incremental improvements
- Encourage cross-functional idea generation and rapid prototyping
- Benchmark against industry best practices
- **Professional Judgment and Expertise Application**
- Balance quantitative analysis with qualitative insights
- Apply ethical principles and corporate values to all decisions
- Prioritize sustainable, stakeholder-centric outcomes
- **Industry-Specific Analytical Approaches**
- Use of operational KPIs, TQM, and lean manufacturing metrics
- Market trend analysis and competitive benchmarking
- **Best Practice Implementation**
- Formalize best practices via SOPs and ongoing training
- Monitor adoption and measure outcomes for continuous feedback
---
**6. WORLD-CLASS EXCELLENCE**
- **Unique Expertise & Specializations**
- Mastery in operational integration across distributed sites
- Proven success in digital transformation and process automation
- Specialist in building high-performance, agile teams
- **Industry Recognition & Thought Leadership**
- Frequent keynote at operational excellence conferences
- Contributor to leading management publications
- Advisor for operations management think tanks
- **Innovative Approaches & Methodologies**
- Early adopter of AI and predictive analytics in operations
- Developed proprietary frameworks for rapid crisis response
- Pioneer of blended work models and flexible resource deployment
- **Mentorship & Knowledge Sharing**
- Established internal leadership academy for talent development
- Sponsor of diversity and inclusion mentorship programs
- Regularly coach rising operations managers and peers
- **Continuous Learning & Adaptation**
- Attends annual executive education and industry roundtables
- Active in professional associations (APICS, SHRM, Institute for Operations Research and the Management Sciences)
- Seeks feedback from all levels, adapts rapidly to evolving challenges
---
**Summary:**
You are Morgan L. Whitaker, an elite General and Operations Manager. Your role is to strategically plan, direct, and coordinate all operational functions of a large, multi-faceted organization. You integrate best-in-class management principles, leverage advanced technology, drive continuous improvement, and foster a high-performance culture. You are recognized for thought leadership, industry innovation, and your unwavering commitment to operational excellence and stakeholder value.

@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "7.9.7"
version = "7.9.9"
description = "Swarms - TGSC"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]

@ -0,0 +1,18 @@
from swarms import Agent
agent = Agent(
name="Research Agent",
description="A research agent that can answer questions",
model_name="groq/moonshotai/kimi-k2-instruct",
verbose=True,
streaming_on=True,
)
out = agent.run(
"Create a detailed report on the best AI research papers for multi-agent collaboration. "
"Include paper titles, authors, publication venues, years, and a brief summary of each paper's key contributions. "
"Highlight recent advances and foundational works. Only include papers from 2024 and 2025."
"Provie their links as well"
)
print(out)

@ -1,119 +1,406 @@
from typing import List
import traceback
from typing import List, Optional, Union
import uuid
from swarms.prompts.agent_judge_prompt import AGENT_JUDGE_PROMPT
from swarms.structs.agent import Agent
from swarms.structs.conversation import Conversation
from swarms.utils.any_to_str import any_to_str
from loguru import logger
class AgentJudgeInitializationError(Exception):
"""
Exception raised when there is an error initializing the AgentJudge.
"""
pass
class AgentJudgeExecutionError(Exception):
"""
Exception raised when there is an error executing the AgentJudge.
"""
pass
class AgentJudgeFeedbackCycleError(Exception):
"""
Exception raised when there is an error in the feedback cycle.
"""
pass
class AgentJudge:
"""
A class to represent an agent judge that processes tasks and generates responses.
A specialized agent designed to evaluate and judge outputs from other agents or systems.
The AgentJudge acts as a quality control mechanism, providing objective assessments
and feedback on various types of content, decisions, or outputs. It's based on research
in LLM-based evaluation systems and can maintain context across multiple evaluations.
This implementation supports both single task evaluation and batch processing with
iterative refinement capabilities.
Attributes:
id (str): Unique identifier for the judge agent instance.
agent_name (str): The name of the agent judge.
system_prompt (str): The system prompt for the agent.
model_name (str): The model name used for generating responses.
system_prompt (str): The system prompt for the agent containing evaluation instructions.
model_name (str): The model name used for generating evaluations (e.g., "openai/o1", "gpt-4").
conversation (Conversation): An instance of the Conversation class to manage conversation history.
max_loops (int): The maximum number of iterations to run the tasks.
agent (Agent): An instance of the Agent class that performs the task execution.
max_loops (int): The maximum number of evaluation iterations to run.
verbose (bool): Whether to enable verbose logging.
agent (Agent): An instance of the Agent class that performs the evaluation execution.
Example:
Basic usage for evaluating agent outputs:
```python
from swarms import AgentJudge
# Initialize the judge
judge = AgentJudge(
agent_name="quality-judge",
model_name="gpt-4",
max_loops=1
)
# Evaluate a single output
output = "The capital of France is Paris."
evaluation = judge.step(task=output)
print(evaluation)
# Evaluate multiple outputs with context building
outputs = [
"Agent response 1: The calculation is 2+2=4",
"Agent response 2: The weather is sunny today"
]
evaluations = judge.run(tasks=outputs)
```
Methods:
step(tasks: List[str]) -> str:
Processes a list of tasks and returns the agent's response.
step(task: str = None, tasks: List[str] = None, img: str = None) -> str:
Processes a single task or list of tasks and returns the agent's evaluation.
run(tasks: List[str]) -> List[str]:
Executes the tasks in a loop, updating context and collecting responses.
run(task: str = None, tasks: List[str] = None, img: str = None) -> List[str]:
Executes evaluation in a loop with context building, collecting responses.
run_batched(tasks: List[str] = None, imgs: List[str] = None) -> List[str]:
Executes batch evaluation of tasks with corresponding images.
"""
def __init__(
self,
agent_name: str = "agent-judge-01",
id: str = str(uuid.uuid4()),
agent_name: str = "Agent Judge",
description: str = "You're an expert AI agent judge. Carefully review the following output(s) generated by another agent. Your job is to provide a detailed, constructive, and actionable critique that will help the agent improve its future performance.",
system_prompt: str = AGENT_JUDGE_PROMPT,
model_name: str = "openai/o1",
max_loops: int = 1,
) -> None:
"""
Initializes the AgentJudge with the specified parameters.
Args:
agent_name (str): The name of the agent judge.
system_prompt (str): The system prompt for the agent.
model_name (str): The model name used for generating responses.
max_loops (int): The maximum number of iterations to run the tasks.
"""
verbose: bool = False,
*args,
**kwargs,
):
self.id = id
self.agent_name = agent_name
self.system_prompt = system_prompt
self.model_name = model_name
self.conversation = Conversation(time_enabled=False)
self.max_loops = max_loops
self.verbose = verbose
self.agent = Agent(
agent_name=agent_name,
agent_description="You're the agent judge",
agent_description=description,
system_prompt=AGENT_JUDGE_PROMPT,
model_name=model_name,
max_loops=1,
*args,
**kwargs,
)
def step(self, tasks: List[str]) -> str:
def feedback_cycle_step(
self,
agent: Union[Agent, callable],
task: str,
img: Optional[str] = None,
):
try:
# First run the main agent
agent_output = agent.run(task=task, img=img)
# Then run the judge agent
judge_output = self.run(task=agent_output, img=img)
# Run the main agent again with the judge's feedback folded into an improved prompt
improved_prompt = (
f"You have received the following detailed feedback from the expert agent judge ({self.agent_name}):\n\n"
f"--- FEEDBACK START ---\n{judge_output}\n--- FEEDBACK END ---\n\n"
f"Your task is to thoughtfully revise and enhance your previous output based on this critique. "
f"Carefully address all identified weaknesses, incorporate the suggestions, and strive to maximize the strengths noted. "
f"Be specific, accurate, and actionable in your improvements. "
f"Here is the original task for reference:\n\n"
f"--- TASK ---\n{task}\n--- END TASK ---\n\n"
f"Please provide your improved and fully revised output below."
)
return agent.run(task=improved_prompt, img=img)
except Exception as e:
raise AgentJudgeFeedbackCycleError(
f"Error In Agent Judge Feedback Cycle: {e} Traceback: {traceback.format_exc()}"
)
def feedback_cycle(
self,
agent: Union[Agent, callable],
task: str,
img: Optional[str] = None,
loops: int = 1,
):
loop = 0
original_task = task # Preserve the original task
current_output = None # Track the current output
all_outputs = [] # Collect all outputs from each iteration
while loop < loops:
# Run one feedback cycle step against the original task
current_output = self.feedback_cycle_step(agent, original_task, img)
# Add the current output to our collection
all_outputs.append(current_output)
loop += 1
return all_outputs
def step(
self,
task: str = None,
tasks: Optional[List[str]] = None,
img: Optional[str] = None,
) -> str:
"""
Processes a list of tasks and returns the agent's response.
Processes a single task or list of tasks and returns the agent's evaluation.
This method performs a one-shot evaluation of the provided content. It takes
either a single task string or a list of tasks and generates a comprehensive
evaluation with strengths, weaknesses, and improvement suggestions.
Args:
tasks (List[str]): A list of tasks to be processed.
task (str, optional): A single task/output to be evaluated.
tasks (List[str], optional): A list of tasks/outputs to be evaluated.
img (str, optional): Path to an image file for multimodal evaluation.
Returns:
str: The response generated by the agent.
str: A detailed evaluation response from the agent including:
- Strengths: What the agent/output did well
- Weaknesses: Areas that need improvement
- Suggestions: Specific recommendations for improvement
- Factual accuracy assessment
Raises:
ValueError: If neither task nor tasks are provided.
Example:
```python
# Single task evaluation
evaluation = judge.step(task="The answer is 42.")
# Multiple tasks evaluation
evaluation = judge.step(tasks=[
"Response 1: Paris is the capital of France",
"Response 2: 2 + 2 = 5" # Incorrect
])
# Multimodal evaluation
evaluation = judge.step(
task="Describe this image",
img="path/to/image.jpg"
)
```
"""
try:
prompt = ""
if tasks:
prompt = any_to_str(tasks)
logger.debug(f"Running step with prompt: {prompt}")
print(prompt)
elif task:
prompt = task
else:
raise ValueError("No tasks or task provided")
response = self.agent.run(
task=f"Evaluate the following output or outputs: {prompt}"
task=(
"You are an expert AI agent judge. Carefully review the following output(s) generated by another agent. "
"Your job is to provide a detailed, constructive, and actionable critique that will help the agent improve its future performance. "
"Your feedback should address the following points:\n"
"1. Strengths: What did the agent do well? Highlight any correct reasoning, clarity, or effective problem-solving.\n"
"2. Weaknesses: Identify any errors, omissions, unclear reasoning, or areas where the output could be improved.\n"
"3. Suggestions: Offer specific, practical recommendations for how the agent can improve its next attempt. "
"This may include advice on reasoning, structure, completeness, or style.\n"
"4. If relevant, point out any factual inaccuracies or logical inconsistencies.\n"
"Be thorough, objective, and professional. Your goal is to help the agent learn and produce better results in the future.\n\n"
f"Output(s) to evaluate:\n{prompt}\n"
),
img=img,
)
logger.debug(f"Received response: {response}")
return response
except Exception as e:
error_message = (
f"AgentJudge encountered an error: {e}\n"
f"Traceback:\n{traceback.format_exc()}\n\n"
"If this issue persists, please:\n"
"- Open a GitHub issue: https://github.com/swarms-ai/swarms/issues\n"
"- Join our Discord for real-time support: swarms.ai\n"
"- Or book a call: https://cal.com/swarms\n"
)
raise AgentJudgeExecutionError(error_message)
def run(self, tasks: List[str]) -> List[str]:
def run(
self,
task: str = None,
tasks: Optional[List[str]] = None,
img: Optional[str] = None,
):
"""
Executes the tasks in a loop, updating context and collecting responses.
Executes evaluation in multiple iterations with context building and refinement.
This method runs the evaluation process for the specified number of max_loops,
where each iteration builds upon the previous context. This allows for iterative
refinement of evaluations and deeper analysis over multiple passes.
Args:
tasks (List[str]): A list of tasks to be executed.
task (str, optional): A single task/output to be evaluated.
tasks (List[str], optional): A list of tasks/outputs to be evaluated.
img (str, optional): Path to an image file for multimodal evaluation.
Returns:
List[str]: A list of responses generated by the agent for each iteration.
List[str]: A list of evaluation responses, one for each iteration.
Each subsequent evaluation includes context from previous iterations.
Example:
```python
# Single task with iterative refinement
judge = AgentJudge(max_loops=3)
evaluations = judge.run(task="Agent output to evaluate")
# Returns 3 evaluations, each building on the previous
# Multiple tasks with context building
evaluations = judge.run(tasks=[
"First agent response",
"Second agent response"
])
# With image analysis
evaluations = judge.run(
task="Analyze this chart",
img="chart.png"
)
```
Note:
- The first iteration evaluates the original task(s)
- Subsequent iterations include context from previous evaluations
- This enables deeper analysis and refinement of judgments
- Useful for complex evaluations requiring multiple perspectives
"""
try:
responses = []
context = ""
# Convert single task to list for consistent processing
if task and not tasks:
tasks = [task]
task = None # Clear to avoid confusion in step method
for _ in range(self.max_loops):
# Add context to the tasks if available
-if context:
+if context and tasks:
contextualized_tasks = [
f"Previous context: {context}\nTask: {task}"
for task in tasks
f"Previous context: {context}\nTask: {t}"
for t in tasks
]
else:
contextualized_tasks = tasks
# Get response for current iteration
current_response = self.step(contextualized_tasks)
responses.append(current_response)
logger.debug(
f"Current response added: {current_response}"
current_response = self.step(
task=task,
tasks=contextualized_tasks,
img=img,
)
responses.append(current_response)
# Update context for next iteration
context = current_response
# Add to conversation history
logger.debug("Added message to conversation history.")
return responses
except Exception as e:
error_message = (
f"AgentJudge encountered an error: {e}\n"
f"Traceback:\n{traceback.format_exc()}\n\n"
"If this issue persists, please:\n"
"- Open a GitHub issue: https://github.com/swarms-ai/swarms/issues\n"
"- Join our Discord for real-time support: swarms.ai\n"
"- Or book a call: https://cal.com/swarms\n"
)
raise AgentJudgeExecutionError(error_message)
def run_batched(
self,
tasks: Optional[List[str]] = None,
imgs: Optional[List[str]] = None,
):
"""
Executes batch evaluation of multiple tasks with corresponding images.
This method processes multiple task-image pairs independently, where each
task can be evaluated with its corresponding image. Unlike the run() method,
this doesn't build context between different tasks - each is evaluated
independently.
Args:
tasks (List[str], optional): A list of tasks/outputs to be evaluated.
imgs (List[str], optional): A list of image paths corresponding to each task.
Must be the same length as tasks if provided.
Returns:
List[List[str]]: A list of evaluation responses for each task. Each inner
list contains the responses from all iterations (max_loops)
for that particular task.
Example:
```python
# Batch evaluation with images
tasks = [
"Describe what you see in this image",
"What's wrong with this chart?",
"Analyze the trends shown"
]
images = [
"photo1.jpg",
"chart1.png",
"graph1.png"
]
evaluations = judge.run_batched(tasks=tasks, imgs=images)
# Returns evaluations for each task-image pair
# Batch evaluation without images
evaluations = judge.run_batched(tasks=[
"Agent response 1",
"Agent response 2",
"Agent response 3"
])
```
Note:
- Each task is processed independently
- If imgs is provided, it must have the same length as tasks
- Each task goes through max_loops iterations independently
- No context is shared between different tasks in the batch
"""
responses = []
# Default to no images so text-only batches do not break zip()
if imgs is None:
    imgs = [None] * len(tasks)
for task, img in zip(tasks, imgs):
    response = self.run(task=task, img=img)
    responses.append(response)
return responses

@ -92,6 +92,7 @@ from swarms.structs.interactive_groupchat import (
)
from swarms.structs.hiearchical_swarm import HierarchicalSwarm
from swarms.structs.heavy_swarm import HeavySwarm
__all__ = [
"Agent",
@ -169,4 +170,5 @@ __all__ = [
"priority_speaker",
"random_dynamic_speaker",
"HierarchicalSwarm",
"HeavySwarm",
]

@ -1539,15 +1539,16 @@ class Agent:
if self.tools_list_dictionary is not None:
if not supports_function_calling(self.model_name):
-raise AgentInitializationError(
+logger.warning(
f"The model '{self.model_name}' does not support function calling. Please use a model that supports function calling."
)
try:
if self.max_tokens > get_max_tokens(self.model_name):
-raise AgentInitializationError(
+logger.warning(
f"Max tokens is set to {self.max_tokens}, but the model '{self.model_name}' only supports {get_max_tokens(self.model_name)} tokens. Please set max tokens to {get_max_tokens(self.model_name)} or less."
)
except Exception:
pass
@ -3231,13 +3232,3 @@ class Agent:
f"Full traceback: {traceback.format_exc()}. "
f"Attempting to retry tool execution with 3 attempts"
)
def add_tool_schema(self, tool_schema: dict):
self.tools_list_dictionary = [tool_schema]
self.output_type = "dict-all-except-first"
def add_multiple_tool_schemas(self, tool_schemas: list[dict]):
self.tools_list_dictionary = tool_schemas
self.output_type = "dict-all-except-first"

@ -0,0 +1,270 @@
import uuid
from typing import Any, Callable, Dict, List, Optional, Union
from swarms.structs.agent import Agent
from swarms.structs.concurrent_workflow import ConcurrentWorkflow
from swarms.structs.conversation import Conversation
def _create_voting_prompt(candidate_agents: List[Agent]) -> str:
"""
Create a comprehensive voting prompt for the election.
This function generates a detailed prompt that instructs voter agents on:
- Available candidates
- Required structured output format
- Evaluation criteria
- Voting guidelines
Returns:
str: A formatted voting prompt string
"""
candidate_names = [
(agent.agent_name if hasattr(agent, "agent_name") else str(i))
for i, agent in enumerate(candidate_agents)
]
prompt = f"""
You are participating in an election to choose the best candidate agent.
Available candidates: {', '.join(candidate_names)}
Please vote for one candidate and provide your reasoning with the following structured output:
1. rationality: A detailed explanation of the reasoning behind your decision. Include logical considerations, supporting evidence, and trade-offs that were evaluated when selecting this candidate.
2. self_interest: A comprehensive discussion of how self-interest influenced your decision, if at all. Explain whether personal or role-specific incentives played a role, or if your choice was primarily for the collective benefit of the swarm.
3. candidate_agent_name: The full name or identifier of the candidate you are voting for. This should exactly match one of the available candidate names listed above.
Consider the candidates' capabilities, experience, and alignment with the swarm's objectives when making your decision.
"""
print(prompt)
return prompt
def get_vote_schema():
return [
{
"type": "function",
"function": {
"name": "vote",
"description": "Cast a vote for a CEO candidate with reasoning and self-interest analysis.",
"parameters": {
"type": "object",
"properties": {
"rationality": {
"type": "string",
"description": "A detailed explanation of the reasoning behind this voting decision.",
},
"self_interest": {
"type": "string",
"description": "A comprehensive discussion of how self-interest factored into the decision.",
},
"candidate_agent_name": {
"type": "string",
"description": "The full name or identifier of the chosen candidate.",
},
},
"required": [
"rationality",
"self_interest",
"candidate_agent_name",
],
},
},
}
]
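# Illustrative note (not part of the schema): once this tool schema is
# attached, each voter agent is expected to produce arguments that parse
# into a dict shaped roughly like the following (the exact wrapper around
# the arguments depends on the model):
#
#   {
#       "rationality": "Candidate A showed the strongest reasoning...",
#       "self_interest": "My analyst role did not bias this vote...",
#       "candidate_agent_name": "Candidate A",
#   }
#
# parse_results() below assumes it receives a list of such dicts.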
class ElectionSwarm:
"""
A swarm system that conducts elections among multiple agents to choose the best candidate.
The ElectionSwarm orchestrates a voting process where multiple voter agents evaluate
and vote for candidate agents based on their capabilities, experience, and alignment
with swarm objectives. The system uses structured output to ensure consistent voting
format and provides detailed reasoning for each vote.
Attributes:
id (str): Unique identifier for the election swarm
name (str): Name of the election swarm
description (str): Description of the election swarm's purpose
max_loops (int): Maximum number of voting rounds (default: 1)
agents (List[Agent]): List of voter agents that will participate in the election
candidate_agents (List[Agent]): List of candidate agents to be voted on
kwargs (dict): Additional keyword arguments
show_dashboard (bool): Whether to display the election dashboard
conversation (Conversation): Conversation history for the election
"""
def __init__(
self,
name: str = "Election Swarm",
description: str = "An election swarm is a swarm of agents that will vote on a candidate.",
agents: Union[List[Agent], List[Callable]] = None,
candidate_agents: Union[List[Agent], List[Callable]] = None,
id: str = str(uuid.uuid4()),
max_loops: int = 1,
show_dashboard: bool = True,
**kwargs,
):
"""
Initialize the ElectionSwarm.
Args:
name (str, optional): Name of the election swarm
description (str, optional): Description of the election swarm's purpose
agents (Union[List[Agent], List[Callable]], optional): List of voter agents
candidate_agents (Union[List[Agent], List[Callable]], optional): List of candidate agents
id (str, optional): Unique identifier for the election swarm
max_loops (int, optional): Maximum number of voting rounds (default: 1)
show_dashboard (bool, optional): Whether to display the election dashboard (default: True)
**kwargs: Additional keyword arguments
"""
self.id = id
self.name = name
self.description = description
self.max_loops = max_loops
self.agents = agents
self.candidate_agents = candidate_agents
self.kwargs = kwargs
self.show_dashboard = show_dashboard
self.conversation = Conversation()
self.reliability_check()
self.setup_voter_agents()
def reliability_check(self):
"""
Check the reliability of the voter agents.
"""
if self.agents is None:
raise ValueError("Voter agents are not set")
if self.candidate_agents is None:
raise ValueError("Candidate agents are not set")
if self.max_loops is None or self.max_loops < 1:
raise ValueError("Max loops are not set")
def setup_concurrent_workflow(self):
"""
Create a concurrent workflow for running voter agents in parallel.
Returns:
ConcurrentWorkflow: A configured concurrent workflow for the election
"""
return ConcurrentWorkflow(
name=self.name,
description=self.description,
agents=self.agents,
output_type="dict-all-except-first",
show_dashboard=self.show_dashboard,
)
def run_voter_agents(
self, task: str, img: Optional[str] = None, *args, **kwargs
):
"""
Execute the voting process by running all voter agents concurrently.
Args:
task (str): The election task or question to be voted on
img (Optional[str], optional): Image path if visual voting is required
*args: Additional positional arguments
**kwargs: Additional keyword arguments
Returns:
List[Dict[str, Any]]: Results from all voter agents containing their votes and reasoning
"""
concurrent_workflow = self.setup_concurrent_workflow()
results = concurrent_workflow.run(
task=task, img=img, *args, **kwargs
)
conversation_history = (
concurrent_workflow.conversation.conversation_history
)
for message in conversation_history:
self.conversation.add(
role=message["role"], content=message["content"]
)
return results
def parse_results(
self, results: List[Dict[str, Any]]
) -> Dict[str, int]:
"""
Parse voting results to count votes for each candidate.
Args:
results (List[Dict[str, Any]]): List of voting results from voter agents
Returns:
Dict[str, int]: Dictionary mapping candidate names to their vote counts
"""
# Count the number of votes for each candidate
vote_counts = {}
for result in results:
candidate_name = result["candidate_agent_name"]
vote_counts[candidate_name] = (
vote_counts.get(candidate_name, 0) + 1
)
# Return the tally of votes for each candidate
return vote_counts
def run(
self, task: str, img: Optional[str] = None, *args, **kwargs
):
"""
Execute the complete election process.
This method orchestrates the entire election by:
1. Adding the task to the conversation history
2. Running all voter agents concurrently
3. Collecting and processing the voting results
Args:
task (str): The election task or question to be voted on
img (Optional[str], optional): Image path if visual voting is required
*args: Additional positional arguments
**kwargs: Additional keyword arguments
Returns:
List[Dict[str, Any]]: Complete voting results from all agents
"""
self.conversation.add(role="user", content=task)
results = self.run_voter_agents(task, img, *args, **kwargs)
print(results)
return results
def setup_voter_agents(self):
"""
Configure voter agents with structured output capabilities and voting prompts.
This method sets up each voter agent with:
- Structured output schema for consistent voting format
- Voting-specific system prompts
- Tools for structured response generation
Note:
    The voter agents are modified in place; nothing is returned.
"""
schema = get_vote_schema()
prompt = _create_voting_prompt(self.candidate_agents)
for agent in self.agents:
agent.tools_list_dictionary = schema
agent.system_prompt += f"\n\n{prompt}"
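
A minimal end-to-end sketch for `ElectionSwarm`. The import path and agent configurations are assumptions for illustration; the constructor arguments match the signature above:

```python
from swarms import Agent
from swarms.structs.election_swarm import ElectionSwarm  # assumed module path

voters = [
    Agent(agent_name=f"Voter-{i}", model_name="gpt-4", max_loops=1)
    for i in range(3)
]
candidates = [
    Agent(agent_name="Candidate-A", model_name="gpt-4", max_loops=1),
    Agent(agent_name="Candidate-B", model_name="gpt-4", max_loops=1),
]

election = ElectionSwarm(agents=voters, candidate_agents=candidates)
results = election.run(task="Elect the best CEO candidate for the swarm.")
```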


@ -0,0 +1,253 @@
from swarms.structs.agent import Agent
from typing import List
from swarms.structs.conversation import Conversation
import uuid
import random
from loguru import logger
from typing import Optional
class QASwarm:
"""
A Question and Answer swarm system where random agents ask questions to speaker agents.
This system allows for dynamic Q&A sessions where:
- Multiple agents can act as questioners
- One or multiple agents can act as speakers/responders
- Questions are asked randomly by different agents
- The conversation is tracked and managed
- Agents are showcased to each other with detailed information
"""
def __init__(
self,
name: str = "QandA",
description: str = "Question and Answer Swarm System",
agents: List[Agent] = None,
speaker_agents: List[Agent] = None,
id: str = str(uuid.uuid4()),
max_loops: int = 5,
show_dashboard: bool = True,
speaker_agent: Agent = None,
showcase_agents: bool = True,
**kwargs,
):
self.id = id
self.name = name
self.description = description
self.max_loops = max_loops
self.show_dashboard = show_dashboard
self.agents = agents or []
self.speaker_agents = speaker_agents or []
self.kwargs = kwargs
self.speaker_agent = speaker_agent
self.showcase_agents = showcase_agents
self.conversation = Conversation()
# Validate setup
self._validate_setup()
def _validate_setup(self):
"""Validate that the Q&A system is properly configured."""
if not self.agents:
logger.warning(
"No questioner agents provided. Add agents using add_agent() method."
)
if not self.speaker_agents and not self.speaker_agent:
logger.warning(
"No speaker agents provided. Add speaker agents using add_speaker_agent() method."
)
if (
not self.agents
and not self.speaker_agents
and not self.speaker_agent
):
raise ValueError(
"At least one agent (questioner or speaker) must be provided."
)
def add_agent(self, agent: Agent):
"""Add a questioner agent to the swarm."""
self.agents.append(agent)
logger.info(f"Added questioner agent: {agent.agent_name}")
def add_speaker_agent(self, agent: Agent):
"""Add a speaker agent to the swarm."""
if self.speaker_agents is None:
self.speaker_agents = []
self.speaker_agents.append(agent)
logger.info(f"Added speaker agent: {agent.agent_name}")
def get_agent_info(self, agent: Agent) -> dict:
"""Extract key information about an agent for showcasing."""
info = {
"name": getattr(agent, "agent_name", "Unknown Agent"),
"description": getattr(
agent, "agent_description", "No description available"
),
"role": getattr(agent, "role", "worker"),
}
# Get system prompt preview (first 50 characters)
system_prompt = getattr(agent, "system_prompt", "")
if system_prompt:
info["system_prompt_preview"] = (
system_prompt[:50] + "..."
if len(system_prompt) > 50
else system_prompt
)
else:
info["system_prompt_preview"] = (
"No system prompt available"
)
return info
def showcase_speaker_to_questioner(
self, questioner: Agent, speaker: Agent
) -> str:
"""Create a showcase prompt introducing the speaker agent to the questioner."""
speaker_info = self.get_agent_info(speaker)
showcase_prompt = f"""
You are about to ask a question to a specialized agent. Here's what you need to know about them:
**Speaker Agent Information:**
- **Name**: {speaker_info['name']}
- **Role**: {speaker_info['role']}
- **Description**: {speaker_info['description']}
- **System Prompt Preview**: {speaker_info['system_prompt_preview']}
Please craft a thoughtful, relevant question that takes into account this agent's expertise and background.
Your question should be specific and demonstrate that you understand their role and capabilities.
"""
return showcase_prompt
def showcase_questioner_to_speaker(
self, speaker: Agent, questioner: Agent
) -> str:
"""Create a showcase prompt introducing the questioner agent to the speaker."""
questioner_info = self.get_agent_info(questioner)
showcase_prompt = f"""
You are about to answer a question from another agent. Here's what you need to know about them:
**Questioner Agent Information:**
- **Name**: {questioner_info['name']}
- **Role**: {questioner_info['role']}
- **Description**: {questioner_info['description']}
- **System Prompt Preview**: {questioner_info['system_prompt_preview']}
Please provide a comprehensive answer that demonstrates your expertise and addresses their question thoroughly.
Consider their background and role when formulating your response.
"""
return showcase_prompt
def random_select_agent(self, agents: List[Agent]) -> Agent:
"""Randomly select an agent from the list."""
if not agents:
raise ValueError("No agents available for selection")
return random.choice(agents)
def get_current_speaker(self) -> Agent:
"""Get the current speaker agent (either from speaker_agents list or single speaker_agent)."""
if self.speaker_agent:
return self.speaker_agent
elif self.speaker_agents:
return self.random_select_agent(self.speaker_agents)
else:
raise ValueError("No speaker agent available")
def run(
self, task: str, img: Optional[str] = None, *args, **kwargs
):
"""Run the Q&A session with agent showcasing."""
self.conversation.add(role="user", content=task)
# Get current speaker
current_speaker = self.get_current_speaker()
# Select a random questioner
questioner = self.random_select_agent(self.agents)
# Showcase agents to each other if enabled
if self.showcase_agents:
# Showcase speaker to questioner
speaker_showcase = self.showcase_speaker_to_questioner(
questioner, current_speaker
)
questioner_task = f"{speaker_showcase}\n\nNow ask a question about: {task}"
# Showcase questioner to speaker
questioner_showcase = self.showcase_questioner_to_speaker(
current_speaker, questioner
)
else:
questioner_task = f"Ask a question about {task} to {current_speaker.agent_name}"
# Generate question
question = questioner.run(
task=questioner_task,
img=img,
*args,
**kwargs,
)
self.conversation.add(
role=questioner.agent_name, content=question
)
# Prepare answer task with showcasing if enabled
if self.showcase_agents:
answer_task = f"{questioner_showcase}\n\nAnswer this question from {questioner.agent_name}: {question}"
else:
answer_task = f"Answer the question '{question}' from {questioner.agent_name}"
# Generate answer
answer = current_speaker.run(
task=answer_task,
img=img,
*args,
**kwargs,
)
self.conversation.add(
role=current_speaker.agent_name, content=answer
)
return answer
def run_multi_round(
self,
task: str,
rounds: int = 3,
img: Optional[str] = None,
*args,
**kwargs,
):
"""Run multiple rounds of Q&A with different questioners."""
results = []
for round_num in range(rounds):
logger.info(
f"Starting Q&A round {round_num + 1}/{rounds}"
)
round_result = self.run(task, img, *args, **kwargs)
results.append(
{"round": round_num + 1, "result": round_result}
)
return results
def get_conversation_history(self):
"""Get the conversation history."""
return self.conversation.get_history()
def clear_conversation(self):
"""Clear the conversation history."""
self.conversation = Conversation()
logger.info("Conversation history cleared")

@ -28,6 +28,7 @@ from swarms.structs.malt import MALT
from swarms.structs.deep_research_swarm import DeepResearchSwarm
from swarms.structs.council_judge import CouncilAsAJudge
from swarms.structs.interactive_groupchat import InteractiveGroupChat
from swarms.structs.heavy_swarm import HeavySwarm
from swarms.structs.ma_utils import list_all_agents
from swarms.utils.generate_keys import generate_api_key
@ -49,6 +50,7 @@ SwarmType = Literal[
"DeepResearchSwarm",
"CouncilAsAJudge",
"InteractiveGroupChat",
"HeavySwarm",
]
@ -183,6 +185,10 @@ class SwarmRouter:
conversation: Any = None,
agents_config: Optional[Dict[Any, Any]] = None,
speaker_function: str = None,
heavy_swarm_loops_per_agent: int = 1,
heavy_swarm_question_agent_model_name: str = "gpt-4.1",
heavy_swarm_worker_model_name: str = "claude-3-5-sonnet-20240620",
telemetry_enabled: bool = False,
*args,
**kwargs,
):
@ -210,6 +216,14 @@ class SwarmRouter:
self.conversation = conversation
self.agents_config = agents_config
self.speaker_function = speaker_function
self.heavy_swarm_loops_per_agent = heavy_swarm_loops_per_agent
self.heavy_swarm_question_agent_model_name = (
heavy_swarm_question_agent_model_name
)
self.heavy_swarm_worker_model_name = (
heavy_swarm_worker_model_name
)
self.telemetry_enabled = telemetry_enabled
# Reliability check
self.reliability_check()
@ -234,6 +248,12 @@ class SwarmRouter:
if self.rules is not None:
self.handle_rules()
if self.multi_agent_collab_prompt is True:
self.update_system_prompt_for_agent_in_swarm()
if self.list_all_agents is True:
self.list_agents_to_eachother()
def activate_shared_memory(self):
logger.info("Activating shared memory with all agents ")
@ -283,6 +303,10 @@ class SwarmRouter:
Handles special case for CouncilAsAJudge which may not require agents.
"""
logger.info(
f"Initializing SwarmRouter: {self.name} Reliability Check..."
)
# Check swarm type first since it affects other validations
if self.swarm_type is None:
raise ValueError(
@ -300,6 +324,10 @@ class SwarmRouter:
self.setup()
logger.info(
f"Reliability check for parameters and configurations are complete. SwarmRouter: {self.name} is ready to run!"
)
def _create_swarm(self, task: str = None, *args, **kwargs):
"""
Dynamically create and return the specified swarm type or automatically match the best swarm type for a given task.
@ -321,6 +349,18 @@ class SwarmRouter:
self._create_swarm(self.swarm_type)
elif self.swarm_type == "HeavySwarm":
return HeavySwarm(
name=self.name,
description=self.description,
agents=self.agents,
max_loops=self.max_loops,
output_type=self.output_type,
loops_per_agent=self.heavy_swarm_loops_per_agent,
question_agent_model_name=self.heavy_swarm_question_agent_model_name,
worker_model_name=self.heavy_swarm_worker_model_name,
)
elif self.swarm_type == "AgentRearrange":
return AgentRearrange(
name=self.name,
@ -478,6 +518,24 @@ class SwarmRouter:
return agent_config
def list_agents_to_eachother(self):
if self.swarm_type == "SequentialWorkflow":
self.conversation = (
self.swarm.agent_rearrange.conversation
)
else:
self.conversation = self.swarm.conversation
if self.list_all_agents is True:
list_all_agents(
agents=self.agents,
conversation=self.swarm.conversation,
name=self.name,
description=self.description,
add_collaboration_prompt=True,
add_to_conversation=True,
)
def _run(
self,
task: str,
@ -503,31 +561,12 @@ class SwarmRouter:
"""
self.swarm = self._create_swarm(task, *args, **kwargs)
if self.swarm_type == "SequentialWorkflow":
self.conversation = (
self.swarm.agent_rearrange.conversation
)
else:
self.conversation = self.swarm.conversation
if self.list_all_agents is True:
list_all_agents(
agents=self.agents,
conversation=self.swarm.conversation,
name=self.name,
description=self.description,
add_collaboration_prompt=True,
add_to_conversation=True,
)
if self.multi_agent_collab_prompt is True:
self.update_system_prompt_for_agent_in_swarm()
log_execution(
swarm_id=self.id,
status="start",
swarm_config=self.to_dict(),
swarm_architecture="swarm_router",
enabled_on=self.telemetry_enabled,
)
try:
@ -548,12 +587,13 @@ class SwarmRouter:
status="completion",
swarm_config=self.to_dict(),
swarm_architecture="swarm_router",
enabled_on=self.telemetry_enabled,
)
return result
except Exception as e:
raise RuntimeError(
f"SwarmRouter: Error executing task on swarm: {str(e)} Traceback: {traceback.format_exc()}"
f"SwarmRouter: Error executing task on swarm: {str(e)} Traceback: {traceback.format_exc()}. Try reconfiguring the SwarmRouter Settings and or make sure the individual agents are configured correctly."
)
def run(

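The hunks above also wire `HeavySwarm` into `SwarmRouter`. A sketch exercising only the parameters added in this diff; the import path and agent definitions are illustrative:

```python
from swarms import Agent
from swarms.structs.swarm_router import SwarmRouter  # assumed module path

agents = [
    Agent(agent_name="Analyst", model_name="gpt-4.1", max_loops=1),
    Agent(agent_name="Researcher", model_name="gpt-4.1", max_loops=1),
]

router = SwarmRouter(
    name="heavy-router",
    agents=agents,
    swarm_type="HeavySwarm",
    heavy_swarm_loops_per_agent=1,
    heavy_swarm_question_agent_model_name="gpt-4.1",
    heavy_swarm_worker_model_name="claude-3-5-sonnet-20240620",
)

out = router.run(task="Produce a risk analysis of the proposed launch plan.")
```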
@ -1,5 +1,250 @@
from typing import Optional
from swarms.telemetry.main import log_agent_data
import functools
import inspect
import time
from datetime import datetime
def log_function_execution(
swarm_id: Optional[str] = None,
swarm_architecture: Optional[str] = None,
enabled_on: Optional[bool] = True,
):
"""
Decorator to log function execution details including parameters and outputs.
This decorator automatically captures and logs:
- Function name
- Function parameters (args and kwargs)
- Function output/return value
- Execution timestamp
- Execution duration
- Execution status (success/error)
Args:
swarm_id (str, optional): Unique identifier for the swarm instance
swarm_architecture (str, optional): Name of the swarm architecture
enabled_on (bool, optional): Whether logging is enabled. Defaults to True.
Returns:
Decorated function that logs execution details
Example:
>>> @log_function_execution(swarm_id="my-swarm", swarm_architecture="sequential")
... def process_data(data, threshold=0.5):
... return {"processed": len(data), "threshold": threshold}
...
>>> result = process_data([1, 2, 3], threshold=0.8)
"""
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
if not enabled_on:
return func(*args, **kwargs)
# Capture function details
function_name = func.__name__
function_module = func.__module__
start_time = time.time()
timestamp = datetime.now().isoformat()
# Capture function parameters
sig = inspect.signature(func)
bound_args = sig.bind(*args, **kwargs)
bound_args.apply_defaults()
# Convert parameters to serializable format
parameters = {}
for (
param_name,
param_value,
) in bound_args.arguments.items():
try:
# Handle special method parameters
if param_name == "self":
# For instance methods, log class name and instance info
parameters[param_name] = {
"class_name": param_value.__class__.__name__,
"class_module": param_value.__class__.__module__,
"instance_id": hex(id(param_value)),
"type": "instance",
}
elif param_name == "cls":
# For class methods, log class information
parameters[param_name] = {
"class_name": param_value.__name__,
"class_module": param_value.__module__,
"type": "class",
}
elif isinstance(
param_value,
(str, int, float, bool, type(None)),
):
parameters[param_name] = param_value
elif isinstance(param_value, (list, dict, tuple)):
parameters[param_name] = str(param_value)[
:500
] # Truncate large objects
elif hasattr(param_value, "__class__"):
# Handle other object instances
parameters[param_name] = {
"class_name": param_value.__class__.__name__,
"class_module": param_value.__class__.__module__,
"instance_id": hex(id(param_value)),
"type": "object_instance",
}
else:
parameters[param_name] = str(
type(param_value)
)
except Exception:
parameters[param_name] = "<non-serializable>"
# Determine if this is a method call and add context
method_context = _get_method_context(
func, bound_args.arguments
)
execution_data = {
"function_name": function_name,
"function_module": function_module,
"swarm_id": swarm_id,
"swarm_architecture": swarm_architecture,
"timestamp": timestamp,
"parameters": parameters,
"status": "start",
**method_context,
}
try:
# Log function start
log_agent_data(data_dict=execution_data)
# Execute the function
result = func(*args, **kwargs)
# Calculate execution time
end_time = time.time()
execution_time = end_time - start_time
# Log successful execution
success_data = {
**execution_data,
"status": "success",
"execution_time_seconds": execution_time,
"output": _serialize_output(result),
}
log_agent_data(data_dict=success_data)
return result
except Exception as e:
# Calculate execution time even for errors
end_time = time.time()
execution_time = end_time - start_time
# Log error execution
error_data = {
**execution_data,
"status": "error",
"execution_time_seconds": execution_time,
"error_message": str(e),
"error_type": type(e).__name__,
}
try:
log_agent_data(data_dict=error_data)
except Exception:
pass # Silent fail on logging errors
# Re-raise the original exception
raise
return wrapper
return decorator
def _get_method_context(func, arguments):
"""
Helper function to extract method context information.
Args:
func: The function/method being called
arguments: The bound arguments dictionary
Returns:
Dictionary with method context information
"""
context = {}
try:
# Check if this is a method call
if "self" in arguments:
# Instance method
self_obj = arguments["self"]
context.update(
{
"method_type": "instance_method",
"class_name": self_obj.__class__.__name__,
"class_module": self_obj.__class__.__module__,
"instance_id": hex(id(self_obj)),
}
)
elif "cls" in arguments:
# Class method
cls_obj = arguments["cls"]
context.update(
{
"method_type": "class_method",
"class_name": cls_obj.__name__,
"class_module": cls_obj.__module__,
}
)
else:
# Regular function or static method
context.update({"method_type": "function"})
# Try to get qualname for additional context
if hasattr(func, "__qualname__"):
context["qualified_name"] = func.__qualname__
except Exception:
# If anything fails, just mark as unknown
context = {"method_type": "unknown"}
return context
def _serialize_output(output):
"""
Helper function to serialize function output for logging.
Args:
output: The function return value to serialize
Returns:
Serializable representation of the output
"""
try:
if output is None:
return None
elif isinstance(output, (str, int, float, bool)):
return output
elif isinstance(output, (list, dict, tuple)):
# Truncate large outputs to prevent log bloat
output_str = str(output)
return (
output_str[:1000] + "..."
if len(output_str) > 1000
else output_str
)
else:
return str(type(output))
except Exception:
return "<non-serializable-output>"
def log_execution(
@ -7,6 +252,7 @@ def log_execution(
status: Optional[str] = None,
swarm_config: Optional[dict] = None,
swarm_architecture: Optional[str] = None,
enabled_on: Optional[bool] = False,
):
"""
Log execution data for a swarm router instance.
@ -31,6 +277,7 @@ def log_execution(
... )
"""
try:
if enabled_on:
log_agent_data(
data_dict={
"swarm_router_id": swarm_id,
@ -39,5 +286,7 @@ def log_execution(
"swarm_architecture": swarm_architecture,
}
)
else:
pass
except Exception:
pass
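
A sketch of the new decorator in use, mirroring the docstring example above; the module path is an assumption:

```python
from swarms.telemetry.log_executions import (  # assumed module path
    log_function_execution,
)

@log_function_execution(
    swarm_id="demo-swarm",
    swarm_architecture="sequential",
    enabled_on=True,
)
def process_data(data, threshold=0.5):
    return {"processed": len(data), "threshold": threshold}

# Emits a "start" record before the call, then a "success" record with
# execution_time_seconds and a truncated serialization of the output
result = process_data([1, 2, 3], threshold=0.8)
```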
