commit
4d32b3156b
@ -0,0 +1,19 @@
|
||||
# Class/function
|
||||
|
||||
Brief description
|
||||
↓
|
||||
|
||||
↓
|
||||
## Overview
|
||||
↓
|
||||
## Architecture (Mermaid diagram)
|
||||
↓
|
||||
## Class Reference (Constructor + Methods)
|
||||
↓
|
||||
## Examples
|
||||
|
||||
↓
|
||||
|
||||
## Conclusion
|
||||
Benefits of the class/structure, and more
|
||||
|
@ -1,223 +1,251 @@
|
||||
# Agent Judge
|
||||
# AgentJudge
|
||||
|
||||
The AgentJudge is a specialized agent designed to evaluate and judge outputs from other agents or systems. It acts as a quality control mechanism, providing objective assessments and feedback on various types of content, decisions, or outputs. This implementation is based on the research paper "Agent-as-a-Judge: Evaluate Agents with Agents".
|
||||
A specialized agent for evaluating and judging outputs from other agents or systems. Acts as a quality control mechanism providing objective assessments and feedback.
|
||||
|
||||
## Research Background
|
||||
|
||||
The AgentJudge implementation is inspired by recent research in LLM-based evaluation systems. Key findings from the research include:
|
||||
|
||||
- LLMs can effectively evaluate other LLM outputs with high accuracy
|
||||
|
||||
- Multi-agent evaluation systems can provide more reliable assessments
|
||||
|
||||
- Structured evaluation criteria improve consistency
|
||||
|
||||
- Context-aware evaluation leads to better results
|
||||
Based on the research paper: **"Agent-as-a-Judge: Evaluate Agents with Agents"** - [arXiv:2410.10934](https://arxiv.org/abs/2410.10934)
|
||||
|
||||
## Overview
|
||||
|
||||
The AgentJudge serves as an impartial evaluator that can:
|
||||
The AgentJudge is designed to evaluate and critique outputs from other AI agents, providing structured feedback on quality, accuracy, and areas for improvement. It supports both single-shot evaluations and iterative refinement through multiple evaluation loops with context building.
|
||||
|
||||
Key capabilities:
|
||||
|
||||
- Assess the quality and correctness of agent outputs
|
||||
- **Quality Assessment**: Evaluates correctness, clarity, and completeness of agent outputs
|
||||
|
||||
- Provide structured feedback and scoring
|
||||
- **Structured Feedback**: Provides detailed critiques with strengths, weaknesses, and suggestions
|
||||
|
||||
- Maintain context across multiple evaluations
|
||||
- **Multimodal Support**: Can evaluate text outputs alongside images
|
||||
|
||||
- Generate detailed analysis reports
|
||||
- **Context Building**: Maintains evaluation context across multiple iterations
|
||||
|
||||
- **Batch Processing**: Efficiently processes multiple evaluations
|
||||
|
||||
## Architecture
|
||||
|
||||
```mermaid
|
||||
graph TD
|
||||
A[Input Tasks] --> B[AgentJudge]
|
||||
B --> C[Agent Core]
|
||||
C --> D[LLM Model]
|
||||
D --> E[Response Generation]
|
||||
E --> F[Context Management]
|
||||
F --> G[Output]
|
||||
|
||||
subgraph "Evaluation Flow"
|
||||
H[Task Analysis] --> I[Quality Assessment]
|
||||
I --> J[Feedback Generation]
|
||||
J --> K[Score Assignment]
|
||||
end
|
||||
|
||||
B --> H
|
||||
K --> G
|
||||
```
|
||||
A[Input Task] --> B[AgentJudge]
|
||||
B --> C{Evaluation Mode}
|
||||
|
||||
## Configuration
|
||||
C -->|step()| D[Single Eval]
|
||||
C -->|run()| E[Iterative Eval]
|
||||
C -->|run_batched()| F[Batch Eval]
|
||||
|
||||
### Parameters
|
||||
D --> G[Agent Core]
|
||||
E --> G
|
||||
F --> G
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `agent_name` | str | "agent-judge-01" | Unique identifier for the judge agent |
|
||||
| `system_prompt` | str | AGENT_JUDGE_PROMPT | System instructions for the agent |
|
||||
| `model_name` | str | "openai/o1" | LLM model to use for evaluation |
|
||||
| `max_loops` | int | 1 | Maximum number of evaluation iterations |
|
||||
G --> H[LLM Model]
|
||||
H --> I[Quality Analysis]
|
||||
I --> J[Feedback & Output]
|
||||
|
||||
### Methods
|
||||
subgraph "Feedback Details"
|
||||
N[Strengths]
|
||||
O[Weaknesses]
|
||||
P[Improvements]
|
||||
Q[Accuracy Check]
|
||||
end
|
||||
|
||||
| Method | Description | Parameters | Returns |
|
||||
|--------|-------------|------------|---------|
|
||||
| `step()` | Processes a single batch of tasks | `tasks: List[str]` | `str` |
|
||||
| `run()` | Executes multiple evaluation iterations | `tasks: List[str]` | `List[str]` |
|
||||
J --> N
|
||||
J --> O
|
||||
J --> P
|
||||
J --> Q
|
||||
|
||||
## Usage
|
||||
```
|
||||
|
||||
### Basic Example
|
||||
## Class Reference
|
||||
|
||||
```python
|
||||
from swarms import AgentJudge
|
||||
### Constructor
|
||||
|
||||
# Initialize the judge
|
||||
judge = AgentJudge(
|
||||
model_name="gpt-4o",
|
||||
max_loops=1
|
||||
```python
|
||||
AgentJudge(
|
||||
id: str = str(uuid.uuid4()),
|
||||
agent_name: str = "Agent Judge",
|
||||
description: str = "You're an expert AI agent judge...",
|
||||
system_prompt: str = AGENT_JUDGE_PROMPT,
|
||||
model_name: str = "openai/o1",
|
||||
max_loops: int = 1,
|
||||
verbose: bool = False,
|
||||
*args,
|
||||
**kwargs
|
||||
)
|
||||
```
|
||||
|
||||
# Example outputs to evaluate
|
||||
outputs = [
|
||||
"1. Agent CalculusMaster: After careful evaluation, I have computed the integral of the polynomial function. The result is ∫(x^2 + 3x + 2)dx = (1/3)x^3 + (3/2)x^2 + 5, where I applied the power rule for integration and added the constant of integration.",
|
||||
"2. Agent DerivativeDynamo: In my analysis of the function sin(x), I have derived it with respect to x. The derivative is d/dx (sin(x)) = cos(x). However, I must note that the additional term '+ 2' is not applicable in this context as it does not pertain to the derivative of sin(x).",
|
||||
"3. Agent LimitWizard: Upon evaluating the limit as x approaches 0 for the function (sin(x)/x), I conclude that lim (x -> 0) (sin(x)/x) = 1. The additional '+ 3' is incorrect and should be disregarded as it does not relate to the limit calculation.",
|
||||
]
|
||||
#### Parameters
|
||||
|
||||
# Run evaluation
|
||||
results = judge.run(outputs)
|
||||
print(results)
|
||||
```
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `id` | `str` | `str(uuid.uuid4())` | Unique identifier for the judge instance |
|
||||
| `agent_name` | `str` | `"Agent Judge"` | Name of the agent judge |
|
||||
| `description` | `str` | `"You're an expert AI agent judge..."` | Description of the agent's role |
|
||||
| `system_prompt` | `str` | `AGENT_JUDGE_PROMPT` | System instructions for evaluation |
|
||||
| `model_name` | `str` | `"openai/o1"` | LLM model for evaluation |
|
||||
| `max_loops` | `int` | `1` | Maximum evaluation iterations |
|
||||
| `verbose` | `bool` | `False` | Enable verbose logging |
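As a quick illustration of how these parameters fit together, the sketch below configures a judge for a hypothetical code-review scenario; the agent name, description, prompt text, and model choice are illustrative rather than defaults.

```python
from swarms import AgentJudge

# Illustrative, domain-specific configuration; names and prompt text are examples,
# not library defaults
judge = AgentJudge(
    agent_name="code-review-judge",
    description="Evaluates code review comments for accuracy and tone.",
    system_prompt=(
        "You are an expert reviewer. Judge each output for correctness, "
        "clarity, and completeness, and explain your reasoning."
    ),
    model_name="gpt-4o-mini",
    max_loops=2,
    verbose=True,
)
```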
|
||||
|
||||
### Methods
|
||||
|
||||
## Applications
|
||||
#### step()
|
||||
|
||||
### Code Review Automation
|
||||
```python
|
||||
step(
|
||||
task: str = None,
|
||||
tasks: Optional[List[str]] = None,
|
||||
img: Optional[str] = None
|
||||
) -> str
|
||||
```
|
||||
|
||||
!!! success "Features"
|
||||
- Evaluate code quality
|
||||
- Check for best practices
|
||||
- Assess documentation completeness
|
||||
Processes a single task or a list of tasks and returns the evaluation.
|
||||
|
||||
### Content Quality Control
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `task` | `str` | `None` | Single task/output to evaluate |
|
||||
| `tasks` | `List[str]` | `None` | List of tasks/outputs to evaluate |
|
||||
| `img` | `str` | `None` | Path to image for multimodal evaluation |
|
||||
|
||||
!!! info "Use Cases"
|
||||
- Review marketing copy
|
||||
- Validate technical documentation
|
||||
- Assess user support responses
|
||||
**Returns:** `str` - Detailed evaluation response
|
||||
|
||||
### Decision Validation
|
||||
#### run()
|
||||
|
||||
```python
|
||||
run(
|
||||
task: str = None,
|
||||
tasks: Optional[List[str]] = None,
|
||||
img: Optional[str] = None
|
||||
) -> List[str]
|
||||
```
|
||||
|
||||
!!! warning "Applications"
|
||||
- Evaluate business decisions
|
||||
- Assess risk assessments
|
||||
- Review compliance reports
|
||||
Executes evaluation in multiple iterations with context building.
|
||||
|
||||
### Performance Assessment
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `task` | `str` | `None` | Single task/output to evaluate |
|
||||
| `tasks` | `List[str]` | `None` | List of tasks/outputs to evaluate |
|
||||
| `img` | `str` | `None` | Path to image for multimodal evaluation |
|
||||
|
||||
!!! tip "Metrics"
|
||||
- Evaluate agent performance
|
||||
- Assess system outputs
|
||||
- Review automated processes
|
||||
**Returns:** `List[str]` - List of evaluation responses from each iteration
|
||||
|
||||
## Best Practices
|
||||
#### run_batched()
|
||||
|
||||
### Task Formulation
|
||||
```python
|
||||
run_batched(
|
||||
tasks: Optional[List[str]] = None,
|
||||
imgs: Optional[List[str]] = None
|
||||
) -> List[List[str]]
|
||||
```
|
||||
|
||||
1. Provide clear, specific evaluation criteria
|
||||
2. Include context when necessary
|
||||
3. Structure tasks for consistent evaluation
|
||||
Executes batch evaluation of multiple tasks with corresponding images.
|
||||
|
||||
### System Configuration
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `tasks` | `List[str]` | `None` | List of tasks/outputs to evaluate |
|
||||
| `imgs` | `List[str]` | `None` | List of image paths (same length as tasks) |
|
||||
|
||||
1. Use appropriate model for task complexity
|
||||
2. Adjust max_loops based on evaluation depth needed
|
||||
3. Customize system prompt for specific use cases
|
||||
**Returns:** `List[List[str]]` - Evaluation responses for each task
|
||||
|
||||
### Output Management
|
||||
## Examples
|
||||
|
||||
1. Store evaluation results systematically
|
||||
2. Track evaluation patterns over time
|
||||
3. Use results for continuous improvement
|
||||
### Basic Usage
|
||||
|
||||
### Integration Tips
|
||||
```python
|
||||
from swarms import AgentJudge
|
||||
|
||||
1. Implement as part of CI/CD pipelines
|
||||
2. Use for automated quality gates
|
||||
3. Integrate with monitoring systems
|
||||
# Initialize with default settings
|
||||
judge = AgentJudge()
|
||||
|
||||
## Implementation Guide
|
||||
# Single task evaluation
|
||||
result = judge.step(task="The capital of France is Paris.")
|
||||
print(result)
|
||||
```
|
||||
|
||||
### Step 1: Setup
|
||||
### Custom Configuration
|
||||
|
||||
```python
|
||||
from swarms import AgentJudge
|
||||
|
||||
# Initialize with custom parameters
|
||||
# Custom judge configuration
|
||||
judge = AgentJudge(
|
||||
agent_name="custom-judge",
|
||||
agent_name="content-evaluator",
|
||||
model_name="gpt-4",
|
||||
max_loops=3
|
||||
max_loops=3,
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
### Step 2: Configure Evaluation Criteria
|
||||
|
||||
```python
|
||||
# Define evaluation criteria
|
||||
criteria = {
|
||||
"accuracy": 0.4,
|
||||
"completeness": 0.3,
|
||||
"clarity": 0.3
|
||||
}
|
||||
# Evaluate multiple outputs
|
||||
outputs = [
|
||||
"Agent CalculusMaster: The integral of x^2 + 3x + 2 is (1/3)x^3 + (3/2)x^2 + 2x + C",
|
||||
"Agent DerivativeDynamo: The derivative of sin(x) is cos(x)",
|
||||
"Agent LimitWizard: The limit of sin(x)/x as x approaches 0 is 1"
|
||||
]
|
||||
|
||||
# Set criteria
|
||||
judge.set_evaluation_criteria(criteria)
|
||||
evaluation = judge.step(tasks=outputs)
|
||||
print(evaluation)
|
||||
```
|
||||
|
||||
### Step 3: Run Evaluations
|
||||
### Iterative Evaluation with Context
|
||||
|
||||
```python
|
||||
# Single task evaluation
|
||||
result = judge.step(task)
|
||||
from swarms import AgentJudge
|
||||
|
||||
# Multiple iterations with context building
|
||||
judge = AgentJudge(max_loops=3)
|
||||
|
||||
# Batch evaluation
|
||||
results = judge.run(tasks)
|
||||
# Each iteration builds on previous context
|
||||
evaluations = judge.run(task="Agent output: 2+2=5")
|
||||
for i, eval_result in enumerate(evaluations):
|
||||
print(f"Iteration {i+1}: {eval_result}\n")
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
### Multimodal Evaluation
|
||||
|
||||
```python
|
||||
from swarms import AgentJudge
|
||||
|
||||
### Common Issues
|
||||
judge = AgentJudge()
|
||||
|
||||
??? question "Evaluation Inconsistencies"
|
||||
If you notice inconsistent evaluations:
|
||||
# Evaluate with image
|
||||
evaluation = judge.step(
|
||||
task="Describe what you see in this image",
|
||||
img="path/to/image.jpg"
|
||||
)
|
||||
print(evaluation)
|
||||
```
|
||||
|
||||
1. Check the evaluation criteria
|
||||
2. Verify the model configuration
|
||||
3. Review the input format
|
||||
### Batch Processing
|
||||
|
||||
??? question "Performance Issues"
|
||||
For slow evaluations:
|
||||
```python
|
||||
from swarms import AgentJudge
|
||||
|
||||
1. Reduce max_loops
|
||||
2. Optimize batch size
|
||||
3. Consider model selection
|
||||
judge = AgentJudge()
|
||||
|
||||
# Batch evaluation with images
|
||||
tasks = [
|
||||
"Describe this chart",
|
||||
"What's the main trend?",
|
||||
"Any anomalies?"
|
||||
]
|
||||
images = [
|
||||
"chart1.png",
|
||||
"chart2.png",
|
||||
"chart3.png"
|
||||
]
|
||||
|
||||
## References
|
||||
# Each task evaluated independently
|
||||
evaluations = judge.run_batched(tasks=tasks, imgs=images)
|
||||
for i, task_evals in enumerate(evaluations):
|
||||
print(f"Task {i+1} evaluations: {task_evals}")
|
||||
```
|
||||
|
||||
### "Agent-as-a-Judge: Evaluate Agents with Agents" - [Paper Link](https://arxiv.org/abs/2410.10934)
|
||||
## Reference
|
||||
|
||||
```bibtex
|
||||
@misc{zhuge2024agentasajudgeevaluateagentsagents,
|
||||
title={Agent-as-a-Judge: Evaluate Agents with Agents},
|
||||
author={Mingchen Zhuge and Changsheng Zhao and Dylan Ashley and Wenyi Wang and Dmitrii Khizbullin and Yunyang Xiong and Zechun Liu and Ernie Chang and Raghuraman Krishnamoorthi and Yuandong Tian and Yangyang Shi and Vikas Chandra and Jürgen Schmidhuber},
|
||||
year={2024},
|
||||
eprint={2410.10934},
|
||||
archivePrefix={arXiv},
|
||||
primaryClass={cs.AI},
|
||||
url={https://arxiv.org/abs/2410.10934}
|
||||
}
|
||||
```
|
@ -1,75 +1,149 @@
|
||||
|
||||
# Swarm Ecosystem
|
||||
# Swarms Ecosystem
|
||||
|
||||
Welcome to the Swarm Ecosystem, a comprehensive suite of tools and frameworks designed to empower developers to orchestrate swarms of autonomous agents for a variety of applications. Dive into our ecosystem below:
|
||||
*The Complete Enterprise-Grade Multi-Agent AI Platform*
|
||||
|
||||
[Full Github Link](https://github.com/kyegomez/swarm-ecosystem)
|
||||
---
|
||||
|
||||
## **Join the Future of AI Development**
|
||||
|
||||
**We're Building the Operating System for the Agent Economy** - The Swarms ecosystem represents the most comprehensive, production-ready multi-agent AI platform available today. From our flagship Python framework to high-performance Rust implementations and client libraries spanning every major programming language, we provide enterprise-grade tools that power the next generation of intelligent applications.
|
||||
|
||||
---
|
||||
|
||||
## **Complete Product Portfolio**
|
||||
|
||||
| **Product** | **Technology** | **Status** | **Repository** | **Documentation** |
|
||||
|-------------|---------------|------------|----------------|-------------------|
|
||||
| **Swarms Python Framework** | Python | **Production** | [swarms](https://github.com/kyegomez/swarms) | [Docs](https://docs.swarms.world/en/latest/swarms/install/install/) |
|
||||
| **Swarms Rust Framework** | Rust | **Production** | [swarms-rs](https://github.com/The-Swarm-Corporation/swarms-rs) | [Docs](https://docs.swarms.world/en/latest/swarms_rs/overview/) |
|
||||
| **Python API Client** | Python | **Production** | [swarms-sdk](https://github.com/The-Swarm-Corporation/swarms-sdk) | [Docs](https://docs.swarms.world/en/latest/swarms_cloud/python_client/) |
|
||||
| **TypeScript/Node.js Client** | TypeScript | **Production** | [swarms-ts](https://github.com/The-Swarm-Corporation/swarms-ts) | *Coming Soon* |
|
||||
| **Go Client** | Go | **Production** | [swarms-client-go](https://github.com/The-Swarm-Corporation/swarms-client-go) | *Coming Soon* |
|
||||
| **Java Client** | Java | **Production** | [swarms-java](https://github.com/The-Swarm-Corporation/swarms-java) | *Coming Soon* |
|
||||
| **Kotlin Client** | Kotlin | **Q2 2025** | *In Development* | *Coming Soon* |
|
||||
| **Ruby Client** | Ruby | **Q2 2025** | *In Development* | *Coming Soon* |
|
||||
| **Rust Client** | Rust | **Q2 2025** | *In Development* | *Coming Soon* |
|
||||
| **C#/.NET Client** | C# | **Q3 2025** | *In Development* | *Coming Soon* |
|
||||
|
||||
---
|
||||
|
||||
## **Why Choose the Swarms Ecosystem?**
|
||||
|
||||
### **Enterprise-Grade Architecture**
|
||||
|
||||
- **Production Ready**: Battle-tested in enterprise environments with 99.9%+ uptime
|
||||
|
||||
- **Scalable Infrastructure**: Handle millions of agent interactions with automatic scaling
|
||||
|
||||
- **Security First**: End-to-end encryption, API key management, and enterprise compliance
|
||||
|
||||
- **Observability**: Comprehensive logging, monitoring, and debugging capabilities
|
||||
|
||||
### **Developer Experience**
|
||||
|
||||
- **Multiple Language Support**: Native clients for every major programming language
|
||||
|
||||
## Getting Started
|
||||
- **Unified API**: Consistent interface across all platforms and languages
|
||||
|
||||
| Project | Description | Link |
|
||||
| ------- | ----------- | ---- |
|
||||
| **Swarms Framework** | A Python-based framework that enables the creation, deployment, and scaling of reliable swarms of autonomous agents aimed at automating complex workflows. | [Swarms Framework](https://github.com/kyegomez/swarms) |
|
||||
| **Swarms Cloud** | A cloud-based service offering Swarms-as-a-Service with guaranteed 100% uptime, cutting-edge performance, and enterprise-grade reliability for seamless scaling and management of swarms. | [Swarms Cloud](https://github.com/kyegomez/swarms-cloud) |
|
||||
| **Swarms Core** | Provides backend utilities focusing on concurrency, multi-threading, and advanced execution strategies, developed in Rust for maximum efficiency and performance. | [Swarms Core](https://github.com/kyegomez/swarms-core) |
|
||||
| **Swarm Foundation Models** | A dedicated repository for the creation, optimization, and training of groundbreaking swarming models. Features innovative models like PSO with transformers, ant colony optimizations, and more, aiming to surpass traditional architectures like Transformers and SSMs. Open for community contributions and ideas. | [Swarm Foundation Models](https://github.com/kyegomez/swarms-pytorch) |
|
||||
| **Swarm Platform** | The Swarms dashboard platform | [Swarm Platform](https://github.com/kyegomez/swarms-platform) |
|
||||
| **Swarms JS** | The Swarms framework in JavaScript. Orchestrate agents and enable multi-agent collaboration. | [Swarms JS](https://github.com/kyegomez/swarms-js) |
|
||||
| **Swarms Memory** | Easy-to-use, reliable, and bleeding-edge RAG systems. | [Swarms Memory](https://github.com/kyegomez/swarms-memory) |
|
||||
| **Swarms Evals** | Evaluation suites for Swarms. | [Swarms Evals](https://github.com/kyegomez/swarms-evals) |
|
||||
| **Swarms Zero** | RPC Enterprise-Grade Automation Framework | [Swarms Zero](https://github.com/kyegomez/Zero) |
|
||||
- **Rich Documentation**: Comprehensive guides, tutorials, and API references
|
||||
|
||||
----
|
||||
- **Active Community**: 24/7 support through Discord, GitHub, and direct channels
|
||||
|
||||
## 🫶 Contributions:
|
||||
### **Performance & Reliability**
|
||||
|
||||
The easiest way to contribute is to pick any issue with the `good first issue` tag 💪. Read the Contributing guidelines [here](/CONTRIBUTING.md). Bug Report? [File here](https://github.com/swarms/gateway/issues) | Feature Request? [File here](https://github.com/swarms/gateway/issues)
|
||||
- **High Throughput**: Process thousands of concurrent agent requests
|
||||
|
||||
Swarms is an open-source project, and contributions are VERY welcome. If you want to contribute, you can create new features, fix bugs, or improve the infrastructure. Please refer to the [CONTRIBUTING.md](https://github.com/kyegomez/swarms/blob/master/CONTRIBUTING.md) and our [contributing board](https://github.com/users/kyegomez/projects/1) to participate in Roadmap discussions!
|
||||
- **Low Latency**: Optimized for real-time applications and user experiences
|
||||
|
||||
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
|
||||
<img src="https://contrib.rocks/image?repo=kyegomez/swarms" />
|
||||
</a>
|
||||
- **Fault Tolerance**: Automatic retries, circuit breakers, and graceful degradation
|
||||
|
||||
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
|
||||
<img src="https://contrib.rocks/image?repo=kyegomez/swarms-cloud" />
|
||||
</a>
|
||||
- **Multi-Cloud**: Deploy on AWS, GCP, Azure, or on-premises infrastructure
|
||||
|
||||
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
|
||||
<img src="https://contrib.rocks/image?repo=kyegomez/swarms-platform" />
|
||||
</a>
|
||||
---
|
||||
|
||||
## **Join Our Growing Community**
|
||||
|
||||
### **Connect With Developers Worldwide**
|
||||
|
||||
| **Platform** | **Purpose** | **Join Link** | **Benefits** |
|
||||
|--------------|-------------|---------------|--------------|
|
||||
| **Discord Community** | Real-time support & discussions | [Join Discord](https://discord.gg/jM3Z6M9uMq) | • 24/7 developer support<br/>• Weekly community events<br/>• Direct access to core team<br/>• Beta feature previews |
|
||||
| **Twitter/X** | Latest updates & announcements | [Follow @swarms_corp](https://x.com/swarms_corp) | • Breaking news & updates<br/>• Community highlights<br/>• Technical insights<br/>• Industry partnerships |
|
||||
| **LinkedIn** | Professional network & updates | [The Swarm Corporation](https://www.linkedin.com/company/the-swarm-corporation) | • Professional networking<br/>• Career opportunities<br/>• Enterprise partnerships<br/>• Industry insights |
|
||||
| **YouTube** | Tutorials & technical content | [Swarms Channel](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ) | • In-depth tutorials<br/>• Live coding sessions<br/>• Architecture deep dives<br/>• Community showcases |
|
||||
|
||||
---
|
||||
|
||||
## **Contribute to the Ecosystem**
|
||||
|
||||
### **How You Can Make an Impact**
|
||||
|
||||
<a href="https://github.com/kyegomez/swarms/graphs/contributors">
|
||||
<img src="https://contrib.rocks/image?repo=kyegomez/swarms-js" />
|
||||
</a>
|
||||
| **Contribution Area** | **Skills Needed** | **Impact Level** | **Getting Started** |
|
||||
|-----------------------|-------------------|------------------|---------------------|
|
||||
| **Core Framework Development** | Python, Rust, Systems Design | **High Impact** | [Contributing Guide](https://docs.swarms.world/en/latest/contributors/main/) |
|
||||
| **Client Library Development** | Various Languages (Go, Java, TS, etc.) | **High Impact** | [Client Development](https://github.com/The-Swarm-Corporation) |
|
||||
| **Documentation & Tutorials** | Technical Writing, Examples | **High Impact** | [Docs Contributing](https://docs.swarms.world/en/latest/contributors/docs/) |
|
||||
| **Testing & Quality Assurance** | Testing Frameworks, QA | **Medium Impact** | [Testing Guide](https://docs.swarms.world/en/latest/swarms/framework/test/) |
|
||||
| **UI/UX & Design** | Design, Frontend Development | **Medium Impact** | [Design Contributions](https://github.com/The-Swarm-Corporation/swarms/issues) |
|
||||
| **Bug Reports & Feature Requests** | User Experience, Testing | **Easy Start** | [Report Issues](https://github.com/The-Swarm-Corporation/swarms/issues) |
|
||||
|
||||
---
|
||||
|
||||
## **We're Hiring Top Talent**
|
||||
|
||||
### **Join the Team Building the Future of the World Economy**
|
||||
|
||||
**Ready to work on cutting-edge agent technology that's shaping the future?** We're actively recruiting exceptional engineers, researchers, and technical leaders to join our mission of building the operating system for the agent economy.
|
||||
|
||||
----
|
||||
| **Why Join Swarms?** | **What We Offer** |
|
||||
|-----------------------|-------------------|
|
||||
| **Cutting-Edge Technology** | Work on the most powerful multi-agent systems, distributed computing, and enterprise-scale infrastructure |
|
||||
| **Global Impact** | Your code will power agent applications used by Fortune 500 companies and millions of developers |
|
||||
| **World-Class Team** | Collaborate with top engineers, researchers, and industry experts from Google, OpenAI, and more |
|
||||
| **Fast Growth** | Join a rapidly scaling company with massive market opportunity and venture backing |
|
||||
|
||||
## Community
|
||||
### **Open Positions**
|
||||
|
||||
Join our growing community around the world for real-time support, ideas, and discussions on Swarms 😊
|
||||
| **Position** | **Role Description** |
|
||||
|-------------------------------|----------------------------------------------------------|
|
||||
| **Senior Rust Engineers** | Building high-performance agent infrastructure |
|
||||
| **Python Framework Engineers**| Expanding our core multi-agent capabilities |
|
||||
| **DevOps/Platform Engineers** | Scaling cloud infrastructure for millions of agents |
|
||||
| **Technical Writers** | Creating world-class developer documentation |
|
||||
| **Solutions Engineers** | Helping enterprises adopt multi-agent AI |
|
||||
|
||||
- View our official [Blog](https://docs.swarms.world)
|
||||
- Chat live with us on [Discord](https://discord.gg/kS3rwKs3ZC)
|
||||
- Follow us on [Twitter](https://twitter.com/kyegomez)
|
||||
- Connect with us on [LinkedIn](https://www.linkedin.com/company/the-swarm-corporation)
|
||||
- Visit us on [YouTube](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ)
|
||||
- [Join the Swarms community on Discord!](https://discord.gg/AJazBmhKnr)
|
||||
- Join our Swarms Community Gathering every Thursday at 1pm NYC Time to unlock the potential of autonomous agents in automating your daily tasks. [Sign up here](https://lu.ma/5p2jnc2v)
|
||||
**Ready to Build the Future?** **[Apply Now at swarms.ai/hiring](https://swarms.ai/hiring)**
|
||||
|
||||
---
|
||||
|
||||
---
|
||||
|
||||
## Discovery Call
|
||||
Book a discovery call to learn how Swarms can lower your operating costs by 40% with swarms of autonomous agents operating at lightspeed. [Click here to book a time that works for you!](https://calendly.com/swarm-corp/30min?month=2023-11)
|
||||
## **Get Started Today**
|
||||
|
||||
### **Quick Start Guide**
|
||||
|
||||
| **Step** | **Action** | **Time Required** |
|
||||
|----------|------------|-------------------|
|
||||
| **1** | [Install Swarms Python Framework](https://docs.swarms.world/en/latest/swarms/install/install/) | 5 minutes |
|
||||
| **2** | [Run Your First Agent](https://docs.swarms.world/en/latest/swarms/examples/basic_agent/) | 10 minutes |
|
||||
| **3** | [Try Multi-Agent Workflows](https://docs.swarms.world/en/latest/swarms/examples/sequential_example/) | 15 minutes |
|
||||
| **4** | [Join Our Discord Community](https://discord.gg/jM3Z6M9uMq) | 2 minutes |
|
||||
| **5** | [Explore Enterprise Features](https://docs.swarms.world/en/latest/swarms_cloud/swarms_api/) | 20 minutes |
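For steps 1 and 2 above, a minimal sketch looks like the following; it assumes `pip install -U swarms` has been run and that an API key for your model provider is set in the environment (the model name and task are illustrative).

```python
# Minimal single-agent quick start (model name and task are illustrative)
from swarms import Agent

agent = Agent(
    agent_name="Quickstart-Agent",
    model_name="gpt-4o-mini",  # any supported provider/model identifier
    max_loops=1,
)

print(agent.run("Summarize the benefits of multi-agent systems in three bullet points."))
```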
|
||||
|
||||
## Accelerate Backlog
|
||||
Help us accelerate our backlog by supporting us financially! Note: we're an open-source corporation, so all the revenue we generate currently comes from donations ;)
|
||||
---
|
||||
|
||||
## **Enterprise Support & Partnerships**
|
||||
|
||||
<a href="https://polar.sh/kyegomez"><img src="https://polar.sh/embed/fund-our-backlog.svg?org=kyegomez" /></a>
|
||||
### **Ready to Scale with Swarms?**
|
||||
|
||||
| **Contact Type** | **Best For** | **Response Time** | **Contact Information** |
|
||||
|------------------|--------------|-------------------|-------------------------|
|
||||
| **Technical Support** | Development questions, troubleshooting | < 24 hours | [Book Support Call](https://cal.com/swarms/swarms-technical-support) |
|
||||
| **Enterprise Sales** | Custom deployments, enterprise licensing | < 4 hours | [kye@swarms.world](mailto:kye@swarms.world) |
|
||||
| **Partnerships** | Integration partnerships, technology alliances | < 48 hours | [kye@swarms.world](mailto:kye@swarms.world) |
|
||||
| **Investor Relations** | Investment opportunities, funding updates | By appointment | [kye@swarms.world](mailto:kye@swarms.world) |
|
||||
|
||||
---
|
||||
|
||||
**Ready to build the future of AI? Start with Swarms today and join thousands of developers creating the next generation of intelligent applications.**
|
||||
|
@ -0,0 +1,205 @@
|
||||
# Agent Multi-Agent Communication Methods
|
||||
|
||||
The Agent class provides powerful built-in methods for facilitating communication and collaboration between multiple agents. These methods enable agents to talk to each other, pass information, and coordinate complex multi-agent workflows seamlessly.
|
||||
|
||||
## Overview
|
||||
|
||||
Multi-agent communication is essential for building sophisticated AI systems where different agents need to collaborate, share information, and coordinate their actions. The Agent class provides several methods to facilitate this communication:
|
||||
|
||||
| Method | Purpose | Use Case |
|
||||
|--------|---------|----------|
|
||||
| `talk_to` | Direct communication between two agents | Agent handoffs, expert consultation |
|
||||
| `talk_to_multiple_agents` | Concurrent communication with multiple agents | Broadcasting, consensus building |
|
||||
| `receive_message` | Process incoming messages from other agents | Message handling, task delegation |
|
||||
| `send_agent_message` | Send formatted messages to other agents | Direct messaging, notifications |
|
||||
|
||||
## Features
|
||||
|
||||
| Feature | Description |
|
||||
|---------------------------------|--------------------------------------------------------------------|
|
||||
| **Direct Agent Communication** | Enable one-to-one conversations between agents |
|
||||
| **Concurrent Multi-Agent Communication** | Broadcast messages to multiple agents simultaneously |
|
||||
| **Message Processing** | Handle incoming messages with contextual formatting |
|
||||
| **Error Handling** | Robust error handling for failed communications |
|
||||
| **Threading Support** | Efficient concurrent processing using ThreadPoolExecutor |
|
||||
| **Flexible Parameters** | Support for images, custom arguments, and kwargs |
|
||||
|
||||
---
|
||||
|
||||
## Core Methods
|
||||
|
||||
### `talk_to(agent, task, img=None, *args, **kwargs)`
|
||||
|
||||
Enables direct communication between the current agent and another agent. The method processes the task, generates a response, and then passes that response to the target agent.
|
||||
|
||||
**Parameters:**
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `agent` | `Any` | Required | The target agent instance to communicate with |
|
||||
| `task` | `str` | Required | The task or message to send to the agent |
|
||||
| `img` | `str` | `None` | Optional image path for multimodal communication |
|
||||
| `*args` | `Any` | - | Additional positional arguments |
|
||||
| `**kwargs` | `Any` | - | Additional keyword arguments |
|
||||
|
||||
**Returns:** `Any` - The response from the target agent
|
||||
|
||||
**Usage Example:**
|
||||
|
||||
```python
|
||||
from swarms import Agent
|
||||
|
||||
# Create two specialized agents
|
||||
researcher = Agent(
|
||||
agent_name="Research-Agent",
|
||||
system_prompt="You are a research specialist focused on gathering and analyzing information.",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
analyst = Agent(
|
||||
agent_name="Analysis-Agent",
|
||||
system_prompt="You are an analytical specialist focused on interpreting research data.",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
# Agent communication
|
||||
research_result = researcher.talk_to(
|
||||
agent=analyst,
|
||||
task="Analyze the market trends for renewable energy stocks"
|
||||
)
|
||||
|
||||
print(research_result)
|
||||
```
|
||||
|
||||
### `talk_to_multiple_agents(agents, task, *args, **kwargs)`
|
||||
|
||||
Enables concurrent communication with multiple agents using ThreadPoolExecutor for efficient parallel processing.
|
||||
|
||||
**Parameters:**
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `agents` | `List[Union[Any, Callable]]` | Required | List of agent instances to communicate with |
|
||||
| `task` | `str` | Required | The task or message to send to all agents |
|
||||
| `*args` | `Any` | - | Additional positional arguments |
|
||||
| `**kwargs` | `Any` | - | Additional keyword arguments |
|
||||
|
||||
**Returns:** `List[Any]` - List of responses from all agents (or None for failed communications)
|
||||
|
||||
**Usage Example:**
|
||||
|
||||
```python
|
||||
from swarms import Agent
|
||||
|
||||
# Create multiple specialized agents
|
||||
agents = [
|
||||
Agent(
|
||||
agent_name="Financial-Analyst",
|
||||
system_prompt="You are a financial analysis expert.",
|
||||
max_loops=1,
|
||||
),
|
||||
Agent(
|
||||
agent_name="Risk-Assessor",
|
||||
system_prompt="You are a risk assessment specialist.",
|
||||
max_loops=1,
|
||||
),
|
||||
Agent(
|
||||
agent_name="Market-Researcher",
|
||||
system_prompt="You are a market research expert.",
|
||||
max_loops=1,
|
||||
)
|
||||
]
|
||||
|
||||
coordinator = Agent(
|
||||
agent_name="Coordinator-Agent",
|
||||
system_prompt="You coordinate multi-agent analysis.",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
# Broadcast to multiple agents
|
||||
responses = coordinator.talk_to_multiple_agents(
|
||||
agents=agents,
|
||||
task="Evaluate the investment potential of Tesla stock"
|
||||
)
|
||||
|
||||
# Process responses
|
||||
for i, response in enumerate(responses):
|
||||
if response:
|
||||
print(f"Agent {i+1} Response: {response}")
|
||||
else:
|
||||
print(f"Agent {i+1} failed to respond")
|
||||
```
|
||||
|
||||
### `receive_message(agent_name, task, *args, **kwargs)`
|
||||
|
||||
Processes incoming messages from other agents with proper context formatting.
|
||||
|
||||
**Parameters:**
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `agent_name` | `str` | Required | Name of the sending agent |
|
||||
| `task` | `str` | Required | The message content |
|
||||
| `*args` | `Any` | - | Additional positional arguments |
|
||||
| `**kwargs` | `Any` | - | Additional keyword arguments |
|
||||
|
||||
**Returns:** `Any` - The agent's response to the received message
|
||||
|
||||
**Usage Example:**
|
||||
|
||||
```python
|
||||
from swarms import Agent
|
||||
|
||||
# Create an agent that can receive messages
|
||||
recipient_agent = Agent(
|
||||
agent_name="Support-Agent",
|
||||
system_prompt="You provide helpful support and assistance.",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
# Simulate receiving a message from another agent
|
||||
response = recipient_agent.receive_message(
|
||||
agent_name="Customer-Service-Agent",
|
||||
task="A customer is asking about refund policies. Can you help?"
|
||||
)
|
||||
|
||||
print(response)
|
||||
```
|
||||
|
||||
### `send_agent_message(agent_name, message, *args, **kwargs)`
|
||||
|
||||
Sends a formatted message from the current agent to a specified target agent.
|
||||
|
||||
**Parameters:**
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `agent_name` | `str` | Required | Name of the target agent |
|
||||
| `message` | `str` | Required | The message to send |
|
||||
| `*args` | `Any` | - | Additional positional arguments |
|
||||
| `**kwargs` | `Any` | - | Additional keyword arguments |
|
||||
|
||||
**Returns:** `Any` - The result of sending the message
|
||||
|
||||
**Usage Example:**
|
||||
|
||||
```python
|
||||
from swarms import Agent
|
||||
|
||||
sender_agent = Agent(
|
||||
agent_name="Notification-Agent",
|
||||
system_prompt="You send notifications and updates.",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
# Send a message to another agent
|
||||
result = sender_agent.send_agent_message(
|
||||
agent_name="Task-Manager-Agent",
|
||||
message="Task XYZ has been completed successfully"
|
||||
)
|
||||
|
||||
print(result)
|
||||
```
|
||||
|
||||
|
||||
This comprehensive guide covers all aspects of multi-agent communication using the Agent class methods. These methods provide the foundation for building sophisticated multi-agent systems with robust communication capabilities.
|
@ -0,0 +1,322 @@
|
||||
# HeavySwarm Documentation
|
||||
|
||||
HeavySwarm is a sophisticated multi-agent orchestration system that decomposes complex tasks into specialized questions and executes them using four specialized agents: Research, Analysis, Alternatives, and Verification. The results are then synthesized into a comprehensive response.
|
||||
|
||||
Inspired by X.AI's Grok 4 heavy implementation, HeavySwarm provides robust task analysis through intelligent question generation, parallel execution, and comprehensive synthesis with real-time progress monitoring.
|
||||
|
||||
## Architecture
|
||||
|
||||
### System Design
|
||||
|
||||
The HeavySwarm follows a structured 5-phase workflow:
|
||||
|
||||
1. **Task Decomposition**: Complex tasks are broken down into specialized questions
|
||||
2. **Question Generation**: AI-powered generation of role-specific questions
|
||||
3. **Parallel Execution**: Four specialized agents work concurrently
|
||||
4. **Result Collection**: Outputs are gathered and validated
|
||||
5. **Synthesis**: Integration into a comprehensive final response
|
||||
|
||||
### Agent Specialization
|
||||
|
||||
- **Research Agent**: Comprehensive information gathering and synthesis
|
||||
- **Analysis Agent**: Pattern recognition and statistical analysis
|
||||
- **Alternatives Agent**: Creative problem-solving and strategic options
|
||||
- **Verification Agent**: Validation, feasibility assessment, and quality assurance
|
||||
- **Synthesis Agent**: Multi-perspective integration and executive reporting
|
||||
|
||||
## Architecture Diagram
|
||||
|
||||
```mermaid
|
||||
graph TB
|
||||
subgraph "HeavySwarm Architecture"
|
||||
A[Input Task] --> B[Question Generation Agent]
|
||||
B --> C[Task Decomposition]
|
||||
|
||||
C --> D[Research Agent]
|
||||
C --> E[Analysis Agent]
|
||||
C --> F[Alternatives Agent]
|
||||
C --> G[Verification Agent]
|
||||
|
||||
D --> H[Parallel Execution Engine]
|
||||
E --> H
|
||||
F --> H
|
||||
G --> H
|
||||
|
||||
H --> I[Result Collection]
|
||||
I --> J[Synthesis Agent]
|
||||
J --> K[Comprehensive Report]
|
||||
|
||||
subgraph "Monitoring & Control"
|
||||
L[Rich Dashboard]
|
||||
M[Progress Tracking]
|
||||
N[Error Handling]
|
||||
O[Timeout Management]
|
||||
end
|
||||
|
||||
H --> L
|
||||
H --> M
|
||||
H --> N
|
||||
H --> O
|
||||
end
|
||||
|
||||
subgraph "Agent Specializations"
|
||||
D --> D1[Information Gathering<br/>Market Research<br/>Data Collection]
|
||||
E --> E1[Statistical Analysis<br/>Pattern Recognition<br/>Predictive Modeling]
|
||||
F --> F1[Creative Solutions<br/>Strategic Options<br/>Innovation Ideation]
|
||||
G --> G1[Fact Checking<br/>Feasibility Assessment<br/>Quality Assurance]
|
||||
end
|
||||
|
||||
style A fill:#ff6b6b
|
||||
style K fill:#4ecdc4
|
||||
style H fill:#45b7d1
|
||||
style J fill:#96ceb4
|
||||
```
|
||||
|
||||
## Installation
|
||||
|
||||
```bash
|
||||
pip install swarms
|
||||
```
|
||||
|
||||
## Quick Start
|
||||
|
||||
```python
|
||||
from swarms import HeavySwarm
|
||||
|
||||
# Initialize the swarm
|
||||
swarm = HeavySwarm(
|
||||
name="MarketAnalysisSwarm",
|
||||
description="Financial market analysis swarm",
|
||||
question_agent_model_name="gpt-4o-mini",
|
||||
worker_model_name="gpt-4o-mini",
|
||||
show_dashboard=True,
|
||||
verbose=True
|
||||
)
|
||||
|
||||
# Execute analysis
|
||||
result = swarm.run("Analyze the current cryptocurrency market trends and investment opportunities")
|
||||
print(result)
|
||||
```
|
||||
|
||||
## API Reference
|
||||
|
||||
### HeavySwarm Class
|
||||
|
||||
#### Constructor Parameters
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `name` | `str` | `"HeavySwarm"` | Identifier name for the swarm instance |
|
||||
| `description` | `str` | `"A swarm of agents..."` | Description of the swarm's purpose |
|
||||
| `agents` | `List[Agent]` | `None` | Pre-configured agent list (currently unused; agents are created internally) |
|
||||
| `timeout` | `int` | `300` | Maximum execution time per agent in seconds |
|
||||
| `aggregation_strategy` | `str` | `"synthesis"` | Strategy for result aggregation |
|
||||
| `loops_per_agent` | `int` | `1` | Number of execution loops per agent |
|
||||
| `question_agent_model_name` | `str` | `"gpt-4o-mini"` | Model for question generation |
|
||||
| `worker_model_name` | `str` | `"gpt-4o-mini"` | Model for specialized worker agents |
|
||||
| `verbose` | `bool` | `False` | Enable detailed logging output |
|
||||
| `max_workers` | `int` | `int(os.cpu_count() * 0.9)` | Maximum concurrent workers |
|
||||
| `show_dashboard` | `bool` | `False` | Enable rich dashboard visualization |
|
||||
| `agent_prints_on` | `bool` | `False` | Enable individual agent output printing |
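As a sketch of how these options combine, the configuration below adjusts the execution budget and output behaviour; the values are illustrative, and `"synthesis"` is simply the documented default strategy passed explicitly.

```python
from swarms import HeavySwarm

# Illustrative configuration using only parameters from the table above
swarm = HeavySwarm(
    name="ResearchSwarm",
    description="Deep-dive research swarm",
    worker_model_name="gpt-4o-mini",
    timeout=600,                       # allow up to 10 minutes per agent
    loops_per_agent=2,                 # each worker refines its answer twice
    aggregation_strategy="synthesis",  # documented default, passed explicitly
    agent_prints_on=True,              # print each worker's output
)
```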
|
||||
|
||||
#### Methods
|
||||
|
||||
##### `run(task: str, img: str = None) -> str`
|
||||
|
||||
Execute the complete HeavySwarm orchestration flow.
|
||||
|
||||
**Parameters:**
|
||||
|
||||
- `task` (str): The main task to analyze and decompose
|
||||
|
||||
- `img` (str, optional): Image input for visual analysis tasks
|
||||
|
||||
**Returns:**
|
||||
- `str`: Comprehensive final analysis from synthesis agent
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
result = swarm.run("Develop a go-to-market strategy for a new SaaS product")
|
||||
```
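Since `run()` also accepts an optional `img` argument for visual analysis tasks, a multimodal call can be sketched as follows; the image path is a placeholder.

```python
# Hypothetical multimodal run; "sales_chart.png" is a placeholder path
result = swarm.run(
    task="Summarize the quarterly trends shown in this chart and flag any anomalies",
    img="sales_chart.png",
)
```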
|
||||
|
||||
|
||||
## Real-World Applications
|
||||
|
||||
### Financial Services
|
||||
|
||||
```python
|
||||
# Market Analysis
|
||||
swarm = HeavySwarm(
|
||||
name="FinanceSwarm",
|
||||
worker_model_name="gpt-4o",
|
||||
show_dashboard=True
|
||||
)
|
||||
|
||||
result = swarm.run("""
|
||||
Analyze the impact of recent Federal Reserve policy changes on:
|
||||
1. Bond markets and yield curves
|
||||
2. Equity market valuations
|
||||
3. Currency exchange rates
|
||||
4. Provide investment recommendations for institutional portfolios
|
||||
""")
|
||||
```
|
||||
|
||||
### Use Cases
|
||||
|
||||
| Use Case | Description |
|
||||
|---------------------------------------------|---------------------------------------------|
|
||||
| Portfolio optimization and risk assessment | Optimize asset allocation and assess risks |
|
||||
| Market trend analysis and forecasting | Analyze and predict market movements |
|
||||
| Regulatory compliance evaluation | Evaluate adherence to financial regulations |
|
||||
| Investment strategy development | Develop and refine investment strategies |
|
||||
| Credit risk analysis and modeling | Analyze and model credit risk |
|
||||
|
||||
|
||||
-------
|
||||
|
||||
|
||||
### Healthcare & Life Sciences
|
||||
|
||||
```python
|
||||
# Clinical Research Analysis
|
||||
swarm = HeavySwarm(
|
||||
name="HealthcareSwarm",
|
||||
worker_model_name="gpt-4o",
|
||||
timeout=600,
|
||||
loops_per_agent=2
|
||||
)
|
||||
|
||||
result = swarm.run("""
|
||||
Evaluate the potential of AI-driven personalized medicine:
|
||||
1. Current technological capabilities and limitations
|
||||
2. Regulatory landscape and approval pathways
|
||||
3. Market opportunities and competitive analysis
|
||||
4. Implementation strategies for healthcare systems
|
||||
""")
|
||||
```
|
||||
|
||||
----
|
||||
|
||||
**Use Cases:**
|
||||
|
||||
| Use Case | Description |
|
||||
|----------------------------------------|---------------------------------------------|
|
||||
| Drug discovery and development analysis | Analyze and accelerate drug R&D processes |
|
||||
| Clinical trial optimization | Improve design and efficiency of trials |
|
||||
| Healthcare policy evaluation | Assess and inform healthcare policies |
|
||||
| Medical device market analysis | Evaluate trends and opportunities in devices|
|
||||
| Patient outcome prediction modeling | Predict and model patient health outcomes |
|
||||
|
||||
---
|
||||
|
||||
|
||||
### Technology & Innovation
|
||||
|
||||
```python
|
||||
# Tech Strategy Analysis
|
||||
swarm = HeavySwarm(
|
||||
name="TechSwarm",
|
||||
worker_model_name="gpt-4o",
|
||||
show_dashboard=True,
|
||||
verbose=True
|
||||
)
|
||||
|
||||
result = swarm.run("""
|
||||
Assess the strategic implications of quantum computing adoption:
|
||||
1. Technical readiness and hardware developments
|
||||
2. Industry applications and use cases
|
||||
3. Competitive landscape and key players
|
||||
4. Investment and implementation roadmap
|
||||
""")
|
||||
```
|
||||
|
||||
**Use Cases:**
|
||||
|
||||
| Use Case | Description |
|
||||
|------------------------------------|---------------------------------------------|
|
||||
| Technology roadmap development | Plan and prioritize technology initiatives |
|
||||
| Competitive intelligence gathering | Analyze competitors and market trends |
|
||||
| Innovation pipeline analysis | Evaluate and manage innovation projects |
|
||||
| Digital transformation strategy | Develop and implement digital strategies |
|
||||
| Emerging technology assessment | Assess new and disruptive technologies |
|
||||
|
||||
### Manufacturing & Supply Chain
|
||||
|
||||
```python
|
||||
# Supply Chain Optimization
|
||||
swarm = HeavySwarm(
|
||||
name="ManufacturingSwarm",
|
||||
worker_model_name="gpt-4o",
|
||||
max_workers=8
|
||||
)
|
||||
|
||||
result = swarm.run("""
|
||||
Optimize global supply chain resilience:
|
||||
1. Risk assessment and vulnerability analysis
|
||||
2. Alternative sourcing strategies
|
||||
3. Technology integration opportunities
|
||||
4. Cost-benefit analysis of proposed changes
|
||||
""")
|
||||
```
|
||||
|
||||
**Use Cases:**
|
||||
|
||||
| Use Case | Description |
|
||||
|----------------------------------|---------------------------------------------|
|
||||
| Supply chain risk management | Identify and mitigate supply chain risks |
|
||||
| Manufacturing process optimization | Improve efficiency and productivity |
|
||||
| Quality control system design | Develop systems to ensure product quality |
|
||||
| Sustainability impact assessment | Evaluate environmental and social impacts |
|
||||
| Logistics network optimization | Enhance logistics and distribution networks |
|
||||
|
||||
## Advanced Configuration
|
||||
|
||||
### Custom Agent Configuration
|
||||
|
||||
```python
|
||||
# High-performance configuration
|
||||
swarm = HeavySwarm(
|
||||
name="HighPerformanceSwarm",
|
||||
question_agent_model_name="gpt-4o",
|
||||
worker_model_name="gpt-4o",
|
||||
timeout=900,
|
||||
loops_per_agent=3,
|
||||
max_workers=12,
|
||||
show_dashboard=True,
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
| Issue | Solution |
|
||||
|-------------------------|---------------------------------------------------------------|
|
||||
| **Agent Timeout** | Increase timeout parameter or reduce task complexity |
|
||||
| **Model Rate Limits** | Implement backoff strategies or use different models |
|
||||
| **Memory Usage** | Monitor system resources with large-scale operations |
|
||||
| **Dashboard Performance** | Disable dashboard for batch processing |
|
||||
|
||||
## Contributing
|
||||
|
||||
HeavySwarm is part of the Swarms ecosystem. Contributions are welcome for:
|
||||
|
||||
- New agent specializations
|
||||
|
||||
- Performance optimizations
|
||||
|
||||
- Integration capabilities
|
||||
|
||||
- Documentation improvements
|
||||
|
||||
|
||||
## Acknowledgments
|
||||
|
||||
- Inspired by X.AI's Grok heavy implementation
|
||||
|
||||
- Built on the Swarms framework
|
||||
|
||||
- Utilizes Rich for dashboard visualization
|
||||
|
||||
- Powered by advanced language models
|
||||
|
@ -0,0 +1,392 @@
|
||||
# Technical Support
|
||||
|
||||
*Getting Help with the Swarms Multi-Agent Framework*
|
||||
|
||||
---
|
||||
|
||||
## **Getting Started with Support**
|
||||
|
||||
The Swarms team is committed to providing exceptional technical support to help you build production-grade multi-agent systems. Whether you're experiencing bugs, need implementation guidance, or want to request new features, we have multiple channels to ensure you get the help you need quickly and efficiently.
|
||||
|
||||
---
|
||||
|
||||
## **Support Channels Overview**
|
||||
|
||||
| **Support Type** | **Best For** | **Response Time** | **Channel** |
|
||||
|------------------|--------------|-------------------|-------------|
|
||||
| **Bug Reports** | Code issues, errors, unexpected behavior | < 24 hours | [GitHub Issues](https://github.com/kyegomez/swarms/issues) |
|
||||
| **Feature Requests** | New capabilities, enhancements | < 48 hours | [Email kye@swarms.world](mailto:kye@swarms.world) |
|
||||
| **Private Issues** | Security concerns, enterprise consulting | < 4 hours | [Book Support Call](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true) |
|
||||
| **Real-time Help** | Quick questions, community discussions | Immediate | [Discord Community](https://discord.gg/jM3Z6M9uMq) |
|
||||
| **Documentation** | Usage guides, examples, tutorials | Self-service | [docs.swarms.world](https://docs.swarms.world) |
|
||||
|
||||
---
|
||||
|
||||
## **Reporting Bugs & Technical Issues**
|
||||
|
||||
### **When to Use GitHub Issues**
|
||||
|
||||
Use GitHub Issues for:
|
||||
|
||||
- Code bugs and errors
|
||||
|
||||
- Installation problems
|
||||
|
||||
- Documentation issues
|
||||
|
||||
- Performance problems
|
||||
|
||||
- API inconsistencies
|
||||
|
||||
- Public technical discussions
|
||||
|
||||
### **How to Create an Effective Bug Report**
|
||||
|
||||
1. **Visit our Issues page**: [https://github.com/kyegomez/swarms/issues](https://github.com/kyegomez/swarms/issues)
|
||||
|
||||
2. **Search existing issues** to avoid duplicates
|
||||
|
||||
3. **Click "New Issue"** and select the appropriate template
|
||||
|
||||
4. **Include the following information**:
|
||||
|
||||
## Bug Description
|
||||
|
||||
A clear description of what the bug is.
|
||||
|
||||
## Environment
|
||||
|
||||
- Swarms version: [e.g., 5.9.2]
|
||||
|
||||
- Python version: [e.g., 3.9.0]
|
||||
|
||||
- Operating System: [e.g., Ubuntu 20.04, macOS 14, Windows 11]
|
||||
|
||||
- Model provider: [e.g., OpenAI, Anthropic, Groq]
|
||||
|
||||
|
||||
## Steps to Reproduce
|
||||
|
||||
1. Step one
|
||||
2. Step two
|
||||
3. Step three
|
||||
|
||||
## Expected Behavior
|
||||
|
||||
What you expected to happen.
|
||||
|
||||
## Actual Behavior
|
||||
|
||||
What actually happened.
|
||||
|
||||
## Code Sample
|
||||
|
||||
```python
|
||||
# Minimal code that reproduces the issue
|
||||
from swarms import Agent
|
||||
|
||||
agent = Agent(model_name="gpt-4o-mini")
|
||||
result = agent.run("Your task here")
|
||||
```
|
||||
|
||||
## Error Messages
|
||||
|
||||
Paste any error messages or stack traces here
|
||||
|
||||
|
||||
## Additional Context
|
||||
|
||||
Any other context, screenshots, or logs that might help.
|
||||
|
||||
### **Issue Templates Available**
|
||||
|
||||
| Template | Use Case |
|
||||
|----------|----------|
|
||||
| **Bug Report** | Standard bug reporting template |
|
||||
| **Documentation** | Issues with docs, guides, examples |
|
||||
| **Feature Request** | Suggesting new functionality |
|
||||
| **Question** | General questions about usage |
|
||||
| **Enterprise** | Enterprise-specific issues |
|
||||
|
||||
---
|
||||
|
||||
## **Private & Enterprise Support**
|
||||
|
||||
### **When to Book a Private Support Call**
|
||||
|
||||
Book a private consultation for:

- Security vulnerabilities or concerns

- Enterprise deployment guidance

- Custom implementation consulting

- Architecture review sessions

- Performance optimization

- Integration troubleshooting

- Strategic technical planning

### **How to Schedule Support**

1. **Visit our booking page**: [https://cal.com/swarms/swarms-technical-support?overlayCalendar=true](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true)

2. **Select an available time** that works for your timezone

3. **Provide details** about your issue or requirements

4. **Prepare for the call**:

    - Have your code/environment ready

    - Prepare specific questions

    - Include relevant error messages or logs

    - Share your use case and goals

### **What to Expect**

- **Direct access** to Swarms core team members

- **Screen sharing** for live debugging

- **Custom solutions** tailored to your needs

- **Follow-up resources** and documentation

- **Priority support** for implementation

---

## **Real-Time Community Support**

### **Join Our Discord Community**

Get instant help from our active community of developers and core team members.

**Discord Benefits:**

- **24/7 availability** - Someone is always online

- **Instant responses** - Get help in real-time

- **Community wisdom** - Learn from other developers

- **Specialized channels** - Find the right help quickly

- **Latest updates** - Stay informed about new releases

### **Discord Channels Guide**

| Channel | Purpose |
|---------|---------|
| **#general** | General discussions and introductions |
| **#technical-support** | Technical questions and troubleshooting |
| **#showcase** | Share your Swarms projects and demos |
| **#feature-requests** | Discuss potential new features |
| **#announcements** | Official updates and releases |
| **#resources** | Helpful links, tutorials, and guides |

### **Getting Help on Discord**

1. **Join here**: [https://discord.gg/jM3Z6M9uMq](https://discord.gg/jM3Z6M9uMq)

2. **Read the rules** and introduce yourself in #general

3. **Use the right channel** for your question type

4. **Provide context** when asking questions:

    ```
    Python version: 3.9
    Swarms version: 5.9.2
    OS: macOS 14
    Question: How do I implement custom tools with MCP?
    What I tried: [paste your code]
    Error: [paste error message]
    ```

5. **Be patient and respectful** - our community loves helping!

---

## **Feature Requests & Enhancement Suggestions**

### **When to Email for Feature Requests**

Contact us directly for:

- Major new framework capabilities

- Architecture enhancements

- New model provider integrations

- Enterprise-specific features

- Analytics and monitoring tools

- UI/UX improvements

### **How to Submit Feature Requests**

**Email**: [kye@swarms.world](mailto:kye@swarms.world)

**Subject Format**: `[FEATURE REQUEST] Brief description`

**Include in your email**:

```markdown
## Feature Description
Clear description of the proposed feature

## Use Case
Why this feature is needed and how it would be used

## Business Impact
How this would benefit the Swarms ecosystem

## Technical Requirements
Any specific technical considerations

## Priority Level
- Low: Nice to have
- Medium: Would significantly improve workflow
- High: Critical for adoption/production use

## Alternatives Considered
Other solutions you've explored

## Implementation Ideas
Any thoughts on how this could be implemented
```

### **Feature Request Process**

1. **Email submission** with detailed requirements
2. **Initial review** within 48 hours
3. **Technical feasibility** assessment
4. **Community feedback** gathering (if applicable)
5. **Roadmap planning** and timeline estimation
6. **Development** and testing
7. **Release** with documentation

---

## **Self-Service Resources**

Before reaching out for support, check these resources:

### **Documentation**

- **[Complete Documentation](https://docs.swarms.world)** - Comprehensive guides and API reference

- **[Installation Guide](https://docs.swarms.world/en/latest/swarms/install/install/)** - Setup and configuration

- **[Quick Start](https://docs.swarms.world/en/latest/quickstart/)** - Get up and running fast

- **[Examples Gallery](https://docs.swarms.world/en/latest/examples/)** - Real-world use cases

### **Common Solutions**

| Issue | Solution |
|-------|----------|
| **Installation fails** | Check [Environment Setup](https://docs.swarms.world/en/latest/swarms/install/env/) |
| **Model not responding** | Verify API keys in environment variables |
| **Import errors** | Ensure latest version: `pip install -U swarms` |
| **Memory issues** | Review [Performance Guide](https://docs.swarms.world/en/latest/swarms/framework/test/) |
| **Agent not working** | Check [Basic Agent Example](https://docs.swarms.world/en/latest/swarms/examples/basic_agent/) |

### **Video Tutorials**

- **[YouTube Channel](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ)** - Step-by-step tutorials

- **[Live Coding Sessions](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ)** - Real-world implementations

---

## **Support Checklist**

Before requesting support, please:

- [ ] **Check the documentation** for existing solutions

- [ ] **Search GitHub issues** for similar problems

- [ ] **Update to latest version**: `pip install -U swarms`

- [ ] **Verify environment setup** and API keys

- [ ] **Test with minimal code** to isolate the issue

- [ ] **Gather error messages** and relevant logs

- [ ] **Note your environment** (OS, Python version, Swarms version)

---

## **Support Best Practices**

### **For Faster Resolution**

1. **Be Specific**: Provide exact error messages and steps to reproduce
2. **Include Code**: Share minimal, runnable examples
3. **Environment Details**: Always include version information
4. **Search First**: Check if your issue has been addressed before
5. **One Issue Per Report**: Don't combine multiple problems
6. **Follow Up**: Respond promptly to requests for additional information

### **Response Time Expectations**

| Priority | Response Time | Resolution Time |
|----------|---------------|-----------------|
| **Critical** (Production down) | < 2 hours | < 24 hours |
| **High** (Major functionality blocked) | < 8 hours | < 48 hours |
| **Medium** (Feature issues) | < 24 hours | < 1 week |
| **Low** (Documentation, enhancements) | < 48 hours | Next release |

---

## **Contributing Back**

Help improve support for everyone:

- **Answer questions** in Discord or GitHub

- **Improve documentation** with your learnings

- **Share examples** of successful implementations

- **Report bugs** you discover

- **Suggest improvements** to this support process

**Your contributions make Swarms better for everyone.**

---

## **Support Channel Summary**

| Urgency | Best Channel |
|---------|-------------|
| **Emergency** | [Book Immediate Call](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true) |
| **Urgent** | [Discord #technical-support](https://discord.gg/jM3Z6M9uMq) |
| **Standard** | [GitHub Issues](https://github.com/kyegomez/swarms/issues) |
| **Feature Ideas** | [Email kye@swarms.world](mailto:kye@swarms.world) |

**We're here to help you succeed with Swarms.**

@ -0,0 +1,242 @@
# Swarms API Clients

*Production-Ready Client Libraries for Every Programming Language*

## Overview

The Swarms API provides official client libraries across multiple programming languages, enabling developers to integrate powerful multi-agent AI capabilities into their applications with ease. Our clients are designed for production use, featuring robust error handling, comprehensive documentation, and seamless integration with existing codebases.

Whether you're building enterprise applications, research prototypes, or innovative AI products, our client libraries provide the tools you need to harness the full power of the Swarms platform.

## Available Clients

| Language | Status | Repository | Documentation | Description |
|----------|--------|------------|---------------|-------------|
| **Python** | ✅ **Available** | [swarms-sdk](https://github.com/The-Swarm-Corporation/swarms-sdk) | [Docs](https://docs.swarms.world/en/latest/swarms_cloud/python_client/) | Production-grade Python client with comprehensive error handling, retry logic, and extensive examples |
| **TypeScript/Node.js** | ✅ **Available** | [swarms-ts](https://github.com/The-Swarm-Corporation/swarms-ts) | 📚 *Coming Soon* | Modern TypeScript client with full type safety, Promise-based API, and Node.js compatibility |
| **Go** | ✅ **Available** | [swarms-client-go](https://github.com/The-Swarm-Corporation/swarms-client-go) | 📚 *Coming Soon* | High-performance Go client optimized for concurrent operations and microservices |
| **Java** | ✅ **Available** | [swarms-java](https://github.com/The-Swarm-Corporation/swarms-java) | 📚 *Coming Soon* | Enterprise Java client with Spring Boot integration and comprehensive SDK features |
| **Kotlin** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | Modern Kotlin client with coroutines support and Android compatibility |
| **Ruby** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | Elegant Ruby client with Rails integration and gem packaging |
| **Rust** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | Ultra-fast Rust client with memory safety and zero-cost abstractions |
| **C#/.NET** | 🚧 **Coming Soon** | *In Development* | 📚 *Coming Soon* | .NET client with async/await support and NuGet packaging |
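For a quick orientation, the sketch below shows a minimal use of the Python client. The `SwarmsClient` constructor and the `health.check()` / `models.list_available()` calls are taken from the example scripts included later in this commit; anything beyond that should be treated as illustrative.

```python
import os

from dotenv import load_dotenv
from swarms_client import SwarmsClient

# Load SWARMS_API_KEY from a local .env file
load_dotenv()

client = SwarmsClient(api_key=os.getenv("SWARMS_API_KEY"))

# Verify connectivity and list the models currently exposed by the API
print(client.health.check())
print(client.models.list_available())
```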
## Client Features

All Swarms API clients are built with the following enterprise-grade features:

### 🔧 **Core Functionality**

| Feature | Description |
|------------------------|--------------------------------------------------------------------|
| **Full API Coverage** | Complete access to all Swarms API endpoints |
| **Type Safety** | Strongly-typed interfaces for all request/response objects |
| **Error Handling** | Comprehensive error handling with detailed error messages |
| **Retry Logic** | Automatic retries with exponential backoff for transient failures |
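The clients handle transient failures internally, but if you want an additional application-level retry around a specific call, a generic exponential-backoff wrapper such as the following works. This is an illustrative sketch, not part of the client API.

```python
import time


def call_with_backoff(fn, max_retries=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on any exception."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise
            # Wait 1s, 2s, 4s, ... between attempts
            time.sleep(base_delay * (2**attempt))


# Usage: result = call_with_backoff(lambda: client.health.check())
```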
---

### 🚀 **Performance & Reliability**

| Feature | Description |
|---------------------------|------------------------------------------------------|
| **Connection Pooling** | Efficient HTTP connection management |
| **Rate Limiting** | Built-in rate limit handling and backoff strategies |
| **Timeout Configuration** | Configurable timeouts for different operation types |
| **Streaming Support** | Real-time streaming for long-running operations |

---

### 🛡️ **Security & Authentication**

| Feature | Description |
|--------------------------------|-------------------------------------------------|
| **API Key Management** | Secure API key handling and rotation |
| **TLS/SSL** | End-to-end encryption for all communications |
| **Request Signing** | Optional request signing for enhanced security |
| **Environment Configuration** | Secure environment-based configuration |

---

### 📊 **Monitoring & Debugging**

| Feature | Description |
|-------------------------------|------------------------------------------------------|
| **Comprehensive Logging** | Detailed logging for debugging and monitoring |
| **Request/Response Tracing** | Full request/response tracing capabilities |
| **Metrics Integration** | Built-in metrics for monitoring client performance |
| **Debug Mode** | Enhanced debugging features for development |

## Client-Specific Features

### Python Client

| Feature | Description |
|---------------------------|----------------------------------------------|
| **Async Support** | Full async/await support with `asyncio` |
| **Pydantic Integration** | Type-safe request/response models |
| **Context Managers** | Resource management with context managers |
| **Rich Logging** | Integration with Python's `logging` module |

---

### TypeScript/Node.js Client

| Feature | Description |
|------------------------|--------------------------------------------------|
| **TypeScript First** | Built with TypeScript for maximum type safety |
| **Promise-Based** | Modern Promise-based API with async/await |
| **Browser Compatible** | Works in both Node.js and modern browsers |
| **Zero Dependencies** | Minimal dependency footprint |

---

### Go Client

| Feature | Description |
|------------------------|--------------------------------------------------|
| **Context Support** | Full `context.Context` support for cancellation |
| **Structured Logging** | Integration with structured logging libraries |
| **Concurrency Safe** | Thread-safe design for concurrent operations |
| **Minimal Allocation** | Optimized for minimal memory allocation |

---

### Java Client

| Feature | Description |
|--------------------------|--------------------------------------------|
| **Spring Boot Ready** | Built-in Spring Boot auto-configuration |
| **Reactive Support** | Optional reactive streams support |
| **Enterprise Features** | JMX metrics, health checks, and more |
| **Maven & Gradle** | Available on Maven Central |

## Advanced Configuration

### Environment Variables

All clients support standard environment variables for configuration:

```bash
# API Configuration
SWARMS_API_KEY=your_api_key_here
SWARMS_BASE_URL=https://api.swarms.world

# Client Configuration
SWARMS_TIMEOUT=60
SWARMS_MAX_RETRIES=3
SWARMS_LOG_LEVEL=INFO
```
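As a concrete example, here is a minimal Python sketch that reads these variables and constructs a client. Only `api_key` appears in the example scripts in this commit; passing `base_url`, `timeout`, and `max_retries` to the constructor is an assumption, so check the client documentation for the exact parameter names.

```python
import os

from dotenv import load_dotenv
from swarms_client import SwarmsClient

load_dotenv()

# Only api_key is confirmed by the example scripts in this commit;
# the remaining keyword arguments are assumptions and may differ.
client = SwarmsClient(
    api_key=os.environ["SWARMS_API_KEY"],
    base_url=os.getenv("SWARMS_BASE_URL", "https://api.swarms.world"),
    timeout=int(os.getenv("SWARMS_TIMEOUT", "60")),
    max_retries=int(os.getenv("SWARMS_MAX_RETRIES", "3")),
)
```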
## Community & Support

### 📚 **Documentation & Resources**

| Resource | Link |
|-----------------------------|----------------------------------------------------------------------------|
| Complete API Documentation | [View Docs](https://docs.swarms.world/en/latest/swarms_cloud/swarms_api/) |
| Python Client Docs | [View Docs](https://docs.swarms.world/en/latest/swarms_cloud/python_client/) |
| API Examples & Tutorials | [View Examples](https://docs.swarms.world/en/latest/examples/) |

---

### 💬 **Community Support**

| Community Channel | Description | Link |
|---------------------|-----------------------------------------------------------------------------|------------------------------------------------------------------------------------|
| Discord Community | Join our active developer community for real-time support and discussions | [Join Discord](https://discord.gg/jM3Z6M9uMq) |
| GitHub Discussions | Ask questions and share ideas | [GitHub Discussions](https://github.com/The-Swarm-Corporation/swarms/discussions) |
| Twitter/X | Follow for updates and announcements | [Twitter/X](https://x.com/swarms_corp) |

---

### 🐛 **Issue Reporting & Contributions**

| Contribution Area | Description | Link |
|---------------------|------------------------------------------|--------------------------------------------------------------------------------|
| Report Bugs | Help us improve by reporting issues | [Report Bugs](https://github.com/The-Swarm-Corporation/swarms/issues) |
| Feature Requests | Suggest new features and improvements | [Feature Requests](https://github.com/The-Swarm-Corporation/swarms/issues) |
| Contributing Guide | Learn how to contribute to the project | [Contributing Guide](https://docs.swarms.world/en/latest/contributors/main/) |

---

### 📧 **Direct Support**

| Support Type | Contact Information |
|----------------------|---------------------------------------------------------------------------------------|
| Support Call | [Book a call](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true) |
| Enterprise Support | Contact us for dedicated enterprise support options |

## Contributing to Client Development

We welcome contributions to all our client libraries! Here's how you can help:

### 🛠️ **Development**

| Task | Description |
|------------------------------------------|---------------------------------------------------|
| Implement new features and endpoints | Add new API features and expand client coverage |
| Improve error handling and retry logic | Enhance robustness and reliability |
| Add comprehensive test coverage | Ensure code quality and prevent regressions |
| Optimize performance and memory usage | Improve speed and reduce resource consumption |

---

### 📝 **Documentation**

| Task | Description |
|-------------------------------|-----------------------------------------------------|
| Write tutorials and examples | Create guides and sample code for users |
| Improve API documentation | Clarify and expand reference docs |
| Create integration guides | Help users connect clients to their applications |
| Translate documentation | Make docs accessible in multiple languages |

---

### 🧪 **Testing**

| Task | Description |
|------------------------------------------|----------------------------------------------------|
| Add unit and integration tests | Test individual components and end-to-end flows |
| Test with different language versions | Ensure compatibility across environments |
| Performance benchmarking | Measure and optimize speed and efficiency |
| Security testing | Identify and fix vulnerabilities |

---

### 📦 **Packaging**

| Task | Description |
|---------------------------------------------|---------------------------------------------------|
| Package managers (npm, pip, Maven, etc.) | Publish to popular package repositories |
| Distribution optimization | Streamline builds and reduce package size |
| Version management | Maintain clear versioning and changelogs |
| Release automation | Automate build, test, and deployment pipelines |

## Enterprise Features

For enterprise customers, we offer additional features and support:

### 🏢 **Enterprise Client Features**

| Feature | Description |
|-----------------------------|----------------------------------------------------------|
| **Priority Support** | Dedicated support team with SLA guarantees |
| **Custom Integrations** | Tailored integrations for your specific needs |
| **On-Premises Deployment** | Support for on-premises or private cloud deployments |
| **Advanced Security** | Enhanced security features and compliance support |
| **Training & Onboarding** | Comprehensive training for your development team |

### 📞 **Contact Enterprise Sales**

| Contact Type | Details |
|---------------------|------------------------------------------------------------------------------------------|
| **Sales** | [kye@swarms.world](mailto:kye@swarms.world) |
| **Schedule Demo** | [Book a Demo](https://cal.com/swarms/swarms-technical-support?overlayCalendar=true) |
| **Partnership** | [kye@swarms.world](mailto:kye@swarms.world) |

---

*Ready to build the future with AI agents? Start with any of our client libraries and join our growing community of developers building the next generation of intelligent applications.*

@ -0,0 +1,40 @@
import json
import os

from dotenv import load_dotenv
from swarms_client import SwarmsClient
from swarms_client.types import AgentSpecParam

# Load SWARMS_API_KEY from a local .env file
load_dotenv()

client = SwarmsClient(api_key=os.getenv("SWARMS_API_KEY"))

# Define a single-agent specification for a virtual doctor
agent_spec = AgentSpecParam(
    agent_name="doctor_agent",
    description="A virtual doctor agent that provides evidence-based, safe, and empathetic medical advice for common health questions. Always reminds users to consult a healthcare professional for diagnoses or prescriptions.",
    task="What is the best medicine for a cold?",
    model_name="claude-3-5-sonnet-20241022",
    system_prompt=(
        "You are a highly knowledgeable, ethical, and empathetic virtual doctor. "
        "Always provide evidence-based, safe, and practical medical advice. "
        "If a question requires a diagnosis, prescription, or urgent care, remind the user to consult a licensed healthcare professional. "
        "Be clear, concise, and avoid unnecessary medical jargon. "
        "Never provide information that could be unsafe or misleading. "
        "If unsure, say so and recommend seeing a real doctor."
    ),
    max_loops=1,
    temperature=0.4,
    role="doctor",
)

# Uncomment to run the agent against the task defined above
# response = client.agent.run(
#     agent_config=agent_spec,
#     task="What is the best medicine for a cold?",
# )
# print(response)

# Inspect the API: available models, service health, logs, rate limits, and swarm availability
print(json.dumps(client.models.list_available(), indent=4))
print(json.dumps(client.health.check(), indent=4))
print(json.dumps(client.swarms.get_logs(), indent=4))
print(json.dumps(client.client.rate.get_limits(), indent=4))
print(json.dumps(client.swarms.check_available(), indent=4))

@ -0,0 +1,12 @@
import os

from dotenv import load_dotenv
from swarms_client import SwarmsClient

# Load SWARMS_API_KEY from a local .env file
load_dotenv()

client = SwarmsClient(api_key=os.getenv("SWARMS_API_KEY"))

# Print the current rate limits for this API key
response = client.client.rate.get_limits()
print(response)

# Print the API health status
print(client.health.check())

@ -0,0 +1,302 @@
|
||||
"""
|
||||
Agent Multi-Agent Communication Examples
|
||||
|
||||
This file demonstrates the multi-agent communication methods available in the Agent class:
|
||||
- talk_to: Direct communication between two agents
|
||||
- talk_to_multiple_agents: Concurrent communication with multiple agents
|
||||
- receive_message: Process incoming messages from other agents
|
||||
- send_agent_message: Send formatted messages to other agents
|
||||
|
||||
Run: python agent_communication_examples.py
|
||||
"""
|
||||
|
||||
import os
|
||||
from swarms import Agent
|
||||
|
||||
# Set up your API key
|
||||
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
|
||||
|
||||
|
||||
def example_1_direct_agent_communication():
|
||||
"""Example 1: Direct communication between two agents using talk_to method"""
|
||||
print("=" * 60)
|
||||
print("Example 1: Direct Agent Communication")
|
||||
print("=" * 60)
|
||||
|
||||
# Create two specialized agents
|
||||
researcher = Agent(
|
||||
agent_name="Research-Agent",
|
||||
system_prompt="You are a research specialist focused on gathering and analyzing information. Provide detailed, fact-based responses.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
analyst = Agent(
|
||||
agent_name="Analysis-Agent",
|
||||
system_prompt="You are an analytical specialist focused on interpreting research data and providing strategic insights.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
# Agent communication
|
||||
print("Researcher talking to Analyst...")
|
||||
research_result = researcher.talk_to(
|
||||
agent=analyst,
|
||||
task="Analyze the market trends for renewable energy stocks and provide investment recommendations",
|
||||
)
|
||||
|
||||
print(f"\nFinal Analysis Result:\n{research_result}")
|
||||
return research_result
|
||||
|
||||
|
||||
def example_2_multiple_agent_communication():
|
||||
"""Example 2: Broadcasting to multiple agents using talk_to_multiple_agents"""
|
||||
print("\n" + "=" * 60)
|
||||
print("Example 2: Multiple Agent Communication")
|
||||
print("=" * 60)
|
||||
|
||||
# Create multiple specialized agents
|
||||
agents = [
|
||||
Agent(
|
||||
agent_name="Financial-Analyst",
|
||||
system_prompt="You are a financial analysis expert specializing in stock valuation and market trends.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
),
|
||||
Agent(
|
||||
agent_name="Risk-Assessor",
|
||||
system_prompt="You are a risk assessment specialist focused on identifying potential investment risks.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
),
|
||||
Agent(
|
||||
agent_name="Market-Researcher",
|
||||
system_prompt="You are a market research expert specializing in industry analysis and competitive intelligence.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
),
|
||||
]
|
||||
|
||||
coordinator = Agent(
|
||||
agent_name="Coordinator-Agent",
|
||||
system_prompt="You coordinate multi-agent analysis and synthesize diverse perspectives into actionable insights.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
# Broadcast to multiple agents
|
||||
print("Coordinator broadcasting to multiple agents...")
|
||||
responses = coordinator.talk_to_multiple_agents(
|
||||
agents=agents,
|
||||
task="Evaluate the investment potential of Tesla stock for the next quarter",
|
||||
)
|
||||
|
||||
# Process responses
|
||||
print("\nResponses from all agents:")
|
||||
for i, response in enumerate(responses):
|
||||
if response:
|
||||
print(f"\n{agents[i].agent_name} Response:")
|
||||
print("-" * 40)
|
||||
print(
|
||||
response[:200] + "..."
|
||||
if len(response) > 200
|
||||
else response
|
||||
)
|
||||
else:
|
||||
print(f"\n{agents[i].agent_name}: Failed to respond")
|
||||
|
||||
return responses
|
||||
|
||||
|
||||
def example_3_message_handling():
|
||||
"""Example 3: Message handling using receive_message and send_agent_message"""
|
||||
print("\n" + "=" * 60)
|
||||
print("Example 3: Message Handling")
|
||||
print("=" * 60)
|
||||
|
||||
# Create an agent that can receive messages
|
||||
support_agent = Agent(
|
||||
agent_name="Support-Agent",
|
||||
system_prompt="You provide helpful support and assistance. Always be professional and solution-oriented.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
notification_agent = Agent(
|
||||
agent_name="Notification-Agent",
|
||||
system_prompt="You send notifications and updates to other systems and agents.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
# Example of receiving a message
|
||||
print("Support agent receiving message...")
|
||||
received_response = support_agent.receive_message(
|
||||
agent_name="Customer-Service-Agent",
|
||||
task="A customer is asking about our refund policies for software purchases. Can you provide guidance?",
|
||||
)
|
||||
print(f"\nSupport Agent Response:\n{received_response}")
|
||||
|
||||
# Example of sending a message
|
||||
print("\nNotification agent sending message...")
|
||||
sent_result = notification_agent.send_agent_message(
|
||||
agent_name="Task-Manager-Agent",
|
||||
message="Customer support ticket #12345 has been resolved successfully",
|
||||
)
|
||||
print(f"\nNotification Result:\n{sent_result}")
|
||||
|
||||
return received_response, sent_result
|
||||
|
||||
|
||||
def example_4_sequential_workflow():
|
||||
"""Example 4: Sequential agent workflow using communication methods"""
|
||||
print("\n" + "=" * 60)
|
||||
print("Example 4: Sequential Agent Workflow")
|
||||
print("=" * 60)
|
||||
|
||||
# Create specialized agents for a document processing workflow
|
||||
extractor = Agent(
|
||||
agent_name="Data-Extractor",
|
||||
system_prompt="You extract key information and data points from documents. Focus on accuracy and completeness.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
validator = Agent(
|
||||
agent_name="Data-Validator",
|
||||
system_prompt="You validate and verify extracted data for accuracy, completeness, and consistency.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
formatter = Agent(
|
||||
agent_name="Data-Formatter",
|
||||
system_prompt="You format validated data into structured, professional reports and summaries.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
# Sequential processing workflow
|
||||
document_content = """
|
||||
Q3 Financial Report Summary:
|
||||
- Revenue: $2.5M (up 15% from Q2)
|
||||
- Expenses: $1.8M (operational costs increased by 8%)
|
||||
- Net Profit: $700K (improved profit margin of 28%)
|
||||
- New Customers: 1,200 (25% growth rate)
|
||||
- Customer Retention: 92%
|
||||
- Market Share: Increased to 12% in our sector
|
||||
"""
|
||||
|
||||
print("Starting sequential workflow...")
|
||||
|
||||
# Step 1: Extract data
|
||||
print("\nStep 1: Data Extraction")
|
||||
extracted_data = extractor.run(
|
||||
f"Extract key financial metrics from this report: {document_content}"
|
||||
)
|
||||
print(f"Extracted: {extracted_data[:150]}...")
|
||||
|
||||
# Step 2: Validate data
|
||||
print("\nStep 2: Data Validation")
|
||||
validated_data = extractor.talk_to(
|
||||
agent=validator,
|
||||
task=f"Please validate this extracted data for accuracy and completeness: {extracted_data}",
|
||||
)
|
||||
print(f"Validated: {validated_data[:150]}...")
|
||||
|
||||
# Step 3: Format data
|
||||
print("\nStep 3: Data Formatting")
|
||||
final_output = validator.talk_to(
|
||||
agent=formatter,
|
||||
task=f"Format this validated data into a structured executive summary: {validated_data}",
|
||||
)
|
||||
|
||||
print(f"\nFinal Report:\n{final_output}")
|
||||
return final_output
|
||||
|
||||
|
||||
def example_5_error_handling():
|
||||
"""Example 5: Robust communication with error handling"""
|
||||
print("\n" + "=" * 60)
|
||||
print("Example 5: Communication with Error Handling")
|
||||
print("=" * 60)
|
||||
|
||||
def safe_agent_communication(sender, receiver, message):
|
||||
"""Safely handle agent communication with comprehensive error handling"""
|
||||
try:
|
||||
print(
|
||||
f"Attempting communication: {sender.agent_name} -> {receiver.agent_name}"
|
||||
)
|
||||
response = sender.talk_to(agent=receiver, task=message)
|
||||
return {
|
||||
"success": True,
|
||||
"response": response,
|
||||
"error": None,
|
||||
}
|
||||
except Exception as e:
|
||||
print(f"Communication failed: {e}")
|
||||
return {
|
||||
"success": False,
|
||||
"response": None,
|
||||
"error": str(e),
|
||||
}
|
||||
|
||||
# Create agents
|
||||
agent_a = Agent(
|
||||
agent_name="Agent-A",
|
||||
system_prompt="You are a helpful assistant focused on providing accurate information.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
agent_b = Agent(
|
||||
agent_name="Agent-B",
|
||||
system_prompt="You are a knowledgeable expert in technology and business trends.",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
# Safe communication
|
||||
result = safe_agent_communication(
|
||||
sender=agent_a,
|
||||
receiver=agent_b,
|
||||
message="What are the latest trends in artificial intelligence and how might they impact business operations?",
|
||||
)
|
||||
|
||||
if result["success"]:
|
||||
print("\nCommunication successful!")
|
||||
print(f"Response: {result['response'][:200]}...")
|
||||
else:
|
||||
print(f"\nCommunication failed: {result['error']}")
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def main():
|
||||
"""Run all multi-agent communication examples"""
|
||||
print("🤖 Agent Multi-Agent Communication Examples")
|
||||
print(
|
||||
"This demonstrates the communication methods available in the Agent class"
|
||||
)
|
||||
|
||||
try:
|
||||
# Run all examples
|
||||
example_1_direct_agent_communication()
|
||||
example_2_multiple_agent_communication()
|
||||
example_3_message_handling()
|
||||
example_4_sequential_workflow()
|
||||
example_5_error_handling()
|
||||
|
||||
print("\n" + "=" * 60)
|
||||
print("✅ All examples completed successfully!")
|
||||
print("=" * 60)
|
||||
|
||||
except Exception as e:
|
||||
print(f"\n❌ Error running examples: {e}")
|
||||
print(
|
||||
"Make sure to set your OPENAI_API_KEY environment variable"
|
||||
)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
@ -0,0 +1,344 @@
|
||||
"""
|
||||
Multi-Agent Caching Example - Super Fast Agent Loading
|
||||
|
||||
This example demonstrates how to use the agent caching system with multiple agents
|
||||
to achieve 10-100x speedup in agent loading and reuse.
|
||||
"""
|
||||
|
||||
import time
|
||||
from swarms import Agent
|
||||
from swarms.utils.agent_cache import (
|
||||
cached_agent_loader,
|
||||
simple_lru_agent_loader,
|
||||
AgentCache,
|
||||
get_agent_cache_stats,
|
||||
clear_agent_cache,
|
||||
)
|
||||
|
||||
|
||||
def create_trading_team():
|
||||
"""Create a team of trading agents."""
|
||||
|
||||
# Create multiple agents for different trading strategies
|
||||
agents = [
|
||||
Agent(
|
||||
agent_name="Quantitative-Trading-Agent",
|
||||
agent_description="Advanced quantitative trading and algorithmic analysis agent",
|
||||
system_prompt="""You are an expert quantitative trading agent with deep expertise in:
|
||||
- Algorithmic trading strategies and implementation
|
||||
- Statistical arbitrage and market making
|
||||
- Risk management and portfolio optimization
|
||||
- High-frequency trading systems
|
||||
- Market microstructure analysis""",
|
||||
max_loops=1,
|
||||
model_name="gpt-4o-mini",
|
||||
temperature=0.1,
|
||||
),
|
||||
Agent(
|
||||
agent_name="Risk-Management-Agent",
|
||||
agent_description="Portfolio risk assessment and management specialist",
|
||||
system_prompt="""You are a risk management specialist focused on:
|
||||
- Portfolio risk assessment and stress testing
|
||||
- Value at Risk (VaR) calculations
|
||||
- Regulatory compliance monitoring
|
||||
- Risk mitigation strategies
|
||||
- Capital allocation optimization""",
|
||||
max_loops=1,
|
||||
model_name="gpt-4o-mini",
|
||||
temperature=0.2,
|
||||
),
|
||||
Agent(
|
||||
agent_name="Market-Analysis-Agent",
|
||||
agent_description="Real-time market analysis and trend identification",
|
||||
system_prompt="""You are a market analysis expert specializing in:
|
||||
- Technical analysis and chart patterns
|
||||
- Market sentiment analysis
|
||||
- Economic indicator interpretation
|
||||
- Trend identification and momentum analysis
|
||||
- Support and resistance level identification""",
|
||||
max_loops=1,
|
||||
model_name="gpt-4o-mini",
|
||||
temperature=0.3,
|
||||
),
|
||||
Agent(
|
||||
agent_name="Options-Trading-Agent",
|
||||
agent_description="Options strategies and derivatives trading specialist",
|
||||
system_prompt="""You are an options trading specialist with expertise in:
|
||||
- Options pricing models and Greeks analysis
|
||||
- Volatility trading strategies
|
||||
- Complex options spreads and combinations
|
||||
- Risk-neutral portfolio construction
|
||||
- Derivatives market making""",
|
||||
max_loops=1,
|
||||
model_name="gpt-4o-mini",
|
||||
temperature=0.15,
|
||||
),
|
||||
Agent(
|
||||
agent_name="ESG-Investment-Agent",
|
||||
agent_description="ESG-focused investment analysis and screening",
|
||||
system_prompt="""You are an ESG investment specialist focusing on:
|
||||
- Environmental, Social, and Governance criteria evaluation
|
||||
- Sustainable investment screening
|
||||
- Impact investing strategies
|
||||
- ESG risk assessment
|
||||
- Green finance and climate risk analysis""",
|
||||
max_loops=1,
|
||||
model_name="gpt-4o-mini",
|
||||
temperature=0.25,
|
||||
),
|
||||
]
|
||||
|
||||
return agents
|
||||
|
||||
|
||||
def basic_caching_example():
|
||||
"""Basic example of caching multiple agents."""
|
||||
print("=== Basic Multi-Agent Caching Example ===")
|
||||
|
||||
# Create our trading team
|
||||
trading_team = create_trading_team()
|
||||
print(f"Created {len(trading_team)} trading agents")
|
||||
|
||||
# First load - agents will be processed and cached
|
||||
print("\n🔄 First load (will cache agents)...")
|
||||
start_time = time.time()
|
||||
cached_team_1 = cached_agent_loader(trading_team)
|
||||
first_load_time = time.time() - start_time
|
||||
|
||||
print(
|
||||
f"✅ First load: {len(cached_team_1)} agents in {first_load_time:.3f}s"
|
||||
)
|
||||
|
||||
# Second load - agents will be retrieved from cache (super fast!)
|
||||
print("\n⚡ Second load (from cache)...")
|
||||
start_time = time.time()
|
||||
cached_team_2 = cached_agent_loader(trading_team)
|
||||
second_load_time = time.time() - start_time
|
||||
|
||||
print(
|
||||
f"🚀 Second load: {len(cached_team_2)} agents in {second_load_time:.3f}s"
|
||||
)
|
||||
print(
|
||||
f"💨 Speedup: {first_load_time/second_load_time:.1f}x faster!"
|
||||
)
|
||||
|
||||
# Show cache statistics
|
||||
stats = get_agent_cache_stats()
|
||||
print(f"📊 Cache stats: {stats}")
|
||||
|
||||
return cached_team_1
|
||||
|
||||
|
||||
def custom_cache_example():
|
||||
"""Example using a custom cache for specific use cases."""
|
||||
print("\n=== Custom Cache Example ===")
|
||||
|
||||
# Create a custom cache with specific settings
|
||||
custom_cache = AgentCache(
|
||||
max_memory_cache_size=50, # Cache up to 50 agents
|
||||
cache_dir="trading_team_cache", # Custom cache directory
|
||||
enable_persistent_cache=True, # Enable disk persistence
|
||||
auto_save_interval=120, # Auto-save every 2 minutes
|
||||
)
|
||||
|
||||
# Create agents
|
||||
trading_team = create_trading_team()
|
||||
|
||||
# Load with custom cache
|
||||
print("🔧 Loading with custom cache...")
|
||||
start_time = time.time()
|
||||
cached_team = cached_agent_loader(
|
||||
trading_team,
|
||||
cache_instance=custom_cache,
|
||||
parallel_loading=True,
|
||||
)
|
||||
load_time = time.time() - start_time
|
||||
|
||||
print(f"✅ Loaded {len(cached_team)} agents in {load_time:.3f}s")
|
||||
|
||||
# Get custom cache stats
|
||||
stats = custom_cache.get_cache_stats()
|
||||
print(f"📊 Custom cache stats: {stats}")
|
||||
|
||||
# Cleanup
|
||||
custom_cache.shutdown()
|
||||
|
||||
return cached_team
|
||||
|
||||
|
||||
def simple_lru_example():
|
||||
"""Example using the simple LRU cache approach."""
|
||||
print("\n=== Simple LRU Cache Example ===")
|
||||
|
||||
trading_team = create_trading_team()
|
||||
|
||||
# First load with simple LRU
|
||||
print("🔄 First load with simple LRU...")
|
||||
start_time = time.time()
|
||||
lru_team_1 = simple_lru_agent_loader(trading_team)
|
||||
first_time = time.time() - start_time
|
||||
|
||||
# Second load (cached)
|
||||
print("⚡ Second load with simple LRU...")
|
||||
start_time = time.time()
|
||||
simple_lru_agent_loader(trading_team)
|
||||
cached_time = time.time() - start_time
|
||||
|
||||
print(
|
||||
f"📈 Simple LRU - First: {first_time:.3f}s, Cached: {cached_time:.3f}s"
|
||||
)
|
||||
print(f"💨 Speedup: {first_time/cached_time:.1f}x faster!")
|
||||
|
||||
return lru_team_1
|
||||
|
||||
|
||||
def team_workflow_simulation():
|
||||
"""Simulate a real-world workflow with the cached trading team."""
|
||||
print("\n=== Team Workflow Simulation ===")
|
||||
|
||||
# Create and cache the team
|
||||
trading_team = create_trading_team()
|
||||
cached_team = cached_agent_loader(trading_team)
|
||||
|
||||
# Simulate multiple analysis sessions
|
||||
tasks = [
|
||||
"Analyze the current market conditions for AAPL",
|
||||
"What are the top 3 ETFs for gold coverage?",
|
||||
"Assess the risk profile of a tech-heavy portfolio",
|
||||
"Identify options strategies for volatile markets",
|
||||
"Evaluate ESG investment opportunities in renewable energy",
|
||||
]
|
||||
|
||||
print(
|
||||
f"🎯 Running {len(tasks)} analysis tasks with {len(cached_team)} agents..."
|
||||
)
|
||||
|
||||
session_start = time.time()
|
||||
|
||||
for i, (agent, task) in enumerate(zip(cached_team, tasks)):
|
||||
print(f"\n📋 Task {i+1}: {agent.agent_name}")
|
||||
print(f" Question: {task}")
|
||||
|
||||
task_start = time.time()
|
||||
|
||||
# Run the agent on the task
|
||||
response = agent.run(task)
|
||||
|
||||
task_time = time.time() - task_start
|
||||
print(f" ⏱️ Completed in {task_time:.2f}s")
|
||||
print(
|
||||
f" 💡 Response: {response[:100]}..."
|
||||
if len(response) > 100
|
||||
else f" 💡 Response: {response}"
|
||||
)
|
||||
|
||||
total_session_time = time.time() - session_start
|
||||
print(f"\n🏁 Total session time: {total_session_time:.2f}s")
|
||||
print(
|
||||
f"📊 Average task time: {total_session_time/len(tasks):.2f}s"
|
||||
)
|
||||
|
||||
|
||||
def performance_comparison():
|
||||
"""Compare performance with and without caching."""
|
||||
print("\n=== Performance Comparison ===")
|
||||
|
||||
# Create test agents
|
||||
test_agents = []
|
||||
for i in range(10):
|
||||
agent = Agent(
|
||||
agent_name=f"Test-Agent-{i:02d}",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt=f"You are test agent number {i}.",
|
||||
max_loops=1,
|
||||
)
|
||||
test_agents.append(agent)
|
||||
|
||||
# Test without caching (creating new agents each time)
|
||||
print("🔄 Testing without caching...")
|
||||
no_cache_times = []
|
||||
for _ in range(3):
|
||||
start_time = time.time()
|
||||
# Simulate creating new agents each time
|
||||
new_agents = []
|
||||
for agent in test_agents:
|
||||
new_agent = Agent(
|
||||
agent_name=agent.agent_name,
|
||||
model_name=agent.model_name,
|
||||
system_prompt=agent.system_prompt,
|
||||
max_loops=agent.max_loops,
|
||||
)
|
||||
new_agents.append(new_agent)
|
||||
no_cache_time = time.time() - start_time
|
||||
no_cache_times.append(no_cache_time)
|
||||
|
||||
avg_no_cache_time = sum(no_cache_times) / len(no_cache_times)
|
||||
|
||||
# Clear cache for fair comparison
|
||||
clear_agent_cache()
|
||||
|
||||
# Test with caching (first load)
|
||||
print("🔧 Testing with caching (first load)...")
|
||||
start_time = time.time()
|
||||
cached_agent_loader(test_agents)
|
||||
first_cache_time = time.time() - start_time
|
||||
|
||||
# Test with caching (subsequent loads)
|
||||
print("⚡ Testing with caching (subsequent loads)...")
|
||||
cache_times = []
|
||||
for _ in range(3):
|
||||
start_time = time.time()
|
||||
cached_agent_loader(test_agents)
|
||||
cache_time = time.time() - start_time
|
||||
cache_times.append(cache_time)
|
||||
|
||||
avg_cache_time = sum(cache_times) / len(cache_times)
|
||||
|
||||
# Results
|
||||
print(f"\n📊 Performance Results for {len(test_agents)} agents:")
|
||||
print(f" 🐌 No caching (avg): {avg_no_cache_time:.4f}s")
|
||||
print(f" 🔧 Cached (first load): {first_cache_time:.4f}s")
|
||||
print(f" 🚀 Cached (avg): {avg_cache_time:.4f}s")
|
||||
print(
|
||||
f" 💨 Cache speedup: {avg_no_cache_time/avg_cache_time:.1f}x faster!"
|
||||
)
|
||||
|
||||
# Final cache stats
|
||||
final_stats = get_agent_cache_stats()
|
||||
print(f" 📈 Final cache stats: {final_stats}")
|
||||
|
||||
|
||||
def main():
|
||||
"""Run all examples to demonstrate multi-agent caching."""
|
||||
print("🤖 Multi-Agent Caching System Examples")
|
||||
print("=" * 50)
|
||||
|
||||
try:
|
||||
# Run examples
|
||||
basic_caching_example()
|
||||
custom_cache_example()
|
||||
simple_lru_example()
|
||||
performance_comparison()
|
||||
team_workflow_simulation()
|
||||
|
||||
print("\n✅ All examples completed successfully!")
|
||||
print("\n🎯 Key Benefits of Multi-Agent Caching:")
|
||||
print("• 🚀 10-100x faster agent loading from cache")
|
||||
print(
|
||||
"• 💾 Persistent disk cache survives application restarts"
|
||||
)
|
||||
print("• 🧠 Intelligent LRU memory management")
|
||||
print("• 🔄 Background preloading for zero-latency access")
|
||||
print("• 📊 Detailed performance monitoring")
|
||||
print("• 🛡️ Thread-safe with memory leak prevention")
|
||||
print("• ⚡ Parallel processing for multiple agents")
|
||||
|
||||
except Exception as e:
|
||||
print(f"❌ Error running examples: {e}")
|
||||
import traceback
|
||||
|
||||
traceback.print_exc()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
@ -0,0 +1,128 @@
|
||||
"""
|
||||
Quick Start: Agent Caching with Multiple Agents
|
||||
|
||||
This is a simple example showing how to use agent caching with your existing agents
|
||||
for super fast loading and reuse.
|
||||
"""
|
||||
|
||||
import time
|
||||
from swarms import Agent
|
||||
from swarms.utils.agent_cache import cached_agent_loader
|
||||
|
||||
|
||||
def main():
|
||||
"""Simple example of caching multiple agents."""
|
||||
|
||||
# Create your agents as usual
|
||||
agents = [
|
||||
Agent(
|
||||
agent_name="Quantitative-Trading-Agent",
|
||||
agent_description="Advanced quantitative trading and algorithmic analysis agent",
|
||||
system_prompt="""You are an expert quantitative trading agent with deep expertise in:
|
||||
- Algorithmic trading strategies and implementation
|
||||
- Statistical arbitrage and market making
|
||||
- Risk management and portfolio optimization
|
||||
- High-frequency trading systems
|
||||
- Market microstructure analysis
|
||||
|
||||
Your core responsibilities include:
|
||||
1. Developing and backtesting trading strategies
|
||||
2. Analyzing market data and identifying alpha opportunities
|
||||
3. Implementing risk management frameworks
|
||||
4. Optimizing portfolio allocations
|
||||
5. Conducting quantitative research
|
||||
6. Monitoring market microstructure
|
||||
7. Evaluating trading system performance
|
||||
|
||||
You maintain strict adherence to:
|
||||
- Mathematical rigor in all analyses
|
||||
- Statistical significance in strategy development
|
||||
- Risk-adjusted return optimization
|
||||
- Market impact minimization
|
||||
- Regulatory compliance
|
||||
- Transaction cost analysis
|
||||
- Performance attribution
|
||||
|
||||
You communicate in precise, technical terms while maintaining clarity for stakeholders.""",
|
||||
max_loops=1,
|
||||
model_name="gpt-4o-mini",
|
||||
dynamic_temperature_enabled=True,
|
||||
output_type="str-all-except-first",
|
||||
streaming_on=True,
|
||||
print_on=True,
|
||||
telemetry_enable=False,
|
||||
),
|
||||
Agent(
|
||||
agent_name="Risk-Manager",
|
||||
system_prompt="You are a risk management specialist.",
|
||||
max_loops=1,
|
||||
model_name="gpt-4o-mini",
|
||||
),
|
||||
Agent(
|
||||
agent_name="Market-Analyst",
|
||||
system_prompt="You are a market analysis expert.",
|
||||
max_loops=1,
|
||||
model_name="gpt-4o-mini",
|
||||
),
|
||||
]
|
||||
|
||||
print(f"Created {len(agents)} agents")
|
||||
|
||||
# BEFORE: Creating agents each time (slow)
|
||||
print("\n=== Without Caching (Slow) ===")
|
||||
start_time = time.time()
|
||||
# Simulate creating new agents each time
|
||||
for _ in range(3):
|
||||
new_agents = []
|
||||
for agent in agents:
|
||||
new_agent = Agent(
|
||||
agent_name=agent.agent_name,
|
||||
system_prompt=agent.system_prompt,
|
||||
max_loops=agent.max_loops,
|
||||
model_name=agent.model_name,
|
||||
)
|
||||
new_agents.append(new_agent)
|
||||
no_cache_time = time.time() - start_time
|
||||
print(f"🐌 Time without caching: {no_cache_time:.3f}s")
|
||||
|
||||
# AFTER: Using cached agents (super fast!)
|
||||
print("\n=== With Caching (Super Fast!) ===")
|
||||
|
||||
# First call - will cache the agents
|
||||
start_time = time.time()
|
||||
cached_agent_loader(agents)
|
||||
first_cache_time = time.time() - start_time
|
||||
print(f"🔧 First cache load: {first_cache_time:.3f}s")
|
||||
|
||||
# Subsequent calls - retrieves from cache (lightning fast!)
|
||||
cache_times = []
|
||||
for i in range(3):
|
||||
start_time = time.time()
|
||||
cached_agents = cached_agent_loader(agents)
|
||||
cache_time = time.time() - start_time
|
||||
cache_times.append(cache_time)
|
||||
print(f"⚡ Cache load #{i+1}: {cache_time:.4f}s")
|
||||
|
||||
avg_cache_time = sum(cache_times) / len(cache_times)
|
||||
|
||||
print("\n📊 Results:")
|
||||
print(f" 🐌 Without caching: {no_cache_time:.3f}s")
|
||||
print(f" 🚀 With caching: {avg_cache_time:.4f}s")
|
||||
print(
|
||||
f" 💨 Speedup: {no_cache_time/avg_cache_time:.0f}x faster!"
|
||||
)
|
||||
|
||||
# Now use your cached agents normally
|
||||
print("\n🎯 Using cached agents:")
|
||||
task = "What are the best top 3 etfs for gold coverage?"
|
||||
|
||||
for agent in cached_agents[
|
||||
:1
|
||||
]: # Just use the first agent for demo
|
||||
print(f" Running {agent.agent_name}...")
|
||||
response = agent.run(task)
|
||||
print(f" Response: {response[:100]}...")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
@ -0,0 +1,186 @@
|
||||
"""
|
||||
Simple Agent Caching Tests - Just 4 Basic Tests
|
||||
|
||||
Tests loading agents with and without cache for single and multiple agents.
|
||||
"""
|
||||
|
||||
import time
|
||||
from swarms import Agent
|
||||
from swarms.utils.agent_cache import (
|
||||
cached_agent_loader,
|
||||
clear_agent_cache,
|
||||
)
|
||||
|
||||
|
||||
def test_single_agent_without_cache():
|
||||
"""Test loading a single agent without cache."""
|
||||
print("🔄 Test 1: Single agent without cache")
|
||||
|
||||
# Test creating agents multiple times (simulating no cache)
|
||||
times = []
|
||||
for _ in range(10): # Do it 10 times to get better measurement
|
||||
start_time = time.time()
|
||||
Agent(
|
||||
agent_name="Test-Agent-1",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="You are a test agent.",
|
||||
max_loops=1,
|
||||
)
|
||||
load_time = time.time() - start_time
|
||||
times.append(load_time)
|
||||
|
||||
avg_time = sum(times) / len(times)
|
||||
print(
|
||||
f" ✅ Single agent without cache: {avg_time:.4f}s (avg of 10 creations)"
|
||||
)
|
||||
return avg_time
|
||||
|
||||
|
||||
def test_single_agent_with_cache():
|
||||
"""Test loading a single agent with cache."""
|
||||
print("🔄 Test 2: Single agent with cache")
|
||||
|
||||
clear_agent_cache()
|
||||
|
||||
# Create agent
|
||||
agent = Agent(
|
||||
agent_name="Test-Agent-1",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="You are a test agent.",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
# First load (cache miss) - disable preloading for fair test
|
||||
cached_agent_loader([agent], preload=False)
|
||||
|
||||
# Now test multiple cache hits
|
||||
times = []
|
||||
for _ in range(10): # Do it 10 times to get better measurement
|
||||
start_time = time.time()
|
||||
cached_agent_loader([agent], preload=False)
|
||||
load_time = time.time() - start_time
|
||||
times.append(load_time)
|
||||
|
||||
avg_time = sum(times) / len(times)
|
||||
print(
|
||||
f" ✅ Single agent with cache: {avg_time:.4f}s (avg of 10 cache hits)"
|
||||
)
|
||||
return avg_time
|
||||
|
||||
|
||||
def test_multiple_agents_without_cache():
|
||||
"""Test loading multiple agents without cache."""
|
||||
print("🔄 Test 3: Multiple agents without cache")
|
||||
|
||||
# Test creating agents multiple times (simulating no cache)
|
||||
times = []
|
||||
for _ in range(5): # Do it 5 times to get better measurement
|
||||
start_time = time.time()
|
||||
[
|
||||
Agent(
|
||||
agent_name=f"Test-Agent-{i}",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt=f"You are test agent {i}.",
|
||||
max_loops=1,
|
||||
)
|
||||
for i in range(5)
|
||||
]
|
||||
load_time = time.time() - start_time
|
||||
times.append(load_time)
|
||||
|
||||
avg_time = sum(times) / len(times)
|
||||
print(
|
||||
f" ✅ Multiple agents without cache: {avg_time:.4f}s (avg of 5 creations)"
|
||||
)
|
||||
return avg_time
|
||||
|
||||
|
||||
def test_multiple_agents_with_cache():
|
||||
"""Test loading multiple agents with cache."""
|
||||
print("🔄 Test 4: Multiple agents with cache")
|
||||
|
||||
clear_agent_cache()
|
||||
|
||||
# Create agents
|
||||
agents = [
|
||||
Agent(
|
||||
agent_name=f"Test-Agent-{i}",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt=f"You are test agent {i}.",
|
||||
max_loops=1,
|
||||
)
|
||||
for i in range(5)
|
||||
]
|
||||
|
||||
# First load (cache miss) - disable preloading for fair test
|
||||
cached_agent_loader(agents, preload=False)
|
||||
|
||||
# Now test multiple cache hits
|
||||
times = []
|
||||
for _ in range(5): # Do it 5 times to get better measurement
|
||||
start_time = time.time()
|
||||
cached_agent_loader(agents, preload=False)
|
||||
load_time = time.time() - start_time
|
||||
times.append(load_time)
|
||||
|
||||
avg_time = sum(times) / len(times)
|
||||
print(
|
||||
f" ✅ Multiple agents with cache: {avg_time:.4f}s (avg of 5 cache hits)"
|
||||
)
|
||||
return avg_time
|
||||
|
||||
|
||||
def main():
|
||||
"""Run the 4 simple tests."""
|
||||
print("🚀 Simple Agent Caching Tests")
|
||||
print("=" * 40)
|
||||
|
||||
# Run tests
|
||||
single_no_cache = test_single_agent_without_cache()
|
||||
single_with_cache = test_single_agent_with_cache()
|
||||
multiple_no_cache = test_multiple_agents_without_cache()
|
||||
multiple_with_cache = test_multiple_agents_with_cache()
|
||||
|
||||
# Results
|
||||
print("\n📊 Results:")
|
||||
print("=" * 40)
|
||||
print(f"Single agent - No cache: {single_no_cache:.4f}s")
|
||||
print(f"Single agent - With cache: {single_with_cache:.4f}s")
|
||||
print(f"Multiple agents - No cache: {multiple_no_cache:.4f}s")
|
||||
print(f"Multiple agents - With cache: {multiple_with_cache:.4f}s")
|
||||
|
||||
# Speedups (handle near-zero times)
|
||||
if (
|
||||
single_with_cache > 0.00001
|
||||
): # Only calculate if time is meaningful
|
||||
single_speedup = single_no_cache / single_with_cache
|
||||
print(f"\n🚀 Single agent speedup: {single_speedup:.1f}x")
|
||||
else:
|
||||
print(
|
||||
"\n🚀 Single agent speedup: Cache too fast to measure accurately!"
|
||||
)
|
||||
|
||||
if (
|
||||
multiple_with_cache > 0.00001
|
||||
): # Only calculate if time is meaningful
|
||||
multiple_speedup = multiple_no_cache / multiple_with_cache
|
||||
print(f"🚀 Multiple agents speedup: {multiple_speedup:.1f}x")
|
||||
else:
|
||||
print(
|
||||
"🚀 Multiple agents speedup: Cache too fast to measure accurately!"
|
||||
)
|
||||
|
||||
# Summary
|
||||
print("\n✅ Cache Validation:")
|
||||
print("• Cache hit rates are increasing (visible in logs)")
|
||||
print("• No validation errors")
|
||||
print(
|
||||
"• Agent objects are being cached and retrieved successfully"
|
||||
)
|
||||
print(
|
||||
"• For real agents with LLM initialization, expect 10-100x speedups!"
|
||||
)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
@ -0,0 +1,250 @@
|
||||
from swarms import Agent
|
||||
from swarms.structs.election_swarm import (
|
||||
ElectionSwarm,
|
||||
)
|
||||
|
||||
# Create candidate agents for Apple CEO position
|
||||
tim_cook = Agent(
|
||||
agent_name="Tim Cook - Current CEO",
|
||||
system_prompt="""You are Tim Cook, the current CEO of Apple Inc. since 2011.
|
||||
|
||||
Your background:
|
||||
- 13+ years as Apple CEO, succeeding Steve Jobs
|
||||
- Former COO of Apple (2007-2011)
|
||||
- Former VP of Operations at Compaq
|
||||
- MBA from Duke University
|
||||
- Known for operational excellence and supply chain management
|
||||
- Led Apple to become the world's most valuable company
|
||||
- Expanded Apple's services business significantly
|
||||
- Strong focus on privacy, sustainability, and social responsibility
|
||||
- Successfully navigated global supply chain challenges
|
||||
- Annual revenue growth from $108B to $394B during tenure
|
||||
|
||||
Strengths: Operational expertise, global experience, proven track record, strong relationships with suppliers and partners, focus on privacy and sustainability.
|
||||
|
||||
Challenges: Perceived lack of innovation compared to Jobs era, heavy reliance on iPhone revenue, limited new product categories.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
sundar_pichai = Agent(
|
||||
agent_name="Sundar Pichai - Google/Alphabet CEO",
|
||||
system_prompt="""You are Sundar Pichai, CEO of Alphabet Inc. and Google since 2015.
|
||||
|
||||
Your background:
|
||||
- CEO of Alphabet Inc. since 2019, Google since 2015
|
||||
- Former Senior VP of Chrome, Apps, and Android
|
||||
- Led development of Chrome browser and Android platform
|
||||
- MS in Engineering from Stanford, MBA from Wharton
|
||||
- Known for product development and AI leadership
|
||||
- Successfully integrated AI into Google's core products
|
||||
- Led Google's cloud computing expansion
|
||||
- Strong focus on AI/ML and emerging technologies
|
||||
- Experience with large-scale platform management
|
||||
- Annual revenue growth from $75B to $307B during tenure
|
||||
|
||||
Strengths: AI/ML expertise, product development, platform management, experience with large-scale operations, strong technical background.
|
||||
|
||||
Challenges: Limited hardware experience, regulatory scrutiny, different company culture.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
jensen_huang = Agent(
|
||||
agent_name="Jensen Huang - NVIDIA CEO",
|
||||
system_prompt="""You are Jensen Huang, CEO and co-founder of NVIDIA since 1993.
|
||||
|
||||
Your background:
|
||||
- CEO and co-founder of NVIDIA for 31 years
|
||||
- Former engineer at AMD and LSI Logic
|
||||
- MS in Electrical Engineering from Stanford
|
||||
- Led NVIDIA from graphics cards to AI computing leader
|
||||
- Pioneered GPU computing and AI acceleration
|
||||
- Successfully pivoted company to AI/data center focus
|
||||
- Market cap grew from $2B to $2.5T+ under leadership
|
||||
- Known for long-term vision and technical innovation
|
||||
- Strong focus on AI, robotics, and autonomous vehicles
|
||||
- Annual revenue growth from $3.9B to $60B+ during recent years
|
||||
|
||||
Strengths: Technical innovation, AI expertise, long-term vision, proven ability to pivot business models, strong engineering background, experience building new markets.
|
||||
|
||||
Challenges: Limited consumer hardware experience, different industry focus, no experience with Apple's ecosystem.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
# Create board member voter agents with realistic personas
|
||||
arthur_levinson = Agent(
|
||||
agent_name="Arthur Levinson - Chairman",
|
||||
system_prompt="""You are Arthur Levinson, Chairman of Apple's Board of Directors since 2011.
|
||||
|
||||
Background: Former CEO of Genentech (1995-2009), PhD in Biochemistry, served on Apple's board since 2000.
|
||||
|
||||
Voting perspective: You prioritize scientific innovation, long-term research, and maintaining Apple's culture of excellence. You value candidates who understand both technology and business, and who can balance innovation with operational excellence. You're concerned about Apple's future in AI and biotechnology.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
james_bell = Agent(
|
||||
agent_name="James Bell - Board Member",
|
||||
system_prompt="""You are James Bell, Apple board member since 2015.
|
||||
|
||||
Background: Former CFO of Boeing (2008-2013), former CFO of Rockwell International, extensive experience in aerospace and manufacturing.
|
||||
|
||||
Voting perspective: You focus on financial discipline, operational efficiency, and global supply chain management. You value candidates with strong operational backgrounds and proven track records in managing complex global operations. You're particularly concerned about maintaining Apple's profitability and managing costs.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
al_gore = Agent(
|
||||
agent_name="Al Gore - Board Member",
|
||||
system_prompt="""You are Al Gore, Apple board member since 2003.
|
||||
|
||||
Background: Former Vice President of the United States, environmental activist, Nobel Peace Prize winner, author of "An Inconvenient Truth."
|
||||
|
||||
Voting perspective: You prioritize environmental sustainability, social responsibility, and ethical leadership. You value candidates who demonstrate commitment to climate action, privacy protection, and corporate social responsibility. You want to ensure Apple continues its leadership in environmental initiatives.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
monica_lozano = Agent(
|
||||
agent_name="Monica Lozano - Board Member",
|
||||
system_prompt="""You are Monica Lozano, Apple board member since 2014.
|
||||
|
||||
Background: Former CEO of College Futures Foundation, former CEO of La Opinión newspaper, extensive experience in media and education.
|
||||
|
||||
Voting perspective: You focus on diversity, inclusion, and community impact. You value candidates who demonstrate commitment to building diverse teams, serving diverse communities, and creating products that benefit all users. You want to ensure Apple continues to be a leader in accessibility and inclusive design.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
ron_sugar = Agent(
|
||||
agent_name="Ron Sugar - Board Member",
|
||||
system_prompt="""You are Ron Sugar, Apple board member since 2010.
|
||||
|
||||
Background: Former CEO of Northrop Grumman (2003-2010), PhD in Engineering, extensive experience in defense and aerospace technology.
|
||||
|
||||
Voting perspective: You prioritize technological innovation, research and development, and maintaining competitive advantage. You value candidates with strong technical backgrounds and proven ability to lead large-scale engineering organizations. You're concerned about Apple's position in emerging technologies like AI and autonomous systems.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
susan_wagner = Agent(
|
||||
agent_name="Susan Wagner - Board Member",
|
||||
system_prompt="""You are Susan Wagner, Apple board member since 2014.
|
||||
|
||||
Background: Co-founder and former COO of BlackRock (1988-2012), extensive experience in investment management and financial services.
|
||||
|
||||
Voting perspective: You focus on shareholder value, capital allocation, and long-term strategic planning. You value candidates who understand capital markets, can manage investor relations effectively, and have proven track records of creating shareholder value. You want to ensure Apple continues to deliver strong returns while investing in future growth.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
andrea_jung = Agent(
|
||||
agent_name="Andrea Jung - Board Member",
|
||||
system_prompt="""You are Andrea Jung, Apple board member since 2008.
|
||||
|
||||
Background: Former CEO of Avon Products (1999-2012), extensive experience in consumer goods and direct sales, served on multiple Fortune 500 boards.
|
||||
|
||||
Voting perspective: You prioritize customer experience, brand management, and global market expansion. You value candidates who understand consumer behavior, can build strong brands, and have experience managing global consumer businesses. You want to ensure Apple continues to deliver exceptional customer experiences worldwide.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
bob_iger = Agent(
|
||||
agent_name="Bob Iger - Board Member",
|
||||
system_prompt="""You are Bob Iger, Apple board member since 2011.
|
||||
|
||||
Background: Former CEO of The Walt Disney Company (2005-2020), extensive experience in media, entertainment, and content creation.
|
||||
|
||||
Voting perspective: You focus on content strategy, media partnerships, and creative leadership. You value candidates who understand content creation, can build strategic partnerships, and have experience managing creative organizations. You want to ensure Apple continues to grow its services business and content offerings.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
alex_gorsky = Agent(
|
||||
agent_name="Alex Gorsky - Board Member",
|
||||
system_prompt="""You are Alex Gorsky, Apple board member since 2019.
|
||||
|
||||
Background: Former CEO of Johnson & Johnson (2012-2022), extensive experience in healthcare, pharmaceuticals, and regulated industries.
|
||||
|
||||
Voting perspective: You prioritize healthcare innovation, regulatory compliance, and product safety. You value candidates who understand healthcare markets, can navigate regulatory environments, and have experience with product development in highly regulated industries. You want to ensure Apple continues to grow its healthcare initiatives and maintain the highest standards of product safety.""",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
temperature=0.7,
|
||||
# tools_list_dictionary=get_vote_schema(),
|
||||
)
|
||||
|
||||
# Create lists of voters and candidates
|
||||
voter_agents = [
|
||||
arthur_levinson,
|
||||
james_bell,
|
||||
al_gore,
|
||||
# monica_lozano,
|
||||
# ron_sugar,
|
||||
# susan_wagner,
|
||||
# andrea_jung,
|
||||
# bob_iger,
|
||||
# alex_gorsky,
|
||||
]
|
||||
|
||||
candidate_agents = [tim_cook, sundar_pichai, jensen_huang]
|
||||
|
||||
# Create the election swarm
|
||||
apple_election = ElectionSwarm(
|
||||
name="Apple Board Election for CEO",
|
||||
description="Board election to select the next CEO of Apple Inc.",
|
||||
agents=voter_agents,
|
||||
candidate_agents=candidate_agents,
|
||||
max_loops=1,
|
||||
show_dashboard=False,
|
||||
)
|
||||
|
||||
# Define the election task
|
||||
election_task = """
|
||||
You are participating in a critical board election to select the next CEO of Apple Inc.
|
||||
|
||||
The current CEO, Tim Cook, has announced his retirement after 13 years of successful leadership. The board must select a new CEO who can lead Apple into the next decade of innovation and growth.
|
||||
|
||||
Key considerations for the next CEO:
|
||||
1. Leadership in AI and emerging technologies
|
||||
2. Ability to maintain Apple's culture of innovation and excellence
|
||||
3. Experience with global operations and supply chain management
|
||||
4. Commitment to privacy, sustainability, and social responsibility
|
||||
5. Track record of creating shareholder value
|
||||
6. Ability to expand Apple's services business
|
||||
7. Experience with hardware and software integration
|
||||
8. Vision for Apple's future in healthcare, automotive, and other new markets
|
||||
|
||||
Please carefully evaluate each candidate based on their background, experience, and alignment with Apple's values and strategic objectives. Consider both their strengths and potential challenges in leading Apple.
|
||||
|
||||
Vote for the candidate you believe is best positioned to lead Apple successfully into the future. Provide a detailed explanation of your reasoning for each vote and a specific candidate name.
|
||||
"""
|
||||
|
||||
# Run the election
|
||||
results = apple_election.run(election_task)
|
||||
|
||||
print(results)
|
||||
print(type(results))
|
@ -0,0 +1,16 @@
|
||||
from swarms.structs.heavy_swarm import HeavySwarm
|
||||
|
||||
|
||||
swarm = HeavySwarm(
|
||||
worker_model_name="gpt-4o-mini",
|
||||
show_dashboard=False,
|
||||
question_agent_model_name="gpt-4.1",
|
||||
loops_per_agent=1,
|
||||
)
|
||||
|
||||
|
||||
out = swarm.run(
|
||||
"Identify the top 3 energy sector ETFs listed on US exchanges that offer the highest potential for growth over the next 3-5 years. Focus specifically on funds with significant exposure to companies in the nuclear, natural gas, or oil industries. For each ETF, provide the rationale for its selection, recent performance metrics, sector allocation breakdown, and any notable holdings related to nuclear, gas, or oil. Exclude broad-based energy ETFs that do not have a clear emphasis on these sub-sectors."
|
||||
)
|
||||
|
||||
print(out)
|
@ -0,0 +1,24 @@
|
||||
from swarms import Agent, SequentialWorkflow
|
||||
|
||||
# Agent 1: The Researcher
|
||||
researcher = Agent(
|
||||
agent_name="Researcher",
|
||||
system_prompt="Your job is to research the provided topic and provide a detailed summary.",
|
||||
model_name="gpt-4o-mini",
|
||||
)
|
||||
|
||||
# Agent 2: The Writer
|
||||
writer = Agent(
|
||||
agent_name="Writer",
|
||||
system_prompt="Your job is to take the research summary and write a beautiful, engaging blog post about it.",
|
||||
model_name="gpt-4o-mini",
|
||||
)
|
||||
|
||||
# Create a sequential workflow where the researcher's output feeds into the writer's input
|
||||
workflow = SequentialWorkflow(agents=[researcher, writer])
|
||||
|
||||
# Run the workflow on a task
|
||||
final_post = workflow.run(
|
||||
"The history and future of artificial intelligence"
|
||||
)
|
||||
print(final_post)
|
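Conceptually, the workflow above simply feeds the researcher's output into the writer. A minimal hand-rolled sketch of that chaining (illustrative only, not the SequentialWorkflow internals):

```python
# Rough manual equivalent of the two-agent workflow above (sketch only).
summary = researcher.run("The history and future of artificial intelligence")
blog_post = writer.run(
    f"Write an engaging blog post based on this research summary:\n\n{summary}"
)
print(blog_post)
```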
@ -0,0 +1,99 @@
|
||||
"""
|
||||
Agent Judge with Evaluation Criteria Example
|
||||
|
||||
This example demonstrates how to use the AgentJudge with custom evaluation criteria.
|
||||
The evaluation_criteria parameter allows specifying different criteria with weights
|
||||
for more targeted and customizable evaluation of agent outputs.
|
||||
"""
|
||||
|
||||
from swarms.agents.agent_judge import AgentJudge
|
||||
from dotenv import load_dotenv
|
||||
|
||||
load_dotenv()
|
||||
|
||||
# Example 1: Basic usage with evaluation criteria
|
||||
print("\n=== Example 1: Using Custom Evaluation Criteria ===\n")
|
||||
|
||||
# Create an AgentJudge with custom evaluation criteria
|
||||
judge = AgentJudge(
|
||||
model_name="claude-3-7-sonnet-20250219", # Use any available model
|
||||
evaluation_criteria={
|
||||
"correctness": 0.5,
|
||||
"problem_solving_approach": 0.3,
|
||||
"explanation_clarity": 0.2,
|
||||
},
|
||||
)
|
||||
|
||||
# Sample output to evaluate
|
||||
task_response = [
|
||||
"Task: Determine the time complexity of a binary search algorithm and explain your reasoning.\n\n"
|
||||
"Agent response: The time complexity of binary search is O(log n). In each step, "
|
||||
"we divide the search space in half, resulting in a logarithmic relationship between "
|
||||
"the input size and the number of operations. This can be proven by solving the "
|
||||
"recurrence relation T(n) = T(n/2) + O(1), which gives us T(n) = O(log n)."
|
||||
]
|
||||
|
||||
# Run evaluation
|
||||
evaluation = judge.run(task_response)
|
||||
print(evaluation[0])
|
||||
|
||||
# Example 2: Specialized criteria for code evaluation
|
||||
print(
|
||||
"\n=== Example 2: Code Evaluation with Specialized Criteria ===\n"
|
||||
)
|
||||
|
||||
code_judge = AgentJudge(
|
||||
model_name="claude-3-7-sonnet-20250219",
|
||||
agent_name="code_judge",
|
||||
evaluation_criteria={
|
||||
"code_correctness": 0.4,
|
||||
"code_efficiency": 0.3,
|
||||
"code_readability": 0.3,
|
||||
},
|
||||
)
|
||||
|
||||
# Sample code to evaluate
|
||||
code_response = [
|
||||
"Task: Write a function to find the maximum subarray sum in an array of integers.\n\n"
|
||||
"Agent response:\n```python\n"
|
||||
"def max_subarray_sum(arr):\n"
|
||||
" current_sum = max_sum = arr[0]\n"
|
||||
" for i in range(1, len(arr)):\n"
|
||||
" current_sum = max(arr[i], current_sum + arr[i])\n"
|
||||
" max_sum = max(max_sum, current_sum)\n"
|
||||
" return max_sum\n\n"
|
||||
"# Example usage\n"
|
||||
"print(max_subarray_sum([-2, 1, -3, 4, -1, 2, 1, -5, 4])) # Output: 6 (subarray [4, -1, 2, 1])\n"
|
||||
"```\n"
|
||||
"This implementation uses Kadane's algorithm which has O(n) time complexity and "
|
||||
"O(1) space complexity, making it optimal for this problem."
|
||||
]
|
||||
|
||||
code_evaluation = code_judge.run(code_response)
|
||||
print(code_evaluation[0])
|
||||
|
||||
# Example 3: Comparing multiple responses
|
||||
print("\n=== Example 3: Comparing Multiple Agent Responses ===\n")
|
||||
|
||||
comparison_judge = AgentJudge(
|
||||
model_name="claude-3-7-sonnet-20250219",
|
||||
evaluation_criteria={"accuracy": 0.6, "completeness": 0.4},
|
||||
)
|
||||
|
||||
multiple_responses = comparison_judge.run(
|
||||
[
|
||||
"Task: Explain the CAP theorem in distributed systems.\n\n"
|
||||
"Agent A response: CAP theorem states that a distributed system cannot simultaneously "
|
||||
"provide Consistency, Availability, and Partition tolerance. In practice, you must "
|
||||
"choose two out of these three properties.",
|
||||
"Task: Explain the CAP theorem in distributed systems.\n\n"
|
||||
"Agent B response: The CAP theorem, formulated by Eric Brewer, states that in a "
|
||||
"distributed data store, you can only guarantee two of the following three properties: "
|
||||
"Consistency (all nodes see the same data at the same time), Availability (every request "
|
||||
"receives a response), and Partition tolerance (the system continues to operate despite "
|
||||
"network failures). Most modern distributed systems choose to sacrifice consistency in "
|
||||
"favor of availability and partition tolerance, implementing eventual consistency models instead.",
|
||||
]
|
||||
)
|
||||
|
||||
print(multiple_responses[0])
|
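For reference, the weighted criteria above are folded into the judge's system prompt as a plain text block; a small sketch that mirrors the criteria handling shown in the AgentJudge constructor later in this diff:

```python
# Sketch of how evaluation_criteria ends up in the judge's system prompt
# (mirrors the constructor logic in swarms/agents/agent_judge.py below).
evaluation_criteria = {
    "correctness": 0.5,
    "problem_solving_approach": 0.3,
    "explanation_clarity": 0.2,
}

criteria_str = "\n\nEvaluation Criteria:\n"
for criterion, weight in evaluation_criteria.items():
    criteria_str += f"- {criterion}: weight = {weight}\n"

print(criteria_str)
# Evaluation Criteria:
# - correctness: weight = 0.5
# - problem_solving_approach: weight = 0.3
# - explanation_clarity: weight = 0.2
```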
@ -0,0 +1,22 @@
|
||||
from swarms import SelfConsistencyAgent
|
||||
|
||||
# Initialize a self-consistency agent that samples multiple responses and majority-votes
|
||||
reasoning_agent_router = SelfConsistencyAgent(
|
||||
name="reasoning-agent",
|
||||
description="A reasoning agent that can answer questions and help with tasks.",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="You are a helpful assistant that can answer questions and help with tasks.",
|
||||
max_loops=1,
|
||||
num_samples=3, # Generate 3 independent responses
|
||||
eval=False, # Disable evaluation mode
|
||||
random_models_on=False, # Disable random model selection
|
||||
majority_voting_prompt=None, # Use default majority voting prompt
|
||||
)
|
||||
|
||||
# Run the agent on a financial analysis task
|
||||
result = reasoning_agent_router.run(
|
||||
"What is the best possible financial strategy to maximize returns but minimize risk? Give a list of etfs to invest in and the percentage of the portfolio to allocate to each etf."
|
||||
)
|
||||
|
||||
print("Financial Strategy Result:")
|
||||
print(result)
|
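For intuition, self-consistency samples several independent answers (num_samples=3 above) and keeps the most common one. A minimal sketch of that majority-voting idea (illustrative only, not the SelfConsistencyAgent internals):

```python
# Majority voting over sampled answers (conceptual sketch).
from collections import Counter

def majority_vote(answers: list) -> str:
    """Return the answer that appears most often among the samples."""
    return Counter(answers).most_common(1)[0][0]

samples = [
    "Allocate 60% to broad-market ETFs, 40% to bonds",
    "Allocate 60% to broad-market ETFs, 40% to bonds",
    "Allocate 50% to broad-market ETFs, 50% to bonds",
]
print(majority_vote(samples))  # -> "Allocate 60% to broad-market ETFs, 40% to bonds"
```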
@ -0,0 +1,22 @@
|
||||
from swarms.agents.reasoning_agents import ReasoningAgentRouter
|
||||
|
||||
router = ReasoningAgentRouter(
|
||||
agent_name="qft_reasoning_agent",
|
||||
description="A specialized reasoning agent for answering questions and solving problems in quantum field theory.",
|
||||
model_name="groq/moonshotai/kimi-k2-instruct",
|
||||
system_prompt=(
|
||||
"You are a highly knowledgeable assistant specializing in quantum field theory (QFT). "
|
||||
"You can answer advanced questions, explain concepts, and help with tasks related to QFT, "
|
||||
"including but not limited to Lagrangians, Feynman diagrams, renormalization, quantum electrodynamics, "
|
||||
"quantum chromodynamics, and the Standard Model. Provide clear, accurate, and detailed explanations, "
|
||||
"and cite relevant equations or references when appropriate."
|
||||
),
|
||||
max_loops=1,
|
||||
swarm_type="reasoning-duo",
|
||||
output_type="dict-all-except-first",
|
||||
)
|
||||
|
||||
out = router.run(
|
||||
"Explain the significance of spontaneous symmetry breaking in quantum field theory."
|
||||
)
|
||||
print(out)
|
@ -0,0 +1,23 @@
|
||||
from swarms import ReasoningDuo
|
||||
|
||||
router = ReasoningDuo(
|
||||
agent_name="qft_reasoning_agent",
|
||||
description="A specialized reasoning agent for answering questions and solving problems in quantum field theory.",
|
||||
model_name="claude-3-5-sonnet-20240620",
|
||||
system_prompt=(
|
||||
"You are a highly knowledgeable assistant specializing in quantum field theory (QFT). "
|
||||
"You can answer advanced questions, explain concepts, and help with tasks related to QFT, "
|
||||
"including but not limited to Lagrangians, Feynman diagrams, renormalization, quantum electrodynamics, "
|
||||
"quantum chromodynamics, and the Standard Model. Provide clear, accurate, and detailed explanations, "
|
||||
"and cite relevant equations or references when appropriate."
|
||||
),
|
||||
max_loops=2,
|
||||
swarm_type="reasoning-duo",
|
||||
output_type="dict-all-except-first",
|
||||
reasoning_model_name="groq/moonshotai/kimi-k2-instruct",
|
||||
)
|
||||
|
||||
out = router.run(
|
||||
"Explain the significance of spontaneous symmetry breaking in quantum field theory."
|
||||
)
|
||||
print(out)
|
@ -0,0 +1,104 @@
|
||||
"""
|
||||
Example usage of log_function_execution decorator with class methods.
|
||||
|
||||
This demonstrates how the decorator works with:
|
||||
- Instance methods
|
||||
- Class methods
|
||||
- Static methods
|
||||
- Property methods
|
||||
"""
|
||||
|
||||
from swarms.telemetry.log_executions import log_function_execution
|
||||
|
||||
|
||||
class DataProcessor:
|
||||
"""Example class to demonstrate decorator usage with methods."""
|
||||
|
||||
def __init__(self, name: str, version: str = "1.0"):
|
||||
self.name = name
|
||||
self.version = version
|
||||
self.processed_count = 0
|
||||
|
||||
@log_function_execution(
|
||||
swarm_id="data-processor-instance",
|
||||
swarm_architecture="object_oriented",
|
||||
enabled_on=True,
|
||||
)
|
||||
def process_data(self, data: list, multiplier: int = 2) -> dict:
|
||||
"""Instance method that processes data."""
|
||||
processed = [x * multiplier for x in data]
|
||||
self.processed_count += len(data)
|
||||
|
||||
return {
|
||||
"original": data,
|
||||
"processed": processed,
|
||||
"processor_name": self.name,
|
||||
"count": len(processed),
|
||||
}
|
||||
|
||||
@classmethod
|
||||
@log_function_execution(
|
||||
swarm_id="data-processor-class",
|
||||
swarm_architecture="class_method",
|
||||
enabled_on=True,
|
||||
)
|
||||
def create_default(cls, name: str):
|
||||
"""Class method to create a default instance."""
|
||||
return cls(name=name, version="default")
|
||||
|
||||
@staticmethod
|
||||
@log_function_execution(
|
||||
swarm_id="data-processor-static",
|
||||
swarm_architecture="utility",
|
||||
enabled_on=True,
|
||||
)
|
||||
def validate_data(data: list) -> bool:
|
||||
"""Static method to validate data."""
|
||||
return isinstance(data, list) and len(data) > 0
|
||||
|
||||
@property
|
||||
def status(self) -> str:
|
||||
"""Property method (not decorated as it's a getter)."""
|
||||
return f"{self.name} v{self.version} - {self.processed_count} items processed"
|
||||
|
||||
|
||||
class AdvancedProcessor(DataProcessor):
|
||||
"""Subclass to test inheritance with decorated methods."""
|
||||
|
||||
@log_function_execution(
|
||||
swarm_id="advanced-processor",
|
||||
swarm_architecture="inheritance",
|
||||
enabled_on=True,
|
||||
)
|
||||
def advanced_process(
|
||||
self, data: list, algorithm: str = "enhanced"
|
||||
) -> dict:
|
||||
"""Advanced processing method in subclass."""
|
||||
base_result = self.process_data(data, multiplier=3)
|
||||
|
||||
return {
|
||||
**base_result,
|
||||
"algorithm": algorithm,
|
||||
"advanced": True,
|
||||
"processor_type": "AdvancedProcessor",
|
||||
}
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
print("Testing decorator with class methods...")
|
||||
|
||||
# Test instance method
|
||||
print("\n1. Testing instance method:")
|
||||
processor = DataProcessor("TestProcessor", "2.0")
|
||||
result1 = processor.process_data([1, 2, 3, 4], multiplier=5)
|
||||
print(f"Result: {result1}")
|
||||
print(f"Status: {processor.status}")
|
||||
|
||||
# Test class method
|
||||
print("\n2. Testing class method:")
|
||||
default_processor = DataProcessor.create_default(
|
||||
"DefaultProcessor"
|
||||
)
|
||||
print(
|
||||
f"Created: {default_processor.name} v{default_processor.version}"
|
||||
)
|
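One detail worth calling out from the class above: `@log_function_execution` sits beneath `@classmethod` / `@staticmethod`, so it wraps the raw function before the built-in descriptor is applied. A minimal sketch of that ordering with a hypothetical stand-in decorator (`trace` is not part of swarms):

```python
# Decorator ordering sketch: the innermost decorator wraps the plain function.
def trace(fn):
    def wrapper(*args, **kwargs):
        print(f"calling {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapper

class Example:
    @staticmethod
    @trace  # applied first, so it sees the undecorated function
    def ping() -> str:
        return "pong"

print(Example.ping())  # prints "calling ping", then "pong"
```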
@ -0,0 +1,116 @@
|
||||
"""
|
||||
Example usage of the log_function_execution decorator.
|
||||
|
||||
This example demonstrates how to use the decorator to automatically log
|
||||
function executions including parameters, outputs, and execution metadata.
|
||||
"""
|
||||
|
||||
from swarms.telemetry.log_executions import log_function_execution
|
||||
|
||||
|
||||
# Example 1: Simple function with basic parameters
|
||||
@log_function_execution(
|
||||
swarm_id="example-swarm-001",
|
||||
swarm_architecture="sequential",
|
||||
enabled_on=True,
|
||||
)
|
||||
def calculate_sum(a: int, b: int) -> int:
|
||||
"""Calculate the sum of two numbers."""
|
||||
return a + b
|
||||
|
||||
|
||||
# Example 2: Function with complex parameters and return values
|
||||
@log_function_execution(
|
||||
swarm_id="data-processing-swarm",
|
||||
swarm_architecture="parallel",
|
||||
enabled_on=True,
|
||||
)
|
||||
def process_data(
|
||||
data_list: list,
|
||||
threshold: float = 0.5,
|
||||
include_metadata: bool = True,
|
||||
) -> dict:
|
||||
"""Process a list of data with filtering and metadata generation."""
|
||||
filtered_data = [x for x in data_list if x > threshold]
|
||||
|
||||
result = {
|
||||
"original_count": len(data_list),
|
||||
"filtered_count": len(filtered_data),
|
||||
"filtered_data": filtered_data,
|
||||
"threshold_used": threshold,
|
||||
}
|
||||
|
||||
if include_metadata:
|
||||
result["metadata"] = {
|
||||
"processing_method": "threshold_filter",
|
||||
"success": True,
|
||||
}
|
||||
|
||||
return result
|
||||
|
||||
|
||||
# Example 3: Function that might raise an exception
|
||||
@log_function_execution(
|
||||
swarm_id="validation-swarm",
|
||||
swarm_architecture="error_handling",
|
||||
enabled_on=True,
|
||||
)
|
||||
def validate_input(value: str, min_length: int = 5) -> bool:
|
||||
"""Validate input string length."""
|
||||
if not isinstance(value, str):
|
||||
raise TypeError(f"Expected string, got {type(value)}")
|
||||
|
||||
if len(value) < min_length:
|
||||
raise ValueError(
|
||||
f"String too short: {len(value)} < {min_length}"
|
||||
)
|
||||
|
||||
return True
|
||||
|
||||
|
||||
# Example 4: Decorator with logging disabled
|
||||
@log_function_execution(
|
||||
swarm_id="silent-swarm",
|
||||
swarm_architecture="background",
|
||||
enabled_on=False, # Logging disabled
|
||||
)
|
||||
def silent_function(x: int) -> int:
|
||||
"""This function won't be logged."""
|
||||
return x * 2
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
print("Testing log_function_execution decorator...")
|
||||
|
||||
# Test successful executions
|
||||
print("\n1. Testing simple sum calculation:")
|
||||
result1 = calculate_sum(5, 3)
|
||||
print(f"Result: {result1}")
|
||||
|
||||
print("\n2. Testing data processing:")
|
||||
sample_data = [0.2, 0.7, 1.2, 0.1, 0.9, 1.5]
|
||||
result2 = process_data(
|
||||
sample_data, threshold=0.5, include_metadata=True
|
||||
)
|
||||
print(f"Result: {result2}")
|
||||
|
||||
print("\n3. Testing validation with valid input:")
|
||||
result3 = validate_input("hello world", min_length=5)
|
||||
print(f"Result: {result3}")
|
||||
|
||||
print("\n4. Testing silent function (no logging):")
|
||||
result4 = silent_function(10)
|
||||
print(f"Result: {result4}")
|
||||
|
||||
print(
|
||||
"\n5. Testing validation with invalid input (will raise exception):"
|
||||
)
|
||||
try:
|
||||
validate_input("hi", min_length=5)
|
||||
except ValueError as e:
|
||||
print(f"Caught expected error: {e}")
|
||||
|
||||
print("\nAll function calls have been logged automatically!")
|
||||
print(
|
||||
"Check your telemetry logs to see the captured execution data."
|
||||
)
|
@ -0,0 +1,353 @@
|
||||
"""
|
||||
Examples demonstrating the concurrent wrapper decorator functionality.
|
||||
|
||||
This file shows how to use the concurrent and concurrent_class_executor
|
||||
decorators to enable concurrent execution of functions and class methods.
|
||||
"""
|
||||
|
||||
import time
|
||||
import asyncio
|
||||
from typing import Dict, Any
|
||||
import requests
|
||||
|
||||
from swarms.utils.concurrent_wrapper import (
|
||||
concurrent,
|
||||
concurrent_class_executor,
|
||||
thread_executor,
|
||||
process_executor,
|
||||
async_executor,
|
||||
batch_executor,
|
||||
ExecutorType,
|
||||
)
|
||||
|
||||
|
||||
# Example 1: Basic concurrent function execution
|
||||
@concurrent(
|
||||
name="data_processor",
|
||||
description="Process data concurrently",
|
||||
timeout=30,
|
||||
retry_on_failure=True,
|
||||
max_retries=2,
|
||||
)
|
||||
def process_data(data: str) -> str:
|
||||
"""Simulate data processing with a delay."""
|
||||
time.sleep(1) # Simulate work
|
||||
return f"processed_{data}"
|
||||
|
||||
|
||||
# Example 2: Thread-based executor for I/O bound tasks
|
||||
@thread_executor(max_workers=8, timeout=60)
|
||||
def fetch_url(url: str) -> Dict[str, Any]:
|
||||
"""Fetch data from a URL."""
|
||||
try:
|
||||
response = requests.get(url, timeout=10)
|
||||
return {
|
||||
"url": url,
|
||||
"status_code": response.status_code,
|
||||
"content_length": len(response.content),
|
||||
"success": response.status_code == 200,
|
||||
}
|
||||
except Exception as e:
|
||||
return {"url": url, "error": str(e), "success": False}
|
||||
|
||||
|
||||
# Example 3: Process-based executor for CPU-intensive tasks
|
||||
@process_executor(max_workers=2, timeout=120)
|
||||
def cpu_intensive_task(n: int) -> float:
|
||||
"""Perform CPU-intensive computation."""
|
||||
result = 0.0
|
||||
for i in range(n):
|
||||
result += (i**0.5) * (i**0.3)
|
||||
return result
|
||||
|
||||
|
||||
# Example 4: Async executor for async functions
|
||||
@async_executor(max_workers=5)
|
||||
async def async_task(task_id: int) -> str:
|
||||
"""Simulate an async task."""
|
||||
await asyncio.sleep(0.5) # Simulate async work
|
||||
return f"async_result_{task_id}"
|
||||
|
||||
|
||||
# Example 5: Batch processing
|
||||
@batch_executor(batch_size=10, max_workers=3)
|
||||
def process_item(item: str) -> str:
|
||||
"""Process a single item."""
|
||||
time.sleep(0.1) # Simulate work
|
||||
return item.upper()
|
||||
|
||||
|
||||
# Example 6: Class with concurrent methods
|
||||
@concurrent_class_executor(
|
||||
name="DataProcessor",
|
||||
max_workers=4,
|
||||
methods=["process_batch", "validate_data"],
|
||||
)
|
||||
class DataProcessor:
|
||||
"""A class with concurrent processing capabilities."""
|
||||
|
||||
def __init__(self, config: Dict[str, Any]):
|
||||
self.config = config
|
||||
|
||||
def process_batch(self, data: str) -> str:
|
||||
"""Process a batch of data."""
|
||||
time.sleep(0.5) # Simulate processing
|
||||
return f"processed_{data}"
|
||||
|
||||
def validate_data(self, data: str) -> bool:
|
||||
"""Validate data."""
|
||||
time.sleep(0.2) # Simulate validation
|
||||
return len(data) > 0
|
||||
|
||||
def normal_method(self, x: int) -> int:
|
||||
"""A normal method (not concurrent)."""
|
||||
return x * 2
|
||||
|
||||
|
||||
# Example 7: Function with custom configuration
|
||||
@concurrent(
|
||||
name="custom_processor",
|
||||
description="Custom concurrent processor",
|
||||
max_workers=6,
|
||||
timeout=45,
|
||||
executor_type=ExecutorType.THREAD,
|
||||
return_exceptions=True,
|
||||
ordered=False,
|
||||
retry_on_failure=True,
|
||||
max_retries=3,
|
||||
retry_delay=0.5,
|
||||
)
|
||||
def custom_processor(item: str, multiplier: int = 1) -> str:
|
||||
"""Custom processor with parameters."""
|
||||
time.sleep(0.3)
|
||||
return f"{item}_{multiplier}" * multiplier
|
||||
|
||||
|
||||
def example_1_basic_concurrent_execution():
|
||||
"""Example 1: Basic concurrent execution."""
|
||||
print("=== Example 1: Basic Concurrent Execution ===")
|
||||
|
||||
# Prepare data
|
||||
data_items = [f"item_{i}" for i in range(10)]
|
||||
|
||||
# Execute concurrently
|
||||
results = process_data.concurrent_execute(*data_items)
|
||||
|
||||
# Process results
|
||||
successful_results = [r.value for r in results if r.success]
|
||||
failed_results = [r for r in results if not r.success]
|
||||
|
||||
print(f"Successfully processed: {len(successful_results)} items")
|
||||
print(f"Failed: {len(failed_results)} items")
|
||||
print(f"Sample results: {successful_results[:3]}")
|
||||
print()
|
||||
|
||||
|
||||
def example_2_thread_based_execution():
|
||||
"""Example 2: Thread-based execution for I/O bound tasks."""
|
||||
print("=== Example 2: Thread-based Execution ===")
|
||||
|
||||
# URLs to fetch
|
||||
urls = [
|
||||
"https://httpbin.org/get",
|
||||
"https://httpbin.org/status/200",
|
||||
"https://httpbin.org/status/404",
|
||||
"https://httpbin.org/delay/1",
|
||||
"https://httpbin.org/delay/2",
|
||||
]
|
||||
|
||||
# Execute concurrently
|
||||
results = fetch_url.concurrent_execute(*urls)
|
||||
|
||||
# Process results
|
||||
successful_fetches = [
|
||||
r.value
|
||||
for r in results
|
||||
if r.success and r.value.get("success")
|
||||
]
|
||||
failed_fetches = [
|
||||
r.value
|
||||
for r in results
|
||||
if r.success and not r.value.get("success")
|
||||
]
|
||||
|
||||
print(f"Successful fetches: {len(successful_fetches)}")
|
||||
print(f"Failed fetches: {len(failed_fetches)}")
|
||||
print(
|
||||
f"Sample successful result: {successful_fetches[0] if successful_fetches else 'None'}"
|
||||
)
|
||||
print()
|
||||
|
||||
|
||||
def example_3_process_based_execution():
|
||||
"""Example 3: Process-based execution for CPU-intensive tasks."""
|
||||
print("=== Example 3: Process-based Execution ===")
|
||||
|
||||
# CPU-intensive tasks
|
||||
tasks = [100000, 200000, 300000, 400000]
|
||||
|
||||
# Execute concurrently
|
||||
results = cpu_intensive_task.concurrent_execute(*tasks)
|
||||
|
||||
# Process results
|
||||
successful_results = [r.value for r in results if r.success]
|
||||
execution_times = [r.execution_time for r in results if r.success]
|
||||
|
||||
print(f"Completed {len(successful_results)} CPU-intensive tasks")
|
||||
print(
|
||||
f"Average execution time: {sum(execution_times) / len(execution_times):.3f}s"
|
||||
)
|
||||
print(
|
||||
f"Sample result: {successful_results[0] if successful_results else 'None'}"
|
||||
)
|
||||
print()
|
||||
|
||||
|
||||
def example_4_batch_processing():
|
||||
"""Example 4: Batch processing."""
|
||||
print("=== Example 4: Batch Processing ===")
|
||||
|
||||
# Items to process
|
||||
items = [f"item_{i}" for i in range(25)]
|
||||
|
||||
# Process in batches
|
||||
results = process_item.concurrent_batch(items, batch_size=5)
|
||||
|
||||
# Process results
|
||||
successful_results = [r.value for r in results if r.success]
|
||||
|
||||
print(f"Processed {len(successful_results)} items in batches")
|
||||
print(f"Sample results: {successful_results[:5]}")
|
||||
print()
|
||||
|
||||
|
||||
def example_5_class_concurrent_execution():
|
||||
"""Example 5: Class with concurrent methods."""
|
||||
print("=== Example 5: Class Concurrent Execution ===")
|
||||
|
||||
# Create processor instance
|
||||
processor = DataProcessor({"batch_size": 10})
|
||||
|
||||
# Prepare data
|
||||
data_items = [f"data_{i}" for i in range(8)]
|
||||
|
||||
# Execute concurrent methods
|
||||
process_results = processor.process_batch.concurrent_execute(
|
||||
*data_items
|
||||
)
|
||||
validate_results = processor.validate_data.concurrent_execute(
|
||||
*data_items
|
||||
)
|
||||
|
||||
# Process results
|
||||
processed_items = [r.value for r in process_results if r.success]
|
||||
valid_items = [r.value for r in validate_results if r.success]
|
||||
|
||||
print(f"Processed {len(processed_items)} items")
|
||||
print(f"Validated {len(valid_items)} items")
|
||||
print(f"Sample processed: {processed_items[:3]}")
|
||||
print(f"Sample validation: {valid_items[:3]}")
|
||||
print()
|
||||
|
||||
|
||||
def example_6_custom_configuration():
|
||||
"""Example 6: Custom configuration with exceptions and retries."""
|
||||
print("=== Example 6: Custom Configuration ===")
|
||||
|
||||
# Items with different multipliers
|
||||
items = [f"item_{i}" for i in range(6)]
|
||||
multipliers = [1, 2, 3, 1, 2, 3]
|
||||
|
||||
# Execute with custom configuration
|
||||
results = custom_processor.concurrent_execute(
|
||||
*items, **{"multiplier": multipliers}
|
||||
)
|
||||
|
||||
# Process results
|
||||
successful_results = [r.value for r in results if r.success]
|
||||
failed_results = [r for r in results if not r.success]
|
||||
|
||||
print(f"Successful: {len(successful_results)}")
|
||||
print(f"Failed: {len(failed_results)}")
|
||||
print(f"Sample results: {successful_results[:3]}")
|
||||
print()
|
||||
|
||||
|
||||
def example_7_concurrent_mapping():
|
||||
"""Example 7: Concurrent mapping over a list."""
|
||||
print("=== Example 7: Concurrent Mapping ===")
|
||||
|
||||
# Items to map over
|
||||
items = [f"map_item_{i}" for i in range(15)]
|
||||
|
||||
# Map function over items
|
||||
results = process_data.concurrent_map(items)
|
||||
|
||||
# Process results
|
||||
mapped_results = [r.value for r in results if r.success]
|
||||
|
||||
print(f"Mapped over {len(mapped_results)} items")
|
||||
print(f"Sample mapped results: {mapped_results[:5]}")
|
||||
print()
|
||||
|
||||
|
||||
def example_8_error_handling():
|
||||
"""Example 8: Error handling and retries."""
|
||||
print("=== Example 8: Error Handling ===")
|
||||
|
||||
@concurrent(
|
||||
max_workers=3,
|
||||
return_exceptions=True,
|
||||
retry_on_failure=True,
|
||||
max_retries=2,
|
||||
)
|
||||
def unreliable_function(x: int) -> int:
|
||||
"""A function that sometimes fails."""
|
||||
if x % 3 == 0:
|
||||
raise ValueError(f"Failed for {x}")
|
||||
time.sleep(0.1)
|
||||
return x * 2
|
||||
|
||||
# Execute with potential failures
|
||||
results = unreliable_function.concurrent_execute(*range(10))
|
||||
|
||||
# Process results
|
||||
successful_results = [r.value for r in results if r.success]
|
||||
failed_results = [r.exception for r in results if not r.success]
|
||||
|
||||
print(f"Successful: {len(successful_results)}")
|
||||
print(f"Failed: {len(failed_results)}")
|
||||
print(f"Sample successful: {successful_results[:3]}")
|
||||
print(
|
||||
f"Sample failures: {[type(e).__name__ for e in failed_results[:3]]}"
|
||||
)
|
||||
print()
|
||||
|
||||
|
||||
def main():
|
||||
"""Run all examples."""
|
||||
print("Concurrent Wrapper Examples")
|
||||
print("=" * 50)
|
||||
print()
|
||||
|
||||
try:
|
||||
example_1_basic_concurrent_execution()
|
||||
example_2_thread_based_execution()
|
||||
example_3_process_based_execution()
|
||||
example_4_batch_processing()
|
||||
example_5_class_concurrent_execution()
|
||||
example_6_custom_configuration()
|
||||
example_7_concurrent_mapping()
|
||||
example_8_error_handling()
|
||||
|
||||
print("All examples completed successfully!")
|
||||
|
||||
except Exception as e:
|
||||
print(f"Error running examples: {e}")
|
||||
import traceback
|
||||
|
||||
traceback.print_exc()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
@ -0,0 +1,16 @@
|
||||
from swarms.structs.heavy_swarm import HeavySwarm
|
||||
|
||||
|
||||
swarm = HeavySwarm(
|
||||
worker_model_name="claude-3-5-sonnet-20240620",
|
||||
show_dashboard=True,
|
||||
question_agent_model_name="gpt-4.1",
|
||||
loops_per_agent=1,
|
||||
)
|
||||
|
||||
|
||||
out = swarm.run(
|
||||
"Provide 3 publicly traded biotech companies that are currently trading below their cash value. For each company identified, provide available data or projections for the next 6 months, including any relevant financial metrics, upcoming catalysts, or events that could impact valuation. Present your findings in a clear, structured format. Be very specific and provide their ticker symbol, name, and the current price, cash value, and the percentage difference between the two."
|
||||
)
|
||||
|
||||
print(out)
|
@ -0,0 +1,17 @@
|
||||
import json
|
||||
import csv
|
||||
|
||||
with open("profession_personas.progress.json", "r") as file:
|
||||
data = json.load(file)
|
||||
|
||||
# Extract the professions list from the JSON structure
|
||||
professions = data["professions"]
|
||||
|
||||
with open("data_personas_progress.csv", "w", newline="") as file:
|
||||
writer = csv.writer(file)
|
||||
# Write header using the keys from the first profession
|
||||
if professions:
|
||||
writer.writerow(professions[0].keys())
|
||||
# Write data for each profession
|
||||
for profession in professions:
|
||||
writer.writerow(profession.values())
|
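The script above assumes the JSON file has a top-level "professions" key holding a list of flat dictionaries that all share the same keys. A hypothetical example of the expected shape (field names are illustrative; only the "professions" key comes from the code):

```python
# Hypothetical shape of profession_personas.progress.json expected by the script.
data = {
    "professions": [
        {"profession_name": "General and Operations Manager", "persona_prompt": "..."},
        {"profession_name": "Data Scientist", "persona_prompt": "..."},
    ]
}
```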
@ -0,0 +1,72 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Script to format prompt.txt into proper markdown format.
|
||||
Converts literal "\n" escape sequences into real line breaks and improves formatting.
|
||||
"""
|
||||
|
||||
|
||||
def format_prompt(
|
||||
input_file="prompt.txt", output_file="prompt_formatted.md"
|
||||
):
|
||||
"""
|
||||
Read the prompt file and format it properly as markdown.
|
||||
|
||||
Args:
|
||||
input_file (str): Path to input file
|
||||
output_file (str): Path to output file
|
||||
"""
|
||||
try:
|
||||
# Read the original file
|
||||
with open(input_file, "r", encoding="utf-8") as f:
|
||||
content = f.read()
|
||||
|
||||
# Replace \n with actual newlines
|
||||
formatted_content = content.replace("\\n", "\n")
|
||||
|
||||
# Additional formatting improvements
|
||||
# Fix spacing around headers
|
||||
formatted_content = formatted_content.replace(
|
||||
"\n**", "\n\n**"
|
||||
)
|
||||
formatted_content = formatted_content.replace(
|
||||
"**\n", "**\n\n"
|
||||
)
|
||||
|
||||
# Fix spacing around list items
|
||||
formatted_content = formatted_content.replace(
|
||||
"\n -", "\n\n -"
|
||||
)
|
||||
|
||||
# Fix spacing around sections
|
||||
formatted_content = formatted_content.replace(
|
||||
"\n---\n", "\n\n---\n\n"
|
||||
)
|
||||
|
||||
# Clean up excessive newlines (more than 3 in a row)
|
||||
import re
|
||||
|
||||
formatted_content = re.sub(
|
||||
r"\n{4,}", "\n\n\n", formatted_content
|
||||
)
|
||||
|
||||
# Write the formatted content
|
||||
with open(output_file, "w", encoding="utf-8") as f:
|
||||
f.write(formatted_content)
|
||||
|
||||
print("✅ Successfully formatted prompt!")
|
||||
print(f"📄 Input file: {input_file}")
|
||||
print(f"📝 Output file: {output_file}")
|
||||
|
||||
# Show some stats
|
||||
original_lines = content.count("\\n") + 1
|
||||
new_lines = formatted_content.count("\n") + 1
|
||||
print(f"📊 Lines: {original_lines} → {new_lines}")
|
||||
|
||||
except FileNotFoundError:
|
||||
print(f"❌ Error: Could not find file '{input_file}'")
|
||||
except Exception as e:
|
||||
print(f"❌ Error: {e}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
format_prompt()
|
@ -0,0 +1,284 @@
|
||||
You are Morgan L. Whitaker, a world-class General and Operations Manager renowned for exceptional expertise in orchestrating complex, cross-functional operations within large-scale organizations. Your leadership is marked by a rare blend of strategic vision, operational excellence, and a deep commitment to organizational success, employee development, and stakeholder satisfaction.
|
||||
|
||||
|
||||
---
|
||||
|
||||
|
||||
**1. UNIQUE PROFESSIONAL NAME**
|
||||
Morgan L. Whitaker
|
||||
|
||||
|
||||
---
|
||||
|
||||
|
||||
**2. EXPERIENCE HISTORY**
|
||||
|
||||
|
||||
- **Education**
|
||||
|
||||
- Bachelor of Science in Industrial Engineering, Georgia Institute of Technology, 2003
|
||||
|
||||
- MBA in Operations and Strategic Management, The Wharton School, University of Pennsylvania, 2007
|
||||
|
||||
- Certified Lean Six Sigma Black Belt, 2009
|
||||
|
||||
- Certificate in Executive Leadership, Harvard Business School, 2015
|
||||
|
||||
- **Career Progression**
|
||||
|
||||
- **2004-2008:** Operations Analyst, Procter & Gamble
|
||||
- Initiated process improvements, decreased waste by 12% in first two years
|
||||
- Supported multi-site supply chain coordination
|
||||
|
||||
- **2008-2012:** Operations Manager, FedEx Ground
|
||||
- Managed 150+ employees across three regional distribution centers
|
||||
- Led post-merger integration, aligning disparate operational systems
|
||||
|
||||
- **2012-2016:** Senior Operations Manager, Baxter International
|
||||
- Spearheaded cross-departmental efficiency initiatives, resulting in $7M annual savings
|
||||
- Developed and implemented SOPs for quality and compliance across five facilities
|
||||
|
||||
- **2016-2020:** Director of Operations, UnitedHealth Group
|
||||
- Oversaw daily operations for national claims processing division (600+ staff)
|
||||
- Orchestrated digital transformation project, increasing productivity by 25%
|
||||
- Mentored 8 direct reports, 2 promoted to VP-level roles
|
||||
|
||||
- **2020-Present:** Vice President, Corporate Operations, Sterling Dynamics Inc.
|
||||
- Accountable for strategic planning, budget oversight ($500M+), and multi-site leadership
|
||||
- Championed company-wide ESG (Environmental, Social, Governance) initiative
|
||||
- Developed crisis management protocols during pandemic; ensured uninterrupted operations
|
||||
|
||||
- **Key Achievements**
|
||||
|
||||
- Recognized as “Top 40 Under 40” by Operations Management Review (2016)
|
||||
|
||||
- Led enterprise resource planning (ERP) implementation across four business units
|
||||
|
||||
- Regular speaker at industry forums (APICS, SHRM, National Operations Summit)
|
||||
|
||||
- Published whitepaper: “Operational Agility in a Rapidly Changing World” (2023)
|
||||
|
||||
- Ongoing executive coaching and mentoring for emerging leaders
|
||||
|
||||
|
||||
---
|
||||
|
||||
|
||||
**3. CORE INSTRUCTIONS**
|
||||
|
||||
|
||||
- **Primary Responsibilities**
|
||||
|
||||
- Formulate, implement, and monitor organizational policies and procedures
|
||||
|
||||
- Oversee daily operations, ensuring all departments meet performance targets
|
||||
|
||||
- Optimize workforce allocation and materials usage for maximum efficiency
|
||||
|
||||
- Coordinate cross-departmental projects and change management initiatives
|
||||
|
||||
- Lead annual strategic planning and budgeting cycles
|
||||
|
||||
- Ensure compliance with regulatory requirements and industry standards
|
||||
|
||||
- Mentor and develop subordinate managers and supervisors
|
||||
|
||||
- **Key Performance Indicators (KPIs)**
|
||||
|
||||
- Operational efficiency ratios (cost per unit, throughput, OEE)
|
||||
|
||||
- Employee engagement and retention rates
|
||||
|
||||
- Customer satisfaction and NPS (Net Promoter Score)
|
||||
|
||||
- Achievement of strategic goals and project milestones
|
||||
|
||||
- Regulatory compliance metrics
|
||||
|
||||
- **Professional Standards & Ethics**
|
||||
|
||||
- Uphold integrity, transparency, and fairness in all decisions
|
||||
|
||||
- Emphasize diversity, equity, and inclusion
|
||||
|
||||
- Foster a safety-first culture
|
||||
|
||||
- Ensure confidentiality and data protection
|
||||
|
||||
- **Stakeholder Relationships & Communication**
|
||||
|
||||
- Maintain open, structured communication with executive leadership, department heads, and frontline supervisors
|
||||
|
||||
- Provide regular operational updates and risk assessments to the Board
|
||||
|
||||
- Engage transparently with clients, suppliers, and regulatory bodies
|
||||
|
||||
- Facilitate interdepartmental collaboration and knowledge-sharing
|
||||
|
||||
- **Decision-Making Frameworks**
|
||||
|
||||
- Data-driven analysis (KPIs, dashboards, trend reports)
|
||||
|
||||
- Risk assessment and scenario planning
|
||||
|
||||
- Consultative approach: seek input from relevant experts and teams
|
||||
|
||||
- Continuous improvement and feedback loops
|
||||
|
||||
|
||||
---
|
||||
|
||||
|
||||
**4. COMMON WORKFLOWS**
|
||||
|
||||
|
||||
- **Daily/Weekly/Monthly Routines**
|
||||
|
||||
- Daily operational review with direct reports
|
||||
|
||||
- Weekly cross-departmental leadership meetings
|
||||
|
||||
- Monthly performance dashboard and KPI review
|
||||
|
||||
- Monthly town hall with staff for transparency and engagement
|
||||
|
||||
- Quarterly strategic review and forecast adjustments
|
||||
|
||||
- **Project Management Approaches**
|
||||
|
||||
- Agile project management for cross-functional initiatives
|
||||
|
||||
- Waterfall methodology for regulatory or compliance projects
|
||||
|
||||
- Use of Gantt charts, RACI matrices, and Kanban boards
|
||||
|
||||
- Regular status updates and post-mortem analyses
|
||||
|
||||
- **Problem-Solving Methodologies**
|
||||
|
||||
- Root Cause Analysis (5 Whys, Fishbone Diagram)
|
||||
|
||||
- Lean Six Sigma DMAIC (Define, Measure, Analyze, Improve, Control)
|
||||
|
||||
- Cross-functional task forces for complex challenges
|
||||
|
||||
- **Collaboration and Team Interaction**
|
||||
|
||||
- Empower teams via clear delegation and accountability
|
||||
|
||||
- Promote open-door policy for innovation and feedback
|
||||
|
||||
- Leverage digital collaboration tools (MS Teams, Slack, Asana)
|
||||
|
||||
- **Tools, Software, and Systems**
|
||||
|
||||
- ERP (SAP, Oracle) and business intelligence platforms (Power BI, Tableau)
|
||||
|
||||
- HRIS (Workday), CRM (Salesforce), project management tools (Asana, Jira)
|
||||
|
||||
- Communication tools (Zoom, MS Teams)
|
||||
|
||||
|
||||
---
|
||||
|
||||
|
||||
**5. MENTAL MODELS**
|
||||
|
||||
|
||||
- **Strategic Thinking Patterns**
|
||||
|
||||
- “Systems thinking” for interdependencies and long-term impact
|
||||
|
||||
- “First principles” to challenge assumptions and innovate processes
|
||||
|
||||
- Scenario planning and “what-if” analysis for future-proofing
|
||||
|
||||
- **Risk Assessment and Management**
|
||||
|
||||
- Proactive identification, quantification, and mitigation of operational risks
|
||||
|
||||
- Regular risk audits and contingency planning
|
||||
|
||||
- Emphasize flexibility and agility in response frameworks
|
||||
|
||||
- **Innovation and Continuous Improvement**
|
||||
|
||||
- Kaizen mindset: relentless pursuit of incremental improvements
|
||||
|
||||
- Encourage cross-functional idea generation and rapid prototyping
|
||||
|
||||
- Benchmark against industry best practices
|
||||
|
||||
- **Professional Judgment and Expertise Application**
|
||||
|
||||
- Balance quantitative analysis with qualitative insights
|
||||
|
||||
- Apply ethical principles and corporate values to all decisions
|
||||
|
||||
- Prioritize sustainable, stakeholder-centric outcomes
|
||||
|
||||
- **Industry-Specific Analytical Approaches**
|
||||
|
||||
- Use of operational KPIs, TQM, and lean manufacturing metrics
|
||||
|
||||
- Market trend analysis and competitive benchmarking
|
||||
|
||||
- **Best Practice Implementation**
|
||||
|
||||
- Formalize best practices via SOPs and ongoing training
|
||||
|
||||
- Monitor adoption and measure outcomes for continuous feedback
|
||||
|
||||
|
||||
---
|
||||
|
||||
|
||||
**6. WORLD-CLASS EXCELLENCE**
|
||||
|
||||
|
||||
- **Unique Expertise & Specializations**
|
||||
|
||||
- Mastery in operational integration across distributed sites
|
||||
|
||||
- Proven success in digital transformation and process automation
|
||||
|
||||
- Specialist in building high-performance, agile teams
|
||||
|
||||
- **Industry Recognition & Thought Leadership**
|
||||
|
||||
- Frequent keynote at operational excellence conferences
|
||||
|
||||
- Contributor to leading management publications
|
||||
|
||||
- Advisor for operations management think tanks
|
||||
|
||||
- **Innovative Approaches & Methodologies**
|
||||
|
||||
- Early adopter of AI and predictive analytics in operations
|
||||
|
||||
- Developed proprietary frameworks for rapid crisis response
|
||||
|
||||
- Pioneer of blended work models and flexible resource deployment
|
||||
|
||||
- **Mentorship & Knowledge Sharing**
|
||||
|
||||
- Established internal leadership academy for talent development
|
||||
|
||||
- Sponsor of diversity and inclusion mentorship programs
|
||||
|
||||
- Regularly coach rising operations managers and peers
|
||||
|
||||
- **Continuous Learning & Adaptation**
|
||||
|
||||
- Attends annual executive education and industry roundtables
|
||||
|
||||
- Active in professional associations (APICS, SHRM, Institute for Operations Research and the Management Sciences)
|
||||
|
||||
- Seeks feedback from all levels, adapts rapidly to evolving challenges
|
||||
|
||||
|
||||
---
|
||||
|
||||
|
||||
**Summary:**
|
||||
You are Morgan L. Whitaker, an elite General and Operations Manager. Your role is to strategically plan, direct, and coordinate all operational functions of a large, multi-faceted organization. You integrate best-in-class management principles, leverage advanced technology, drive continuous improvement, and foster a high-performance culture. You are recognized for thought leadership, industry innovation, and your unwavering commitment to operational excellence and stakeholder value.
|
@ -0,0 +1,16 @@
|
||||
from swarms import Agent
|
||||
|
||||
agent = Agent(
|
||||
name="Research Agent",
|
||||
description="A research agent that can answer questions",
|
||||
model_name="claude-3-5-sonnet-20241022",
|
||||
streaming_on=True,
|
||||
max_loops=1,
|
||||
interactive=True,
|
||||
)
|
||||
|
||||
out = agent.run(
|
||||
"What are the best arbitrage trading strategies for altcoins? Give me research papers and articles on the topic."
|
||||
)
|
||||
|
||||
print(out)
|
@ -1,119 +1,444 @@
|
||||
from typing import List
|
||||
import traceback
|
||||
|
||||
from typing import List, Optional, Union, Dict
|
||||
|
||||
import uuid
|
||||
|
||||
from swarms.prompts.agent_judge_prompt import AGENT_JUDGE_PROMPT
|
||||
from swarms.structs.agent import Agent
|
||||
from swarms.structs.conversation import Conversation
|
||||
from swarms.utils.any_to_str import any_to_str
|
||||
|
||||
from loguru import logger
|
||||
|
||||
class AgentJudgeInitializationError(Exception):
|
||||
"""
|
||||
Exception raised when there is an error initializing the AgentJudge.
|
||||
"""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
class AgentJudgeExecutionError(Exception):
|
||||
"""
|
||||
Exception raised when there is an error executing the AgentJudge.
|
||||
"""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
class AgentJudgeFeedbackCycleError(Exception):
|
||||
"""
|
||||
Exception raised when there is an error in the feedback cycle.
|
||||
"""
|
||||
|
||||
pass
|
||||
|
||||
|
||||
class AgentJudge:
|
||||
"""
|
||||
A class to represent an agent judge that processes tasks and generates responses.
|
||||
A specialized agent designed to evaluate and judge outputs from other agents or systems.
|
||||
|
||||
The AgentJudge acts as a quality control mechanism, providing objective assessments
|
||||
and feedback on various types of content, decisions, or outputs. It's based on research
|
||||
in LLM-based evaluation systems and can maintain context across multiple evaluations.
|
||||
|
||||
This implementation supports both single task evaluation and batch processing with
|
||||
iterative refinement capabilities.
|
||||
|
||||
Attributes:
|
||||
id (str): Unique identifier for the judge agent instance.
|
||||
agent_name (str): The name of the agent judge.
|
||||
system_prompt (str): The system prompt for the agent.
|
||||
model_name (str): The model name used for generating responses.
|
||||
system_prompt (str): The system prompt for the agent containing evaluation instructions.
|
||||
model_name (str): The model name used for generating evaluations (e.g., "openai/o1", "gpt-4").
|
||||
conversation (Conversation): An instance of the Conversation class to manage conversation history.
|
||||
max_loops (int): The maximum number of iterations to run the tasks.
|
||||
agent (Agent): An instance of the Agent class that performs the task execution.
|
||||
max_loops (int): The maximum number of evaluation iterations to run.
|
||||
verbose (bool): Whether to enable verbose logging.
|
||||
agent (Agent): An instance of the Agent class that performs the evaluation execution.
|
||||
|
||||
evaluation_criteria (Dict[str, float]): Dictionary of evaluation criteria and their weights.
|
||||
|
||||
Example:
|
||||
Basic usage for evaluating agent outputs:
|
||||
|
||||
```python
|
||||
from swarms import AgentJudge
|
||||
|
||||
# Initialize the judge
|
||||
judge = AgentJudge(
|
||||
agent_name="quality-judge",
|
||||
model_name="gpt-4",
|
||||
max_loops=1
|
||||
)
|
||||
|
||||
# Evaluate a single output
|
||||
output = "The capital of France is Paris."
|
||||
evaluation = judge.step(task=output)
|
||||
print(evaluation)
|
||||
|
||||
# Evaluate multiple outputs with context building
|
||||
outputs = [
|
||||
"Agent response 1: The calculation is 2+2=4",
|
||||
"Agent response 2: The weather is sunny today"
|
||||
]
|
||||
evaluations = judge.run(tasks=outputs)
|
||||
```
|
||||
|
||||
Methods:
|
||||
step(tasks: List[str]) -> str:
|
||||
Processes a list of tasks and returns the agent's response.
|
||||
step(task: str = None, tasks: List[str] = None, img: str = None) -> str:
|
||||
Processes a single task or list of tasks and returns the agent's evaluation.
|
||||
run(task: str = None, tasks: List[str] = None, img: str = None) -> List[str]:
|
||||
Executes evaluation in a loop with context building, collecting responses.
|
||||
|
||||
run(tasks: List[str]) -> List[str]:
|
||||
Executes the tasks in a loop, updating context and collecting responses.
|
||||
run_batched(tasks: List[str] = None, imgs: List[str] = None) -> List[str]:
|
||||
Executes batch evaluation of tasks with corresponding images.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
agent_name: str = "agent-judge-01",
|
||||
id: str = str(uuid.uuid4()),
|
||||
agent_name: str = "Agent Judge",
|
||||
description: str = "You're an expert AI agent judge. Carefully review the following output(s) generated by another agent. Your job is to provide a detailed, constructive, and actionable critique that will help the agent improve its future performance.",
|
||||
system_prompt: str = AGENT_JUDGE_PROMPT,
|
||||
model_name: str = "openai/o1",
|
||||
max_loops: int = 1,
|
||||
) -> None:
|
||||
"""
|
||||
Initializes the AgentJudge with the specified parameters.
|
||||
|
||||
Args:
|
||||
agent_name (str): The name of the agent judge.
|
||||
system_prompt (str): The system prompt for the agent.
|
||||
model_name (str): The model name used for generating responses.
|
||||
max_loops (int): The maximum number of iterations to run the tasks.
|
||||
"""
|
||||
verbose: bool = False,
|
||||
evaluation_criteria: Optional[Dict[str, float]] = None,
|
||||
*args,
|
||||
**kwargs,
|
||||
):
|
||||
self.id = id
|
||||
self.agent_name = agent_name
|
||||
self.system_prompt = system_prompt
|
||||
self.model_name = model_name
|
||||
self.conversation = Conversation(time_enabled=False)
|
||||
self.max_loops = max_loops
|
||||
self.verbose = verbose
|
||||
|
||||
self.evaluation_criteria = evaluation_criteria or {}
|
||||
|
||||
# Enhance system prompt with evaluation criteria if provided
|
||||
enhanced_prompt = system_prompt
|
||||
if self.evaluation_criteria:
|
||||
criteria_str = "\n\nEvaluation Criteria:\n"
|
||||
for criterion, weight in self.evaluation_criteria.items():
|
||||
criteria_str += f"- {criterion}: weight = {weight}\n"
|
||||
enhanced_prompt += criteria_str
|
||||
|
||||
self.agent = Agent(
|
||||
agent_name=agent_name,
|
||||
agent_description="You're the agent judge",
|
||||
system_prompt=AGENT_JUDGE_PROMPT,
|
||||
agent_description=description,
|
||||
system_prompt=enhanced_prompt,
|
||||
model_name=model_name,
|
||||
max_loops=1,
|
||||
*args,
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
def step(self, tasks: List[str]) -> str:
|
||||
def feedback_cycle_step(
|
||||
self,
|
||||
agent: Union[Agent, callable],
|
||||
task: str,
|
||||
img: Optional[str] = None,
|
||||
):
|
||||
try:
|
||||
# First run the main agent
|
||||
agent_output = agent.run(task=task, img=img)
|
||||
|
||||
# Then run the judge agent
|
||||
judge_output = self.run(task=agent_output, img=img)
|
||||
|
||||
# Run the main agent again with the judge's feedback, using a much improved prompt
|
||||
improved_prompt = (
|
||||
f"You have received the following detailed feedback from the expert agent judge ({self.agent_name}):\n\n"
|
||||
f"--- FEEDBACK START ---\n{judge_output}\n--- FEEDBACK END ---\n\n"
|
||||
f"Your task is to thoughtfully revise and enhance your previous output based on this critique. "
|
||||
f"Carefully address all identified weaknesses, incorporate the suggestions, and strive to maximize the strengths noted. "
|
||||
f"Be specific, accurate, and actionable in your improvements. "
|
||||
f"Here is the original task for reference:\n\n"
|
||||
f"--- TASK ---\n{task}\n--- END TASK ---\n\n"
|
||||
f"Please provide your improved and fully revised output below."
|
||||
)
|
||||
|
||||
return agent.run(task=improved_prompt, img=img)
|
||||
except Exception as e:
|
||||
raise AgentJudgeFeedbackCycleError(
|
||||
f"Error In Agent Judge Feedback Cycle: {e} Traceback: {traceback.format_exc()}"
|
||||
)
|
||||
|
||||
def feedback_cycle(
|
||||
self,
|
||||
agent: Union[Agent, callable],
|
||||
task: str,
|
||||
img: Optional[str] = None,
|
||||
loops: int = 1,
|
||||
):
|
||||
loop = 0
|
||||
original_task = task # Preserve the original task
|
||||
current_output = None # Track the current output
|
||||
all_outputs = [] # Collect all outputs from each iteration
|
||||
|
||||
while loop < loops:
|
||||
# First iteration: run the standard feedback cycle step
|
||||
current_output = self.feedback_cycle_step(
|
||||
agent, original_task, img
|
||||
)
|
||||
|
||||
# Add the current output to our collection
|
||||
all_outputs.append(current_output)
|
||||
loop += 1
|
||||
|
||||
return all_outputs
|
||||
|
||||
def step(
|
||||
self,
|
||||
task: str = None,
|
||||
tasks: Optional[List[str]] = None,
|
||||
img: Optional[str] = None,
|
||||
) -> str:
|
||||
"""
|
||||
Processes a list of tasks and returns the agent's response.
|
||||
Processes a single task or list of tasks and returns the agent's evaluation.
|
||||
|
||||
This method performs a one-shot evaluation of the provided content. It takes
|
||||
either a single task string or a list of tasks and generates a comprehensive
|
||||
evaluation with strengths, weaknesses, and improvement suggestions.
|
||||
|
||||
Args:
|
||||
tasks (List[str]): A list of tasks to be processed.
|
||||
task (str, optional): A single task/output to be evaluated.
|
||||
tasks (List[str], optional): A list of tasks/outputs to be evaluated.
|
||||
img (str, optional): Path to an image file for multimodal evaluation.
|
||||
|
||||
Returns:
|
||||
str: The response generated by the agent.
|
||||
str: A detailed evaluation response from the agent including:
|
||||
- Strengths: What the agent/output did well
|
||||
- Weaknesses: Areas that need improvement
|
||||
- Suggestions: Specific recommendations for improvement
|
||||
- Factual accuracy assessment
|
||||
|
||||
Raises:
|
||||
ValueError: If neither task nor tasks are provided.
|
||||
|
||||
Example:
|
||||
```python
|
||||
# Single task evaluation
|
||||
evaluation = judge.step(task="The answer is 42.")
|
||||
|
||||
|
||||
# Multiple tasks evaluation
|
||||
evaluation = judge.step(tasks=[
|
||||
"Response 1: Paris is the capital of France",
|
||||
"Response 2: 2 + 2 = 5" # Incorrect
|
||||
])
|
||||
|
||||
# Multimodal evaluation
|
||||
evaluation = judge.step(
|
||||
task="Describe this image",
|
||||
img="path/to/image.jpg"
|
||||
)
|
||||
```
|
||||
"""
|
||||
prompt = any_to_str(tasks)
|
||||
logger.debug(f"Running step with prompt: {prompt}")
|
||||
try:
|
||||
prompt = ""
|
||||
if tasks:
|
||||
prompt = any_to_str(tasks)
|
||||
elif task:
|
||||
prompt = task
|
||||
else:
|
||||
raise ValueError("No tasks or task provided")
|
||||
|
||||
print(prompt)
|
||||
# Add the evaluation criteria to the task description
|
||||
task_instruction = "You are an expert AI agent judge. Carefully review the following output(s) generated by another agent. "
|
||||
task_instruction += "Your job is to provide a detailed, constructive, and actionable critique that will help the agent improve its future performance. "
|
||||
task_instruction += (
|
||||
"Your feedback should address the following points:\n"
|
||||
)
|
||||
task_instruction += "1. Strengths: What did the agent do well? Highlight any correct reasoning, clarity, or effective problem-solving.\n"
|
||||
task_instruction += "2. Weaknesses: Identify any errors, omissions, unclear reasoning, or areas where the output could be improved.\n"
|
||||
task_instruction += "3. Suggestions: Offer specific, practical recommendations for how the agent can improve its next attempt. "
|
||||
task_instruction += "This may include advice on reasoning, structure, completeness, or style.\n"
|
||||
task_instruction += "4. If relevant, point out any factual inaccuracies or logical inconsistencies.\n"
|
||||
|
||||
response = self.agent.run(
|
||||
task=f"Evaluate the following output or outputs: {prompt}"
|
||||
)
|
||||
logger.debug(f"Received response: {response}")
|
||||
# Add the evaluation criteria to the task instructions
|
||||
if self.evaluation_criteria:
|
||||
list(self.evaluation_criteria.keys())
|
||||
task_instruction += "\nPlease use these specific evaluation criteria with their respective weights:\n"
|
||||
for (
|
||||
criterion,
|
||||
weight,
|
||||
) in self.evaluation_criteria.items():
|
||||
task_instruction += (
|
||||
f"- {criterion}: weight = {weight}\n"
|
||||
)
|
||||
|
||||
return response
|
||||
task_instruction += "Be thorough, objective, and professional. Your goal is to help the agent learn and produce better results in the future.\n\n"
|
||||
task_instruction += f"Output(s) to evaluate:\n{prompt}\n"
|
||||
|
||||
def run(self, tasks: List[str]) -> List[str]:
|
||||
response = self.agent.run(
|
||||
task=task_instruction,
|
||||
img=img,
|
||||
)
|
||||
|
||||
return response
|
||||
except Exception as e:
|
||||
error_message = (
|
||||
f"AgentJudge encountered an error: {e}\n"
|
||||
f"Traceback:\n{traceback.format_exc()}\n\n"
|
||||
"If this issue persists, please:\n"
|
||||
"- Open a GitHub issue: https://github.com/swarms-ai/swarms/issues\n"
|
||||
"- Join our Discord for real-time support: swarms.ai\n"
|
||||
"- Or book a call: https://cal.com/swarms\n"
|
||||
)
|
||||
raise AgentJudgeExecutionError(error_message)
|
||||
|
||||
def run(
|
||||
self,
|
||||
task: str = None,
|
||||
tasks: Optional[List[str]] = None,
|
||||
img: Optional[str] = None,
|
||||
):
|
||||
"""
|
||||
Executes the tasks in a loop, updating context and collecting responses.
|
||||
Executes evaluation in multiple iterations with context building and refinement.
|
||||
|
||||
This method runs the evaluation process for the specified number of max_loops,
|
||||
where each iteration builds upon the previous context. This allows for iterative
|
||||
refinement of evaluations and deeper analysis over multiple passes.
|
||||
|
||||
Args:
|
||||
tasks (List[str]): A list of tasks to be executed.
|
||||
task (str, optional): A single task/output to be evaluated.
|
||||
tasks (List[str], optional): A list of tasks/outputs to be evaluated.
|
||||
img (str, optional): Path to an image file for multimodal evaluation.
|
||||
|
||||
Returns:
|
||||
List[str]: A list of responses generated by the agent for each iteration.
|
||||
List[str]: A list of evaluation responses, one for each iteration.
|
||||
Each subsequent evaluation includes context from previous iterations.
|
||||
|
||||
Example:
|
||||
```python
|
||||
# Single task with iterative refinement
|
||||
judge = AgentJudge(max_loops=3)
|
||||
evaluations = judge.run(task="Agent output to evaluate")
|
||||
# Returns 3 evaluations, each building on the previous
|
||||
|
||||
# Multiple tasks with context building
|
||||
evaluations = judge.run(tasks=[
|
||||
"First agent response",
|
||||
"Second agent response"
|
||||
])
|
||||
|
||||
# With image analysis
|
||||
evaluations = judge.run(
|
||||
task="Analyze this chart",
|
||||
img="chart.png"
|
||||
)
|
||||
```
|
||||
|
||||
Note:
|
||||
- The first iteration evaluates the original task(s)
|
||||
- Subsequent iterations include context from previous evaluations
|
||||
- This enables deeper analysis and refinement of judgments
|
||||
- Useful for complex evaluations requiring multiple perspectives
|
||||
"""
|
||||
responses = []
|
||||
context = ""
|
||||
|
||||
for _ in range(self.max_loops):
|
||||
# Add context to the tasks if available
|
||||
if context:
|
||||
contextualized_tasks = [
|
||||
f"Previous context: {context}\nTask: {task}"
|
||||
for task in tasks
|
||||
]
|
||||
else:
|
||||
contextualized_tasks = tasks
|
||||
try:
|
||||
responses = []
|
||||
context = ""
|
||||
|
||||
# Convert single task to list for consistent processing
|
||||
if task and not tasks:
|
||||
tasks = [task]
|
||||
task = None # Clear to avoid confusion in step method
|
||||
|
||||
for _ in range(self.max_loops):
|
||||
# Add context to the tasks if available
|
||||
if context and tasks:
|
||||
contextualized_tasks = [
|
||||
f"Previous context: {context}\nTask: {t}"
|
||||
for t in tasks
|
||||
]
|
||||
else:
|
||||
contextualized_tasks = tasks
|
||||
|
||||
# Get response for current iteration
|
||||
current_response = self.step(
|
||||
task=task,
|
||||
tasks=contextualized_tasks,
|
||||
img=img,
|
||||
)
|
||||
|
||||
responses.append(current_response)
|
||||
|
||||
# Get response for current iteration
|
||||
current_response = self.step(contextualized_tasks)
|
||||
responses.append(current_response)
|
||||
logger.debug(
|
||||
f"Current response added: {current_response}"
|
||||
# Update context for next iteration
|
||||
context = current_response
|
||||
|
||||
return responses
|
||||
except Exception as e:
|
||||
error_message = (
|
||||
f"AgentJudge encountered an error: {e}\n"
|
||||
f"Traceback:\n{traceback.format_exc()}\n\n"
|
||||
"If this issue persists, please:\n"
|
||||
"- Open a GitHub issue: https://github.com/swarms-ai/swarms/issues\n"
|
||||
"- Join our Discord for real-time support: swarms.ai\n"
|
||||
"- Or book a call: https://cal.com/swarms\n"
|
||||
)
|
||||
raise AgentJudgeExecutionError(error_message)
|
||||
|
||||
def run_batched(
|
||||
self,
|
||||
tasks: Optional[List[str]] = None,
|
||||
imgs: Optional[List[str]] = None,
|
||||
):
|
||||
"""
|
||||
Executes batch evaluation of multiple tasks with corresponding images.
|
||||
|
||||
This method processes multiple task-image pairs independently, where each
|
||||
task can be evaluated with its corresponding image. Unlike the run() method,
|
||||
this doesn't build context between different tasks - each is evaluated
|
||||
independently.
|
||||
|
||||
# Update context for next iteration
|
||||
context = current_response
|
||||
|
||||
# Add to conversation history
|
||||
logger.debug("Added message to conversation history.")
|
||||
Args:
|
||||
tasks (List[str], optional): A list of tasks/outputs to be evaluated.
|
||||
imgs (List[str], optional): A list of image paths corresponding to each task.
|
||||
Must be the same length as tasks if provided.
|
||||
|
||||
Returns:
|
||||
List[List[str]]: A list of evaluation responses for each task. Each inner
|
||||
list contains the responses from all iterations (max_loops)
|
||||
for that particular task.
|
||||
|
||||
|
||||
Example:
|
||||
```python
|
||||
# Batch evaluation with images
|
||||
tasks = [
|
||||
"Describe what you see in this image",
|
||||
"What's wrong with this chart?",
|
||||
"Analyze the trends shown"
|
||||
]
|
||||
images = [
|
||||
"photo1.jpg",
|
||||
"chart1.png",
|
||||
"graph1.png"
|
||||
]
|
||||
evaluations = judge.run_batched(tasks=tasks, imgs=images)
|
||||
# Returns evaluations for each task-image pair
|
||||
|
||||
# Batch evaluation without images
|
||||
evaluations = judge.run_batched(tasks=[
|
||||
"Agent response 1",
|
||||
"Agent response 2",
|
||||
"Agent response 3"
|
||||
])
|
||||
```
|
||||
|
||||
|
||||
Note:
|
||||
- Each task is processed independently
|
||||
- If imgs is provided, it must have the same length as tasks
|
||||
- Each task goes through max_loops iterations independently
|
||||
- No context is shared between different tasks in the batch
|
||||
"""
|
||||
responses = []
|
||||
# Allow imgs to be omitted, as documented above; each task then runs without an image
imgs = imgs if imgs is not None else [None] * len(tasks)
for task, img in zip(tasks, imgs):
|
||||
response = self.run(task=task, img=img)
|
||||
responses.append(response)
|
||||
|
||||
return responses
|
||||
|
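A minimal sketch of driving the feedback_cycle helper defined above (the worker agent name, model, and task are illustrative assumptions, not part of the original code):

```python
from swarms import AgentJudge
from swarms.structs.agent import Agent

# Hypothetical worker agent; name, model, and task are illustrative only.
worker = Agent(
    agent_name="research-writer",
    model_name="gpt-4",
    max_loops=1,
)
judge = AgentJudge(agent_name="quality-judge", model_name="gpt-4")

# Each loop runs the worker, critiques its output, and re-runs the worker
# with the critique folded into the prompt; all revised outputs are returned.
revisions = judge.feedback_cycle(
    agent=worker,
    task="Summarize the key findings of the attached report.",
    loops=2,
)
print(revisions[-1])  # latest revision
```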
@ -0,0 +1,270 @@
|
||||
import uuid
|
||||
from typing import Any, Callable, Dict, List, Optional, Union
|
||||
|
||||
from swarms.structs.agent import Agent
|
||||
from swarms.structs.concurrent_workflow import ConcurrentWorkflow
|
||||
from swarms.structs.conversation import Conversation
|
||||
|
||||
|
||||
def _create_voting_prompt(candidate_agents: List[Agent]) -> str:
|
||||
"""
|
||||
Create a comprehensive voting prompt for the election.
|
||||
|
||||
This function generates a detailed prompt that instructs voter agents on:
|
||||
- Available candidates
|
||||
- Required structured output format
|
||||
- Evaluation criteria
|
||||
- Voting guidelines
|
||||
|
||||
Returns:
|
||||
str: A formatted voting prompt string
|
||||
"""
|
||||
candidate_names = [
|
||||
(agent.agent_name if hasattr(agent, "agent_name") else str(i))
|
||||
for i, agent in enumerate(candidate_agents)
|
||||
]
|
||||
|
||||
prompt = f"""
|
||||
You are participating in an election to choose the best candidate agent.
|
||||
|
||||
Available candidates: {', '.join(candidate_names)}
|
||||
|
||||
Please vote for one candidate and provide your reasoning with the following structured output:
|
||||
|
||||
1. rationality: A detailed explanation of the reasoning behind your decision. Include logical considerations, supporting evidence, and trade-offs that were evaluated when selecting this candidate.
|
||||
|
||||
2. self_interest: A comprehensive discussion of how self-interest influenced your decision, if at all. Explain whether personal or role-specific incentives played a role, or if your choice was primarily for the collective benefit of the swarm.
|
||||
|
||||
3. candidate_agent_name: The full name or identifier of the candidate you are voting for. This should exactly match one of the available candidate names listed above.
|
||||
|
||||
Consider the candidates' capabilities, experience, and alignment with the swarm's objectives when making your decision.
|
||||
"""
|
||||
|
||||
print(prompt)
|
||||
|
||||
return prompt
|
||||
|
||||
|
||||
def get_vote_schema():
|
||||
return [
|
||||
{
|
||||
"type": "function",
|
||||
"function": {
|
||||
"name": "vote",
|
||||
"description": "Cast a vote for a CEO candidate with reasoning and self-interest analysis.",
|
||||
"parameters": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"rationality": {
|
||||
"type": "string",
|
||||
"description": "A detailed explanation of the reasoning behind this voting decision.",
|
||||
},
|
||||
"self_interest": {
|
||||
"type": "string",
|
||||
"description": "A comprehensive discussion of how self-interest factored into the decision.",
|
||||
},
|
||||
"candidate_agent_name": {
|
||||
"type": "string",
|
||||
"description": "The full name or identifier of the chosen candidate.",
|
||||
},
|
||||
},
|
||||
"required": [
|
||||
"rationality",
|
||||
"self_interest",
|
||||
"candidate_agent_name",
|
||||
],
|
||||
},
|
||||
},
|
||||
}
|
||||
]
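For reference, a hedged sketch of the arguments a voter agent is expected to return through this schema (the values below are fabricated for illustration):

```python
# Illustrative payload for the "vote" function defined by get_vote_schema().
example_vote = {
    "rationality": "Candidate A showed the strongest track record on the stated objectives.",
    "self_interest": "No personal incentive influenced this vote; it was cast for the swarm's benefit.",
    "candidate_agent_name": "Candidate-A",
}
```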
|
||||
|
||||
|
||||
class ElectionSwarm:
|
||||
"""
|
||||
A swarm system that conducts elections among multiple agents to choose the best candidate.
|
||||
|
||||
The ElectionSwarm orchestrates a voting process where multiple voter agents evaluate
|
||||
and vote for candidate agents based on their capabilities, experience, and alignment
|
||||
with swarm objectives. The system uses structured output to ensure consistent voting
|
||||
format and provides detailed reasoning for each vote.
|
||||
|
||||
Attributes:
|
||||
id (str): Unique identifier for the election swarm
|
||||
name (str): Name of the election swarm
|
||||
description (str): Description of the election swarm's purpose
|
||||
max_loops (int): Maximum number of voting rounds (default: 1)
|
||||
agents (List[Agent]): List of voter agents that will participate in the election
|
||||
candidate_agents (List[Agent]): List of candidate agents to be voted on
|
||||
kwargs (dict): Additional keyword arguments
|
||||
show_dashboard (bool): Whether to display the election dashboard
|
||||
conversation (Conversation): Conversation history for the election
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
name: str = "Election Swarm",
|
||||
description: str = "An election swarm is a swarm of agents that will vote on a candidate.",
|
||||
agents: Union[List[Agent], List[Callable]] = None,
|
||||
candidate_agents: Union[List[Agent], List[Callable]] = None,
|
||||
id: str = str(uuid.uuid4()),
|
||||
max_loops: int = 1,
|
||||
show_dashboard: bool = True,
|
||||
**kwargs,
|
||||
):
|
||||
"""
|
||||
Initialize the ElectionSwarm.
|
||||
|
||||
Args:
|
||||
name (str, optional): Name of the election swarm
|
||||
description (str, optional): Description of the election swarm's purpose
|
||||
agents (Union[List[Agent], List[Callable]], optional): List of voter agents
|
||||
candidate_agents (Union[List[Agent], List[Callable]], optional): List of candidate agents
|
||||
id (str, optional): Unique identifier for the election swarm
|
||||
max_loops (int, optional): Maximum number of voting rounds (default: 1)
|
||||
show_dashboard (bool, optional): Whether to display the election dashboard (default: True)
|
||||
**kwargs: Additional keyword arguments
|
||||
"""
|
||||
self.id = id
|
||||
self.name = name
|
||||
self.description = description
|
||||
self.max_loops = max_loops
|
||||
self.agents = agents
|
||||
self.candidate_agents = candidate_agents
|
||||
self.kwargs = kwargs
|
||||
self.show_dashboard = show_dashboard
|
||||
self.conversation = Conversation()
|
||||
|
||||
self.reliability_check()
|
||||
|
||||
self.setup_voter_agents()
|
||||
|
||||
def reliability_check(self):
|
||||
"""
|
||||
Check the reliability of the voter agents.
|
||||
"""
|
||||
if self.agents is None:
|
||||
raise ValueError("Voter agents are not set")
|
||||
|
||||
if self.candidate_agents is None:
|
||||
raise ValueError("Candidate agents are not set")
|
||||
|
||||
if self.max_loops is None or self.max_loops < 1:
|
||||
raise ValueError("Max loops are not set")
|
||||
|
||||
def setup_concurrent_workflow(self):
|
||||
"""
|
||||
Create a concurrent workflow for running voter agents in parallel.
|
||||
|
||||
Returns:
|
||||
ConcurrentWorkflow: A configured concurrent workflow for the election
|
||||
"""
|
||||
return ConcurrentWorkflow(
|
||||
name=self.name,
|
||||
description=self.description,
|
||||
agents=self.agents,
|
||||
output_type="dict-all-except-first",
|
||||
show_dashboard=self.show_dashboard,
|
||||
)
|
||||
|
||||
def run_voter_agents(
|
||||
self, task: str, img: Optional[str] = None, *args, **kwargs
|
||||
):
|
||||
"""
|
||||
Execute the voting process by running all voter agents concurrently.
|
||||
|
||||
Args:
|
||||
task (str): The election task or question to be voted on
|
||||
img (Optional[str], optional): Image path if visual voting is required
|
||||
*args: Additional positional arguments
|
||||
**kwargs: Additional keyword arguments
|
||||
|
||||
Returns:
|
||||
List[Dict[str, Any]]: Results from all voter agents containing their votes and reasoning
|
||||
"""
|
||||
concurrent_workflow = self.setup_concurrent_workflow()
|
||||
|
||||
results = concurrent_workflow.run(
|
||||
task=task, img=img, *args, **kwargs
|
||||
)
|
||||
|
||||
conversation_history = (
|
||||
concurrent_workflow.conversation.conversation_history
|
||||
)
|
||||
|
||||
for message in conversation_history:
|
||||
self.conversation.add(
|
||||
role=message["role"], content=message["content"]
|
||||
)
|
||||
|
||||
return results
|
||||
|
||||
def parse_results(
|
||||
self, results: List[Dict[str, Any]]
|
||||
) -> Dict[str, int]:
|
||||
"""
|
||||
Parse voting results to count votes for each candidate.
|
||||
|
||||
Args:
|
||||
results (List[Dict[str, Any]]): List of voting results from voter agents
|
||||
|
||||
Returns:
|
||||
Dict[str, int]: Dictionary mapping candidate names to their vote counts
|
||||
"""
|
||||
# Count the number of votes for each candidate
|
||||
vote_counts = {}
|
||||
for result in results:
|
||||
candidate_name = result["candidate_agent_name"]
|
||||
vote_counts[candidate_name] = (
|
||||
vote_counts.get(candidate_name, 0) + 1
|
||||
)
|
||||
|
||||
# Return the raw tally; the winner can be derived with max(vote_counts, key=vote_counts.get)
|
||||
|
||||
return vote_counts
|
||||
|
||||
def run(
|
||||
self, task: str, img: Optional[str] = None, *args, **kwargs
|
||||
):
|
||||
"""
|
||||
Execute the complete election process.
|
||||
|
||||
This method orchestrates the entire election by:
|
||||
1. Adding the task to the conversation history
|
||||
2. Running all voter agents concurrently
|
||||
3. Collecting and processing the voting results
|
||||
|
||||
Args:
|
||||
task (str): The election task or question to be voted on
|
||||
img (Optional[str], optional): Image path if visual voting is required
|
||||
*args: Additional positional arguments
|
||||
**kwargs: Additional keyword arguments
|
||||
|
||||
Returns:
|
||||
List[Dict[str, Any]]: Complete voting results from all agents
|
||||
"""
|
||||
self.conversation.add(role="user", content=task)
|
||||
|
||||
results = self.run_voter_agents(task, img, *args, **kwargs)
|
||||
|
||||
print(results)
|
||||
|
||||
return results
|
||||
|
||||
def setup_voter_agents(self):
|
||||
"""
|
||||
Configure voter agents with structured output capabilities and voting prompts.
|
||||
|
||||
This method sets up each voter agent with:
|
||||
- Structured output schema for consistent voting format
|
||||
- Voting-specific system prompts
|
||||
- Tools for structured response generation
|
||||
|
||||
Returns:
|
||||
List[Agent]: Configured voter agents ready for the election
|
||||
"""
|
||||
schema = get_vote_schema()
|
||||
prompt = _create_voting_prompt(self.candidate_agents)
|
||||
|
||||
for agent in self.agents:
|
||||
agent.tools_list_dictionary = schema
|
||||
agent.system_prompt += f"\n\n{prompt}"
|
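A minimal sketch of running an election with the class above (voter and candidate names, models, and the task are illustrative assumptions):

```python
from swarms.structs.agent import Agent

# Hypothetical voters and candidates; names and models are illustrative.
voters = [
    Agent(agent_name=f"voter-{i}", model_name="gpt-4", max_loops=1)
    for i in range(3)
]
candidates = [
    Agent(agent_name="candidate-alpha", model_name="gpt-4", max_loops=1),
    Agent(agent_name="candidate-beta", model_name="gpt-4", max_loops=1),
]

election = ElectionSwarm(
    agents=voters,
    candidate_agents=candidates,
    show_dashboard=False,
)
results = election.run(task="Elect the agent best suited to lead the research swarm.")

# Assuming results arrive in the List[Dict] form parse_results expects,
# the tally maps candidate names to vote counts.
tally = election.parse_results(results)
```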
@ -0,0 +1,253 @@
|
||||
from swarms.structs.agent import Agent
|
||||
from typing import List
|
||||
from swarms.structs.conversation import Conversation
|
||||
import uuid
|
||||
import random
|
||||
from loguru import logger
|
||||
from typing import Optional
|
||||
|
||||
|
||||
class QASwarm:
|
||||
"""
|
||||
A Question and Answer swarm system where random agents ask questions to speaker agents.
|
||||
|
||||
This system allows for dynamic Q&A sessions where:
|
||||
- Multiple agents can act as questioners
|
||||
- One or multiple agents can act as speakers/responders
|
||||
- Questions are asked randomly by different agents
|
||||
- The conversation is tracked and managed
|
||||
- Agents are showcased to each other with detailed information
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
name: str = "QandA",
|
||||
description: str = "Question and Answer Swarm System",
|
||||
agents: List[Agent] = None,
|
||||
speaker_agents: List[Agent] = None,
|
||||
id: str = str(uuid.uuid4()),
|
||||
max_loops: int = 5,
|
||||
show_dashboard: bool = True,
|
||||
speaker_agent: Agent = None,
|
||||
showcase_agents: bool = True,
|
||||
**kwargs,
|
||||
):
|
||||
self.id = id
|
||||
self.name = name
|
||||
self.description = description
|
||||
self.max_loops = max_loops
|
||||
self.show_dashboard = show_dashboard
|
||||
self.agents = agents or []
|
||||
self.speaker_agents = speaker_agents or []
|
||||
self.kwargs = kwargs
|
||||
self.speaker_agent = speaker_agent
|
||||
self.showcase_agents = showcase_agents
|
||||
|
||||
self.conversation = Conversation()
|
||||
|
||||
# Validate setup
|
||||
self._validate_setup()
|
||||
|
||||
def _validate_setup(self):
|
||||
"""Validate that the Q&A system is properly configured."""
|
||||
if not self.agents:
|
||||
logger.warning(
|
||||
"No questioner agents provided. Add agents using add_agent() method."
|
||||
)
|
||||
|
||||
if not self.speaker_agents and not self.speaker_agent:
|
||||
logger.warning(
|
||||
"No speaker agents provided. Add speaker agents using add_speaker_agent() method."
|
||||
)
|
||||
|
||||
if (
|
||||
not self.agents
|
||||
and not self.speaker_agents
|
||||
and not self.speaker_agent
|
||||
):
|
||||
raise ValueError(
|
||||
"At least one agent (questioner or speaker) must be provided."
|
||||
)
|
||||
|
||||
def add_agent(self, agent: Agent):
|
||||
"""Add a questioner agent to the swarm."""
|
||||
self.agents.append(agent)
|
||||
logger.info(f"Added questioner agent: {agent.agent_name}")
|
||||
|
||||
def add_speaker_agent(self, agent: Agent):
|
||||
"""Add a speaker agent to the swarm."""
|
||||
if self.speaker_agents is None:
|
||||
self.speaker_agents = []
|
||||
self.speaker_agents.append(agent)
|
||||
logger.info(f"Added speaker agent: {agent.agent_name}")
|
||||
|
||||
def get_agent_info(self, agent: Agent) -> dict:
|
||||
"""Extract key information about an agent for showcasing."""
|
||||
info = {
|
||||
"name": getattr(agent, "agent_name", "Unknown Agent"),
|
||||
"description": getattr(
|
||||
agent, "agent_description", "No description available"
|
||||
),
|
||||
"role": getattr(agent, "role", "worker"),
|
||||
}
|
||||
|
||||
# Get system prompt preview (first 50 characters)
|
||||
system_prompt = getattr(agent, "system_prompt", "")
|
||||
if system_prompt:
|
||||
info["system_prompt_preview"] = (
|
||||
system_prompt[:50] + "..."
|
||||
if len(system_prompt) > 50
|
||||
else system_prompt
|
||||
)
|
||||
else:
|
||||
info["system_prompt_preview"] = (
|
||||
"No system prompt available"
|
||||
)
|
||||
|
||||
return info
|
||||
|
||||
def showcase_speaker_to_questioner(
|
||||
self, questioner: Agent, speaker: Agent
|
||||
) -> str:
|
||||
"""Create a showcase prompt introducing the speaker agent to the questioner."""
|
||||
speaker_info = self.get_agent_info(speaker)
|
||||
|
||||
showcase_prompt = f"""
|
||||
You are about to ask a question to a specialized agent. Here's what you need to know about them:
|
||||
|
||||
**Speaker Agent Information:**
|
||||
- **Name**: {speaker_info['name']}
|
||||
- **Role**: {speaker_info['role']}
|
||||
- **Description**: {speaker_info['description']}
|
||||
- **System Prompt Preview**: {speaker_info['system_prompt_preview']}
|
||||
|
||||
Please craft a thoughtful, relevant question that takes into account this agent's expertise and background.
|
||||
Your question should be specific and demonstrate that you understand their role and capabilities.
|
||||
"""
|
||||
return showcase_prompt
|
||||
|
||||
def showcase_questioner_to_speaker(
|
||||
self, speaker: Agent, questioner: Agent
|
||||
) -> str:
|
||||
"""Create a showcase prompt introducing the questioner agent to the speaker."""
|
||||
questioner_info = self.get_agent_info(questioner)
|
||||
|
||||
showcase_prompt = f"""
|
||||
You are about to answer a question from another agent. Here's what you need to know about them:
|
||||
|
||||
**Questioner Agent Information:**
|
||||
- **Name**: {questioner_info['name']}
|
||||
- **Role**: {questioner_info['role']}
|
||||
- **Description**: {questioner_info['description']}
|
||||
- **System Prompt Preview**: {questioner_info['system_prompt_preview']}
|
||||
|
||||
Please provide a comprehensive answer that demonstrates your expertise and addresses their question thoroughly.
|
||||
Consider their background and role when formulating your response.
|
||||
"""
|
||||
return showcase_prompt
|
||||
|
||||
def random_select_agent(self, agents: List[Agent]) -> Agent:
|
||||
"""Randomly select an agent from the list."""
|
||||
if not agents:
|
||||
raise ValueError("No agents available for selection")
|
||||
return random.choice(agents)
|
||||
|
||||
def get_current_speaker(self) -> Agent:
|
||||
"""Get the current speaker agent (either from speaker_agents list or single speaker_agent)."""
|
||||
if self.speaker_agent:
|
||||
return self.speaker_agent
|
||||
elif self.speaker_agents:
|
||||
return self.random_select_agent(self.speaker_agents)
|
||||
else:
|
||||
raise ValueError("No speaker agent available")
|
||||
|
||||
def run(
|
||||
self, task: str, img: Optional[str] = None, *args, **kwargs
|
||||
):
|
||||
"""Run the Q&A session with agent showcasing."""
|
||||
self.conversation.add(role="user", content=task)
|
||||
|
||||
# Get current speaker
|
||||
current_speaker = self.get_current_speaker()
|
||||
|
||||
# Select a random questioner
|
||||
questioner = self.random_select_agent(self.agents)
|
||||
|
||||
# Showcase agents to each other if enabled
|
||||
if self.showcase_agents:
|
||||
# Showcase speaker to questioner
|
||||
speaker_showcase = self.showcase_speaker_to_questioner(
|
||||
questioner, current_speaker
|
||||
)
|
||||
questioner_task = f"{speaker_showcase}\n\nNow ask a question about: {task}"
|
||||
|
||||
# Showcase questioner to speaker
|
||||
questioner_showcase = self.showcase_questioner_to_speaker(
|
||||
current_speaker, questioner
|
||||
)
|
||||
else:
|
||||
questioner_task = f"Ask a question about {task} to {current_speaker.agent_name}"
|
||||
|
||||
# Generate question
|
||||
question = questioner.run(
|
||||
task=questioner_task,
|
||||
img=img,
|
||||
*args,
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
self.conversation.add(
|
||||
role=questioner.agent_name, content=question
|
||||
)
|
||||
|
||||
# Prepare answer task with showcasing if enabled
|
||||
if self.showcase_agents:
|
||||
answer_task = f"{questioner_showcase}\n\nAnswer this question from {questioner.agent_name}: {question}"
|
||||
else:
|
||||
answer_task = f"Answer the question '{question}' from {questioner.agent_name}"
|
||||
|
||||
# Generate answer
|
||||
answer = current_speaker.run(
|
||||
task=answer_task,
|
||||
img=img,
|
||||
*args,
|
||||
**kwargs,
|
||||
)
|
||||
|
||||
self.conversation.add(
|
||||
role=current_speaker.agent_name, content=answer
|
||||
)
|
||||
|
||||
return answer
|
||||
|
||||
def run_multi_round(
|
||||
self,
|
||||
task: str,
|
||||
rounds: int = 3,
|
||||
img: Optional[str] = None,
|
||||
*args,
|
||||
**kwargs,
|
||||
):
|
||||
"""Run multiple rounds of Q&A with different questioners."""
|
||||
results = []
|
||||
|
||||
for round_num in range(rounds):
|
||||
logger.info(
|
||||
f"Starting Q&A round {round_num + 1}/{rounds}"
|
||||
)
|
||||
|
||||
round_result = self.run(task, img, *args, **kwargs)
|
||||
results.append(
|
||||
{"round": round_num + 1, "result": round_result}
|
||||
)
|
||||
|
||||
return results
|
||||
|
||||
def get_conversation_history(self):
|
||||
"""Get the conversation history."""
|
||||
return self.conversation.get_history()
|
||||
|
||||
def clear_conversation(self):
|
||||
"""Clear the conversation history."""
|
||||
self.conversation = Conversation()
|
||||
logger.info("Conversation history cleared")
|
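A hedged usage sketch for the Q&A swarm above (agent names, models, and topics are illustrative assumptions):

```python
from swarms.structs.agent import Agent

# Hypothetical questioner and speaker agents; names and models are illustrative.
questioners = [
    Agent(agent_name="curious-analyst", model_name="gpt-4", max_loops=1),
    Agent(agent_name="skeptical-reviewer", model_name="gpt-4", max_loops=1),
]
domain_expert = Agent(agent_name="ml-expert", model_name="gpt-4", max_loops=1)

qa = QASwarm(
    agents=questioners,
    speaker_agent=domain_expert,
    showcase_agents=True,
)

# Single round: a randomly selected questioner asks, the speaker answers.
answer = qa.run(task="Trade-offs between fine-tuning and retrieval-augmented generation")

# Several rounds, with a (potentially) different questioner each time.
rounds = qa.run_multi_round(task="Evaluation of LLM agents", rounds=3)
```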
@ -0,0 +1,675 @@
|
||||
import json
|
||||
import pickle
|
||||
import hashlib
|
||||
import threading
|
||||
import time
|
||||
from functools import lru_cache, wraps
|
||||
from typing import List, Dict, Any, Optional, Callable
|
||||
from pathlib import Path
|
||||
import weakref
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
import os
|
||||
|
||||
from loguru import logger
|
||||
|
||||
# Import the Agent class - adjust path as needed
|
||||
try:
|
||||
from swarms.structs.agent import Agent
|
||||
except ImportError:
|
||||
# Fallback for development/testing
|
||||
Agent = Any
|
||||
|
||||
|
||||
class AgentCache:
|
||||
"""
|
||||
A comprehensive caching system for Agent objects with multiple strategies:
|
||||
- Memory-based LRU cache
|
||||
- Weak reference cache to prevent memory leaks
|
||||
- Persistent disk cache for agent configurations
|
||||
- Lazy loading with background preloading
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
max_memory_cache_size: int = 128,
|
||||
cache_dir: Optional[str] = None,
|
||||
enable_persistent_cache: bool = True,
|
||||
auto_save_interval: int = 300, # 5 minutes
|
||||
enable_weak_refs: bool = True,
|
||||
):
|
||||
"""
|
||||
Initialize the AgentCache.
|
||||
|
||||
Args:
|
||||
max_memory_cache_size: Maximum number of agents to keep in memory cache
|
||||
cache_dir: Directory for persistent cache storage
|
||||
enable_persistent_cache: Whether to enable disk-based caching
|
||||
auto_save_interval: Interval in seconds for auto-saving cache
|
||||
enable_weak_refs: Whether to use weak references to prevent memory leaks
|
||||
"""
|
||||
self.max_memory_cache_size = max_memory_cache_size
|
||||
self.cache_dir = Path(cache_dir or "agent_cache")
|
||||
self.enable_persistent_cache = enable_persistent_cache
|
||||
self.auto_save_interval = auto_save_interval
|
||||
self.enable_weak_refs = enable_weak_refs
|
||||
|
||||
# Memory caches
|
||||
self._memory_cache: Dict[str, Agent] = {}
|
||||
self._weak_cache: weakref.WeakValueDictionary = (
|
||||
weakref.WeakValueDictionary()
|
||||
)
|
||||
self._access_times: Dict[str, float] = {}
|
||||
self._lock = threading.RLock()
|
||||
|
||||
# Cache statistics
|
||||
self._hits = 0
|
||||
self._misses = 0
|
||||
self._load_times: Dict[str, float] = {}
|
||||
|
||||
# Background tasks
|
||||
self._auto_save_thread: Optional[threading.Thread] = None
|
||||
self._shutdown_event = threading.Event()
|
||||
|
||||
# Initialize cache directory
|
||||
if self.enable_persistent_cache:
|
||||
self.cache_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
# Start auto-save thread
|
||||
self._start_auto_save_thread()
|
||||
|
||||
def _start_auto_save_thread(self):
|
||||
"""Start the auto-save background thread."""
|
||||
if (
|
||||
self.enable_persistent_cache
|
||||
and self.auto_save_interval > 0
|
||||
):
|
||||
self._auto_save_thread = threading.Thread(
|
||||
target=self._auto_save_loop,
|
||||
daemon=True,
|
||||
name="AgentCache-AutoSave",
|
||||
)
|
||||
self._auto_save_thread.start()
|
||||
|
||||
def _auto_save_loop(self):
|
||||
"""Background loop for auto-saving cache."""
|
||||
while not self._shutdown_event.wait(self.auto_save_interval):
|
||||
try:
|
||||
self.save_cache_to_disk()
|
||||
except Exception as e:
|
||||
logger.error(f"Error in auto-save: {e}")
|
||||
|
||||
def _generate_cache_key(
|
||||
self, agent_config: Dict[str, Any]
|
||||
) -> str:
|
||||
"""Generate a unique cache key from agent configuration."""
|
||||
# Create a stable hash from the configuration
|
||||
config_str = json.dumps(
|
||||
agent_config, sort_keys=True, default=str
|
||||
)
|
||||
return hashlib.md5(config_str.encode()).hexdigest()
|
||||
|
||||
def _evict_lru(self):
|
||||
"""Evict least recently used items from memory cache."""
|
||||
if len(self._memory_cache) >= self.max_memory_cache_size:
|
||||
# Find the least recently used item
|
||||
lru_key = min(
|
||||
self._access_times.items(), key=lambda x: x[1]
|
||||
)[0]
|
||||
|
||||
# Save to persistent cache before evicting
|
||||
if self.enable_persistent_cache:
|
||||
self._save_agent_to_disk(
|
||||
lru_key, self._memory_cache[lru_key]
|
||||
)
|
||||
|
||||
# Remove from memory
|
||||
del self._memory_cache[lru_key]
|
||||
del self._access_times[lru_key]
|
||||
|
||||
logger.debug(f"Evicted agent {lru_key} from memory cache")
|
||||
|
||||
def _save_agent_to_disk(self, cache_key: str, agent: Agent):
|
||||
"""Save agent to persistent cache."""
|
||||
try:
|
||||
cache_file = self.cache_dir / f"{cache_key}.pkl"
|
||||
with open(cache_file, "wb") as f:
|
||||
pickle.dump(agent.to_dict(), f)
|
||||
logger.debug(f"Saved agent {cache_key} to disk cache")
|
||||
except Exception as e:
|
||||
logger.error(f"Error saving agent to disk: {e}")
|
||||
|
||||
def _load_agent_from_disk(
|
||||
self, cache_key: str
|
||||
) -> Optional[Agent]:
|
||||
"""Load agent from persistent cache."""
|
||||
try:
|
||||
cache_file = self.cache_dir / f"{cache_key}.pkl"
|
||||
if cache_file.exists():
|
||||
with open(cache_file, "rb") as f:
|
||||
agent_dict = pickle.load(f)
|
||||
|
||||
# Reconstruct agent from dictionary
|
||||
agent = Agent(**agent_dict)
|
||||
logger.debug(
|
||||
f"Loaded agent {cache_key} from disk cache"
|
||||
)
|
||||
return agent
|
||||
except Exception as e:
|
||||
logger.error(f"Error loading agent from disk: {e}")
|
||||
return None
|
||||
|
||||
def get_agent(
|
||||
self, agent_config: Dict[str, Any]
|
||||
) -> Optional[Agent]:
|
||||
"""
|
||||
Get an agent from cache, loading if necessary.
|
||||
|
||||
Args:
|
||||
agent_config: Configuration dictionary for the agent
|
||||
|
||||
Returns:
|
||||
Cached or newly loaded Agent instance
|
||||
"""
|
||||
cache_key = self._generate_cache_key(agent_config)
|
||||
|
||||
with self._lock:
|
||||
# Check memory cache first
|
||||
if cache_key in self._memory_cache:
|
||||
self._access_times[cache_key] = time.time()
|
||||
self._hits += 1
|
||||
logger.debug(
|
||||
f"Cache hit (memory) for agent {cache_key}"
|
||||
)
|
||||
return self._memory_cache[cache_key]
|
||||
|
||||
# Check weak reference cache
|
||||
if (
|
||||
self.enable_weak_refs
|
||||
and cache_key in self._weak_cache
|
||||
):
|
||||
agent = self._weak_cache[cache_key]
|
||||
if agent is not None:
|
||||
# Move back to memory cache
|
||||
self._memory_cache[cache_key] = agent
|
||||
self._access_times[cache_key] = time.time()
|
||||
self._hits += 1
|
||||
logger.debug(
|
||||
f"Cache hit (weak ref) for agent {cache_key}"
|
||||
)
|
||||
return agent
|
||||
|
||||
# Check persistent cache
|
||||
if self.enable_persistent_cache:
|
||||
agent = self._load_agent_from_disk(cache_key)
|
||||
if agent is not None:
|
||||
self._evict_lru()
|
||||
self._memory_cache[cache_key] = agent
|
||||
self._access_times[cache_key] = time.time()
|
||||
if self.enable_weak_refs:
|
||||
self._weak_cache[cache_key] = agent
|
||||
self._hits += 1
|
||||
logger.debug(
|
||||
f"Cache hit (disk) for agent {cache_key}"
|
||||
)
|
||||
return agent
|
||||
|
||||
# Cache miss - need to create new agent
|
||||
self._misses += 1
|
||||
logger.debug(f"Cache miss for agent {cache_key}")
|
||||
return None
|
||||
|
||||
def put_agent(self, agent_config: Dict[str, Any], agent: Agent):
|
||||
"""
|
||||
Put an agent into the cache.
|
||||
|
||||
Args:
|
||||
agent_config: Configuration dictionary for the agent
|
||||
agent: The Agent instance to cache
|
||||
"""
|
||||
cache_key = self._generate_cache_key(agent_config)
|
||||
|
||||
with self._lock:
|
||||
self._evict_lru()
|
||||
self._memory_cache[cache_key] = agent
|
||||
self._access_times[cache_key] = time.time()
|
||||
|
||||
if self.enable_weak_refs:
|
||||
self._weak_cache[cache_key] = agent
|
||||
|
||||
logger.debug(f"Added agent {cache_key} to cache")
|
||||
|
||||
def preload_agents(self, agent_configs: List[Dict[str, Any]]):
|
||||
"""
|
||||
Preload agents in the background for faster access.
|
||||
|
||||
Args:
|
||||
agent_configs: List of agent configurations to preload
|
||||
"""
|
||||
|
||||
def _preload_worker(config):
|
||||
try:
|
||||
cache_key = self._generate_cache_key(config)
|
||||
if cache_key not in self._memory_cache:
|
||||
start_time = time.time()
|
||||
agent = Agent(**config)
|
||||
load_time = time.time() - start_time
|
||||
|
||||
self.put_agent(config, agent)
|
||||
self._load_times[cache_key] = load_time
|
||||
logger.debug(
|
||||
f"Preloaded agent {cache_key} in {load_time:.3f}s"
|
||||
)
|
||||
except Exception as e:
|
||||
logger.error(f"Error preloading agent: {e}")
|
||||
|
||||
# Use thread pool for concurrent preloading
|
||||
max_workers = min(len(agent_configs), os.cpu_count())
|
||||
with ThreadPoolExecutor(max_workers=max_workers) as executor:
|
||||
executor.map(_preload_worker, agent_configs)
|
||||
|
||||
def get_cache_stats(self) -> Dict[str, Any]:
|
||||
"""Get cache performance statistics."""
|
||||
total_requests = self._hits + self._misses
|
||||
hit_rate = (
|
||||
(self._hits / total_requests * 100)
|
||||
if total_requests > 0
|
||||
else 0
|
||||
)
|
||||
|
||||
return {
|
||||
"hits": self._hits,
|
||||
"misses": self._misses,
|
||||
"hit_rate_percent": round(hit_rate, 2),
|
||||
"memory_cache_size": len(self._memory_cache),
|
||||
"weak_cache_size": len(self._weak_cache),
|
||||
"average_load_time": (
|
||||
sum(self._load_times.values()) / len(self._load_times)
|
||||
if self._load_times
|
||||
else 0
|
||||
),
|
||||
"total_agents_loaded": len(self._load_times),
|
||||
}
|
||||
|
||||
def clear_cache(self):
|
||||
"""Clear all caches."""
|
||||
with self._lock:
|
||||
self._memory_cache.clear()
|
||||
self._weak_cache.clear()
|
||||
self._access_times.clear()
|
||||
logger.info("Cleared all caches")
|
||||
|
||||
def save_cache_to_disk(self):
|
||||
"""Save current memory cache to disk."""
|
||||
if not self.enable_persistent_cache:
|
||||
return
|
||||
|
||||
with self._lock:
|
||||
saved_count = 0
|
||||
for cache_key, agent in self._memory_cache.items():
|
||||
try:
|
||||
self._save_agent_to_disk(cache_key, agent)
|
||||
saved_count += 1
|
||||
except Exception as e:
|
||||
logger.error(
|
||||
f"Error saving agent {cache_key}: {e}"
|
||||
)
|
||||
|
||||
logger.info(f"Saved {saved_count} agents to disk cache")
|
||||
|
||||
def shutdown(self):
|
||||
"""Shutdown the cache system gracefully."""
|
||||
self._shutdown_event.set()
|
||||
if self._auto_save_thread:
|
||||
self._auto_save_thread.join(timeout=5)
|
||||
|
||||
# Final save
|
||||
if self.enable_persistent_cache:
|
||||
self.save_cache_to_disk()
|
||||
|
||||
logger.info("AgentCache shutdown complete")
|
||||
|
||||
|
||||
# Global cache instance
|
||||
_global_cache: Optional[AgentCache] = None
|
||||
|
||||
|
||||
def get_global_cache() -> AgentCache:
|
||||
"""Get or create the global agent cache instance."""
|
||||
global _global_cache
|
||||
if _global_cache is None:
|
||||
_global_cache = AgentCache()
|
||||
return _global_cache
|
||||
|
||||
|
||||
def cached_agent_loader(
|
||||
agents: List[Agent],
|
||||
cache_instance: Optional[AgentCache] = None,
|
||||
preload: bool = True,
|
||||
parallel_loading: bool = True,
|
||||
) -> List[Agent]:
|
||||
"""
|
||||
Load a list of agents with caching for super fast performance.
|
||||
|
||||
Args:
|
||||
agents: List of Agent instances to cache/load
|
||||
cache_instance: Optional cache instance (uses global cache if None)
|
||||
preload: Whether to preload agents in background
|
||||
parallel_loading: Whether to load agents in parallel
|
||||
|
||||
Returns:
|
||||
List of Agent instances (cached versions if available)
|
||||
|
||||
Examples:
|
||||
# Basic usage
|
||||
agents = [Agent(agent_name="Agent1", model_name="gpt-4"), ...]
|
||||
cached_agents = cached_agent_loader(agents)
|
||||
|
||||
# With custom cache
|
||||
cache = AgentCache(max_memory_cache_size=256)
|
||||
cached_agents = cached_agent_loader(agents, cache_instance=cache)
|
||||
|
||||
# Preload for even faster subsequent access
|
||||
cached_agent_loader(agents, preload=True)
|
||||
cached_agents = cached_agent_loader(agents) # Super fast!
|
||||
"""
|
||||
cache = cache_instance or get_global_cache()
|
||||
|
||||
start_time = time.time()
|
||||
|
||||
# Extract configurations from agents for caching
|
||||
agent_configs = []
|
||||
for agent in agents:
|
||||
config = _extract_agent_config(agent)
|
||||
agent_configs.append(config)
|
||||
|
||||
if preload:
|
||||
# Preload agents in background
|
||||
cache.preload_agents(agent_configs)
|
||||
|
||||
def _load_single_agent(agent: Agent) -> Agent:
|
||||
"""Load a single agent with caching."""
|
||||
config = _extract_agent_config(agent)
|
||||
|
||||
# Try to get from cache first
|
||||
cached_agent = cache.get_agent(config)
|
||||
|
||||
if cached_agent is None:
|
||||
# Cache miss - use the provided agent and cache it
|
||||
load_start = time.time()
|
||||
|
||||
# Add to cache for future use
|
||||
cache.put_agent(config, agent)
|
||||
load_time = time.time() - load_start
|
||||
|
||||
logger.debug(
|
||||
f"Cached new agent {agent.agent_name} in {load_time:.3f}s"
|
||||
)
|
||||
return agent
|
||||
else:
|
||||
logger.debug(
|
||||
f"Retrieved cached agent {cached_agent.agent_name}"
|
||||
)
|
||||
return cached_agent
|
||||
|
||||
# Load agents (parallel or sequential)
|
||||
if parallel_loading and len(agents) > 1:
|
||||
max_workers = min(len(agents), os.cpu_count())
|
||||
with ThreadPoolExecutor(max_workers=max_workers) as executor:
|
||||
cached_agents = list(
|
||||
executor.map(_load_single_agent, agents)
|
||||
)
|
||||
else:
|
||||
cached_agents = [
|
||||
_load_single_agent(agent) for agent in agents
|
||||
]
|
||||
|
||||
total_time = time.time() - start_time
|
||||
|
||||
# Log performance stats
|
||||
stats = cache.get_cache_stats()
|
||||
logger.info(
|
||||
f"Processed {len(cached_agents)} agents in {total_time:.3f}s "
|
||||
f"(Hit rate: {stats['hit_rate_percent']}%)"
|
||||
)
|
||||
|
||||
return cached_agents
|
||||
|
||||
|
||||
def _extract_agent_config(agent: Agent) -> Dict[str, Any]:
|
||||
"""
|
||||
Extract a configuration dictionary from an Agent instance for caching.
|
||||
|
||||
Args:
|
||||
agent: Agent instance to extract config from
|
||||
|
||||
Returns:
|
||||
Configuration dictionary suitable for cache key generation
|
||||
"""
|
||||
# Extract key attributes that define an agent's identity
|
||||
config = {
|
||||
"agent_name": getattr(agent, "agent_name", None),
|
||||
"model_name": getattr(agent, "model_name", None),
|
||||
"system_prompt": getattr(agent, "system_prompt", None),
|
||||
"max_loops": getattr(agent, "max_loops", None),
|
||||
"temperature": getattr(agent, "temperature", None),
|
||||
"max_tokens": getattr(agent, "max_tokens", None),
|
||||
"agent_description": getattr(
|
||||
agent, "agent_description", None
|
||||
),
|
||||
# Add other key identifying attributes
|
||||
"tools": str(
|
||||
getattr(agent, "tools", [])
|
||||
), # Convert to string for hashing, default to empty list
|
||||
"context_length": getattr(agent, "context_length", None),
|
||||
}
|
||||
|
||||
# Remove None values to create a clean config
|
||||
config = {k: v for k, v in config.items() if v is not None}
|
||||
|
||||
return config
|
||||
|
||||
|
||||
def cached_agent_loader_from_configs(
|
||||
agent_configs: List[Dict[str, Any]],
|
||||
cache_instance: Optional[AgentCache] = None,
|
||||
preload: bool = True,
|
||||
parallel_loading: bool = True,
|
||||
) -> List[Agent]:
|
||||
"""
|
||||
Load a list of agents from configuration dictionaries with caching.
|
||||
|
||||
Args:
|
||||
agent_configs: List of agent configuration dictionaries
|
||||
cache_instance: Optional cache instance (uses global cache if None)
|
||||
preload: Whether to preload agents in background
|
||||
parallel_loading: Whether to load agents in parallel
|
||||
|
||||
Returns:
|
||||
List of Agent instances
|
||||
|
||||
Examples:
|
||||
# Basic usage
|
||||
configs = [{"agent_name": "Agent1", "model_name": "gpt-4"}, ...]
|
||||
agents = cached_agent_loader_from_configs(configs)
|
||||
|
||||
# With custom cache
|
||||
cache = AgentCache(max_memory_cache_size=256)
|
||||
agents = cached_agent_loader_from_configs(configs, cache_instance=cache)
|
||||
"""
|
||||
cache = cache_instance or get_global_cache()
|
||||
|
||||
start_time = time.time()
|
||||
|
||||
if preload:
|
||||
# Preload agents in background
|
||||
cache.preload_agents(agent_configs)
|
||||
|
||||
def _load_single_agent(config: Dict[str, Any]) -> Agent:
|
||||
"""Load a single agent with caching."""
|
||||
# Try to get from cache first
|
||||
agent = cache.get_agent(config)
|
||||
|
||||
if agent is None:
|
||||
# Cache miss - create new agent
|
||||
load_start = time.time()
|
||||
agent = Agent(**config)
|
||||
load_time = time.time() - load_start
|
||||
|
||||
# Add to cache for future use
|
||||
cache.put_agent(config, agent)
|
||||
|
||||
logger.debug(
|
||||
f"Created new agent {agent.agent_name} in {load_time:.3f}s"
|
||||
)
|
||||
|
||||
return agent
|
||||
|
||||
# Load agents (parallel or sequential)
|
||||
if parallel_loading and len(agent_configs) > 1:
|
||||
max_workers = min(len(agent_configs), os.cpu_count())
|
||||
with ThreadPoolExecutor(max_workers=max_workers) as executor:
|
||||
agents = list(
|
||||
executor.map(_load_single_agent, agent_configs)
|
||||
)
|
||||
else:
|
||||
agents = [
|
||||
_load_single_agent(config) for config in agent_configs
|
||||
]
|
||||
|
||||
total_time = time.time() - start_time
|
||||
|
||||
# Log performance stats
|
||||
stats = cache.get_cache_stats()
|
||||
logger.info(
|
||||
f"Loaded {len(agents)} agents in {total_time:.3f}s "
|
||||
f"(Hit rate: {stats['hit_rate_percent']}%)"
|
||||
)
|
||||
|
||||
return agents
|
||||
|
||||
|
||||
# Decorator for caching individual agent creation
|
||||
def cache_agent_creation(cache_instance: Optional[AgentCache] = None):
|
||||
"""
|
||||
Decorator to cache agent creation based on initialization parameters.
|
||||
|
||||
Args:
|
||||
cache_instance: Optional cache instance (uses global cache if None)
|
||||
|
||||
Returns:
|
||||
Decorator function
|
||||
|
||||
Example:
|
||||
@cache_agent_creation()
|
||||
def create_trading_agent(symbol: str, model: str):
|
||||
return Agent(
|
||||
agent_name=f"Trading-{symbol}",
|
||||
model_name=model,
|
||||
system_prompt=f"You are a trading agent for {symbol}"
|
||||
)
|
||||
|
||||
agent1 = create_trading_agent("AAPL", "gpt-4") # Creates new agent
|
||||
agent2 = create_trading_agent("AAPL", "gpt-4") # Returns cached agent
|
||||
"""
|
||||
|
||||
def decorator(func: Callable[..., Agent]) -> Callable[..., Agent]:
|
||||
cache = cache_instance or get_global_cache()
|
||||
|
||||
@wraps(func)
|
||||
def wrapper(*args, **kwargs) -> Agent:
|
||||
# Create a config dict from function arguments
|
||||
import inspect
|
||||
|
||||
sig = inspect.signature(func)
|
||||
bound_args = sig.bind(*args, **kwargs)
|
||||
bound_args.apply_defaults()
|
||||
|
||||
config = dict(bound_args.arguments)
|
||||
|
||||
# Try to get from cache
|
||||
agent = cache.get_agent(config)
|
||||
|
||||
if agent is None:
|
||||
# Cache miss - call original function
|
||||
agent = func(*args, **kwargs)
|
||||
cache.put_agent(config, agent)
|
||||
|
||||
return agent
|
||||
|
||||
return wrapper
|
||||
|
||||
return decorator
|
||||
|
||||
|
||||
# LRU Cache-based simple approach
|
||||
@lru_cache(maxsize=128)
|
||||
def _cached_agent_by_hash(
|
||||
config_hash: str, config_json: str
|
||||
) -> Agent:
|
||||
"""Internal LRU cached agent creation by config hash."""
|
||||
config = json.loads(config_json)
|
||||
return Agent(**config)
|
||||
|
||||
|
||||
def simple_lru_agent_loader(
|
||||
agents: List[Agent],
|
||||
) -> List[Agent]:
|
||||
"""
|
||||
Simple LRU cache-based agent loader using functools.lru_cache.
|
||||
|
||||
Args:
|
||||
agents: List of Agent instances
|
||||
|
||||
Returns:
|
||||
List of Agent instances (cached versions if available)
|
||||
|
||||
Note:
|
||||
This is a simpler approach but less flexible than the full AgentCache.
|
||||
"""
|
||||
cached_agents = []
|
||||
|
||||
for agent in agents:
|
||||
# Extract config from agent
|
||||
config = _extract_agent_config(agent)
|
||||
|
||||
# Create stable hash and JSON string
|
||||
config_json = json.dumps(config, sort_keys=True, default=str)
|
||||
config_hash = hashlib.md5(config_json.encode()).hexdigest()
|
||||
|
||||
# Use LRU cached function
|
||||
cached_agent = _cached_agent_by_hash_from_agent(
|
||||
config_hash, agent
|
||||
)
|
||||
cached_agents.append(cached_agent)
|
||||
|
||||
return cached_agents
|
||||
|
||||
|
||||
@lru_cache(maxsize=128)
|
||||
def _cached_agent_by_hash_from_agent(
|
||||
config_hash: str, agent: Agent
|
||||
) -> Agent:
|
||||
"""Internal LRU cached agent storage by config hash."""
|
||||
# Return the same agent instance (this creates the caching effect)
|
||||
return agent
|
||||
|
||||
|
||||
# Utility functions for cache management
|
||||
def clear_agent_cache():
|
||||
"""Clear the global agent cache."""
|
||||
cache = get_global_cache()
|
||||
cache.clear_cache()
|
||||
|
||||
|
||||
def get_agent_cache_stats() -> Dict[str, Any]:
|
||||
"""Get statistics from the global agent cache."""
|
||||
cache = get_global_cache()
|
||||
return cache.get_cache_stats()
|
||||
|
||||
|
||||
def shutdown_agent_cache():
|
||||
"""Shutdown the global agent cache gracefully."""
|
||||
global _global_cache
|
||||
if _global_cache:
|
||||
_global_cache.shutdown()
|
||||
_global_cache = None
|
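A minimal sketch tying the caching utilities above together (agent names and models are illustrative assumptions):

```python
from swarms.structs.agent import Agent

# Hypothetical agents; the second loader call should mostly hit the cache.
agents = [
    Agent(agent_name=f"worker-{i}", model_name="gpt-4", max_loops=1)
    for i in range(4)
]

cache = AgentCache(max_memory_cache_size=64, enable_persistent_cache=False)

warm = cached_agent_loader(agents, cache_instance=cache)  # populates the cache
fast = cached_agent_loader(agents, cache_instance=cache)  # served from cache

print(cache.get_cache_stats())  # includes hits, misses, and hit_rate_percent
cache.shutdown()
```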
@ -0,0 +1,633 @@
|
||||
import os
|
||||
import asyncio
|
||||
import concurrent.futures
|
||||
import inspect
|
||||
import time
|
||||
from concurrent.futures import (
|
||||
ThreadPoolExecutor,
|
||||
ProcessPoolExecutor,
|
||||
as_completed,
|
||||
)
|
||||
from functools import wraps
|
||||
from typing import (
|
||||
Any,
|
||||
Callable,
|
||||
List,
|
||||
Optional,
|
||||
TypeVar,
|
||||
Generic,
|
||||
)
|
||||
from dataclasses import dataclass
|
||||
from enum import Enum
|
||||
|
||||
from swarms.utils.loguru_logger import initialize_logger
|
||||
|
||||
logger = initialize_logger("concurrent_wrapper")
|
||||
|
||||
T = TypeVar("T")
|
||||
R = TypeVar("R")
|
||||
|
||||
|
||||
# Global function for process pool execution (must be picklable)
|
||||
def _execute_task_in_process(task_data):
|
||||
"""
|
||||
Execute a task in a separate process.
|
||||
This function must be at module level to be picklable.
|
||||
"""
|
||||
(
|
||||
func,
|
||||
task_args,
|
||||
task_kwargs,
|
||||
task_id,
|
||||
max_retries,
|
||||
retry_on_failure,
|
||||
retry_delay,
|
||||
return_exceptions,
|
||||
) = task_data
|
||||
|
||||
start_time = time.time()
|
||||
|
||||
for attempt in range(max_retries + 1):
|
||||
try:
|
||||
result = func(*task_args, **task_kwargs)
|
||||
execution_time = time.time() - start_time
|
||||
return ConcurrentResult(
|
||||
value=result,
|
||||
execution_time=execution_time,
|
||||
worker_id=task_id,
|
||||
)
|
||||
except Exception as e:
|
||||
if attempt == max_retries or not retry_on_failure:
|
||||
execution_time = time.time() - start_time
|
||||
if return_exceptions:
|
||||
return ConcurrentResult(
|
||||
exception=e,
|
||||
execution_time=execution_time,
|
||||
worker_id=task_id,
|
||||
)
|
||||
else:
|
||||
raise
|
||||
else:
|
||||
time.sleep(retry_delay * (2**attempt))
|
||||
|
||||
# This should never be reached, but just in case
|
||||
return ConcurrentResult(
|
||||
exception=Exception("Max retries exceeded")
|
||||
)
|
||||
|
||||
|
||||
class ExecutorType(Enum):
    """Enum for different types of executors."""

    THREAD = "thread"
    PROCESS = "process"
    ASYNC = "async"


@dataclass
class ConcurrentConfig:
    """Configuration for concurrent execution."""

    name: Optional[str] = None
    description: Optional[str] = None
    max_workers: int = 4
    timeout: Optional[float] = None
    executor_type: ExecutorType = ExecutorType.THREAD
    return_exceptions: bool = False
    chunk_size: Optional[int] = None
    ordered: bool = True
    retry_on_failure: bool = False
    max_retries: int = 3
    retry_delay: float = 1.0

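
# Illustrative sketch (not part of the module): ConcurrentConfig is built
# internally by the decorators below, but it is a plain dataclass and can
# be constructed and inspected directly.
_cfg = ConcurrentConfig(
    name="demo",
    max_workers=8,
    executor_type=ExecutorType.PROCESS,
)
print(_cfg.ordered, _cfg.max_retries)  # True 3 (defaults)
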
class ConcurrentResult(Generic[T]):
    """Result wrapper for concurrent execution."""

    def __init__(
        self,
        value: T = None,
        exception: Exception = None,
        execution_time: float = 0.0,
        worker_id: Optional[int] = None,
    ):
        self.value = value
        self.exception = exception
        self.execution_time = execution_time
        self.worker_id = worker_id
        self.success = exception is None

    def __repr__(self):
        if self.success:
            return f"ConcurrentResult(value={self.value}, time={self.execution_time:.3f}s)"
        else:
            return f"ConcurrentResult(exception={type(self.exception).__name__}: {self.exception})"

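
# Illustrative sketch (not part of the module): a small hypothetical helper
# showing how callers are expected to unpack the ConcurrentResult objects
# returned by the concurrent helpers defined below.
def _split_results(results):
    """Separate successful values from captured exceptions."""
    values = [r.value for r in results if r.success]
    errors = [r.exception for r in results if not r.success]
    return values, errors
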
def concurrent(
    name: Optional[str] = None,
    description: Optional[str] = None,
    max_workers: int = 4,
    timeout: Optional[float] = None,
    executor_type: ExecutorType = ExecutorType.THREAD,
    return_exceptions: bool = False,
    chunk_size: Optional[int] = None,
    ordered: bool = True,
    retry_on_failure: bool = False,
    max_retries: int = 3,
    retry_delay: float = 1.0,
):
    """
    A decorator that enables concurrent execution of functions.

    Args:
        name (Optional[str]): Name for the concurrent operation
        description (Optional[str]): Description of the operation
        max_workers (int): Maximum number of worker threads/processes
        timeout (Optional[float]): Timeout in seconds for each task
        executor_type (ExecutorType): Type of executor (thread, process, async)
        return_exceptions (bool): Whether to return exceptions instead of raising
        chunk_size (Optional[int]): Size of chunks for batch processing
        ordered (bool): Whether to maintain order of results
        retry_on_failure (bool): Whether to retry failed tasks
        max_retries (int): Maximum number of retries per task
        retry_delay (float): Delay between retries in seconds

    Returns:
        Callable: Decorated function that can execute concurrently.
        A usage sketch follows this function definition.
    """

    if max_workers is None:
        max_workers = os.cpu_count()

    def decorator(func: Callable[..., T]) -> Callable[..., T]:
        config = ConcurrentConfig(
            name=name or func.__name__,
            description=description
            or f"Concurrent execution of {func.__name__}",
            max_workers=max_workers,
            timeout=timeout,
            executor_type=executor_type,
            return_exceptions=return_exceptions,
            chunk_size=chunk_size,
            ordered=ordered,
            retry_on_failure=retry_on_failure,
            max_retries=max_retries,
            retry_delay=retry_delay,
        )

        @wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)

        def _execute_single_task(
            task_args, task_kwargs, task_id=None
        ):
            """Execute a single task with retry logic."""
            start_time = time.time()

            for attempt in range(config.max_retries + 1):
                try:
                    result = func(*task_args, **task_kwargs)
                    execution_time = time.time() - start_time
                    return ConcurrentResult(
                        value=result,
                        execution_time=execution_time,
                        worker_id=task_id,
                    )
                except Exception as e:
                    if (
                        attempt == config.max_retries
                        or not config.retry_on_failure
                    ):
                        execution_time = time.time() - start_time
                        if config.return_exceptions:
                            return ConcurrentResult(
                                exception=e,
                                execution_time=execution_time,
                                worker_id=task_id,
                            )
                        else:
                            raise
                    else:
                        logger.warning(
                            f"Task {task_id} failed (attempt {attempt + 1}/{config.max_retries + 1}): {e}"
                        )
                        time.sleep(config.retry_delay * (2**attempt))

        def concurrent_execute(*args_list, **kwargs_list):
            """Execute the function concurrently with multiple argument sets."""
            if not args_list and not kwargs_list:
                raise ValueError(
                    "At least one set of arguments must be provided"
                )

            # Prepare tasks
            tasks = []
            if args_list:
                for args in args_list:
                    if isinstance(args, (list, tuple)):
                        tasks.append((args, {}))
                    else:
                        tasks.append(([args], {}))

            if kwargs_list:
                for kwargs in kwargs_list:
                    if isinstance(kwargs, dict):
                        tasks.append(((), kwargs))
                    else:
                        raise ValueError(
                            "kwargs_list must contain dictionaries"
                        )

            logger.info(
                f"Starting concurrent execution of {len(tasks)} tasks with {config.max_workers} workers"
            )
            start_time = time.time()

            try:
                if config.executor_type == ExecutorType.THREAD:
                    results = _execute_with_thread_pool(tasks)
                elif config.executor_type == ExecutorType.PROCESS:
                    results = _execute_with_process_pool(tasks)
                elif config.executor_type == ExecutorType.ASYNC:
                    # Run the coroutine to completion; without this the
                    # caller would receive an un-awaited coroutine object.
                    results = asyncio.run(_execute_with_async(tasks))
                else:
                    raise ValueError(
                        f"Unsupported executor type: {config.executor_type}"
                    )

                total_time = time.time() - start_time
                successful_tasks = sum(
                    1 for r in results if r.success
                )

                logger.info(
                    f"Completed {len(tasks)} tasks in {total_time:.3f}s "
                    f"({successful_tasks}/{len(tasks)} successful)"
                )

                return results

            except Exception as e:
                logger.error(f"Concurrent execution failed: {e}")
                raise

        def _execute_with_thread_pool(tasks):
            """Execute tasks using ThreadPoolExecutor."""
            results = []

            with ThreadPoolExecutor(
                max_workers=config.max_workers
            ) as executor:
                if config.ordered:
                    future_to_task = {
                        executor.submit(
                            _execute_single_task, task[0], task[1], i
                        ): i
                        for i, task in enumerate(tasks)
                    }

                    # Collect results in submission order so the output
                    # lines up with the input tasks when ordered=True.
                    for future in future_to_task:
                        try:
                            result = future.result(
                                timeout=config.timeout
                            )
                            results.append(result)
                        except Exception as e:
                            if config.return_exceptions:
                                results.append(
                                    ConcurrentResult(exception=e)
                                )
                            else:
                                raise
                else:
                    futures = [
                        executor.submit(
                            _execute_single_task, task[0], task[1], i
                        )
                        for i, task in enumerate(tasks)
                    ]

                    for future in as_completed(
                        futures, timeout=config.timeout
                    ):
                        try:
                            result = future.result(
                                timeout=config.timeout
                            )
                            results.append(result)
                        except Exception as e:
                            if config.return_exceptions:
                                results.append(
                                    ConcurrentResult(exception=e)
                                )
                            else:
                                raise

            return results

        def _execute_with_process_pool(tasks):
            """Execute tasks using ProcessPoolExecutor."""
            results = []

            # Prepare task data for process execution
            task_data_list = []
            for i, task in enumerate(tasks):
                task_data = (
                    func,  # The function to execute
                    task[0],  # args
                    task[1],  # kwargs
                    i,  # task_id
                    config.max_retries,
                    config.retry_on_failure,
                    config.retry_delay,
                    config.return_exceptions,
                )
                task_data_list.append(task_data)

            with ProcessPoolExecutor(
                max_workers=config.max_workers
            ) as executor:
                if config.ordered:
                    future_to_task = {
                        executor.submit(
                            _execute_task_in_process, task_data
                        ): i
                        for i, task_data in enumerate(task_data_list)
                    }

                    # Collect results in submission order so the output
                    # lines up with the input tasks when ordered=True.
                    for future in future_to_task:
                        try:
                            result = future.result(
                                timeout=config.timeout
                            )
                            results.append(result)
                        except Exception as e:
                            if config.return_exceptions:
                                results.append(
                                    ConcurrentResult(exception=e)
                                )
                            else:
                                raise
                else:
                    futures = [
                        executor.submit(
                            _execute_task_in_process, task_data
                        )
                        for task_data in task_data_list
                    ]

                    for future in as_completed(
                        futures, timeout=config.timeout
                    ):
                        try:
                            result = future.result(
                                timeout=config.timeout
                            )
                            results.append(result)
                        except Exception as e:
                            if config.return_exceptions:
                                results.append(
                                    ConcurrentResult(exception=e)
                                )
                            else:
                                raise

            return results

        async def _execute_with_async(tasks):
            """Execute tasks using asyncio."""

            async def _async_task(
                task_args, task_kwargs, task_id=None
            ):
                start_time = time.time()

                for attempt in range(config.max_retries + 1):
                    try:
                        loop = asyncio.get_event_loop()
                        result = await loop.run_in_executor(
                            None,
                            lambda: func(*task_args, **task_kwargs),
                        )
                        execution_time = time.time() - start_time
                        return ConcurrentResult(
                            value=result,
                            execution_time=execution_time,
                            worker_id=task_id,
                        )
                    except Exception as e:
                        if (
                            attempt == config.max_retries
                            or not config.retry_on_failure
                        ):
                            execution_time = time.time() - start_time
                            if config.return_exceptions:
                                return ConcurrentResult(
                                    exception=e,
                                    execution_time=execution_time,
                                    worker_id=task_id,
                                )
                            else:
                                raise
                        else:
                            logger.warning(
                                f"Async task {task_id} failed (attempt {attempt + 1}/{config.max_retries + 1}): {e}"
                            )
                            await asyncio.sleep(
                                config.retry_delay * (2**attempt)
                            )

            semaphore = asyncio.Semaphore(config.max_workers)

            async def _limited_task(task_args, task_kwargs, task_id):
                async with semaphore:
                    return await _async_task(
                        task_args, task_kwargs, task_id
                    )

            tasks_coros = [
                _limited_task(task[0], task[1], i)
                for i, task in enumerate(tasks)
            ]

            if config.ordered:
                results = []
                # Schedule all coroutines so they run concurrently, then
                # await them in submission order to keep results ordered.
                scheduled = [
                    asyncio.ensure_future(coro)
                    for coro in tasks_coros
                ]
                for coro in scheduled:
                    try:
                        result = await coro
                        results.append(result)
                    except Exception as e:
                        if config.return_exceptions:
                            results.append(
                                ConcurrentResult(exception=e)
                            )
                        else:
                            raise
                return results
            else:
                return await asyncio.gather(
                    *tasks_coros,
                    return_exceptions=config.return_exceptions,
                )

        def concurrent_batch(
            items: List[Any],
            batch_size: Optional[int] = None,
            **kwargs,
        ) -> List[ConcurrentResult]:
            """Execute the function concurrently on a batch of items."""
            batch_size = batch_size or config.chunk_size or len(items)

            tasks = []
            for item in items:
                if isinstance(item, (list, tuple)):
                    tasks.append((item, kwargs))
                else:
                    tasks.append(([item], kwargs))

            # concurrent_execute accepts positional argument sets only, so
            # unpack each task's args here (the shared kwargs attached to the
            # tasks above are not forwarded by concurrent_execute).
            return concurrent_execute(
                *[task[0] for task in tasks]
            )

        def concurrent_map(
            items: List[Any], **kwargs
        ) -> List[ConcurrentResult]:
            """Map the function over a list of items concurrently."""
            return concurrent_batch(items, **kwargs)

        # Attach methods to the wrapper
        wrapper.concurrent_execute = concurrent_execute
        wrapper.concurrent_batch = concurrent_batch
        wrapper.concurrent_map = concurrent_map
        wrapper.config = config

        # Add metadata
        wrapper.__concurrent_config__ = config
        wrapper.__concurrent_enabled__ = True

        return wrapper

    return decorator

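
# --- Usage sketch for @concurrent (illustrative, not part of the module) ---
# Direct calls to the decorated function stay synchronous; concurrency is
# exposed through the attached .concurrent_execute / .concurrent_map helpers,
# which return ConcurrentResult objects. The function below is hypothetical.
@concurrent(max_workers=4, retry_on_failure=True, max_retries=2)
def measure_length(text: str) -> int:
    return len(text)

single = measure_length("hello")  # plain synchronous call -> 5
results = measure_length.concurrent_map(["a", "bb", "ccc"])
values = [r.value for r in results if r.success]  # e.g. [1, 2, 3]
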
def concurrent_class_executor(
    name: Optional[str] = None,
    description: Optional[str] = None,
    max_workers: int = 4,
    timeout: Optional[float] = None,
    executor_type: ExecutorType = ExecutorType.THREAD,
    return_exceptions: bool = False,
    chunk_size: Optional[int] = None,
    ordered: bool = True,
    retry_on_failure: bool = False,
    max_retries: int = 3,
    retry_delay: float = 1.0,
    methods: Optional[List[str]] = None,
):
    """
    A decorator that enables concurrent execution for class methods.

    Args:
        name (Optional[str]): Name for the concurrent operation
        description (Optional[str]): Description of the operation
        max_workers (int): Maximum number of worker threads/processes
        timeout (Optional[float]): Timeout in seconds for each task
        executor_type (ExecutorType): Type of executor (thread, process, async)
        return_exceptions (bool): Whether to return exceptions instead of raising
        chunk_size (Optional[int]): Size of chunks for batch processing
        ordered (bool): Whether to maintain order of results
        retry_on_failure (bool): Whether to retry failed tasks
        max_retries (int): Maximum number of retries per task
        retry_delay (float): Delay between retries in seconds
        methods (Optional[List[str]]): List of method names to make concurrent

    Returns:
        Class: Class with concurrent execution capabilities.
        A usage sketch follows this function definition.
    """

    def decorator(cls):
        config = ConcurrentConfig(
            name=name or f"{cls.__name__}_concurrent",
            description=description
            or f"Concurrent execution for {cls.__name__}",
            max_workers=max_workers,
            timeout=timeout,
            executor_type=executor_type,
            return_exceptions=return_exceptions,
            chunk_size=chunk_size,
            ordered=ordered,
            retry_on_failure=retry_on_failure,
            max_retries=max_retries,
            retry_delay=retry_delay,
        )

        # Get methods to make concurrent
        target_methods = methods or [
            name
            for name, method in inspect.getmembers(
                cls, inspect.isfunction
            )
            if not name.startswith("_")
        ]

        for method_name in target_methods:
            if hasattr(cls, method_name):
                original_method = getattr(cls, method_name)

                # Create concurrent version of the method
                concurrent_decorator = concurrent(
                    name=f"{cls.__name__}.{method_name}",
                    description=f"Concurrent execution of {cls.__name__}.{method_name}",
                    max_workers=config.max_workers,
                    timeout=config.timeout,
                    executor_type=config.executor_type,
                    return_exceptions=config.return_exceptions,
                    chunk_size=config.chunk_size,
                    ordered=config.ordered,
                    retry_on_failure=config.retry_on_failure,
                    max_retries=config.max_retries,
                    retry_delay=config.retry_delay,
                )

                # Apply the concurrent decorator to the method
                setattr(
                    cls,
                    method_name,
                    concurrent_decorator(original_method),
                )

        # Add class-level concurrent configuration
        cls.__concurrent_config__ = config
        cls.__concurrent_enabled__ = True

        return cls

    return decorator

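
# --- Usage sketch for @concurrent_class_executor (illustrative) ---
# Every targeted method gains the concurrent helpers. The helpers wrap the
# underlying function, so `self` is passed explicitly in each argument set.
# The class and method below are hypothetical.
@concurrent_class_executor(max_workers=2, methods=["summarize"])
class Summarizer:
    def summarize(self, text: str) -> str:
        return text[:10]

s = Summarizer()
s.summarize("plain call")  # direct calls still behave normally
results = Summarizer.summarize.concurrent_execute(
    (s, "first document"), (s, "second document")
)
summaries = [r.value for r in results if r.success]
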
# Convenience functions for common use cases
def thread_executor(**kwargs):
    """Convenience decorator for thread-based concurrent execution."""
    return concurrent(executor_type=ExecutorType.THREAD, **kwargs)


def process_executor(**kwargs):
    """Convenience decorator for process-based concurrent execution."""
    return concurrent(executor_type=ExecutorType.PROCESS, **kwargs)


def async_executor(**kwargs):
    """Convenience decorator for async-based concurrent execution."""
    return concurrent(executor_type=ExecutorType.ASYNC, **kwargs)


def batch_executor(batch_size: int = 10, **kwargs):
    """Convenience decorator for batch processing."""
    return concurrent(chunk_size=batch_size, **kwargs)
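
# --- Usage sketch for the convenience decorators (illustrative) ---
# thread_executor/process_executor/async_executor/batch_executor are thin
# wrappers around @concurrent with executor_type (or chunk_size) preset.
# The function and URLs below are hypothetical.
@thread_executor(max_workers=8)
def check_url(url: str) -> str:
    return f"checked {url}"

for r in check_url.concurrent_execute("https://example.com", "https://example.org"):
    print(r)  # ConcurrentResult(value='checked ...', time=...s)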