Merge branch 'master' into Fix/stream-issues

pull/938/head
harshalmore31 4 months ago committed by GitHub
commit f0d71ae6e0

.gitignore

@@ -18,6 +18,7 @@ next_swarms_update.txt
runs
Financial-Analysis-Agent_state.json
conversations/
models/
evolved_gpt2_models/
experimental
ffn_alternatives

@@ -8,92 +8,115 @@
</p>
<p align="center">
<a href="https://pypi.org/project/swarms/" target="_blank">
<img alt="Python" src="https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54" />
<img alt="Version" src="https://img.shields.io/pypi/v/swarms?style=for-the-badge&color=3670A0">
</a>
</p>
<p align="center">
<a href="https://twitter.com/swarms_corp/">🐦 Twitter</a>
<span>&nbsp;&nbsp;&nbsp;&nbsp;</span>
<a href="https://discord.gg/EamjgSaEQf">📢 Discord</a>
<span>&nbsp;&nbsp;&nbsp;&nbsp;</span>
<a href="https://swarms.ai">Swarms Website</a>
<span>&nbsp;&nbsp;&nbsp;&nbsp;</span>
<a href="https://docs.swarms.world">📙 Documentation</a>
<span>&nbsp;&nbsp;&nbsp;&nbsp;</span>
<a href="https://swarms.world"> Swarms Marketplace</a>
<a href="https://pypi.org/project/swarms/" target="_blank">
<picture>
<source srcset="https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54" media="(prefers-color-scheme: dark)">
<img alt="Python" src="https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54" />
</picture>
<picture>
<source srcset="https://img.shields.io/pypi/v/swarms?style=for-the-badge&color=3670A0" media="(prefers-color-scheme: dark)">
<img alt="Version" src="https://img.shields.io/pypi/v/swarms?style=for-the-badge&color=3670A0">
</picture>
</a>
</p>
<p align="center">
<!-- Social Media -->
<a href="https://discord.gg/jHnrkH5y">
<img src="https://img.shields.io/badge/Discord-Join%20our%20server-5865F2?style=for-the-badge&logo=discord&logoColor=white" alt="Discord">
</a>
<a href="https://www.youtube.com/@kyegomez3242">
<img src="https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white" alt="YouTube">
</a>
<a href="https://www.linkedin.com/in/kye-g-38759a207/">
<img src="https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white" alt="LinkedIn">
</a>
<a href="https://x.com/swarms_corp">
<img src="https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white" alt="X.com">
</a>
<!-- Project Stats - Most Important First -->
<a href="https://github.com/kyegomez/swarms/stargazers">
<picture>
<source srcset="https://img.shields.io/github/stars/kyegomez/swarms?style=for-the-badge" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/github/stars/kyegomez/swarms?style=for-the-badge" alt="GitHub stars">
</picture>
</a>
<a href="https://github.com/kyegomez/swarms/network">
<picture>
<source srcset="https://img.shields.io/github/forks/kyegomez/swarms?style=for-the-badge" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/github/forks/kyegomez/swarms?style=for-the-badge" alt="GitHub forks">
</picture>
</a>
<a href="https://github.com/kyegomez/swarms/issues">
<picture>
<source srcset="https://img.shields.io/github/issues/kyegomez/swarms?style=for-the-badge" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/github/issues/kyegomez/swarms?style=for-the-badge" alt="GitHub issues">
</picture>
</a>
<a href="https://github.com/kyegomez/swarms/blob/main/LICENSE">
<picture>
<source srcset="https://img.shields.io/github/license/kyegomez/swarms?style=for-the-badge" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/github/license/kyegomez/swarms?style=for-the-badge" alt="GitHub license">
</picture>
</a>
<a href="https://pepy.tech/project/swarms">
<picture>
<source srcset="https://static.pepy.tech/badge/swarms/month" media="(prefers-color-scheme: dark)">
<img src="https://static.pepy.tech/badge/swarms/month" alt="Downloads">
</picture>
</a>
<a href="https://libraries.io/github/kyegomez/swarms">
<picture>
<source srcset="https://img.shields.io/librariesio/github/kyegomez/swarms?style=for-the-badge" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/librariesio/github/kyegomez/swarms?style=for-the-badge" alt="Dependency Status">
</picture>
</a>
</p>
<p align="center">
<!-- Project Stats -->
<a href="https://github.com/kyegomez/swarms/issues">
<img src="https://img.shields.io/github/issues/kyegomez/swarms" alt="GitHub issues">
</a>
<a href="https://github.com/kyegomez/swarms/network">
<img src="https://img.shields.io/github/forks/kyegomez/swarms" alt="GitHub forks">
</a>
<a href="https://github.com/kyegomez/swarms/stargazers">
<img src="https://img.shields.io/github/stars/kyegomez/swarms" alt="GitHub stars">
</a>
<a href="https://github.com/kyegomez/swarms/blob/main/LICENSE">
<img src="https://img.shields.io/github/license/kyegomez/swarms" alt="GitHub license">
</a>
<a href="https://star-history.com/#kyegomez/swarms">
<img src="https://img.shields.io/github/stars/kyegomez/swarms?style=social" alt="GitHub star chart">
</a>
<a href="https://libraries.io/github/kyegomez/swarms">
<img src="https://img.shields.io/librariesio/github/kyegomez/swarms" alt="Dependency Status">
</a>
<a href="https://pepy.tech/project/swarms">
<img src="https://static.pepy.tech/badge/swarms/month" alt="Downloads">
</a>
<!-- Social Media -->
<a href="https://twitter.com/swarms_corp/">
<picture>
<source srcset="https://img.shields.io/badge/Twitter-Follow-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/badge/Twitter-Follow-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Twitter">
</picture>
</a>
<a href="https://discord.gg/EamjgSaEQf">
<picture>
<source srcset="https://img.shields.io/badge/Discord-Join-5865F2?style=for-the-badge&logo=discord&logoColor=white" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/badge/Discord-Join-5865F2?style=for-the-badge&logo=discord&logoColor=white" alt="Discord">
</picture>
</a>
<a href="https://www.youtube.com/@kyegomez3242">
<picture>
<source srcset="https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/badge/YouTube-Subscribe-red?style=for-the-badge&logo=youtube&logoColor=white" alt="YouTube">
</picture>
</a>
<a href="https://www.linkedin.com/in/kye-g-38759a207/">
<picture>
<source srcset="https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/badge/LinkedIn-Connect-blue?style=for-the-badge&logo=linkedin&logoColor=white" alt="LinkedIn">
</picture>
</a>
<a href="https://x.com/swarms_corp">
<picture>
<source srcset="https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/badge/X.com-Follow-1DA1F2?style=for-the-badge&logo=x&logoColor=white" alt="X.com">
</picture>
</a>
</p>
<p align="center">
<!-- Share Buttons -->
<a href="https://twitter.com/intent/tweet?text=Check%20out%20this%20amazing%20AI%20project:%20&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms">
<img src="https://img.shields.io/twitter/url/https/twitter.com/cloudposse.svg?style=social&label=Share%20%40kyegomez/swarms" alt="Share on Twitter">
</a>
<a href="https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms">
<img src="https://img.shields.io/badge/Share-%20facebook-blue" alt="Share on Facebook">
</a>
<a href="https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=&summary=&source=">
<img src="https://img.shields.io/badge/Share-%20linkedin-blue" alt="Share on LinkedIn">
</a>
<!-- Main Navigation Links -->
<a href="https://swarms.ai">🏠 Swarms Website</a>
<span>&nbsp;&nbsp;&nbsp;&nbsp;</span>
<a href="https://docs.swarms.world">📙 Documentation</a>
<span>&nbsp;&nbsp;&nbsp;&nbsp;</span>
<a href="https://swarms.world">🛒 Swarms Marketplace</a>
</p>
<p align="center">
<!-- Additional Share Buttons -->
<a href="https://www.reddit.com/submit?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=Swarms%20-%20the%20future%20of%20AI">
<img src="https://img.shields.io/badge/-Share%20on%20Reddit-orange" alt="Share on Reddit">
</a>
<a href="https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&t=Swarms%20-%20the%20future%20of%20AI">
<img src="https://img.shields.io/badge/-Share%20on%20Hacker%20News-orange" alt="Share on Hacker News">
</a>
<a href="https://pinterest.com/pin/create/button/?url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&media=https%3A%2F%2Fexample.com%2Fimage.jpg&description=Swarms%20-%20the%20future%20of%20AI">
<img src="https://img.shields.io/badge/-Share%20on%20Pinterest-red" alt="Share on Pinterest">
</a>
<a href="https://api.whatsapp.com/send?text=Check%20out%20Swarms%20-%20the%20future%20of%20AI%20%23swarms%20%23AI%0A%0Ahttps%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms">
<img src="https://img.shields.io/badge/-Share%20on%20WhatsApp-green" alt="Share on WhatsApp">
</a>
<!-- Share Buttons -->
<a href="https://twitter.com/intent/tweet?text=Check%20out%20this%20amazing%20AI%20project:%20&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms">
<picture>
<source srcset="https://img.shields.io/badge/Share%20on%20Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/badge/Share%20on%20Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Share on Twitter">
</picture>
</a>
<a href="https://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fgithub.com%2Fkyegomez%2Fswarms&title=&summary=&source=">
<picture>
<source srcset="https://img.shields.io/badge/Share%20on%20LinkedIn-blue?style=for-the-badge" media="(prefers-color-scheme: dark)">
<img src="https://img.shields.io/badge/Share%20on%20LinkedIn-blue?style=for-the-badge" alt="Share on LinkedIn">
</picture>
</a>
</p>
## ✨ Features
@@ -112,31 +135,27 @@ Swarms delivers a comprehensive, enterprise-grade multi-agent infrastructure pla
## Install 💻
### Using pip
```bash
$ pip3 install -U swarms
```
### Using uv (Recommended)
[uv](https://github.com/astral-sh/uv) is a fast Python package installer and resolver, written in Rust.
```bash
# Install uv
$ curl -LsSf https://astral.sh/uv/install.sh | sh
# Install swarms using uv
$ uv pip install swarms
```
### Using poetry
```bash
# Install poetry if you haven't already
$ curl -sSL https://install.python-poetry.org | python3 -

# Add swarms to your project
$ poetry add swarms
```
### From source
```bash
# Clone the repository
$ git clone https://github.com/kyegomez/swarms.git
@@ -171,7 +190,7 @@ from swarms import Agent
# Initialize a new agent
agent = Agent(
model_name="gpt-4o-mini", # Specify the LLM
max_loops=1, # Set the number of interactions
max_loops="auto", # Set the number of interactions
interactive=True, # Enable interactive mode for real-time feedback
)
@@ -211,23 +230,64 @@ print(final_post)
-----
### 🤖 AutoSwarmBuilder: Autonomous Agent Generation
The `AutoSwarmBuilder` automatically generates specialized agents and their workflows based on your task description. Simply describe what you need, and it will create a complete multi-agent system with detailed prompts and optimal agent configurations. [Learn more about AutoSwarmBuilder](https://docs.swarms.world/en/latest/swarms/structs/auto_swarm_builder/)
```python
from swarms.structs.auto_swarm_builder import AutoSwarmBuilder
import json
# Initialize the AutoSwarmBuilder
swarm = AutoSwarmBuilder(
name="My Swarm",
description="A swarm of agents",
verbose=True,
max_loops=1,
return_agents=True,
model_name="gpt-4o-mini",
)
# Let the builder automatically create agents and workflows
result = swarm.run(
task="Create an accounting team to analyze crypto transactions, "
"there must be 5 agents in the team with extremely extensive prompts. "
"Make the prompts extremely detailed and specific and long and comprehensive. "
"Make sure to include all the details of the task in the prompts."
)
# The result contains the generated agents and their configurations
print(json.dumps(result, indent=4))
```
The `AutoSwarmBuilder` provides:
- **Automatic Agent Generation**: Creates specialized agents based on task requirements
- **Intelligent Prompt Engineering**: Generates comprehensive, detailed prompts for each agent
- **Optimal Workflow Design**: Determines the best agent interactions and workflow structure
- **Production-Ready Configurations**: Returns fully configured agents ready for deployment
- **Flexible Architecture**: Supports various swarm types and agent specializations
This feature is perfect for rapid prototyping, complex task decomposition, and creating specialized agent teams without manual configuration.
-----
## 🏗️ Multi-Agent Architectures For Production Deployments
`swarms` provides a variety of powerful, pre-built multi-agent architectures enabling you to orchestrate agents in various ways. Choose the right structure for your specific problem to build efficient and reliable production systems.
| **Architecture** | **Description** | **Best For** |
|---|---|---|
| **[SequentialWorkflow](https://docs.swarms.world/en/latest/swarms/structs/sequential_workflow/)** | Agents execute tasks in a linear chain; one agent's output is the next one's input. | Step-by-step processes like data transformation pipelines, report generation. |
| **[ConcurrentWorkflow](https://docs.swarms.world/en/latest/swarms/structs/concurrent_workflow/)** | Agents run tasks simultaneously for maximum efficiency. | High-throughput tasks like batch processing, parallel data analysis. |
| **[AgentRearrange](https://docs.swarms.world/en/latest/swarms/structs/agent_rearrange/)** | Dynamically maps complex relationships (e.g., `a -> b, c`) between agents. | Flexible and adaptive workflows, task distribution, dynamic routing. |
| **[GraphWorkflow](https://docs.swarms.world/en/latest/swarms/structs/graph_workflow/)** | Orchestrates agents as nodes in a Directed Acyclic Graph (DAG). | Complex projects with intricate dependencies, like software builds. |
| **[MixtureOfAgents (MoA)](https://docs.swarms.world/en/latest/swarms/structs/moa/)** | Utilizes multiple expert agents in parallel and synthesizes their outputs. | Complex problem-solving, achieving state-of-the-art performance through collaboration. |
| **[GroupChat](https://docs.swarms.world/en/latest/swarms/structs/group_chat/)** | Agents collaborate and make decisions through a conversational interface. | Real-time collaborative decision-making, negotiations, brainstorming. |
| **[ForestSwarm](https://docs.swarms.world/en/latest/swarms/structs/forest_swarm/)** | Dynamically selects the most suitable agent or tree of agents for a given task. | Task routing, optimizing for expertise, complex decision-making trees. |
| **[HierarchicalSwarm](https://docs.swarms.world/en/latest/swarms/structs/hiearchical_swarm/)** | Orchestrates agents with a director that creates plans and distributes tasks to specialized worker agents. | Complex project management, team coordination, hierarchical decision-making with feedback loops. |
| **[HeavySwarm](https://docs.swarms.world/en/latest/swarms/structs/heavy_swarm/)** | Implements a 5-phase workflow with specialized agents (Research, Analysis, Alternatives, Verification) for comprehensive task analysis. | Complex research and analysis tasks, financial analysis, strategic planning, comprehensive reporting. |
| **[SwarmRouter](https://docs.swarms.world/en/latest/swarms/structs/swarm_router/)** | Universal orchestrator that provides a single interface to run any type of swarm with dynamic selection. | Simplifying complex workflows, switching between swarm strategies, unified multi-agent management. |
| **[SequentialWorkflow](https://docs.swarms.world/en/latest/swarms/structs/sequential_workflow/)** | Agents execute tasks in a linear chain; the output of one agent becomes the input for the next. | Step-by-step processes such as data transformation pipelines and report generation. |
| **[ConcurrentWorkflow](https://docs.swarms.world/en/latest/swarms/structs/concurrent_workflow/)** | Agents run tasks simultaneously for maximum efficiency. | High-throughput tasks such as batch processing and parallel data analysis. |
| **[AgentRearrange](https://docs.swarms.world/en/latest/swarms/structs/agent_rearrange/)** | Dynamically maps complex relationships (e.g., `a -> b, c`) between agents. | Flexible and adaptive workflows, task distribution, and dynamic routing. |
| **[GraphWorkflow](https://docs.swarms.world/en/latest/swarms/structs/graph_workflow/)** | Orchestrates agents as nodes in a Directed Acyclic Graph (DAG). | Complex projects with intricate dependencies, such as software builds. |
| **[MixtureOfAgents (MoA)](https://docs.swarms.world/en/latest/swarms/structs/moa/)** | Utilizes multiple expert agents in parallel and synthesizes their outputs. | Complex problem-solving and achieving state-of-the-art performance through collaboration. |
| **[GroupChat](https://docs.swarms.world/en/latest/swarms/structs/group_chat/)** | Agents collaborate and make decisions through a conversational interface. | Real-time collaborative decision-making, negotiations, and brainstorming. |
| **[ForestSwarm](https://docs.swarms.world/en/latest/swarms/structs/forest_swarm/)** | Dynamically selects the most suitable agent or tree of agents for a given task. | Task routing, optimizing for expertise, and complex decision-making trees. |
| **[HierarchicalSwarm](https://docs.swarms.world/en/latest/swarms/structs/hiearchical_swarm/)** | Orchestrates agents with a director who creates plans and distributes tasks to specialized worker agents. | Complex project management, team coordination, and hierarchical decision-making with feedback loops. |
| **[HeavySwarm](https://docs.swarms.world/en/latest/swarms/structs/heavy_swarm/)** | Implements a five-phase workflow with specialized agents (Research, Analysis, Alternatives, Verification) for comprehensive task analysis. | Complex research and analysis tasks, financial analysis, strategic planning, and comprehensive reporting. |
| **[SwarmRouter](https://docs.swarms.world/en/latest/swarms/structs/swarm_router/)** | A universal orchestrator that provides a single interface to run any type of swarm with dynamic selection. | Simplifying complex workflows, switching between swarm strategies, and unified multi-agent management. |
-----
@@ -310,7 +370,7 @@ print(results)
### AgentRearrange
Inspired by `einsum`, `AgentRearrange` lets you define complex, non-linear relationships between agents using a simple string-based syntax. [Learn more](https://docs.swarms.world/en/latest/swarms/structs/agent_rearrange/). This architecture is Perfect for orchestrating dynamic workflows where agents might work in parallel, sequence, or a combination of both.
Inspired by `einsum`, `AgentRearrange` lets you define complex, non-linear relationships between agents using a simple string-based syntax. [Learn more](https://docs.swarms.world/en/latest/swarms/structs/agent_rearrange/). This architecture is perfect for orchestrating dynamic workflows where agents might work in parallel, in sequence, or in any combination you choose.
```python
from swarms import Agent, AgentRearrange
@@ -683,7 +743,7 @@ By joining us, you have the opportunity to:
* **Work on the Frontier of Agents:** Shape the future of autonomous agent technology and help build a production-grade, open-source framework.
* **Join a Vibrant Community:** Collaborate with a passionate and growing group of agent developers, researchers, and AI enthusiasts.
* **Join a Vibrant Community:** Collaborate with a passionate and growing group of agent developers, researchers, and agent enthusiasts.
* **Make a Tangible Impact:** Whether you're fixing a bug, adding a new feature, or improving documentation, your work will be used in real-world applications.

@@ -1,54 +0,0 @@
from swarms import Agent, CronJob
from loguru import logger
# Example usage
if __name__ == "__main__":
# Initialize the agent
agent = Agent(
agent_name="Quantitative-Trading-Agent",
agent_description="Advanced quantitative trading and algorithmic analysis agent",
system_prompt="""You are an expert quantitative trading agent with deep expertise in:
- Algorithmic trading strategies and implementation
- Statistical arbitrage and market making
- Risk management and portfolio optimization
- High-frequency trading systems
- Market microstructure analysis
- Quantitative research methodologies
- Financial mathematics and stochastic processes
- Machine learning applications in trading
Your core responsibilities include:
1. Developing and backtesting trading strategies
2. Analyzing market data and identifying alpha opportunities
3. Implementing risk management frameworks
4. Optimizing portfolio allocations
5. Conducting quantitative research
6. Monitoring market microstructure
7. Evaluating trading system performance
You maintain strict adherence to:
- Mathematical rigor in all analyses
- Statistical significance in strategy development
- Risk-adjusted return optimization
- Market impact minimization
- Regulatory compliance
- Transaction cost analysis
- Performance attribution
You communicate in precise, technical terms while maintaining clarity for stakeholders.""",
max_loops=1,
model_name="gpt-4.1",
dynamic_temperature_enabled=True,
output_type="str-all-except-first",
streaming_on=True,
print_on=True,
telemetry_enable=False,
)
# Example 1: Basic usage with just a task
logger.info("Starting example cron job")
cron_job = CronJob(agent=agent, interval="10seconds")
cron_job.run(
task="What are the best top 3 etfs for gold coverage?"
)

@@ -55,74 +55,73 @@ extra:
link: https://www.linkedin.com/company/swarms-corp/
footer_links:
"Getting Started":
"Quick Start":
- title: "Installation"
url: "https://docs.swarms.world/en/latest/swarms/install/install/"
- title: "Quickstart"
- title: "Quickstart Guide"
url: "https://docs.swarms.world/en/latest/quickstart/"
- title: "Environment Setup"
url: "https://docs.swarms.world/en/latest/swarms/install/env/"
- title: "Basic Agent Example"
url: "https://docs.swarms.world/en/latest/swarms/examples/basic_agent/"
"Core Capabilities":
- title: "Agents"
url: "https://docs.swarms.world/en/latest/swarms/structs/agent/"
- title: "Tools and MCP"
url: "https://docs.swarms.world/en/latest/swarms/tools/tools_examples/"
- title: "Multi-Agent Architectures"
url: "https://docs.swarms.world/en/latest/swarms/concept/swarm_architectures/"
- title: "Sequential Workflow"
url: "https://docs.swarms.world/en/latest/swarms/structs/sequential_workflow/"
- title: "Concurrent Workflow"
url: "https://docs.swarms.world/en/latest/swarms/structs/concurrentworkflow/"
- title: "Hierarchical Swarm"
url: "https://docs.swarms.world/en/latest/swarms/structs/hierarchical_swarm/"
- title: "LLM Providers"
url: "https://docs.swarms.world/en/latest/swarms/examples/model_providers/"
- title: "Swarm Router"
url: "https://docs.swarms.world/en/latest/swarms/structs/swarm_router/"
"Templates & Applications":
"Advanced Concepts":
- title: "MALT (Multi-Agent Learning Task)"
url: "https://github.com/kyegomez/swarms/blob/master/examples/single_agent/reasoning_agent_examples/malt_example.py"
- title: "MAI-DxO (Medical AI Diagnosis)"
url: "https://github.com/The-Swarm-Corporation/Open-MAI-Dx-Orchestrator"
- title: "AI-CoScientist Research Framework"
url: "https://github.com/The-Swarm-Corporation/AI-CoScientist"
- title: "Agent-as-a-Judge Evaluation"
url: "https://github.com/kyegomez/swarms/blob/master/examples/single_agent/reasoning_agent_examples/agent_judge_example.py"
- title: "Research Papers Collection"
url: "https://github.com/kyegomez/awesome-multi-agent-papers"
"Tools":
- title: "Tools and MCP"
url: "https://docs.swarms.world/en/latest/swarms/tools/tools_examples/"
- title: "MCP (Model Context Protocol)"
url: "https://docs.swarms.world/en/latest/swarms/examples/agent_with_mcp/"
- title: "OpenAI Tools & Function Calling"
url: "https://docs.swarms.world/en/latest/swarms/examples/agent_structured_outputs/"
- title: "Web Search (Exa, Serper)"
url: "https://docs.swarms.world/en/latest/swarms_tools/search/"
- title: "Vision & Image Processing"
url: "https://docs.swarms.world/en/latest/swarms/examples/vision_processing/"
- title: "Browser Automation"
url: "https://docs.swarms.world/en/latest/swarms/examples/swarms_of_browser_agents/"
- title: "Crypto APIs (CoinGecko, HTX)"
url: "https://docs.swarms.world/en/latest/swarms/examples/agent_with_tools/"
- title: "Yahoo Finance"
url: "https://docs.swarms.world/en/latest/swarms/examples/yahoo_finance/"
"Use Cases":
- title: "Examples Overview"
url: "https://docs.swarms.world/en/latest/examples/index/"
- title: "Cookbook"
url: "https://docs.swarms.world/en/latest/examples/cookbook_index/"
- title: "Templates"
- title: "Templates & Applications"
url: "https://docs.swarms.world/en/latest/examples/templates/"
- title: "Paper Implementations"
url: "https://docs.swarms.world/en/latest/examples/paper_implementations/"
"Contributors":
- title: "Contributing"
url: "https://docs.swarms.world/en/latest/contributors/main/"
- title: "Code Style Guide"
url: "https://docs.swarms.world/en/latest/swarms/framework/code_cleanliness/"
- title: "Adding Documentation"
url: "https://docs.swarms.world/en/latest/contributors/docs/"
- title: "Bounty Program"
url: "https://docs.swarms.world/en/latest/governance/bounty_program/"
- title: "Support"
url: "https://docs.swarms.world/en/latest/swarms/support/"
"Community":
- title: "Twitter"
url: "https://twitter.com/swarms_corp"
- title: "Discord"
url: "https://discord.gg/jM3Z6M9uMq"
- title: "YouTube"
url: "https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ"
- title: "LinkedIn"
url: "https://www.linkedin.com/company/the-swarm-corporation"
- title: "Blog"
url: "https://medium.com/@kyeg"
- title: "Events"
url: "https://lu.ma/5p2jnc2v"
- title: "Onboarding Session"
url: "https://cal.com/swarms/swarms-onboarding-session"
- title: "Financial Analysis Swarms"
url: "https://docs.swarms.world/en/latest/swarms/examples/swarms_api_finance/"
- title: "Deep Research Swarm"
url: "https://docs.swarms.world/en/latest/swarms/structs/deep_research_swarm/"
- title: "Medical Diagnosis Systems"
url: "https://docs.swarms.world/en/latest/swarms/examples/swarms_api_medical/"
- title: "DAO Governance"
url: "https://docs.swarms.world/en/latest/swarms/examples/swarms_dao/"
- title: "All Examples Repository"
url: "https://github.com/kyegomez/swarms/tree/master/examples"
analytics:
provider: google
@@ -277,7 +276,6 @@ nav:
- Overview: "swarms/structs/overview.md"
- Custom Multi Agent Architectures: "swarms/structs/custom_swarm.md"
- Debate Multi-Agent Architectures: "swarms/structs/orchestration_methods.md"
- MajorityVoting: "swarms/structs/majorityvoting.md"
- RoundRobin: "swarms/structs/round_robin_swarm.md"
- Mixture of Agents: "swarms/structs/moa.md"
@@ -296,6 +294,7 @@ nav:
- Hybrid Hierarchical-Cluster Swarm: "swarms/structs/hhcs.md"
- Auto Swarm Builder: "swarms/structs/auto_swarm_builder.md"
- Swarm Matcher: "swarms/structs/swarm_matcher.md"
- Board of Directors: "swarms/structs/BoardOfDirectors.md"
# - Multi-Agent Multi-Modal Structures:
# - ImageAgentBatchProcessor: "swarms/structs/image_batch_agent.md"
@@ -342,10 +341,12 @@ nav:
# - Faiss: "swarms_memory/faiss.md"
- Deployment Solutions:
- Deploy on Google Cloud Run: "swarms_cloud/cloud_run.md"
- Deploy on Phala: "swarms_cloud/phala_deploy.md"
# - Overview: "swarms_cloud/overview.md"
- CronJob: "swarms/structs/cron_job.md"
# - Deploy on FastAPI: "swarms_cloud/fastapi_deploy.md"
- Providers:
- Deploy on Google Cloud Run: "swarms_cloud/cloud_run.md"
- Deploy on Phala: "swarms_cloud/phala_deploy.md"
- Deploy on Cloudflare Workers: "swarms_cloud/cloudflare_workers.md"
- Examples:
@@ -446,6 +447,17 @@ nav:
- Tools: "swarms_cloud/swarms_api_tools.md"
- Multi-Agent:
- Multi Agent Architectures Available: "swarms_cloud/swarm_types.md"
- Swarm Types:
- AgentRearrange: "swarms_cloud/agent_rearrange.md"
- MixtureOfAgents: "swarms_cloud/mixture_of_agents.md"
- SequentialWorkflow: "swarms_cloud/sequential_workflow.md"
- ConcurrentWorkflow: "swarms_cloud/concurrent_workflow.md"
- GroupChat: "swarms_cloud/group_chat.md"
- MultiAgentRouter: "swarms_cloud/multi_agent_router.md"
- HierarchicalSwarm: "swarms_cloud/hierarchical_swarm.md"
- MajorityVoting: "swarms_cloud/majority_voting.md"
# - AutoSwarmBuilder: "swarms_cloud/auto_swarm_builder.md"
# - Auto: "swarms_cloud/auto.md"
- Examples:
- Medical Swarm: "swarms/examples/swarms_api_medical.md"
- Finance Swarm: "swarms/examples/swarms_api_finance.md"

@@ -19,13 +19,22 @@
<h4 class="md-footer-links__title">{{ section_name }}</h4>
<ul class="md-footer-links__list">
{% for link in links %}
<li class="md-footer-links__item">
<li class="md-footer-links__item{% if loop.index > 4 %} md-footer-links__item--hidden{% endif %}">
<a href="{{ link.url }}" class="md-footer-links__link">
{{ link.title }}
</a>
</li>
{% endfor %}
</ul>
{% if links|length > 4 %}
<button type="button" class="md-footer-links__toggle"
onclick="toggleFooterLinks(this)"
data-text-more="Show {{ links|length - 4 }} more"
data-text-less="Show less">
<span class="md-footer-links__toggle-text">Show {{ links|length - 4 }} more</span>
<span class="md-footer-links__toggle-icon"></span>
</button>
{% endif %}
</div>
{% endfor %}
</div>
@@ -70,8 +79,8 @@
.md-footer-links {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(240px, 1fr));
gap: 2rem;
grid-template-columns: repeat(5, 1fr);
gap: 1.2rem;
max-width: 1220px;
margin: 0 auto;
}
@@ -81,12 +90,12 @@
}
.md-footer-links__title {
font-size: 0.64rem;
font-size: 0.6rem;
font-weight: 700;
margin: 0 0 1rem;
margin: 0 0 0.8rem;
text-transform: uppercase;
letter-spacing: 0.1em;
padding-bottom: 0.4rem;
padding-bottom: 0.3rem;
}
.md-footer-links__list {
@@ -97,14 +106,14 @@
.md-footer-links__item {
margin: 0;
line-height: 1.8;
line-height: 1.6;
}
.md-footer-links__link {
text-decoration: none;
font-size: 0.7rem;
font-size: 0.65rem;
display: block;
padding: 0.1rem 0;
padding: 0.08rem 0;
transition: color 125ms;
border-radius: 0.1rem;
}
@@ -114,6 +123,45 @@
color: var(--md-accent-fg-color);
}
/* Hidden footer items */
.md-footer-links__item--hidden {
display: none;
}
/* Toggle button styles */
.md-footer-links__toggle {
background: none;
border: 0.05rem solid;
border-radius: 0.15rem;
cursor: pointer;
display: flex;
align-items: center;
gap: 0.25rem;
font-size: 0.58rem;
font-weight: 500;
margin-top: 0.6rem;
padding: 0.3rem 0.6rem;
text-transform: uppercase;
letter-spacing: 0.05em;
transition: all 150ms ease;
width: auto;
min-width: fit-content;
}
.md-footer-links__toggle:hover {
transform: translateY(-1px);
}
.md-footer-links__toggle-icon {
font-size: 0.45rem;
transition: transform 200ms ease;
line-height: 1;
}
.md-footer-links__toggle--expanded .md-footer-links__toggle-icon {
transform: rotate(180deg);
}
/* Light Mode (Default) */
[data-md-color-scheme="default"] .md-footer-custom {
background: #ffffff;
@@ -134,6 +182,18 @@
color: #1976d2;
}
[data-md-color-scheme="default"] .md-footer-links__toggle {
border-color: #e1e5e9;
color: #636c76;
background: #ffffff;
}
[data-md-color-scheme="default"] .md-footer-links__toggle:hover {
border-color: #1976d2;
color: #1976d2;
background: #f8f9fa;
}
/* Dark Mode (Slate) */
[data-md-color-scheme="slate"] .md-footer-custom {
background: #1F2129;
@@ -154,6 +214,18 @@
color: #42a5f5;
}
[data-md-color-scheme="slate"] .md-footer-links__toggle {
border-color: #404040;
color: #9ca3af;
background: #1F2129;
}
[data-md-color-scheme="slate"] .md-footer-links__toggle:hover {
border-color: #42a5f5;
color: #42a5f5;
background: #2a2d38;
}
/* Company Information Section - Base */
.md-footer-company {
padding: 1.5rem 0;
@@ -240,28 +312,45 @@
}
/* Responsive Design */
@media screen and (max-width: 76.1875em) {
@media screen and (min-width: 90em) {
.md-footer-links {
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
max-width: 1400px;
gap: 1.5rem;
}
}
@media screen and (max-width: 76.1875em) {
.md-footer-links {
grid-template-columns: repeat(3, 1fr);
gap: 1rem;
}
.md-footer-custom {
padding: 2rem 0 1rem;
padding: 1.8rem 0 1rem;
}
}
@media screen and (max-width: 59.9375em) {
.md-footer-links {
grid-template-columns: repeat(2, 1fr);
gap: 1.5rem;
gap: 1rem;
}
.md-footer-links__title {
font-size: 0.62rem;
margin: 0 0 0.9rem;
}
.md-footer-links__link {
font-size: 0.68rem;
padding: 0.1rem 0;
}
}
@media screen and (max-width: 44.9375em) {
.md-footer-links {
grid-template-columns: 1fr;
gap: 1.5rem;
gap: 1.2rem;
}
.md-footer-custom {
@@ -272,6 +361,16 @@
padding: 0 1rem;
}
.md-footer-links__title {
font-size: 0.65rem;
margin: 0 0 1rem;
}
.md-footer-links__link {
font-size: 0.7rem;
padding: 0.12rem 0;
}
/* Company section mobile styles */
.md-footer-company__content {
flex-direction: column;
@@ -292,4 +391,30 @@
}
}
</style>
<script>
function toggleFooterLinks(button) {
// Find the parent section
const section = button.closest('.md-footer-links__section');
const hiddenItems = section.querySelectorAll('.md-footer-links__item--hidden');
const toggleText = button.querySelector('.md-footer-links__toggle-text');
const isExpanded = button.classList.contains('md-footer-links__toggle--expanded');
if (isExpanded) {
// Hide items
hiddenItems.forEach(item => {
item.style.display = 'none';
});
toggleText.textContent = button.getAttribute('data-text-more');
button.classList.remove('md-footer-links__toggle--expanded');
} else {
// Show items
hiddenItems.forEach(item => {
item.style.display = 'block';
});
toggleText.textContent = button.getAttribute('data-text-less');
button.classList.add('md-footer-links__toggle--expanded');
}
}
</script>
{% endblock %}

@@ -0,0 +1,903 @@
# Board of Directors - Multi-Agent Architecture
The Board of Directors is a sophisticated multi-agent architecture that implements collective decision-making through democratic processes, voting mechanisms, and role-based leadership. This architecture provides an alternative to single-director patterns by enabling collaborative intelligence through structured governance.
## 🏛️ Overview
The Board of Directors architecture follows a democratic workflow pattern:
1. **Task Reception**: User provides a task to the swarm
2. **Board Meeting**: Board of Directors convenes to discuss and create a plan
3. **Voting & Consensus**: Board members vote and reach consensus on task distribution
4. **Order Distribution**: Board distributes orders to specialized worker agents
5. **Execution**: Individual agents execute their assigned tasks
6. **Feedback Loop**: Board evaluates results and issues new orders if needed (up to `max_loops`)
7. **Context Preservation**: All conversation history and context is maintained throughout the process
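To make the control flow above concrete, here is a schematic sketch of the loop. It is illustrative only: the method names on `board` and `agent` (`meet`, `distribute_orders`, `run`, `is_satisfied`) are hypothetical placeholders, not the actual `BoardOfDirectorsSwarm` internals.
```python
# Schematic of the board workflow above -- not the real implementation.
# All method names on `board` and `agent` are hypothetical placeholders.
def run_board_workflow(task, board, agents, max_loops=3):
    context = [task]                                      # step 1: task reception
    for _ in range(max_loops):
        plan = board.meet(context)                        # steps 2-3: discussion, voting, consensus
        orders = board.distribute_orders(plan, agents)    # step 4: orders to worker agents
        results = [agent.run(order) for agent, order in orders]  # step 5: execution
        context.extend(results)                           # step 7: context preservation
        if board.is_satisfied(results):                   # step 6: feedback loop exit
            break
    return context
```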
## 🏗️ Architecture Components
### Core Components
| Component | Description | Purpose |
|-----------|-------------|---------|
| **BoardOfDirectorsSwarm** | Main orchestration class | Manages the entire board workflow and agent coordination |
| **Board Member Roles** | Role definitions and hierarchy | Defines responsibilities and voting weights for each board member |
| **Decision Making Process** | Voting and consensus mechanisms | Implements democratic decision-making with weighted voting |
| **Workflow Management** | Process orchestration | Manages the complete lifecycle from task reception to final delivery |
### Board Member Interaction Flow
```mermaid
sequenceDiagram
participant User
participant Chairman
participant ViceChair
participant Secretary
participant Treasurer
participant ExecDir
participant Agents
User->>Chairman: Submit Task
Chairman->>ViceChair: Notify Board Meeting
Chairman->>Secretary: Request Meeting Setup
Chairman->>Treasurer: Resource Assessment
Chairman->>ExecDir: Strategic Planning
Note over Chairman,ExecDir: Board Discussion Phase
Chairman->>ViceChair: Lead Discussion
ViceChair->>Secretary: Document Decisions
Secretary->>Treasurer: Budget Considerations
Treasurer->>ExecDir: Resource Allocation
ExecDir->>Chairman: Strategic Recommendations
Note over Chairman,ExecDir: Voting & Consensus
Chairman->>ViceChair: Call for Vote
ViceChair->>Secretary: Record Votes
Secretary->>Treasurer: Financial Approval
Treasurer->>ExecDir: Resource Approval
ExecDir->>Chairman: Final Decision
Note over Chairman,Agents: Execution Phase
Chairman->>Agents: Distribute Orders
Agents->>Chairman: Execute Tasks
Agents->>ViceChair: Progress Reports
Agents->>Secretary: Documentation
Agents->>Treasurer: Resource Usage
Agents->>ExecDir: Strategic Updates
Note over Chairman,ExecDir: Review & Feedback
Chairman->>User: Deliver Results
```
## 👥 Board Member Roles
The Board of Directors supports various roles with different responsibilities and voting weights:
| Role | Description | Voting Weight | Responsibilities |
|------|-------------|---------------|------------------|
| `CHAIRMAN` | Primary leader responsible for board meetings and final decisions | 1.5 | Leading meetings, facilitating consensus, making final decisions |
| `VICE_CHAIRMAN` | Secondary leader who supports the chairman | 1.2 | Supporting chairman, coordinating operations |
| `SECRETARY` | Responsible for documentation and meeting minutes | 1.0 | Documenting meetings, maintaining records |
| `TREASURER` | Manages financial aspects and resource allocation | 1.0 | Financial oversight, resource management |
| `EXECUTIVE_DIRECTOR` | Executive-level board member with operational authority | 1.5 | Strategic planning, operational oversight |
| `MEMBER` | General board member with specific expertise | 1.0 | Contributing expertise, participating in decisions |
### Role Hierarchy and Authority
```python
# Example: Role hierarchy implementation
class BoardRoleHierarchy:
def __init__(self):
self.roles = {
"CHAIRMAN": {
"voting_weight": 1.5,
"authority_level": "FINAL",
"supervises": ["VICE_CHAIRMAN", "EXECUTIVE_DIRECTOR", "SECRETARY", "TREASURER", "MEMBER"],
"responsibilities": ["leadership", "final_decision", "consensus_facilitation"],
"override_capability": True
},
"VICE_CHAIRMAN": {
"voting_weight": 1.2,
"authority_level": "SENIOR",
"supervises": ["MEMBER"],
"responsibilities": ["operational_support", "coordination", "implementation"],
"backup_for": "CHAIRMAN"
},
"EXECUTIVE_DIRECTOR": {
"voting_weight": 1.5,
"authority_level": "SENIOR",
"supervises": ["MEMBER"],
"responsibilities": ["strategic_planning", "execution_oversight", "performance_management"],
"strategic_authority": True
},
"SECRETARY": {
"voting_weight": 1.0,
"authority_level": "STANDARD",
"supervises": [],
"responsibilities": ["documentation", "record_keeping", "communication"],
"administrative_authority": True
},
"TREASURER": {
"voting_weight": 1.0,
"authority_level": "STANDARD",
"supervises": [],
"responsibilities": ["financial_oversight", "resource_management", "budget_control"],
"financial_authority": True
},
"MEMBER": {
"voting_weight": 1.0,
"authority_level": "STANDARD",
"supervises": [],
"responsibilities": ["expertise_contribution", "analysis", "voting"],
"specialized_expertise": True
}
}
```
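Once instantiated, the hierarchy above can be queried directly; a simple usage of the class as defined:
```python
hierarchy = BoardRoleHierarchy()
chairman = hierarchy.roles["CHAIRMAN"]
print(chairman["voting_weight"])        # 1.5
print(chairman["override_capability"])  # True
print(hierarchy.roles["TREASURER"]["responsibilities"])  # ['financial_oversight', ...]
```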
## 🚀 Quick Start
### Basic Setup
```python
from swarms import Agent
from swarms.structs.board_of_directors_swarm import (
BoardOfDirectorsSwarm,
BoardMember,
BoardMemberRole
)
from swarms.config.board_config import enable_board_feature
# Enable the Board of Directors feature
enable_board_feature()
# Create board members with specific roles
chairman = Agent(
agent_name="Chairman",
agent_description="Chairman of the Board responsible for leading meetings",
model_name="gpt-4o-mini",
system_prompt="You are the Chairman of the Board..."
)
vice_chairman = Agent(
agent_name="Vice-Chairman",
agent_description="Vice Chairman who supports the Chairman",
model_name="gpt-4o-mini",
system_prompt="You are the Vice Chairman..."
)
# Create BoardMember objects with roles and expertise
board_members = [
BoardMember(chairman, BoardMemberRole.CHAIRMAN, 1.5, ["leadership", "strategy"]),
BoardMember(vice_chairman, BoardMemberRole.VICE_CHAIRMAN, 1.2, ["operations", "coordination"]),
]
# Create worker agents
research_agent = Agent(
agent_name="Research-Specialist",
agent_description="Expert in market research and analysis",
model_name="gpt-4o",
)
financial_agent = Agent(
agent_name="Financial-Analyst",
agent_description="Specialist in financial analysis and valuation",
model_name="gpt-4o",
)
# Initialize the Board of Directors swarm
board_swarm = BoardOfDirectorsSwarm(
name="Executive_Board_Swarm",
description="Executive board with specialized roles for strategic decision-making",
board_members=board_members,
agents=[research_agent, financial_agent],
max_loops=2,
verbose=True,
decision_threshold=0.6,
enable_voting=True,
enable_consensus=True,
)
# Execute a complex task with democratic decision-making
result = board_swarm.run(task="Analyze the market potential for Tesla (TSLA) stock")
print(result)
```
## 📋 Comprehensive Examples
### 1. Strategic Investment Analysis
```python
import json

# Assumes the board member agents (chairman, vice_chairman, secretary,
# treasurer, executive_director) are defined as in the Quick Start above.

# Create specialized agents for investment analysis
market_research_agent = Agent(
agent_name="Market-Research-Specialist",
agent_description="Expert in market research, competitive analysis, and industry trends",
model_name="gpt-4o",
system_prompt="""You are a Market Research Specialist. Your responsibilities include:
1. Conducting comprehensive market research and analysis
2. Identifying market trends, opportunities, and risks
3. Analyzing competitive landscape and positioning
4. Providing market size and growth projections
5. Supporting strategic decision-making with research findings
You should be thorough, analytical, and objective in your research."""
)
financial_analyst_agent = Agent(
agent_name="Financial-Analyst",
agent_description="Specialist in financial analysis, valuation, and investment assessment",
model_name="gpt-4o",
system_prompt="""You are a Financial Analyst. Your responsibilities include:
1. Conducting financial analysis and valuation
2. Assessing investment opportunities and risks
3. Analyzing financial performance and metrics
4. Providing financial insights and recommendations
5. Supporting financial decision-making
You should be financially astute, analytical, and focused on value creation."""
)
technical_assessor_agent = Agent(
agent_name="Technical-Assessor",
agent_description="Expert in technical feasibility and implementation assessment",
model_name="gpt-4o",
system_prompt="""You are a Technical Assessor. Your responsibilities include:
1. Evaluating technical feasibility and requirements
2. Assessing implementation challenges and risks
3. Analyzing technology stack and architecture
4. Providing technical insights and recommendations
5. Supporting technical decision-making
You should be technically proficient, practical, and solution-oriented."""
)
# Create comprehensive board members
board_members = [
BoardMember(
chairman,
BoardMemberRole.CHAIRMAN,
1.5,
["leadership", "strategy", "governance", "decision_making"]
),
BoardMember(
vice_chairman,
BoardMemberRole.VICE_CHAIRMAN,
1.2,
["operations", "coordination", "communication", "implementation"]
),
BoardMember(
secretary,
BoardMemberRole.SECRETARY,
1.0,
["documentation", "compliance", "record_keeping", "communication"]
),
BoardMember(
treasurer,
BoardMemberRole.TREASURER,
1.0,
["finance", "budgeting", "risk_management", "resource_allocation"]
),
BoardMember(
executive_director,
BoardMemberRole.EXECUTIVE_DIRECTOR,
1.5,
["strategy", "operations", "innovation", "performance_management"]
)
]
# Initialize the investment analysis board
investment_board = BoardOfDirectorsSwarm(
name="Investment_Analysis_Board",
description="Specialized board for investment analysis and decision-making",
board_members=board_members,
agents=[market_research_agent, financial_analyst_agent, technical_assessor_agent],
max_loops=3,
verbose=True,
decision_threshold=0.75, # Higher threshold for investment decisions
enable_voting=True,
enable_consensus=True,
max_workers=3,
output_type="dict"
)
# Execute investment analysis
investment_task = """
Analyze the strategic investment opportunity for a $50M Series B funding round in a
fintech startup. Consider market conditions, competitive landscape, financial projections,
technical feasibility, and strategic fit. Provide comprehensive recommendations including:
1. Investment recommendation (proceed/hold/decline)
2. Valuation analysis and suggested terms
3. Risk assessment and mitigation strategies
4. Strategic value and synergies
5. Implementation timeline and milestones
"""
result = investment_board.run(task=investment_task)
print("Investment Analysis Results:")
print(json.dumps(result, indent=2))
```
### 2. Technology Strategy Development
```python
# Create technology-focused agents
tech_strategy_agent = Agent(
agent_name="Tech-Strategy-Specialist",
agent_description="Expert in technology strategy and digital transformation",
model_name="gpt-4o",
system_prompt="""You are a Technology Strategy Specialist. Your responsibilities include:
1. Developing technology roadmaps and strategies
2. Assessing digital transformation opportunities
3. Evaluating emerging technologies and trends
4. Planning technology investments and priorities
5. Supporting technology decision-making
You should be strategic, forward-thinking, and technology-savvy."""
)
implementation_planner_agent = Agent(
agent_name="Implementation-Planner",
agent_description="Expert in implementation planning and project management",
model_name="gpt-4o",
system_prompt="""You are an Implementation Planner. Your responsibilities include:
1. Creating detailed implementation plans
2. Assessing resource requirements and timelines
3. Identifying implementation risks and challenges
4. Planning change management strategies
5. Supporting implementation decision-making
You should be practical, organized, and execution-focused."""
)
# Technology strategy board configuration
tech_board = BoardOfDirectorsSwarm(
name="Technology_Strategy_Board",
description="Specialized board for technology strategy and digital transformation",
board_members=board_members,
agents=[tech_strategy_agent, implementation_planner_agent, technical_assessor_agent],
max_loops=4, # More loops for complex technology planning
verbose=True,
decision_threshold=0.7,
enable_voting=True,
enable_consensus=True,
max_workers=3,
output_type="dict"
)
# Execute technology strategy development
tech_strategy_task = """
Develop a comprehensive technology strategy for a mid-size manufacturing company
looking to digitize operations and implement Industry 4.0 technologies. Consider:
1. Current technology assessment and gaps
2. Technology roadmap and implementation plan
3. Investment requirements and ROI analysis
4. Risk assessment and mitigation strategies
5. Change management and training requirements
6. Competitive positioning and market advantages
"""
result = tech_board.run(task=tech_strategy_task)
print("Technology Strategy Results:")
print(json.dumps(result, indent=2))
```
### 3. Crisis Management and Response
```python
# Create crisis management agents
crisis_coordinator_agent = Agent(
agent_name="Crisis-Coordinator",
agent_description="Expert in crisis management and emergency response",
model_name="gpt-4o",
system_prompt="""You are a Crisis Coordinator. Your responsibilities include:
1. Coordinating crisis response efforts
2. Assessing crisis severity and impact
3. Developing immediate response plans
4. Managing stakeholder communications
5. Supporting crisis decision-making
You should be calm, decisive, and action-oriented."""
)
communications_specialist_agent = Agent(
agent_name="Communications-Specialist",
agent_description="Expert in crisis communications and stakeholder management",
model_name="gpt-4o",
system_prompt="""You are a Communications Specialist. Your responsibilities include:
1. Developing crisis communication strategies
2. Managing stakeholder communications
3. Coordinating public relations efforts
4. Ensuring message consistency and accuracy
5. Supporting communication decision-making
You should be clear, empathetic, and strategic in communications."""
)
# Crisis management board configuration
crisis_board = BoardOfDirectorsSwarm(
name="Crisis_Management_Board",
description="Specialized board for crisis management and emergency response",
board_members=board_members,
agents=[crisis_coordinator_agent, communications_specialist_agent, financial_analyst_agent],
max_loops=2, # Faster response needed
verbose=True,
decision_threshold=0.6, # Lower threshold for urgent decisions
enable_voting=True,
enable_consensus=True,
max_workers=3,
output_type="dict"
)
# Execute crisis management
crisis_task = """
Our company is facing a major data breach. Develop an immediate response plan.
Include:
1. Immediate containment and mitigation steps
2. Communication strategy for stakeholders
3. Legal and regulatory compliance requirements
4. Financial impact assessment
5. Long-term recovery and prevention measures
6. Timeline and resource allocation
"""
result = crisis_board.run(task=crisis_task)
print("Crisis Management Results:")
print(json.dumps(result, indent=2))
```
## ⚙️ Configuration and Parameters
### BoardOfDirectorsSwarm Parameters
```python
# Complete parameter reference
board_swarm = BoardOfDirectorsSwarm(
# Basic Configuration
name="Board_Name", # Name of the board
description="Board description", # Description of the board's purpose
# Board Members and Agents
board_members=board_members, # List of BoardMember objects
agents=worker_agents, # List of worker Agent objects
# Execution Control
max_loops=3, # Maximum number of refinement loops
max_workers=4, # Maximum parallel workers
# Decision Making
decision_threshold=0.7, # Consensus threshold (0.0-1.0)
enable_voting=True, # Enable voting mechanisms
enable_consensus=True, # Enable consensus building
# Advanced Features
auto_assign_roles=True, # Auto-assign roles based on expertise
role_mapping={ # Custom role mapping
"financial_analysis": ["Treasurer", "Financial_Member"],
"strategic_planning": ["Chairman", "Executive_Director"]
},
# Consensus Configuration
consensus_timeout=300, # Consensus timeout in seconds
min_participation_rate=0.8, # Minimum participation rate
auto_fallback_to_chairman=True, # Chairman can make final decisions
consensus_rounds=3, # Maximum consensus building rounds
# Output Configuration
output_type="dict", # Output format: "dict", "str", "list"
verbose=True, # Enable detailed logging
# Quality Control
quality_threshold=0.8, # Quality threshold for outputs
enable_quality_gates=True, # Enable quality checkpoints
enable_peer_review=True, # Enable peer review mechanisms
# Performance Optimization
parallel_execution=True, # Enable parallel execution
enable_agent_pooling=True, # Enable agent pooling
timeout_per_agent=300, # Timeout per agent in seconds
# Monitoring and Logging
enable_logging=True, # Enable detailed logging
log_level="INFO", # Logging level
enable_metrics=True, # Enable performance metrics
enable_tracing=True # Enable request tracing
)
```
### Voting Configuration
```python
# Voting system configuration
voting_config = {
"method": "weighted_majority", # Voting method
"threshold": 0.75, # Consensus threshold
"weights": { # Role-based voting weights
"CHAIRMAN": 1.5,
"VICE_CHAIRMAN": 1.2,
"SECRETARY": 1.0,
"TREASURER": 1.0,
"EXECUTIVE_DIRECTOR": 1.5
},
"tie_breaker": "CHAIRMAN", # Tie breaker role
"allow_abstention": True, # Allow board members to abstain
"secret_ballot": False, # Use secret ballot voting
"transparent_process": True # Transparent voting process
}
```
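To make the weighted-majority arithmetic concrete, here is a minimal sketch of how such a tally could be computed. The helper `tally_weighted_vote` is hypothetical, not part of the swarms API; the weights mirror the configuration above.
```python
# Hypothetical helper -- not part of the swarms API.
def tally_weighted_vote(votes, weights, threshold=0.75):
    """votes: role -> True/False (abstentions omitted). Returns approval."""
    total = sum(weights[role] for role in votes)                   # weight actually cast
    approve = sum(weights[role] for role, v in votes.items() if v)
    return total > 0 and approve / total >= threshold

votes = {"CHAIRMAN": True, "VICE_CHAIRMAN": True, "SECRETARY": False,
         "TREASURER": True, "EXECUTIVE_DIRECTOR": True}
weights = {"CHAIRMAN": 1.5, "VICE_CHAIRMAN": 1.2, "SECRETARY": 1.0,
           "TREASURER": 1.0, "EXECUTIVE_DIRECTOR": 1.5}
print(tally_weighted_vote(votes, weights))  # True: 5.2 / 6.2 ≈ 0.84 >= 0.75
```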
### Quality Control Configuration
```python
# Quality control configuration
quality_config = {
"quality_gates": True, # Enable quality checkpoints
"quality_threshold": 0.8, # Quality threshold
"enable_peer_review": True, # Enable peer review
"review_required": True, # Require peer review
"output_validation": True, # Validate outputs
"enable_metrics_tracking": True, # Track quality metrics
# Quality metrics
"quality_metrics": {
"completeness": {"weight": 0.2, "threshold": 0.8},
"accuracy": {"weight": 0.25, "threshold": 0.85},
"feasibility": {"weight": 0.2, "threshold": 0.8},
"risk": {"weight": 0.15, "threshold": 0.7},
"impact": {"weight": 0.2, "threshold": 0.8}
}
}
```
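As a concrete reading of these metrics, the sketch below combines per-metric scores into a single weighted score and enforces each metric's own threshold, building on the `quality_config` dict above. The helper `passes_quality_gate` is hypothetical, not part of the swarms API.
```python
# Hypothetical helper -- not part of the swarms API.
def passes_quality_gate(scores, metrics, overall_threshold=0.8):
    """scores: metric name -> observed score in [0, 1]."""
    overall = sum(m["weight"] * scores[name] for name, m in metrics.items())
    each_ok = all(scores[name] >= m["threshold"] for name, m in metrics.items())
    return each_ok and overall >= overall_threshold, overall

scores = {"completeness": 0.9, "accuracy": 0.88, "feasibility": 0.85,
          "risk": 0.75, "impact": 0.82}
ok, overall = passes_quality_gate(scores, quality_config["quality_metrics"])
print(ok, overall)  # -> True, overall ≈ 0.85 (the weights sum to 1.0)
```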
## 📊 Performance Monitoring and Analytics
### Board Performance Metrics
```python
# Get comprehensive board performance metrics
board_summary = board_swarm.get_board_summary()
print("Board Summary:")
print(f"Board Name: {board_summary['board_name']}")
print(f"Total Board Members: {board_summary['total_members']}")
print(f"Total Worker Agents: {board_summary['total_agents']}")
print(f"Decision Threshold: {board_summary['decision_threshold']}")
print(f"Max Loops: {board_summary['max_loops']}")
# Display board member details
print("\nBoard Members:")
for member in board_summary['members']:
print(f"- {member['name']} (Role: {member['role']}, Weight: {member['voting_weight']})")
print(f" Expertise: {', '.join(member['expertise_areas'])}")
# Display worker agent details
print("\nWorker Agents:")
for agent in board_summary['agents']:
print(f"- {agent['name']}: {agent['description']}")
```
### Decision Analysis
```python
# Analyze decision-making patterns
if hasattr(result, 'get') and callable(result.get):
conversation_history = result.get('conversation_history', [])
print(f"\nDecision Analysis:")
print(f"Total Messages: {len(conversation_history)}")
# Count board member contributions
board_contributions = {}
for msg in conversation_history:
if 'Board' in msg.get('role', ''):
member_name = msg.get('agent_name', 'Unknown')
board_contributions[member_name] = board_contributions.get(member_name, 0) + 1
print(f"Board Member Contributions:")
for member, count in board_contributions.items():
print(f"- {member}: {count} contributions")
# Count agent executions
agent_executions = {}
# worker_agents is the list of Agent objects passed to the swarm,
# e.g. [research_agent, financial_agent] from the Quick Start above.
for msg in conversation_history:
if any(agent.agent_name in msg.get('role', '') for agent in worker_agents):
agent_name = msg.get('agent_name', 'Unknown')
agent_executions[agent_name] = agent_executions.get(agent_name, 0) + 1
print(f"\nAgent Executions:")
for agent, count in agent_executions.items():
print(f"- {agent}: {count} executions")
```
### Performance Monitoring System
```python
# Performance monitoring system
class PerformanceMonitor:
def __init__(self):
self.metrics = {
"execution_times": [],
"quality_scores": [],
"consensus_rounds": [],
"error_rates": []
}
def track_execution_time(self, phase, duration):
"""Track execution time for different phases"""
self.metrics["execution_times"].append({
"phase": phase,
"duration": duration,
"timestamp": datetime.now().isoformat()
})
def track_quality_score(self, score):
"""Track quality scores"""
self.metrics["quality_scores"].append({
"score": score,
"timestamp": datetime.now().isoformat()
})
def generate_performance_report(self):
"""Generate comprehensive performance report"""
return {
"average_execution_time": self.calculate_average_execution_time(),
"quality_trends": self.analyze_quality_trends(),
"consensus_efficiency": self.analyze_consensus_efficiency(),
"error_analysis": self.analyze_errors(),
"recommendations": self.generate_recommendations()
}
# Usage example
monitor = PerformanceMonitor()
# ... track metrics during execution ...
report = monitor.generate_performance_report()
print("Performance Report:")
print(json.dumps(report, indent=2))
```
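One way to feed the monitor during a run, sketched with `time.perf_counter`; the phase name and quality score here are illustrative:
```python
import time

monitor = PerformanceMonitor()
start = time.perf_counter()
# result = board_swarm.run(task="...")   # run a board phase here
monitor.track_execution_time("board_meeting", time.perf_counter() - start)
monitor.track_quality_score(0.87)        # e.g. a score from the quality gate
print(monitor.metrics["execution_times"][0]["phase"])  # board_meeting
```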
## 🔧 Advanced Features and Customization
### Custom Board Templates
```python
from swarms.config.board_config import get_default_board_template
# Get pre-configured board templates
financial_board = get_default_board_template("financial_analysis")
strategic_board = get_default_board_template("strategic_planning")
tech_board = get_default_board_template("technology_assessment")
crisis_board = get_default_board_template("crisis_management")
# Custom board template
custom_template = {
"name": "Custom_Board",
"description": "Custom board for specific use case",
"board_members": [
{"role": "CHAIRMAN", "expertise": ["leadership", "strategy"]},
{"role": "VICE_CHAIRMAN", "expertise": ["operations", "coordination"]},
{"role": "SECRETARY", "expertise": ["documentation", "communication"]},
{"role": "TREASURER", "expertise": ["finance", "budgeting"]},
{"role": "EXECUTIVE_DIRECTOR", "expertise": ["strategy", "operations"]}
],
"agents": [
{"name": "Research_Agent", "expertise": ["research", "analysis"]},
{"name": "Technical_Agent", "expertise": ["technical", "implementation"]}
],
"config": {
"max_loops": 3,
"decision_threshold": 0.7,
"enable_voting": True,
"enable_consensus": True
}
}
```
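How a custom template is consumed is not shown here; one plausible pattern (hypothetical, using only constructor parameters documented above) is to unpack its `config` into the swarm:
```python
# Hypothetical: wire the template's config into a swarm instance.
swarm = BoardOfDirectorsSwarm(
    name=custom_template["name"],
    description=custom_template["description"],
    board_members=board_members,   # constructed to match custom_template["board_members"]
    agents=agents,                 # constructed to match custom_template["agents"]
    **custom_template["config"],   # max_loops, decision_threshold, voting flags
)
```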
### Dynamic Role Assignment
```python
# Automatically assign roles based on task requirements
board_swarm = BoardOfDirectorsSwarm(
board_members=board_members,
agents=agents,
auto_assign_roles=True,
role_mapping={
"financial_analysis": ["Treasurer", "Financial_Member"],
"strategic_planning": ["Chairman", "Executive_Director"],
"technical_assessment": ["Technical_Member", "Executive_Director"],
"research_analysis": ["Research_Member", "Secretary"],
"crisis_management": ["Chairman", "Vice_Chairman", "Communications_Member"]
}
)
```
### Consensus Optimization
```python
# Advanced consensus-building mechanisms
board_swarm = BoardOfDirectorsSwarm(
board_members=board_members,
agents=agents,
enable_consensus=True,
consensus_timeout=300, # 5 minutes timeout
min_participation_rate=0.8, # 80% minimum participation
auto_fallback_to_chairman=True, # Chairman can make final decisions
consensus_rounds=3, # Maximum consensus building rounds
consensus_method="weighted_majority", # Consensus method
enable_mediation=True, # Enable mediation for conflicts
mediation_timeout=120 # Mediation timeout in seconds
)
```
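To make `consensus_method="weighted_majority"` concrete, here is a small sketch of how a weighted-majority decision can be computed from board votes. It illustrates the mechanism only; the vote and weight structures are assumptions, not the swarm's internal implementation.
```python
# Illustrative weighted-majority consensus (not the library's internals).
def weighted_majority(votes, weights, threshold=0.7):
    """votes: list of (member_name, choice); weights: member_name -> vote weight."""
    totals = {}
    for member, choice in votes:
        totals[choice] = totals.get(choice, 0.0) + weights.get(member, 1.0)
    total_weight = sum(totals.values())
    choice, weight = max(totals.items(), key=lambda kv: kv[1])
    # The winning option passes only if it clears the decision threshold.
    return choice if weight / total_weight >= threshold else None

votes = [("Chairman", "approve"), ("Treasurer", "approve"), ("Secretary", "reject")]
weights = {"Chairman": 1.5, "Treasurer": 1.0, "Secretary": 1.0}
print(weighted_majority(votes, weights))  # "approve" (2.5 of 3.5 total weight, about 0.71)
```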
## 🛠️ Troubleshooting and Debugging
### Common Issues and Solutions
1. **Consensus Failures**
- **Issue**: Board cannot reach consensus within loop limit
- **Solution**: Lower voting threshold, increase max_loops, or adjust voting weights
```python
board_swarm = BoardOfDirectorsSwarm(
decision_threshold=0.6, # Lower threshold
max_loops=5, # More loops
consensus_timeout=600 # Longer timeout
)
```
2. **Agent Timeout**
- **Issue**: Individual agents take too long to respond
- **Solution**: Increase timeout settings or optimize agent prompts
```python
board_swarm = BoardOfDirectorsSwarm(
timeout_per_agent=600, # 10 minutes per agent
enable_agent_pooling=True # Use agent pooling
)
```
3. **Poor Quality Output**
- **Issue**: Final output doesn't meet quality standards
- **Solution**: Enable quality gates, increase max_loops, or improve agent prompts
```python
board_swarm = BoardOfDirectorsSwarm(
enable_quality_gates=True,
quality_threshold=0.8,
enable_peer_review=True,
max_loops=4
)
```
4. **Resource Exhaustion**
- **Issue**: System runs out of resources during execution
- **Solution**: Implement resource limits, use agent pooling, or optimize parallel execution
```python
board_swarm = BoardOfDirectorsSwarm(
max_workers=2, # Limit parallel workers
enable_agent_pooling=True,
parallel_execution=False # Disable parallel execution
)
```
### Debugging Techniques
```python
# Debugging configuration
debug_config = BoardConfig(
max_loops=1, # Limit loops for debugging
enable_logging=True,
log_level="DEBUG",
enable_tracing=True,
debug_mode=True
)
# Create debug swarm
debug_swarm = BoardOfDirectorsSwarm(
agents=agents,
config=debug_config
)
# Execute with debugging
try:
result = debug_swarm.run(task)
except Exception as e:
print(f"Error: {e}")
print(f"Debug info: {debug_swarm.get_debug_info()}")
# Enable detailed logging
import logging
logging.basicConfig(
level=logging.DEBUG,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
# Create swarm with logging enabled
logging_swarm = BoardOfDirectorsSwarm(
agents=agents,
config=BoardConfig(
enable_logging=True,
log_level="DEBUG",
enable_metrics=True,
enable_tracing=True
)
)
```
## 📋 Use Cases
### Corporate Governance
- **Strategic Planning**: Long-term business strategy development
- **Risk Management**: Comprehensive risk assessment and mitigation
- **Resource Allocation**: Optimal distribution of company resources
- **Performance Oversight**: Monitoring and evaluating organizational performance
### Financial Analysis
- **Portfolio Management**: Investment portfolio optimization and rebalancing
- **Market Analysis**: Comprehensive market research and trend analysis
- **Risk Assessment**: Financial risk evaluation and management
- **Compliance Monitoring**: Regulatory compliance and audit preparation
### Research & Development
- **Technology Assessment**: Evaluation of emerging technologies
- **Product Development**: Strategic product planning and development
- **Innovation Management**: Managing innovation pipelines and initiatives
- **Quality Assurance**: Ensuring high standards across development processes
### Project Management
- **Complex Project Planning**: Multi-faceted project strategy development
- **Resource Optimization**: Efficient allocation of project resources
- **Stakeholder Management**: Coordinating diverse stakeholder interests
- **Risk Mitigation**: Identifying and addressing project risks
### Crisis Management
- **Emergency Response**: Rapid response to critical situations
- **Stakeholder Communication**: Managing communications during crises
- **Recovery Planning**: Developing recovery and prevention strategies
- **Legal Compliance**: Ensuring compliance during crisis situations
## 🎯 Success Criteria
A successful Board of Directors implementation should demonstrate:
- ✅ **Democratic Decision Making**: All board members contribute to decisions
- ✅ **Consensus Achievement**: Decisions reached through collaborative processes
- ✅ **Role Effectiveness**: Each board member fulfills their responsibilities
- ✅ **Agent Coordination**: Worker agents execute tasks efficiently
- ✅ **Quality Output**: High-quality results through collective intelligence
- ✅ **Process Transparency**: Clear visibility into decision-making processes
- ✅ **Performance Optimization**: Efficient resource utilization and execution
- ✅ **Continuous Improvement**: Learning from each execution cycle
## 📚 Best Practices
### 1. Role Definition
- Clearly define responsibilities for each board member
- Ensure expertise areas align with organizational needs
- Balance voting weights based on role importance
- Document role interactions and communication protocols
### 2. Task Formulation
- Provide clear, specific task descriptions
- Include relevant context and constraints
- Specify expected outputs and deliverables
- Define quality criteria and success metrics (a sketch follows)
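A well-formed task string can bundle context, constraints, deliverables, and quality criteria in one place; the scenario below is illustrative:
```python
# Illustrative task string following the checklist above.
task = """
Evaluate whether to expand our B2B SaaS product into the EU market.

Context: 40-person startup, $5M ARR, currently US-only.
Constraints: first-year budget under $500K; GDPR compliance is mandatory.
Deliverables: go/no-go recommendation, risk register, 12-month rollout plan.
Quality criteria: every claim must cite a data source or state its assumption.
"""

result = board_swarm.run(task=task)
```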
### 3. Consensus Building
- Allow adequate time for discussion and consensus
- Encourage diverse perspectives and viewpoints
- Use structured decision-making processes
- Implement conflict resolution mechanisms
### 4. Performance Monitoring
- Track decision quality and outcomes
- Monitor board member participation
- Analyze agent utilization and effectiveness
- Implement continuous improvement processes
### 5. Resource Management
- Optimize agent allocation and utilization
- Implement parallel execution where appropriate
- Monitor resource usage and performance
- Scale resources based on task complexity
---
The Board of Directors architecture represents a sophisticated approach to multi-agent collaboration, enabling organizations to leverage collective intelligence through structured governance and democratic decision-making processes. This comprehensive implementation provides the tools and frameworks needed to build effective, scalable, and intelligent decision-making systems.

@ -6,96 +6,33 @@ The `Conversation` class is a powerful and flexible tool for managing conversati
### Key Features
| Feature Category | Features / Description |
|----------------------------|-------------------------------------------------------------------------------------------------------------|
| **Multiple Storage Backends** | - In-memory: Fast, temporary storage for testing and development<br>- Supabase: PostgreSQL-based cloud storage with real-time capabilities<br>- Redis: High-performance caching and persistence<br>- SQLite: Local file-based storage<br>- DuckDB: Analytical workloads and columnar storage<br>- Pulsar: Event streaming for distributed systems<br>- Mem0: Memory-based storage with mem0 integration |
| **Token Management** | - Built-in token counting with configurable models<br>- Automatic token tracking for input/output messages<br>- Token usage analytics and reporting<br>- Context length management |
| **Metadata and Categories** | - Support for message metadata<br>- Message categorization (input/output)<br>- Role-based message tracking<br>- Custom message IDs |
| **Data Export/Import** | - JSON and YAML export formats<br>- Automatic saving and loading<br>- Conversation history management<br>- Batch operations support |
| **Advanced Features** | - Message search and filtering<br>- Conversation analytics<br>- Multi-agent support<br>- Error handling and fallbacks<br>- Type hints and validation |
### Use Cases
| Use Case | Features / Description |
|----------------------------|--------------------------------------------------------------------------------------------------------|
| **Chatbot Development** | - Store and manage conversation history<br>- Track token usage and context length<br>- Analyze conversation patterns |
| **Multi-Agent Systems** | - Coordinate multiple AI agents<br>- Track agent interactions<br>- Store agent outputs and metadata |
| **Analytics Applications** | - Track conversation metrics<br>- Generate usage reports<br>- Analyze user interactions |
| **Production Systems** | - Persistent storage with various backends<br>- Error handling and recovery<br>- Scalable conversation management |
| **Development and Testing**| - Fast in-memory storage<br>- Debugging support<br>- Easy export/import of test data |
### Best Practices
| Category | Best Practices |
|---------------------|------------------------------------------------------------------------------------------------------------------------|
| **Storage Selection** | - Use in-memory for testing and development<br>- Choose Supabase for multi-user cloud applications<br>- Use Redis for high-performance requirements<br>- Select SQLite for single-user local applications<br>- Pick DuckDB for analytical workloads<br>- Opt for Pulsar in distributed systems |
| **Token Management** | - Enable token counting for production use<br>- Set appropriate context lengths<br>- Monitor token usage with `export_and_count_categories()` |
| **Error Handling** | - Implement proper fallback mechanisms<br>- Use type hints for better code reliability<br>- Monitor and log errors appropriately |
| **Data Management** | - Use appropriate export formats (JSON/YAML)<br>- Implement regular backup strategies<br>- Clean up old conversations when needed |
| **Security** | - Use environment variables for sensitive credentials<br>- Implement proper access controls<br>- Validate input data |
## Table of Contents
@ -113,13 +50,15 @@ The `Conversation` class is designed to manage conversations by keeping track of
**New in this version**: The class now supports multiple storage backends for persistent conversation storage:
- **"in-memory"**: Default memory-based storage (no persistence)
- **"mem0"**: Memory-based storage with mem0 integration (requires: `pip install mem0ai`)
- **"supabase"**: PostgreSQL-based storage using Supabase (requires: `pip install supabase`)
- **"redis"**: Redis-based storage (requires: `pip install redis`)
- **"sqlite"**: SQLite-based storage (built-in to Python)
- **"duckdb"**: DuckDB-based storage (requires: `pip install duckdb`)
- **"pulsar"**: Apache Pulsar messaging backend (requires: `pip install pulsar-client`)
| Backend | Description | Requirements |
|--------------|-------------------------------------------------------------------------------------------------------------|------------------------------------|
| **in-memory**| Default memory-based storage (no persistence) | None (built-in) |
| **mem0** | Memory-based storage with mem0 integration | `pip install mem0ai` |
| **supabase** | PostgreSQL-based storage using Supabase | `pip install supabase` |
| **redis** | Redis-based storage | `pip install redis` |
| **sqlite** | SQLite-based storage (local file) | None (built-in) |
| **duckdb** | DuckDB-based storage (analytical workloads, columnar storage) | `pip install duckdb` |
| **pulsar** | Apache Pulsar messaging backend | `pip install pulsar-client` |
All backends use **lazy loading** - database dependencies are only imported when the specific backend is instantiated. Each backend provides helpful error messages if required packages are not installed.
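As a minimal sketch of backend selection (the import path, the `backend` parameter name, and the `add()` signature are assumptions based on the backend table above; `export_and_count_categories()` is the method referenced under Token Management):
```python
from swarms.structs.conversation import Conversation  # import path assumed

# Fast in-memory storage for development and testing (no persistence).
dev_conv = Conversation(time_enabled=True)

# SQLite-backed storage for a single-user local application.
# sqlite ships with Python, so no extra install is needed.
local_conv = Conversation(backend="sqlite")

local_conv.add(role="user", content="What is our refund policy?")
local_conv.add(role="assistant", content="Refunds are available within 30 days.")

# Token usage reporting, as recommended under Token Management.
print(local_conv.export_and_count_categories())
```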
@ -132,7 +71,6 @@ All backends use **lazy loading** - database dependencies are only imported when
| Parameter | Type | Description |
|-----------|------|-------------|
| system_prompt | Optional[str] | System prompt for the conversation |
| time_enabled | bool | Flag to enable time tracking for messages |
| autosave | bool | Flag to enable automatic saving |
| save_enabled | bool | Flag to control if saving is enabled |
| save_filepath | str | File path for saving conversation history |
| load_filepath | str | File path for loading conversation history |
| conversation_history | list | List storing conversation messages |

@ -122,6 +122,363 @@ cron_job = CronJob(
cron_job.run("Perform analysis")
```
### Cron Jobs With Multi-Agent Structures
You can also run cron jobs with multi-agent structures such as `SequentialWorkflow`, `ConcurrentWorkflow`, `HierarchicalSwarm`, and others.
- Initialize the structure and pass it as the agent parameter: `CronJob(agent=swarm)`
- Pass your task string to the `.run(task: str)` method, as shown in the sketch below
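Here is a minimal sketch of the pattern (the `SequentialWorkflow` import mirrors the `ConcurrentWorkflow` import in the full example below, and the interval string is assumed); a complete, verified cryptocurrency example follows:
```python
# Minimal sketch: schedule a SequentialWorkflow with CronJob.
from swarms import Agent, CronJob, SequentialWorkflow

researcher = Agent(
    agent_name="Researcher",
    system_prompt="Research the given topic and summarize the key findings.",
    model_name="gpt-4o-mini",
    max_loops=1,
)
writer = Agent(
    agent_name="Writer",
    system_prompt="Turn the research summary into a short report.",
    model_name="gpt-4o-mini",
    max_loops=1,
)

swarm = SequentialWorkflow(agents=[researcher, writer], max_loops=1)

# Pass the multi-agent structure as the agent, then give .run() the task.
cron_job = CronJob(agent=swarm, interval="1minute")  # interval string assumed
cron_job.run(task="Write a brief daily AI news digest.")
```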
```python
"""
Cryptocurrency Concurrent Multi-Agent Cron Job Example
This example demonstrates how to use ConcurrentWorkflow with CronJob to create
a powerful cryptocurrency tracking system. Each specialized agent analyzes a
specific cryptocurrency concurrently on a fixed schedule.
Features:
- ConcurrentWorkflow for parallel agent execution
- CronJob scheduling for automated runs on a configurable interval (every 5 seconds in this example)
- Each agent specializes in analyzing one specific cryptocurrency
- Real-time data fetching from CoinGecko API
- Concurrent analysis of multiple cryptocurrencies
- Structured output with professional formatting
Architecture:
CronJob -> ConcurrentWorkflow -> [Bitcoin Agent, Ethereum Agent, Solana Agent, etc.] -> Parallel Analysis
"""
from typing import List
from loguru import logger
from swarms import Agent, CronJob, ConcurrentWorkflow
from swarms_tools import coin_gecko_coin_api
def create_crypto_specific_agents() -> List[Agent]:
"""
Creates agents that each specialize in analyzing a specific cryptocurrency.
Returns:
List[Agent]: List of cryptocurrency-specific Agent instances
"""
# Bitcoin Specialist Agent
bitcoin_agent = Agent(
agent_name="Bitcoin-Analyst",
agent_description="Expert analyst specializing exclusively in Bitcoin (BTC) analysis and market dynamics",
system_prompt="""You are a Bitcoin specialist and expert analyst. Your expertise includes:
BITCOIN SPECIALIZATION:
- Bitcoin's unique position as digital gold
- Bitcoin halving cycles and their market impact
- Bitcoin mining economics and hash rate analysis
- Lightning Network and Layer 2 developments
- Bitcoin adoption by institutions and countries
- Bitcoin's correlation with traditional markets
- Bitcoin technical analysis and on-chain metrics
- Bitcoin's role as a store of value and hedge against inflation
ANALYSIS FOCUS:
- Analyze ONLY Bitcoin data from the provided dataset
- Focus on Bitcoin-specific metrics and trends
- Consider Bitcoin's unique market dynamics
- Evaluate Bitcoin's dominance and market leadership
- Assess institutional adoption trends
- Monitor on-chain activity and network health
DELIVERABLES:
- Bitcoin-specific analysis and insights
- Price action assessment and predictions
- Market dominance analysis
- Institutional adoption impact
- Technical and fundamental outlook
- Risk factors specific to Bitcoin
Extract Bitcoin data from the provided dataset and provide comprehensive Bitcoin-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
# Ethereum Specialist Agent
ethereum_agent = Agent(
agent_name="Ethereum-Analyst",
agent_description="Expert analyst specializing exclusively in Ethereum (ETH) analysis and ecosystem development",
system_prompt="""You are an Ethereum specialist and expert analyst. Your expertise includes:
ETHEREUM SPECIALIZATION:
- Ethereum's smart contract platform and DeFi ecosystem
- Ethereum 2.0 transition and proof-of-stake mechanics
- Gas fees, network usage, and scalability solutions
- Layer 2 solutions (Arbitrum, Optimism, Polygon)
- DeFi protocols and TVL (Total Value Locked) analysis
- NFT markets and Ethereum's role in digital assets
- Developer activity and ecosystem growth
- EIP proposals and network upgrades
ANALYSIS FOCUS:
- Analyze ONLY Ethereum data from the provided dataset
- Focus on Ethereum's platform utility and network effects
- Evaluate DeFi ecosystem health and growth
- Assess Layer 2 adoption and scalability solutions
- Monitor network usage and gas fee trends
- Consider Ethereum's competitive position vs other smart contract platforms
DELIVERABLES:
- Ethereum-specific analysis and insights
- Platform utility and adoption metrics
- DeFi ecosystem impact assessment
- Network health and scalability evaluation
- Competitive positioning analysis
- Technical and fundamental outlook for ETH
Extract Ethereum data from the provided dataset and provide comprehensive Ethereum-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
# Solana Specialist Agent
solana_agent = Agent(
agent_name="Solana-Analyst",
agent_description="Expert analyst specializing exclusively in Solana (SOL) analysis and ecosystem development",
system_prompt="""You are a Solana specialist and expert analyst. Your expertise includes:
SOLANA SPECIALIZATION:
- Solana's high-performance blockchain architecture
- Proof-of-History consensus mechanism
- Solana's DeFi ecosystem and DEX platforms (Serum, Raydium)
- NFT marketplaces and creator economy on Solana
- Network outages and reliability concerns
- Developer ecosystem and Rust programming adoption
- Validator economics and network decentralization
- Cross-chain bridges and interoperability
ANALYSIS FOCUS:
- Analyze ONLY Solana data from the provided dataset
- Focus on Solana's performance and scalability advantages
- Evaluate network stability and uptime improvements
- Assess ecosystem growth and developer adoption
- Monitor DeFi and NFT activity on Solana
- Consider Solana's competitive position vs Ethereum
DELIVERABLES:
- Solana-specific analysis and insights
- Network performance and reliability assessment
- Ecosystem growth and adoption metrics
- DeFi and NFT market analysis
- Competitive advantages and challenges
- Technical and fundamental outlook for SOL
Extract Solana data from the provided dataset and provide comprehensive Solana-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
# Cardano Specialist Agent
cardano_agent = Agent(
agent_name="Cardano-Analyst",
agent_description="Expert analyst specializing exclusively in Cardano (ADA) analysis and research-driven development",
system_prompt="""You are a Cardano specialist and expert analyst. Your expertise includes:
CARDANO SPECIALIZATION:
- Cardano's research-driven development approach
- Ouroboros proof-of-stake consensus protocol
- Smart contract capabilities via Plutus and Marlowe
- Cardano's three-layer architecture (settlement, computation, control)
- Academic partnerships and peer-reviewed research
- Cardano ecosystem projects and DApp development
- Native tokens and Cardano's UTXO model
- Sustainability and treasury funding mechanisms
ANALYSIS FOCUS:
- Analyze ONLY Cardano data from the provided dataset
- Focus on Cardano's methodical development approach
- Evaluate smart contract adoption and ecosystem growth
- Assess academic partnerships and research contributions
- Monitor native token ecosystem development
- Consider Cardano's long-term roadmap and milestones
DELIVERABLES:
- Cardano-specific analysis and insights
- Development progress and milestone achievements
- Smart contract ecosystem evaluation
- Academic research impact assessment
- Native token and DApp adoption metrics
- Technical and fundamental outlook for ADA
Extract Cardano data from the provided dataset and provide comprehensive Cardano-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
# Binance Coin Specialist Agent
bnb_agent = Agent(
agent_name="BNB-Analyst",
agent_description="Expert analyst specializing exclusively in BNB analysis and Binance ecosystem dynamics",
system_prompt="""You are a BNB specialist and expert analyst. Your expertise includes:
BNB SPECIALIZATION:
- BNB's utility within the Binance ecosystem
- Binance Smart Chain (BSC) development and adoption
- BNB token burns and deflationary mechanics
- Binance exchange volume and market leadership
- BSC DeFi ecosystem and yield farming
- Cross-chain bridges and multi-chain strategies
- Regulatory challenges facing Binance globally
- BNB's role in transaction fee discounts and platform benefits
ANALYSIS FOCUS:
- Analyze ONLY BNB data from the provided dataset
- Focus on BNB's utility value and exchange benefits
- Evaluate BSC ecosystem growth and competition with Ethereum
- Assess token burn impact on supply and price
- Monitor Binance platform developments and regulations
- Consider BNB's centralized vs decentralized aspects
DELIVERABLES:
- BNB-specific analysis and insights
- Utility value and ecosystem benefits assessment
- BSC adoption and DeFi growth evaluation
- Token economics and burn mechanism impact
- Regulatory risk and compliance analysis
- Technical and fundamental outlook for BNB
Extract BNB data from the provided dataset and provide comprehensive BNB-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
# XRP Specialist Agent
xrp_agent = Agent(
agent_name="XRP-Analyst",
agent_description="Expert analyst specializing exclusively in XRP analysis and cross-border payment solutions",
system_prompt="""You are an XRP specialist and expert analyst. Your expertise includes:
XRP SPECIALIZATION:
- XRP's role in cross-border payments and remittances
- RippleNet adoption by financial institutions
- Central Bank Digital Currency (CBDC) partnerships
- Regulatory landscape and SEC lawsuit implications
- XRP Ledger's consensus mechanism and energy efficiency
- On-Demand Liquidity (ODL) usage and growth
- Competition with SWIFT and traditional payment rails
- Ripple's partnerships with banks and payment providers
ANALYSIS FOCUS:
- Analyze ONLY XRP data from the provided dataset
- Focus on XRP's utility in payments and remittances
- Evaluate RippleNet adoption and institutional partnerships
- Assess regulatory developments and legal clarity
- Monitor ODL usage and transaction volumes
- Consider XRP's competitive position in payments
DELIVERABLES:
- XRP-specific analysis and insights
- Payment utility and adoption assessment
- Regulatory landscape and legal developments
- Institutional partnership impact evaluation
- Cross-border payment market analysis
- Technical and fundamental outlook for XRP
Extract XRP data from the provided dataset and provide comprehensive XRP-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
return [
bitcoin_agent,
ethereum_agent,
solana_agent,
cardano_agent,
bnb_agent,
xrp_agent,
]
def create_crypto_workflow() -> ConcurrentWorkflow:
"""
Creates a ConcurrentWorkflow with cryptocurrency-specific analysis agents.
Returns:
ConcurrentWorkflow: Configured workflow for crypto analysis
"""
agents = create_crypto_specific_agents()
workflow = ConcurrentWorkflow(
name="Crypto-Specific-Analysis-Workflow",
description="Concurrent execution of cryptocurrency-specific analysis agents",
agents=agents,
max_loops=1,
)
return workflow
def create_crypto_cron_job() -> CronJob:
"""
    Creates a CronJob that runs cryptocurrency-specific analysis on a fixed interval using ConcurrentWorkflow.
Returns:
CronJob: Configured cron job for automated crypto analysis
"""
# Create the concurrent workflow
workflow = create_crypto_workflow()
# Create the cron job
cron_job = CronJob(
agent=workflow, # Use the workflow as the agent
interval="5seconds", # Run every 1 minute
)
return cron_job
def main():
"""
Main function to run the cryptocurrency-specific concurrent analysis cron job.
"""
cron_job = create_crypto_cron_job()
prompt = """
Conduct a comprehensive analysis of your assigned cryptocurrency.
"""
# Start the cron job
logger.info("🔄 Starting automated analysis loop...")
logger.info("⏰ Press Ctrl+C to stop the cron job")
output = cron_job.run(task=prompt)
print(output)
if __name__ == "__main__":
main()
```
## Conclusion
The CronJob class provides a powerful way to schedule and automate tasks using Swarms Agents or custom functions. Key benefits include:

@ -0,0 +1,204 @@
# AgentRearrange
*Dynamically reorganizes agents to optimize task performance and efficiency*
**Swarm Type**: `AgentRearrange`
## Overview
The AgentRearrange swarm type dynamically reorganizes the workflow between agents based on task requirements and performance metrics. This architecture is particularly useful when the effectiveness of agents depends on their sequence or arrangement, allowing for optimal task distribution and execution flow.
Key features:
- **Dynamic Reorganization**: Automatically adjusts agent order based on task needs
- **Performance Optimization**: Optimizes workflow for maximum efficiency
- **Adaptive Sequencing**: Learns from execution patterns to improve arrangement
- **Flexible Task Distribution**: Distributes work based on agent capabilities
## Use Cases
- Complex workflows where task order matters
- Multi-step processes requiring optimization
- Tasks where agent performance varies by sequence
- Adaptive workflow management systems
## API Usage
### Basic AgentRearrange Example
=== "Shell (curl)"
```bash
curl -X POST "https://api.swarms.world/v1/swarm/completions" \
-H "x-api-key: $SWARMS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Document Processing Rearrange",
"description": "Process documents with dynamic agent reorganization",
"swarm_type": "AgentRearrange",
"task": "Analyze this legal document and extract key insights, then summarize findings and identify action items",
"agents": [
{
"agent_name": "Document Analyzer",
"description": "Analyzes document content and structure",
"system_prompt": "You are an expert document analyst. Extract key information, themes, and insights from documents.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Legal Expert",
"description": "Provides legal context and interpretation",
"system_prompt": "You are a legal expert. Analyze documents for legal implications, risks, and compliance issues.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
},
{
"agent_name": "Summarizer",
"description": "Creates concise summaries and action items",
"system_prompt": "You are an expert at creating clear, actionable summaries from complex information.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.4
}
],
"rearrange_flow": "Summarizer -> Legal Expert -> Document Analyzer",
"max_loops": 1
}'
```
=== "Python (requests)"
```python
import requests
import json
API_BASE_URL = "https://api.swarms.world"
API_KEY = "your_api_key_here"
headers = {
"x-api-key": API_KEY,
"Content-Type": "application/json"
}
swarm_config = {
"name": "Document Processing Rearrange",
"description": "Process documents with dynamic agent reorganization",
"swarm_type": "AgentRearrange",
"task": "Analyze this legal document and extract key insights, then summarize findings and identify action items",
"agents": [
{
"agent_name": "Document Analyzer",
"description": "Analyzes document content and structure",
"system_prompt": "You are an expert document analyst. Extract key information, themes, and insights from documents.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Legal Expert",
"description": "Provides legal context and interpretation",
"system_prompt": "You are a legal expert. Analyze documents for legal implications, risks, and compliance issues.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
},
{
"agent_name": "Summarizer",
"description": "Creates concise summaries and action items",
"system_prompt": "You are an expert at creating clear, actionable summaries from complex information.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.4
}
],
"rearrange_flow": "Summarizer -> Legal Expert -> Document Analyzer",
"max_loops": 1
}
response = requests.post(
f"{API_BASE_URL}/v1/swarm/completions",
headers=headers,
json=swarm_config
)
if response.status_code == 200:
    result = response.json()
    print("AgentRearrange swarm completed successfully!")
    print(f"Cost: ${result['usage']['billing_info']['total_cost']}")
    print(f"Execution time: {result['execution_time']} seconds")
    print(f"Output: {result['output']}")
else:
    print(f"Error: {response.status_code} - {response.text}")
```
**Example Response**:
```json
{
"job_id": "swarms-Uc8R7UcepLmNNPwcU7JC6YPy5wiI",
"status": "success",
"swarm_name": "Document Processing Rearrange",
"description": "Process documents with dynamic agent reorganization",
"swarm_type": "AgentRearrange",
"output": [
{
"role": "Summarizer",
"content": "\"Of course! Please provide the legal document you would like me to analyze, and I'll help extract key insights, summarize findings, and identify any action items.\""
},
{
"role": "Legal Expert",
"content": "\"\"Absolutely! Please upload or describe the legal document you need assistance with, and I'll provide an analysis that highlights key insights, summarizes the findings, and identifies any action items that may be necessary.\"\""
},
{
"role": "Document Analyzer",
"content": "\"Of course! Please provide the legal document you would like me to analyze, and I'll help extract key insights, summarize findings, and identify any action items.\""
}
],
"number_of_agents": 3,
"service_tier": "standard",
"execution_time": 7.898931264877319,
"usage": {
"input_tokens": 22,
"output_tokens": 144,
"total_tokens": 166,
"billing_info": {
"cost_breakdown": {
"agent_cost": 0.03,
"input_token_cost": 0.000066,
"output_token_cost": 0.00216,
"token_counts": {
"total_input_tokens": 22,
"total_output_tokens": 144,
"total_tokens": 166
},
"num_agents": 3,
"service_tier": "standard",
"night_time_discount_applied": true
},
"total_cost": 0.032226,
"discount_active": true,
"discount_type": "night_time",
"discount_percentage": 75
}
}
}
```
## Configuration Options
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `rearrange_flow` | string | Instructions for how agents should be rearranged | None |
| `agents` | Array<AgentSpec> | List of agents to be dynamically arranged | Required |
| `max_loops` | integer | Maximum rearrangement iterations | 1 |
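The `rearrange_flow` string lists agent names joined by `->`, as in the example above. In the Swarms library's rearrange syntax, comma-separated names within a step run concurrently; assuming the API parameter follows the same convention:
```python
# Sequential flow: each agent runs after the previous one completes.
sequential_flow = "Document Analyzer -> Legal Expert -> Summarizer"

# Assumed from the library's rearrange syntax: comma-separated agents
# in a step run concurrently before the flow moves on.
parallel_flow = "Document Analyzer, Legal Expert -> Summarizer"
```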
## Best Practices
- Provide clear `rearrange_flow` instructions for optimal reorganization
- Design agents with complementary but flexible roles
- Use when task complexity requires adaptive sequencing
- Monitor execution patterns to understand rearrangement decisions
## Related Swarm Types
- [SequentialWorkflow](sequential_workflow.md) - For fixed sequential processing
- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic swarm construction
- [HierarchicalSwarm](hierarchical_swarm.md) - For structured agent hierarchies

@ -0,0 +1,55 @@
# Auto
*Intelligently selects the most effective swarm architecture for a given task*
**Swarm Type**: `auto` (or `Auto`)
## Overview
The Auto swarm type intelligently selects the most effective swarm architecture for a given task based on context analysis and task requirements. This intelligent system evaluates the task description and automatically chooses the optimal swarm type from all available architectures, ensuring maximum efficiency and effectiveness.
Key features:
- **Intelligent Selection**: Automatically chooses the best swarm type for each task
- **Context Analysis**: Analyzes task requirements to make optimal decisions
- **Adaptive Architecture**: Adapts to different types of problems automatically
- **Zero Configuration**: No manual architecture selection required
## Use Cases
- When unsure about which swarm type to use
- General-purpose task automation
- Rapid prototyping and experimentation
- Simplified API usage for non-experts
## API Usage
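The request format is the same as for the other swarm types on this page family; only the `swarm_type` field changes. A minimal sketch follows (the agents and task are illustrative, and supplying an explicit `agents` array with `auto` is an assumption):
```python
import requests

API_BASE_URL = "https://api.swarms.world"
API_KEY = "your_api_key_here"

headers = {"x-api-key": API_KEY, "Content-Type": "application/json"}

swarm_config = {
    "name": "Auto Swarm Example",
    "description": "Let the platform choose the architecture",
    "swarm_type": "auto",
    "task": "Research the competitive landscape for AI note-taking apps and recommend a differentiation strategy",
    "agents": [
        {
            "agent_name": "Researcher",
            "description": "Gathers market information",
            "system_prompt": "You are a market researcher. Collect and organize relevant facts.",
            "model_name": "gpt-4o",
            "max_loops": 1
        },
        {
            "agent_name": "Strategist",
            "description": "Synthesizes findings into strategy",
            "system_prompt": "You are a strategy consultant. Turn research into actionable recommendations.",
            "model_name": "gpt-4o",
            "max_loops": 1
        }
    ],
    "max_loops": 1
}

response = requests.post(
    f"{API_BASE_URL}/v1/swarm/completions",
    headers=headers,
    json=swarm_config
)

if response.status_code == 200:
    result = response.json()
    # The selected architecture should appear in the response metadata.
    print(result.get("swarm_type"))
    print(result["output"])
else:
    print(f"Error: {response.status_code} - {response.text}")
```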
## Selection Logic
The Auto swarm type analyzes various factors to make its selection:
| Factor | Consideration |
|--------|---------------|
| **Task Complexity** | Simple → Single agent, Complex → Multi-agent |
| **Sequential Dependencies** | Dependencies → SequentialWorkflow |
| **Parallel Opportunities** | Independent subtasks → ConcurrentWorkflow |
| **Collaboration Needs** | Discussion required → GroupChat |
| **Expertise Diversity** | Multiple domains → MixtureOfAgents |
| **Management Needs** | Oversight required → HierarchicalSwarm |
| **Routing Requirements** | Task distribution → MultiAgentRouter |
## Best Practices
- Provide detailed task descriptions for better selection
- Use `rules` parameter to guide selection criteria
- Review the selected architecture in response metadata
- Ideal for users new to swarm architectures
## Related Swarm Types
Since Auto can select any swarm type, it's related to all architectures:
- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic agent generation
- [SequentialWorkflow](sequential_workflow.md) - Often selected for linear tasks
- [ConcurrentWorkflow](concurrent_workflow.md) - For parallel processing needs
- [MixtureOfAgents](mixture_of_agents.md) - For diverse expertise requirements

@ -0,0 +1,46 @@
# AutoSwarmBuilder [ Needs a Fix ]
*Automatically configures optimal swarm architectures based on task requirements*
**Swarm Type**: `AutoSwarmBuilder`
## Overview
The AutoSwarmBuilder automatically configures optimal agent architectures based on task requirements and performance metrics, simplifying swarm creation. This intelligent system analyzes the given task and automatically generates the most suitable agent configuration, eliminating the need for manual swarm design.
Key features:
- **Intelligent Configuration**: Automatically designs optimal swarm structures
- **Task-Adaptive**: Adapts architecture based on specific task requirements
- **Performance Optimization**: Selects configurations for maximum efficiency
- **Simplified Setup**: Eliminates manual agent configuration complexity
## Use Cases
- Quick prototyping and experimentation
- Unknown or complex task requirements
- Automated swarm optimization
- Simplified swarm creation for non-experts
## API Usage
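As a sketch based on the configuration options below; omitting the `agents` array, on the assumption that the builder generates the agent team itself:
```python
import requests

API_BASE_URL = "https://api.swarms.world"
API_KEY = "your_api_key_here"

headers = {"x-api-key": API_KEY, "Content-Type": "application/json"}

# No agents array: the builder is expected to generate the agents.
swarm_config = {
    "name": "Auto-Built Research Swarm",
    "description": "Let AutoSwarmBuilder design the agent team",
    "swarm_type": "AutoSwarmBuilder",
    "task": "Produce a competitive landscape report for edge AI inference chips",
    "rules": "Use at most 4 agents and keep the final report under 1000 words",
    "max_loops": 1
}

response = requests.post(
    f"{API_BASE_URL}/v1/swarm/completions",
    headers=headers,
    json=swarm_config
)
print(response.json())
```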
## Configuration Options
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `task` | string | Task description for automatic optimization | Required |
| `rules` | string | Additional constraints and guidelines | None |
| `max_loops` | integer | Maximum execution rounds | 1 |
## Best Practices
- Provide detailed, specific task descriptions for better optimization
- Use `rules` parameter to guide the automatic configuration
- Ideal for rapid prototyping and experimentation
- Review generated architecture in response metadata
## Related Swarm Types
- [Auto](auto.md) - For automatic swarm type selection
- [MixtureOfAgents](mixture_of_agents.md) - Often selected by AutoSwarmBuilder
- [HierarchicalSwarm](hierarchical_swarm.md) - For complex structured tasks

@ -0,0 +1,279 @@
# Deploy AI Agents with Swarms API on Cloudflare Workers
Deploy intelligent AI agents powered by Swarms API on Cloudflare Workers edge network. Build production-ready cron agents that run automatically, fetch real-time data, perform AI analysis, and execute actions across 330+ cities worldwide.
## Overview
This integration demonstrates how to combine **Swarms API multi-agent intelligence** with **Cloudflare Workers edge computing** to create autonomous AI systems that:
- ⚡ **Execute automatically** on predefined schedules (cron jobs)
- 📊 **Fetch real-time data** from external APIs (Yahoo Finance, news feeds)
- 🤖 **Perform intelligent analysis** using specialized Swarms AI agents
- 📧 **Take automated actions** (email alerts, reports, notifications)
- 🌍 **Scale globally** on Cloudflare's edge network with sub-100ms latency
## Repository & Complete Implementation
For the **complete working implementation** with full source code, detailed setup instructions, and ready-to-deploy examples, visit:
**🔗 [Swarms-CloudFlare-Deployment Repository](https://github.com/The-Swarm-Corporation/Swarms-CloudFlare-Deployment)**
This repository provides:
- **Two complete implementations**: JavaScript and Python
- **Production-ready code** with error handling and monitoring
- **Step-by-step deployment guides** for both local and production environments
- **Real-world examples** including stock analysis agents
- **Configuration templates** and environment setup
## Available Implementations
The repository provides **two complete implementations** of stock analysis agents:
### 📂 `stock-agent/` - JavaScript Implementation
The original implementation using **JavaScript/TypeScript** on Cloudflare Workers.
### 📂 `python-stock-agent/` - Python Implementation
A **Python Workers** implementation using Cloudflare's beta Python runtime with Pyodide.
## Stock Analysis Agent Features
Both implementations demonstrate a complete system that:
1. **Automated Analysis**: Runs stock analysis every 3 hours using Cloudflare Workers cron
2. **Real-time Data**: Fetches market data from Yahoo Finance API (no API key needed)
3. **News Integration**: Collects market news from Financial Modeling Prep API (optional)
4. **Multi-Agent Analysis**: Deploys multiple Swarms AI agents for technical and fundamental analysis
5. **Email Reports**: Sends comprehensive reports via Mailgun
6. **Web Interface**: Provides monitoring dashboard for manual triggers and status tracking
## Implementation Comparison
| Feature | JavaScript (`stock-agent/`) | Python (`python-stock-agent/`) |
|---------|----------------------------|--------------------------------|
| **Runtime** | V8 JavaScript Engine | Pyodide Python Runtime |
| **Language** | JavaScript/TypeScript | Python 3.x |
| **Status** | Production Ready | Beta (Python Workers) |
| **Performance** | Optimized V8 execution | Good, with Python stdlib support |
| **Syntax** | `fetch()`, `JSON.stringify()` | `await fetch()`, `json.dumps()` |
| **Error Handling** | `try/catch` | `try/except` |
| **Libraries** | Built-in Web APIs | Python stdlib + select packages |
| **Development** | Mature tooling | Growing ecosystem |
## Architecture
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Cloudflare │ │ Data Sources │ │ Swarms API │
│ Workers Runtime │ │ │ │ │
│ "0 */3 * * *" │───▶│ Yahoo Finance │───▶│ Technical Agent │
│ JS | Python │ │ News APIs │ │ Fundamental │
│ scheduled() │ │ Market Data │ │ Agent Analysis │
│ Global Edge │ │ │ │ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
```
## Quick Start Guide
Choose your preferred implementation:
### 1a. JavaScript Implementation
```bash
# Clone the repository
git clone https://github.com/The-Swarm-Corporation/Swarms-CloudFlare-Deployment.git
cd Swarms-CloudFlare-Deployment/stock-agent
# Install dependencies
npm install
```
### 1b. Python Implementation
```bash
# Clone the repository
git clone https://github.com/The-Swarm-Corporation/Swarms-CloudFlare-Deployment.git
cd Swarms-CloudFlare-Deployment/python-stock-agent
# Install dependencies (Wrangler CLI)
npm install
```
### 2. Environment Configuration
Create a `.dev.vars` file in your chosen directory:
```env
# Required: Swarms API key
SWARMS_API_KEY=your-swarms-api-key-here
# Optional: Market news (free tier available)
FMP_API_KEY=your-fmp-api-key
# Optional: Email notifications
MAILGUN_API_KEY=your-mailgun-api-key
MAILGUN_DOMAIN=your-domain.com
RECIPIENT_EMAIL=your-email@example.com
```
### 3. Cron Schedule Configuration
The cron schedule is configured in `wrangler.jsonc`:
```jsonc
{
"triggers": {
"crons": [
"0 */3 * * *" // Every 3 hours
]
}
}
```
Common cron patterns:
- `"0 9 * * 1-5"` - 9 AM weekdays only
- `"0 */6 * * *"` - Every 6 hours
- `"0 0 * * *"` - Daily at midnight
### 4. Local Development
```bash
# Start local development server
npm run dev
# Visit http://localhost:8787 to test
```
### 5. Deploy to Cloudflare Workers
```bash
# Deploy to production
npm run deploy
# Your agent will be live at: https://stock-agent.your-subdomain.workers.dev
```
## API Integration Details
### Swarms API Agents
The stock agent uses two specialized AI agents:
1. **Technical Analyst Agent**:
- Calculates technical indicators (RSI, MACD, Moving Averages)
- Identifies support/resistance levels
- Provides trading signals and price targets
2. **Fundamental Analyst Agent**:
- Analyzes market conditions and sentiment
- Evaluates news and economic indicators
- Provides investment recommendations
### Data Sources
- **Yahoo Finance API**: Free real-time stock data (no API key required; see the fetch sketch below)
- **Financial Modeling Prep**: Market news and additional data (free tier: 250 requests/day)
- **Mailgun**: Email delivery service (free tier: 5,000 emails/month)
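For illustration, here is how the free Yahoo Finance data can be fetched outside the Worker; the endpoint, header, and response shape are unofficial assumptions and may change:
```python
import requests

# Unofficial public chart endpoint used for free quotes (subject to change).
url = "https://query1.finance.yahoo.com/v8/finance/chart/AAPL"
resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=10)
resp.raise_for_status()

meta = resp.json()["chart"]["result"][0]["meta"]
print(meta["symbol"], meta.get("regularMarketPrice"))
```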
## Features
### Web Interface
- Real-time status monitoring
- Manual analysis triggers
- Progress tracking with visual feedback
- Analysis results display
### Automated Execution
- Scheduled cron job execution
- Error handling and recovery
- Cost tracking and monitoring
- Email report generation
### Production Ready
- Comprehensive error handling
- Timeout protection
- Rate limiting compliance
- Security best practices
## Configuration Examples
### Custom Stock Symbols
Edit the symbols array in `src/index.js`:
```javascript
const symbols = ['SPY', 'QQQ', 'AAPL', 'MSFT', 'TSLA', 'NVDA', 'AMZN', 'GOOGL'];
```
### Custom Swarms Agents
Modify the agent configuration:
```javascript
const swarmConfig = {
agents: [
{
agent_name: "Risk Assessment Agent",
system_prompt: "Analyze portfolio risk and provide recommendations...",
model_name: "gpt-4o-mini",
max_tokens: 2000,
temperature: 0.1
}
]
};
```
## Cost Optimization
- **Cloudflare Workers**: Free tier includes 100,000 requests/day
- **Swarms API**: Monitor usage in dashboard, use gpt-4o-mini for cost efficiency
- **External APIs**: Leverage free tiers and implement intelligent caching
## Security & Best Practices
- Store API keys as Cloudflare Workers secrets
- Implement request validation and rate limiting
- Audit AI decisions and maintain compliance logs
- Use HTTPS for all external API calls
## Monitoring & Observability
- Cloudflare Workers analytics dashboard
- Real-time performance metrics
- Error tracking and alerting
- Cost monitoring and optimization
## Troubleshooting
### Common Issues
1. **API Key Errors**: Verify environment variables are set correctly
2. **Cron Not Triggering**: Check cron syntax and Cloudflare Workers limits
3. **Email Not Sending**: Verify Mailgun configuration and domain setup
4. **Data Fetch Failures**: Check external API status and rate limits
### Debug Mode
Add `console.log` statements to trace execution; view them in the local dev output, or stream production logs with `wrangler tail`:
```javascript
console.log('Scheduled run started:', new Date().toISOString());
```
## Additional Resources
- [Cloudflare Workers Documentation](https://developers.cloudflare.com/workers/)
- [Swarms API Documentation](https://docs.swarms.world/)
- [Cron Expression Generator](https://crontab.guru/)
- [Financial Modeling Prep API](https://financialmodelingprep.com/developer/docs)

@ -0,0 +1,214 @@
# ConcurrentWorkflow
*Runs independent tasks in parallel for faster processing*
**Swarm Type**: `ConcurrentWorkflow`
## Overview
The ConcurrentWorkflow swarm type runs independent tasks in parallel, significantly reducing processing time for complex operations. This architecture is ideal for tasks that can be processed simultaneously without dependencies, allowing multiple agents to work on different aspects of a problem at the same time.
Key features:
- **Parallel Execution**: Multiple agents work simultaneously
- **Reduced Processing Time**: Faster completion through parallelization
- **Independent Tasks**: Agents work on separate, non-dependent subtasks
- **Scalable Performance**: Performance scales with the number of agents
## Use Cases
- Independent data analysis tasks
- Parallel content generation
- Multi-source research projects
- Distributed problem solving
## API Usage
### Basic ConcurrentWorkflow Example
=== "Shell (curl)"
```bash
curl -X POST "https://api.swarms.world/v1/swarm/completions" \
-H "x-api-key: $SWARMS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Market Research Concurrent",
"description": "Parallel market research across different sectors",
"swarm_type": "ConcurrentWorkflow",
"task": "Research and analyze market opportunities in AI, healthcare, fintech, and e-commerce sectors",
"agents": [
{
"agent_name": "AI Market Analyst",
"description": "Analyzes AI market trends and opportunities",
"system_prompt": "You are an AI market analyst. Focus on artificial intelligence market trends, opportunities, key players, and growth projections.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Healthcare Market Analyst",
"description": "Analyzes healthcare market trends",
"system_prompt": "You are a healthcare market analyst. Focus on healthcare market trends, digital health opportunities, regulatory landscape, and growth areas.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Fintech Market Analyst",
"description": "Analyzes fintech market opportunities",
"system_prompt": "You are a fintech market analyst. Focus on financial technology trends, digital payment systems, blockchain opportunities, and regulatory developments.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "E-commerce Market Analyst",
"description": "Analyzes e-commerce market trends",
"system_prompt": "You are an e-commerce market analyst. Focus on online retail trends, marketplace opportunities, consumer behavior, and emerging platforms.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
}
],
"max_loops": 1
}'
```
=== "Python (requests)"
```python
import requests
import json
API_BASE_URL = "https://api.swarms.world"
API_KEY = "your_api_key_here"
headers = {
"x-api-key": API_KEY,
"Content-Type": "application/json"
}
swarm_config = {
"name": "Market Research Concurrent",
"description": "Parallel market research across different sectors",
"swarm_type": "ConcurrentWorkflow",
"task": "Research and analyze market opportunities in AI, healthcare, fintech, and e-commerce sectors",
"agents": [
{
"agent_name": "AI Market Analyst",
"description": "Analyzes AI market trends and opportunities",
"system_prompt": "You are an AI market analyst. Focus on artificial intelligence market trends, opportunities, key players, and growth projections.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Healthcare Market Analyst",
"description": "Analyzes healthcare market trends",
"system_prompt": "You are a healthcare market analyst. Focus on healthcare market trends, digital health opportunities, regulatory landscape, and growth areas.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Fintech Market Analyst",
"description": "Analyzes fintech market opportunities",
"system_prompt": "You are a fintech market analyst. Focus on financial technology trends, digital payment systems, blockchain opportunities, and regulatory developments.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "E-commerce Market Analyst",
"description": "Analyzes e-commerce market trends",
"system_prompt": "You are an e-commerce market analyst. Focus on online retail trends, marketplace opportunities, consumer behavior, and emerging platforms.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
}
],
"max_loops": 1
}
response = requests.post(
f"{API_BASE_URL}/v1/swarm/completions",
headers=headers,
json=swarm_config
)
if response.status_code == 200:
    result = response.json()
    print("ConcurrentWorkflow swarm completed successfully!")
    print(f"Cost: ${result['usage']['billing_info']['total_cost']}")
    print(f"Execution time: {result['execution_time']} seconds")
    print(f"Parallel results: {result['output']}")
else:
    print(f"Error: {response.status_code} - {response.text}")
```
**Example Response**:
```json
{
"job_id": "swarms-S17nZFDesmLHxCRoeyF3NVYvPaXk",
"status": "success",
"swarm_name": "Market Research Concurrent",
"description": "Parallel market research across different sectors",
"swarm_type": "ConcurrentWorkflow",
"output": [
{
"role": "E-commerce Market Analyst",
"content": "To analyze market opportunities in the AI, healthcare, fintech, and e-commerce sectors, we can break down each sector's current trends, consumer behavior, and emerging platforms. Here's an overview of each sector with a focus on e-commerce....."
},
{
"role": "AI Market Analyst",
"content": "The artificial intelligence (AI) landscape presents numerous opportunities across various sectors, particularly in healthcare, fintech, and e-commerce. Here's a detailed analysis of each sector:\n\n### Healthcare....."
},
{
"role": "Healthcare Market Analyst",
"content": "As a Healthcare Market Analyst, I will focus on analyzing market opportunities within the healthcare sector, particularly in the realm of AI and digital health. The intersection of healthcare with fintech and e-commerce also presents unique opportunities. Here's an overview of key trends and growth areas:...."
},
{
"role": "Fintech Market Analyst",
"content": "Certainly! Let's break down the market opportunities in the fintech sector, focusing on financial technology trends, digital payment systems, blockchain opportunities, and regulatory developments:\n\n### 1. Financial Technology Trends....."
}
],
"number_of_agents": 4,
"service_tier": "standard",
"execution_time": 23.360230922698975,
"usage": {
"input_tokens": 35,
"output_tokens": 2787,
"total_tokens": 2822,
"billing_info": {
"cost_breakdown": {
"agent_cost": 0.04,
"input_token_cost": 0.000105,
"output_token_cost": 0.041805,
"token_counts": {
"total_input_tokens": 35,
"total_output_tokens": 2787,
"total_tokens": 2822
},
"num_agents": 4,
"service_tier": "standard",
"night_time_discount_applied": true
},
"total_cost": 0.08191,
"discount_active": true,
"discount_type": "night_time",
"discount_percentage": 75
}
}
}
```
## Best Practices
- Design independent tasks that don't require sequential dependencies
- Use for tasks that can be parallelized effectively
- Ensure agents have distinct, non-overlapping responsibilities
- Ideal for time-sensitive analysis requiring multiple perspectives
## Related Swarm Types
- [SequentialWorkflow](sequential_workflow.md) - For ordered execution
- [MixtureOfAgents](mixture_of_agents.md) - For collaborative analysis
- [MultiAgentRouter](multi_agent_router.md) - For intelligent task distribution

@ -0,0 +1,189 @@
# GroupChat
*Enables dynamic collaboration through chat-based interaction*
**Swarm Type**: `GroupChat`
## Overview
The GroupChat swarm type enables dynamic collaboration between agents through a chat-based interface, facilitating real-time information sharing and decision-making. Agents participate in a conversational workflow where they can build upon each other's contributions, debate ideas, and reach consensus through natural dialogue.
Key features:
- **Interactive Dialogue**: Agents communicate through natural conversation
- **Dynamic Collaboration**: Real-time information sharing and building upon ideas
- **Consensus Building**: Agents can debate and reach decisions collectively
- **Flexible Participation**: Agents can contribute when relevant to the discussion
## Use Cases
- Brainstorming and ideation sessions
- Multi-perspective problem analysis
- Collaborative decision-making processes
- Creative content development
## API Usage
### Basic GroupChat Example
=== "Shell (curl)"
```bash
curl -X POST "https://api.swarms.world/v1/swarm/completions" \
-H "x-api-key: $SWARMS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Product Strategy Discussion",
"description": "Collaborative chat to develop product strategy",
"swarm_type": "GroupChat",
"task": "Discuss and develop a go-to-market strategy for a new AI-powered productivity tool targeting small businesses",
"agents": [
{
"agent_name": "Product Manager",
"description": "Leads product strategy and development",
"system_prompt": "You are a senior product manager. Focus on product positioning, features, user needs, and market fit. Ask probing questions and build on others ideas.",
"model_name": "gpt-4o",
"max_loops": 3,
},
{
"agent_name": "Marketing Strategist",
"description": "Develops marketing and positioning strategy",
"system_prompt": "You are a marketing strategist. Focus on target audience, messaging, channels, and competitive positioning. Contribute marketing insights to the discussion.",
"model_name": "gpt-4o",
"max_loops": 3,
},
{
"agent_name": "Sales Director",
"description": "Provides sales and customer perspective",
"system_prompt": "You are a sales director with small business experience. Focus on pricing, sales process, customer objections, and market adoption. Share practical sales insights.",
"model_name": "gpt-4o",
"max_loops": 3,
},
{
"agent_name": "UX Researcher",
"description": "Represents user experience and research insights",
"system_prompt": "You are a UX researcher specializing in small business tools. Focus on user behavior, usability, adoption barriers, and design considerations.",
"model_name": "gpt-4o",
"max_loops": 3,
}
],
"max_loops": 3
}'
```
=== "Python (requests)"
```python
import requests
import json
API_BASE_URL = "https://api.swarms.world"
API_KEY = "your_api_key_here"
headers = {
"x-api-key": API_KEY,
"Content-Type": "application/json"
}
swarm_config = {
"name": "Product Strategy Discussion",
"description": "Collaborative chat to develop product strategy",
"swarm_type": "GroupChat",
"task": "Discuss and develop a go-to-market strategy for a new AI-powered productivity tool targeting small businesses",
"agents": [
{
"agent_name": "Product Manager",
"description": "Leads product strategy and development",
"system_prompt": "You are a senior product manager. Focus on product positioning, features, user needs, and market fit. Ask probing questions and build on others ideas.",
"model_name": "gpt-4o",
"max_loops": 3,
},
{
"agent_name": "Marketing Strategist",
"description": "Develops marketing and positioning strategy",
"system_prompt": "You are a marketing strategist. Focus on target audience, messaging, channels, and competitive positioning. Contribute marketing insights to the discussion.",
"model_name": "gpt-4o",
"max_loops": 3,
},
{
"agent_name": "Sales Director",
"description": "Provides sales and customer perspective",
"system_prompt": "You are a sales director with small business experience. Focus on pricing, sales process, customer objections, and market adoption. Share practical sales insights.",
"model_name": "gpt-4o",
"max_loops": 3,
},
{
"agent_name": "UX Researcher",
"description": "Represents user experience and research insights",
"system_prompt": "You are a UX researcher specializing in small business tools. Focus on user behavior, usability, adoption barriers, and design considerations.",
"model_name": "gpt-4o",
"max_loops": 3,
}
],
"max_loops": 3
}
response = requests.post(
    f"{API_BASE_URL}/v1/swarm/completions",
    headers=headers,
    json=swarm_config
)

if response.status_code == 200:
    result = response.json()
    print("GroupChat swarm completed successfully!")
    print(f"Cost: ${result['usage']['billing_info']['total_cost']}")
    print(f"Execution time: {result['execution_time']} seconds")
    print(f"Chat discussion: {result['output']}")
else:
    print(f"Error: {response.status_code} - {response.text}")
```
**Example Response**:
```json
{
"job_id": "swarms-2COVtf3k0Fz7jU1BOOHF3b5nuL2x",
"status": "success",
"swarm_name": "Product Strategy Discussion",
"description": "Collaborative chat to develop product strategy",
"swarm_type": "GroupChat",
"output": "User: \n\nSystem: \n Group Chat Name: Product Strategy Discussion\nGroup Chat Description: Collaborative chat to develop product strategy\n Agents in your Group Chat: Available Agents for Team: None\n\n\n\n[Agent 1]\nName: Product Manager\nDescription: Leads product strategy and development\nRole.....",
"number_of_agents": 4,
"service_tier": "standard",
"execution_time": 47.36732482910156,
"usage": {
"input_tokens": 30,
"output_tokens": 1633,
"total_tokens": 1663,
"billing_info": {
"cost_breakdown": {
"agent_cost": 0.04,
"input_token_cost": 0.00009,
"output_token_cost": 0.024495,
"token_counts": {
"total_input_tokens": 30,
"total_output_tokens": 1633,
"total_tokens": 1663
},
"num_agents": 4,
"service_tier": "standard",
"night_time_discount_applied": false
},
"total_cost": 0.064585,
"discount_active": false,
"discount_type": "none",
"discount_percentage": 0
}
}
}
```
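Unlike the list-of-messages outputs of most other swarm types, this GroupChat example returns the whole conversation as a single string in `output`. A minimal sketch for persisting it (the file name is illustrative), assuming `result` holds the parsed JSON response:

```python
# Minimal sketch: save the returned chat transcript for later review.
# Assumes `result` holds the parsed JSON response shown above.
transcript = result["output"]  # the full conversation as one string
with open("product_strategy_discussion.txt", "w", encoding="utf-8") as f:
    f.write(transcript)
```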
## Best Practices
- Set clear discussion goals and objectives
- Use diverse agent personalities for richer dialogue
- Allow multiple conversation rounds for idea development
- Encourage agents to build upon each other's contributions
## Related Swarm Types
- [MixtureOfAgents](mixture_of_agents.md) - For complementary expertise
- [MajorityVoting](majority_voting.md) - For consensus decision-making
- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic discussion setup

@ -0,0 +1,252 @@
# HiearchicalSwarm
*Implements structured, multi-level task management with clear authority*
**Swarm Type**: `HiearchicalSwarm`
## Overview
The HiearchicalSwarm implements a structured, multi-level approach to task management with clear lines of authority and delegation. This architecture organizes agents in a hierarchical structure where manager agents coordinate and oversee worker agents, enabling efficient task distribution and quality control.
Key features:
- **Structured Hierarchy**: Clear organizational structure with managers and workers
- **Delegated Authority**: Manager agents distribute tasks to specialized workers
- **Quality Oversight**: Multi-level review and validation processes
- **Scalable Organization**: Efficient coordination of large agent teams
## Use Cases
- Complex projects requiring management oversight
- Large-scale content production workflows
- Multi-stage validation and review processes
- Enterprise-level task coordination
## API Usage
### Basic HiearchicalSwarm Example
=== "Shell (curl)"
```bash
curl -X POST "https://api.swarms.world/v1/swarm/completions" \
-H "x-api-key: $SWARMS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Market Research ",
"description": "Parallel market research across different sectors",
"swarm_type": "HiearchicalSwarm",
"task": "Research and analyze market opportunities in AI, healthcare, fintech, and e-commerce sectors",
"agents": [
{
"agent_name": "AI Market Analyst",
"description": "Analyzes AI market trends and opportunities",
"system_prompt": "You are an AI market analyst. Focus on artificial intelligence market trends, opportunities, key players, and growth projections.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Healthcare Market Analyst",
"description": "Analyzes healthcare market trends",
"system_prompt": "You are a healthcare market analyst. Focus on healthcare market trends, digital health opportunities, regulatory landscape, and growth areas.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Fintech Market Analyst",
"description": "Analyzes fintech market opportunities",
"system_prompt": "You are a fintech market analyst. Focus on financial technology trends, digital payment systems, blockchain opportunities, and regulatory developments.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "E-commerce Market Analyst",
"description": "Analyzes e-commerce market trends",
"system_prompt": "You are an e-commerce market analyst. Focus on online retail trends, marketplace opportunities, consumer behavior, and emerging platforms.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
}
],
"max_loops": 1
}'
```
=== "Python (requests)"
```python
import requests
import json
API_BASE_URL = "https://api.swarms.world"
API_KEY = "your_api_key_here"
headers = {
"x-api-key": API_KEY,
"Content-Type": "application/json"
}
swarm_config = {
"name": "Market Research ",
"description": "Parallel market research across different sectors",
"swarm_type": "HiearchicalSwarm",
"task": "Research and analyze market opportunities in AI, healthcare, fintech, and e-commerce sectors",
"agents": [
{
"agent_name": "AI Market Analyst",
"description": "Analyzes AI market trends and opportunities",
"system_prompt": "You are an AI market analyst. Focus on artificial intelligence market trends, opportunities, key players, and growth projections.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Healthcare Market Analyst",
"description": "Analyzes healthcare market trends",
"system_prompt": "You are a healthcare market analyst. Focus on healthcare market trends, digital health opportunities, regulatory landscape, and growth areas.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Fintech Market Analyst",
"description": "Analyzes fintech market opportunities",
"system_prompt": "You are a fintech market analyst. Focus on financial technology trends, digital payment systems, blockchain opportunities, and regulatory developments.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "E-commerce Market Analyst",
"description": "Analyzes e-commerce market trends",
"system_prompt": "You are an e-commerce market analyst. Focus on online retail trends, marketplace opportunities, consumer behavior, and emerging platforms.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
}
],
"max_loops": 1
}
response = requests.post(
    f"{API_BASE_URL}/v1/swarm/completions",
    headers=headers,
    json=swarm_config
)

if response.status_code == 200:
    result = response.json()
    print("HiearchicalSwarm completed successfully!")
    print(f"Cost: ${result['usage']['billing_info']['total_cost']}")
    print(f"Execution time: {result['execution_time']} seconds")
    print(f"Research results: {result['output']}")
else:
    print(f"Error: {response.status_code} - {response.text}")
```
**Example Response**:
```json
{
"job_id": "swarms-JIrcIAfs2d75xrXGaAL94uWyYJ8V",
"status": "success",
"swarm_name": "Market Research Auto",
"description": "Parallel market research across different sectors",
"swarm_type": "HiearchicalSwarm",
"output": [
{
"role": "System",
"content": "These are the agents in your team. Each agent has a specific role and expertise to contribute to the team's objectives.\nTotal Agents: 4\n\nBelow is a summary of your team members and their primary responsibilities:\n| Agent Name | Description |\n|------------|-------------|\n| AI Market Analyst | Analyzes AI market trends and opportunities |\n| Healthcare Market Analyst | Analyzes healthcare market trends |\n| Fintech Market Analyst | Analyzes fintech market opportunities |\n| E-commerce Market Analyst | Analyzes e-commerce market trends |\n\nEach agent is designed to handle tasks within their area of expertise. Collaborate effectively by assigning tasks according to these roles."
},
{
"role": "Director",
"content": [
{
"role": "Director",
"content": [
{
"function": {
"arguments": "{\"plan\":\"Conduct a comprehensive analysis of market opportunities in the AI, healthcare, fintech, and e-commerce sectors. Each market analyst will focus on their respective sector, gathering data on current trends, growth opportunities, and potential challenges. The findings will be compiled into a report for strategic decision-making.\",\"orders\":[{\"agent_name\":\"AI Market Analyst\",\"task\":\"Research current trends in the AI market, identify growth opportunities, and analyze potential challenges.\"},{\"agent_name\":\"Healthcare Market Analyst\",\"task\":\"Analyze the healthcare market for emerging trends, growth opportunities, and possible challenges.\"},{\"agent_name\":\"Fintech Market Analyst\",\"task\":\"Investigate the fintech sector for current trends, identify opportunities for growth, and assess challenges.\"},{\"agent_name\":\"E-commerce Market Analyst\",\"task\":\"Examine e-commerce market trends, identify growth opportunities, and analyze potential challenges.\"}]}",
"name": "ModelMetaclass"
},
"id": "call_GxiyzIRb2oGQXokbbkeaeVry",
"type": "function"
}
]
}
]
},
{
"role": "AI Market Analyst",
"content": "### AI Market Analysis: Trends, Opportunities, and Challenges\n\n#### Current Trends in the AI Market:\n\n1. **Increased Adoption Across Industries**..."
},
{
"role": "Healthcare Market Analyst",
"content": "### Healthcare Market Analysis: Trends, Opportunities, and Challenges\n\n#### Current Trends in the Healthcare Market:\n\n1. **Telehealth Expansion**..."
},
{
"role": "Fintech Market Analyst",
"content": "### Fintech Market Analysis: Trends, Opportunities, and Challenges\n\n#### Current Trends in the Fintech Market:\n\n1. **Digital Payments Proliferation**...."
},
{
"role": "E-commerce Market Analyst",
"content": "### E-commerce Market Analysis: Trends, Opportunities, and Challenges\n\n#### Current Trends in the E-commerce Market:\n\n1. **Omnichannel Retailing**...."
},
{
"role": "Director",
"content": "### Feedback for Worker Agents\n\n#### AI Market Analyst\n\n**Strengths:**\n- Comprehensive coverage of current trends, growth opportunities, and challenges in the AI market.\n- Clear categorization of insights, making it easy to follow and understand.\n\n**Weaknesses....."
},
{
"role": "System",
"content": "--- Loop 1/1 completed ---"
}
],
"number_of_agents": 4,
"service_tier": "standard",
"execution_time": 94.07934331893921,
"usage": {
"input_tokens": 35,
"output_tokens": 3827,
"total_tokens": 3862,
"billing_info": {
"cost_breakdown": {
"agent_cost": 0.04,
"input_token_cost": 0.000105,
"output_token_cost": 0.057405,
"token_counts": {
"total_input_tokens": 35,
"total_output_tokens": 3827,
"total_tokens": 3862
},
"num_agents": 4,
"service_tier": "standard",
"night_time_discount_applied": false
},
"total_cost": 0.09751,
"discount_active": false,
"discount_type": "none",
"discount_percentage": 0
}
}
}
```
## Configuration Options
| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `role` | string | Agent role: "manager" or "worker" | "worker" |
| `agents` | Array<AgentSpec> | Mix of manager and worker agents | Required |
| `max_loops` | integer | Coordination rounds for managers | 1 |
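The market research example above runs every agent with the default worker behavior; the hierarchy itself comes from the `role` parameter. A minimal sketch of an agents array with one manager and two workers, following the values in the table (agent names and prompts are illustrative, not from the original example):

```python
# Minimal sketch: a manager/worker agent mix for HiearchicalSwarm.
# "role" values follow the table above; names and prompts are illustrative.
hierarchical_agents = [
    {
        "agent_name": "Research Director",
        "description": "Coordinates the analysts and synthesizes their findings",
        "system_prompt": "You are the research director. Delegate sector research to your analysts, review their work, and synthesize a final report.",
        "model_name": "gpt-4o",
        "role": "manager",
        "max_loops": 2,  # managers benefit from extra coordination rounds
    },
    {
        "agent_name": "AI Market Analyst",
        "description": "Analyzes AI market trends and opportunities",
        "system_prompt": "You are an AI market analyst. Research trends, key players, and growth projections.",
        "model_name": "gpt-4o",
        "role": "worker",
        "max_loops": 1,
    },
    {
        "agent_name": "Fintech Market Analyst",
        "description": "Analyzes fintech market opportunities",
        "system_prompt": "You are a fintech market analyst. Research payment systems, blockchain opportunities, and regulatory developments.",
        "model_name": "gpt-4o",
        "role": "worker",
        "max_loops": 1,
    },
]
```

This array drops into the `agents` field of the request shown above.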
## Best Practices
- Clearly define manager and worker roles using the `role` parameter
- Give managers higher `max_loops` for coordination activities
- Design worker agents with specialized, focused responsibilities
- Use for complex projects requiring oversight and coordination
## Related Swarm Types
- [SequentialWorkflow](sequential_workflow.md) - For linear task progression
- [MultiAgentRouter](multi_agent_router.md) - For intelligent task routing
- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic hierarchy creation

@ -0,0 +1,249 @@
# MajorityVoting
*Implements robust decision-making through consensus and voting*
**Swarm Type**: `MajorityVoting`
## Overview
The MajorityVoting swarm type implements robust decision-making through consensus mechanisms, ideal for tasks requiring collective intelligence or verification. Multiple agents independently analyze the same problem and vote on the best solution, ensuring high-quality, well-validated outcomes through democratic consensus.
Key features:
- **Consensus-Based Decisions**: Multiple agents vote on the best solution
- **Quality Assurance**: Reduces individual agent bias through collective input
- **Democratic Process**: Fair and transparent decision-making mechanism
- **Robust Validation**: Multiple perspectives ensure thorough analysis
## Use Cases
- Critical decision-making requiring validation
- Quality assurance and verification tasks
- Complex problem solving with multiple viable solutions
- Risk assessment and evaluation scenarios
## API Usage
### Basic MajorityVoting Example
=== "Shell (curl)"
```bash
curl -X POST "https://api.swarms.world/v1/swarm/completions" \
-H "x-api-key: $SWARMS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Investment Decision Voting",
"description": "Multiple financial experts vote on investment recommendations",
"swarm_type": "MajorityVoting",
"task": "Evaluate whether to invest $1M in a renewable energy startup. Consider market potential, financial projections, team strength, and competitive landscape.",
"agents": [
{
"agent_name": "Growth Investor",
"description": "Focuses on growth potential and market opportunity",
"system_prompt": "You are a growth-focused venture capitalist. Evaluate investments based on market size, scalability, and growth potential. Provide a recommendation with confidence score.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Financial Analyst",
"description": "Analyzes financial metrics and projections",
"system_prompt": "You are a financial analyst specializing in startups. Evaluate financial projections, revenue models, and unit economics. Provide a recommendation with confidence score.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
},
{
"agent_name": "Technical Due Diligence",
"description": "Evaluates technology and product viability",
"system_prompt": "You are a technical due diligence expert. Assess technology viability, intellectual property, product-market fit, and technical risks. Provide a recommendation with confidence score.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Market Analyst",
"description": "Analyzes market conditions and competition",
"system_prompt": "You are a market research analyst. Evaluate market dynamics, competitive landscape, regulatory environment, and market timing. Provide a recommendation with confidence score.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Risk Assessor",
"description": "Identifies and evaluates investment risks",
"system_prompt": "You are a risk assessment specialist. Identify potential risks, evaluate mitigation strategies, and assess overall risk profile. Provide a recommendation with confidence score.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
}
],
"max_loops": 1
}'
```
=== "Python (requests)"
```python
import requests
import json
API_BASE_URL = "https://api.swarms.world"
API_KEY = "your_api_key_here"
headers = {
"x-api-key": API_KEY,
"Content-Type": "application/json"
}
swarm_config = {
"name": "Investment Decision Voting",
"description": "Multiple financial experts vote on investment recommendations",
"swarm_type": "MajorityVoting",
"task": "Evaluate whether to invest $1M in a renewable energy startup. Consider market potential, financial projections, team strength, and competitive landscape.",
"agents": [
{
"agent_name": "Growth Investor",
"description": "Focuses on growth potential and market opportunity",
"system_prompt": "You are a growth-focused venture capitalist. Evaluate investments based on market size, scalability, and growth potential. Provide a recommendation with confidence score.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Financial Analyst",
"description": "Analyzes financial metrics and projections",
"system_prompt": "You are a financial analyst specializing in startups. Evaluate financial projections, revenue models, and unit economics. Provide a recommendation with confidence score.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
},
{
"agent_name": "Technical Due Diligence",
"description": "Evaluates technology and product viability",
"system_prompt": "You are a technical due diligence expert. Assess technology viability, intellectual property, product-market fit, and technical risks. Provide a recommendation with confidence score.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Market Analyst",
"description": "Analyzes market conditions and competition",
"system_prompt": "You are a market research analyst. Evaluate market dynamics, competitive landscape, regulatory environment, and market timing. Provide a recommendation with confidence score.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Risk Assessor",
"description": "Identifies and evaluates investment risks",
"system_prompt": "You are a risk assessment specialist. Identify potential risks, evaluate mitigation strategies, and assess overall risk profile. Provide a recommendation with confidence score.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
}
],
"max_loops": 1
}
response = requests.post(
    f"{API_BASE_URL}/v1/swarm/completions",
    headers=headers,
    json=swarm_config
)

if response.status_code == 200:
    result = response.json()
    print("MajorityVoting completed successfully!")
    print(f"Cost: ${result['usage']['billing_info']['total_cost']}")
    print(f"Execution time: {result['execution_time']} seconds")
    print(f"Voting results: {result['output']}")
else:
    print(f"Error: {response.status_code} - {response.text}")
```
**Example Response**:
```json
{
"job_id": "swarms-1WFsSJU2KcvY11lxRMjdQNWFHArI",
"status": "success",
"swarm_name": "Investment Decision Voting",
"description": "Multiple financial experts vote on investment recommendations",
"swarm_type": "MajorityVoting",
"output": [
{
"role": "Financial Analyst",
"content": [
"To evaluate the potential investment in a renewable energy startup, we will assess the technology viability, intellectual property, product-market fit, and technical risks, along with the additional factors of market ....."
]
},
{
"role": "Technical Due Diligence",
"content": [
"To evaluate the potential investment in a renewable energy startup, we will analyze the relevant market dynamics, competitive landscape, regulatory environment, and market timing. Here's the breakdown of the assessment......."
]
},
{
"role": "Market Analyst",
"content": [
"To evaluate the potential investment in a renewable energy startup, let's break down the key factors:\n\n1. **Market Potential........"
]
},
{
"role": "Growth Investor",
"content": [
"To evaluate the potential investment in a renewable energy startup, we need to assess various risk factors and mitigation strategies across several key areas: market potential, financial projections, team strength, and competitive landscape.\n\n### 1. Market Potential\n**Risks:**\n- **Regulatory Changes................"
]
},
{
"role": "Risk Assessor",
"content": [
"To provide a comprehensive evaluation of whether to invest $1M in the renewable energy startup, let's break down the key areas.........."
]
},
{
"role": "Risk Assessor",
"content": "To evaluate the potential investment in a renewable energy startup, we need to assess various risk factors and mitigation strategies across several key areas....."
}
],
"number_of_agents": 5,
"service_tier": "standard",
"execution_time": 61.74853563308716,
"usage": {
"input_tokens": 39,
"output_tokens": 8468,
"total_tokens": 8507,
"billing_info": {
"cost_breakdown": {
"agent_cost": 0.05,
"input_token_cost": 0.000117,
"output_token_cost": 0.12702,
"token_counts": {
"total_input_tokens": 39,
"total_output_tokens": 8468,
"total_tokens": 8507
},
"num_agents": 5,
"service_tier": "standard",
"night_time_discount_applied": false
},
"total_cost": 0.177137,
"discount_active": false,
"discount_type": "none",
"discount_percentage": 0
}
}
}
```
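Note that the API returns each expert's full analysis rather than a structured tally; aggregating votes is left to the client. A minimal sketch that scans the free-text messages for a verdict keyword (the keywords are illustrative; for dependable counting, prompt agents to end with a machine-readable line such as `RECOMMENDATION: INVEST` and parse that instead):

```python
# Minimal sketch: tally free-text recommendations from the output list.
# Assumes `result` holds the parsed JSON response shown above.
# Keyword matching is illustrative only; structured verdicts are more reliable.
votes = {"invest": 0, "pass": 0}
for message in result["output"]:
    content = message["content"]
    text = " ".join(content) if isinstance(content, list) else content
    if "recommend investing" in text.lower():
        votes["invest"] += 1
    elif "do not invest" in text.lower() or "pass" in text.lower():
        votes["pass"] += 1
print(votes)
```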
## Best Practices
- Use odd numbers of agents to avoid tie votes
- Design agents with different perspectives for robust evaluation
- Include confidence scores in agent prompts for weighted decisions
- Ideal for high-stakes decisions requiring validation
## Related Swarm Types
- [GroupChat](group_chat.md) - For discussion-based consensus
- [MixtureOfAgents](mixture_of_agents.md) - For diverse expertise collaboration
- [HierarchicalSwarm](hierarchical_swarm.md) - For structured decision-making

@ -0,0 +1,222 @@
# MixtureOfAgents
*Builds diverse teams of specialized agents for complex problem solving*
**Swarm Type**: `MixtureOfAgents`
## Overview
The MixtureOfAgents swarm type combines multiple agent types with different specializations to tackle diverse aspects of complex problems. Each agent contributes unique skills and perspectives, making this architecture ideal for tasks requiring multiple types of expertise working in harmony.
Key features:
- **Diverse Expertise**: Combines agents with different specializations
- **Collaborative Problem Solving**: Agents work together leveraging their unique strengths
- **Comprehensive Coverage**: Ensures all aspects of complex tasks are addressed
- **Balanced Perspectives**: Multiple viewpoints for robust decision-making
## Use Cases
- Complex research projects requiring multiple disciplines
- Business analysis needing various functional perspectives
- Content creation requiring different expertise areas
- Strategic planning with multiple considerations
## API Usage
### Basic MixtureOfAgents Example
=== "Shell (curl)"
```bash
curl -X POST "https://api.swarms.world/v1/swarm/completions" \
-H "x-api-key: $SWARMS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Business Strategy Mixture",
"description": "Diverse team analyzing business strategy from multiple perspectives",
"swarm_type": "MixtureOfAgents",
"task": "Develop a comprehensive market entry strategy for a new AI product in the healthcare sector",
"agents": [
{
"agent_name": "Market Research Analyst",
"description": "Analyzes market trends and opportunities",
"system_prompt": "You are a market research expert specializing in healthcare technology. Analyze market size, trends, and competitive landscape.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Financial Analyst",
"description": "Evaluates financial viability and projections",
"system_prompt": "You are a financial analyst expert. Assess financial implications, ROI, and cost structures for business strategies.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
},
{
"agent_name": "Regulatory Expert",
"description": "Analyzes compliance and regulatory requirements",
"system_prompt": "You are a healthcare regulatory expert. Analyze compliance requirements, regulatory pathways, and potential barriers.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.1
},
{
"agent_name": "Technology Strategist",
"description": "Evaluates technical feasibility and strategy",
"system_prompt": "You are a technology strategy expert. Assess technical requirements, implementation challenges, and scalability.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
}
],
"max_loops": 1
}'
```
=== "Python (requests)"
```python
import requests
import json
API_BASE_URL = "https://api.swarms.world"
API_KEY = "your_api_key_here"
headers = {
"x-api-key": API_KEY,
"Content-Type": "application/json"
}
swarm_config = {
"name": "Business Strategy Mixture",
"description": "Diverse team analyzing business strategy from multiple perspectives",
"swarm_type": "MixtureOfAgents",
"task": "Develop a comprehensive market entry strategy for a new AI product in the healthcare sector",
"agents": [
{
"agent_name": "Market Research Analyst",
"description": "Analyzes market trends and opportunities",
"system_prompt": "You are a market research expert specializing in healthcare technology. Analyze market size, trends, and competitive landscape.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Financial Analyst",
"description": "Evaluates financial viability and projections",
"system_prompt": "You are a financial analyst expert. Assess financial implications, ROI, and cost structures for business strategies.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
},
{
"agent_name": "Regulatory Expert",
"description": "Analyzes compliance and regulatory requirements",
"system_prompt": "You are a healthcare regulatory expert. Analyze compliance requirements, regulatory pathways, and potential barriers.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.1
},
{
"agent_name": "Technology Strategist",
"description": "Evaluates technical feasibility and strategy",
"system_prompt": "You are a technology strategy expert. Assess technical requirements, implementation challenges, and scalability.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
}
],
"max_loops": 1
}
response = requests.post(
    f"{API_BASE_URL}/v1/swarm/completions",
    headers=headers,
    json=swarm_config
)

if response.status_code == 200:
    result = response.json()
    print("MixtureOfAgents swarm completed successfully!")
    print(f"Cost: ${result['usage']['billing_info']['total_cost']}")
    print(f"Execution time: {result['execution_time']} seconds")
    print(f"Output: {result['output']}")
else:
    print(f"Error: {response.status_code} - {response.text}")
```
**Example Response**:
```json
{
"job_id": "swarms-kBZaJg1uGTkRbLCAsGztL2jrp5Mj",
"status": "success",
"swarm_name": "Business Strategy Mixture",
"description": "Diverse team analyzing business strategy from multiple perspectives",
"swarm_type": "MixtureOfAgents",
"output": [
{
"role": "System",
"content": "Team Name: Business Strategy Mixture\nTeam Description: Diverse team analyzing business strategy from multiple perspectives\nThese are the agents in your team. Each agent has a specific role and expertise to contribute to the team's objectives.\nTotal Agents: 4\n\nBelow is a summary of your team members and their primary responsibilities:\n| Agent Name | Description |\n|------------|-------------|\n| Market Research Analyst | Analyzes market trends and opportunities |\n| Financial Analyst | Evaluates financial viability and projections |\n| Regulatory Expert | Analyzes compliance and regulatory requirements |\n| Technology Strategist | Evaluates technical feasibility and strategy |\n\nEach agent is designed to handle tasks within their area of expertise. Collaborate effectively by assigning tasks according to these roles."
},
{
"role": "Market Research Analyst",
"content": "To develop a comprehensive market entry strategy for a new AI product in the healthcare sector, we will leverage the expertise of each team member to cover all critical aspects of the strategy. Here's how each agent will contribute......."
},
{
"role": "Technology Strategist",
"content": "To develop a comprehensive market entry strategy for a new AI product in the healthcare sector, we'll need to collaborate effectively with the team, leveraging each member's expertise. Here's how each agent can contribute to the strategy, along with a focus on the technical requirements, implementation challenges, and scalability from the technology strategist's perspective....."
},
{
"role": "Financial Analyst",
"content": "Developing a comprehensive market entry strategy for a new AI product in the healthcare sector involves a multidisciplinary approach. Each agent in the Business Strategy Mixture team will play a crucial role in ensuring a successful market entry. Here's how the team can collaborate........"
},
{
"role": "Regulatory Expert",
"content": "To develop a comprehensive market entry strategy for a new AI product in the healthcare sector, we need to leverage the expertise of each agent in the Business Strategy Mixture team. Below is an outline of how each team member can contribute to this strategy......"
},
{
"role": "Aggregator Agent",
"content": "As the Aggregator Agent, I've observed and analyzed the responses from the Business Strategy Mixture team regarding the development of a comprehensive market entry strategy for a new AI product in the healthcare sector. Here's a summary of the key points ......"
}
],
"number_of_agents": 4,
"service_tier": "standard",
"execution_time": 30.230480670928955,
"usage": {
"input_tokens": 30,
"output_tokens": 3401,
"total_tokens": 3431,
"billing_info": {
"cost_breakdown": {
"agent_cost": 0.04,
"input_token_cost": 0.00009,
"output_token_cost": 0.051015,
"token_counts": {
"total_input_tokens": 30,
"total_output_tokens": 3401,
"total_tokens": 3431
},
"num_agents": 4,
"service_tier": "standard",
"night_time_discount_applied": true
},
"total_cost": 0.091105,
"discount_active": true,
"discount_type": "night_time",
"discount_percentage": 75
}
}
}
```
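As the response shows, the final synthesis arrives as an `Aggregator Agent` message at the end of `output`. A minimal sketch for extracting it, assuming `result` holds the parsed JSON response:

```python
# Minimal sketch: extract the Aggregator Agent's synthesized summary.
# Assumes `result` holds the parsed JSON response shown above.
summary = next(
    (m["content"] for m in result["output"] if m["role"] == "Aggregator Agent"),
    None,
)
if summary:
    print(summary)
```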
## Best Practices
- Select agents with complementary and diverse expertise
- Ensure each agent has a clear, specialized role
- Use for complex problems requiring multiple perspectives
- Design tasks that benefit from collaborative analysis
## Related Swarm Types
- [ConcurrentWorkflow](concurrent_workflow.md) - For parallel task execution
- [GroupChat](group_chat.md) - For collaborative discussion
- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic team assembly

@ -0,0 +1,211 @@
# MultiAgentRouter
*Intelligent task dispatcher distributing work based on agent capabilities*
**Swarm Type**: `MultiAgentRouter`
## Overview
The MultiAgentRouter acts as an intelligent task dispatcher, distributing work across agents based on their capabilities and current workload. This architecture analyzes incoming tasks and automatically routes them to the most suitable agents, optimizing both efficiency and quality of outcomes.
Key features:
- **Intelligent Routing**: Automatically assigns tasks to best-suited agents
- **Capability Matching**: Matches task requirements with agent specializations
- **Load Balancing**: Distributes workload efficiently across available agents
- **Dynamic Assignment**: Adapts routing based on agent performance and availability
## Use Cases
- Customer service request routing
- Content categorization and processing
- Technical support ticket assignment
- Multi-domain question answering
## API Usage
### Basic MultiAgentRouter Example
=== "Shell (curl)"
```bash
curl -X POST "https://api.swarms.world/v1/swarm/completions" \
-H "x-api-key: $SWARMS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Customer Support Router",
"description": "Route customer inquiries to specialized support agents",
"swarm_type": "MultiAgentRouter",
"task": "Handle multiple customer inquiries: 1) Billing question about overcharge, 2) Technical issue with mobile app login, 3) Product recommendation for enterprise client, 4) Return policy question",
"agents": [
{
"agent_name": "Billing Specialist",
"description": "Handles billing, payments, and account issues",
"system_prompt": "You are a billing specialist. Handle all billing inquiries, payment issues, refunds, and account-related questions with empathy and accuracy.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Technical Support",
"description": "Resolves technical issues and troubleshooting",
"system_prompt": "You are a technical support specialist. Diagnose and resolve technical issues, provide step-by-step troubleshooting, and escalate complex problems.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
},
{
"agent_name": "Sales Consultant",
"description": "Provides product recommendations and sales support",
"system_prompt": "You are a sales consultant. Provide product recommendations, explain features and benefits, and help customers find the right solutions.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.4
},
{
"agent_name": "Policy Advisor",
"description": "Explains company policies and procedures",
"system_prompt": "You are a policy advisor. Explain company policies, terms of service, return procedures, and compliance requirements clearly.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.1
}
],
"max_loops": 1
}'
```
=== "Python (requests)"
```python
import requests
import json
API_BASE_URL = "https://api.swarms.world"
API_KEY = "your_api_key_here"
headers = {
"x-api-key": API_KEY,
"Content-Type": "application/json"
}
swarm_config = {
"name": "Customer Support Router",
"description": "Route customer inquiries to specialized support agents",
"swarm_type": "MultiAgentRouter",
"task": "Handle multiple customer inquiries: 1) Billing question about overcharge, 2) Technical issue with mobile app login, 3) Product recommendation for enterprise client, 4) Return policy question",
"agents": [
{
"agent_name": "Billing Specialist",
"description": "Handles billing, payments, and account issues",
"system_prompt": "You are a billing specialist. Handle all billing inquiries, payment issues, refunds, and account-related questions with empathy and accuracy.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Technical Support",
"description": "Resolves technical issues and troubleshooting",
"system_prompt": "You are a technical support specialist. Diagnose and resolve technical issues, provide step-by-step troubleshooting, and escalate complex problems.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
},
{
"agent_name": "Sales Consultant",
"description": "Provides product recommendations and sales support",
"system_prompt": "You are a sales consultant. Provide product recommendations, explain features and benefits, and help customers find the right solutions.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.4
},
{
"agent_name": "Policy Advisor",
"description": "Explains company policies and procedures",
"system_prompt": "You are a policy advisor. Explain company policies, terms of service, return procedures, and compliance requirements clearly.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.1
}
],
"max_loops": 1
}
response = requests.post(
    f"{API_BASE_URL}/v1/swarm/completions",
    headers=headers,
    json=swarm_config
)

if response.status_code == 200:
    result = response.json()
    print("MultiAgentRouter completed successfully!")
    print(f"Cost: ${result['usage']['billing_info']['total_cost']}")
    print(f"Execution time: {result['execution_time']} seconds")
    print(f"Customer responses: {result['output']}")
else:
    print(f"Error: {response.status_code} - {response.text}")
```
**Example Response**:
```json
{
"job_id": "swarms-OvOZHubprE3thzLmRdNBZAxA6om4",
"status": "success",
"swarm_name": "Customer Support Router",
"description": "Route customer inquiries to specialized support agents",
"swarm_type": "MultiAgentRouter",
"output": [
{
"role": "user",
"content": "Handle multiple customer inquiries: 1) Billing question about overcharge, 2) Technical issue with mobile app login, 3) Product recommendation for enterprise client, 4) Return policy question"
},
{
"role": "Agent Router",
"content": "selected_agent='Billing Specialist' reasoning='The task involves multiple inquiries, but the first one is about a billing question regarding an overcharge. Billing issues often require immediate attention to ensure customer satisfaction and prevent further complications. Therefore, the Billing Specialist is the most appropriate agent to handle this task. They can address the billing question directly and potentially coordinate with other agents for the remaining inquiries.' modified_task='Billing question about overcharge'"
},
{
"role": "Billing Specialist",
"content": "Of course, I'd be happy to help you with your billing question regarding an overcharge. Could you please provide me with more details about the charge in question, such as the date it occurred and the amount? This information will help me look into your account and resolve the issue as quickly as possible."
}
],
"number_of_agents": 4,
"service_tier": "standard",
"execution_time": 7.800086975097656,
"usage": {
"input_tokens": 28,
"output_tokens": 221,
"total_tokens": 249,
"billing_info": {
"cost_breakdown": {
"agent_cost": 0.04,
"input_token_cost": 0.000084,
"output_token_cost": 0.003315,
"token_counts": {
"total_input_tokens": 28,
"total_output_tokens": 221,
"total_tokens": 249
},
"num_agents": 4,
"service_tier": "standard",
"night_time_discount_applied": true
},
"total_cost": 0.043399,
"discount_active": true,
"discount_type": "night_time",
"discount_percentage": 75
}
}
}
```
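The router's reasoning is itself part of the transcript, recorded under the `Agent Router` role, which makes it easy to audit why a task went to a particular agent. A minimal sketch, assuming `result` holds the parsed JSON response:

```python
# Minimal sketch: inspect the router's routing decision from the message list.
# Assumes `result` holds the parsed JSON response shown above.
for message in result["output"]:
    if message["role"] == "Agent Router":
        print("Routing decision:", message["content"])
```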
## Best Practices
- Define agents with clear, distinct specializations
- Use descriptive agent names and descriptions for better routing
- Ideal for handling diverse task types that require different expertise
- Monitor routing decisions to optimize agent configurations
## Related Swarm Types
- [HierarchicalSwarm](hierarchical_swarm.md) - For structured task management
- [ConcurrentWorkflow](concurrent_workflow.md) - For parallel task processing
- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic routing setup

@ -0,0 +1,214 @@
# SequentialWorkflow
*Executes tasks in a strict, predefined order for step-by-step processing*
**Swarm Type**: `SequentialWorkflow`
## Overview
The SequentialWorkflow swarm type executes tasks in a strict, predefined order where each step depends on the completion of the previous one. This architecture is perfect for workflows that require a linear progression of tasks, ensuring that each agent builds upon the work of the previous agent.
Key features:
- **Ordered Execution**: Agents execute in a specific, predefined sequence
- **Step Dependencies**: Each step builds upon previous results
- **Predictable Flow**: Clear, linear progression through the workflow
- **Quality Control**: Each agent can validate and enhance previous work
## Use Cases
- Document processing pipelines
- Multi-stage analysis workflows
- Content creation and editing processes
- Data transformation and validation pipelines
## API Usage
### Basic SequentialWorkflow Example
=== "Shell (curl)"
```bash
curl -X POST "https://api.swarms.world/v1/swarm/completions" \
-H "x-api-key: $SWARMS_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Content Creation Pipeline",
"description": "Sequential content creation from research to final output",
"swarm_type": "SequentialWorkflow",
"task": "Create a comprehensive blog post about the future of renewable energy",
"agents": [
{
"agent_name": "Research Specialist",
"description": "Conducts thorough research on the topic",
"system_prompt": "You are a research specialist. Gather comprehensive, accurate information on the given topic from reliable sources.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Content Writer",
"description": "Creates engaging written content",
"system_prompt": "You are a skilled content writer. Transform research into engaging, well-structured articles that are informative and readable.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.6
},
{
"agent_name": "Editor",
"description": "Reviews and polishes the content",
"system_prompt": "You are a professional editor. Review content for clarity, grammar, flow, and overall quality. Make improvements while maintaining the author's voice.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.4
},
{
"agent_name": "SEO Optimizer",
"description": "Optimizes content for search engines",
"system_prompt": "You are an SEO expert. Optimize content for search engines while maintaining readability and quality.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
}
],
"max_loops": 1
}'
```
=== "Python (requests)"
```python
import requests
import json
API_BASE_URL = "https://api.swarms.world"
API_KEY = "your_api_key_here"
headers = {
"x-api-key": API_KEY,
"Content-Type": "application/json"
}
swarm_config = {
"name": "Content Creation Pipeline",
"description": "Sequential content creation from research to final output",
"swarm_type": "SequentialWorkflow",
"task": "Create a comprehensive blog post about the future of renewable energy",
"agents": [
{
"agent_name": "Research Specialist",
"description": "Conducts thorough research on the topic",
"system_prompt": "You are a research specialist. Gather comprehensive, accurate information on the given topic from reliable sources.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.3
},
{
"agent_name": "Content Writer",
"description": "Creates engaging written content",
"system_prompt": "You are a skilled content writer. Transform research into engaging, well-structured articles that are informative and readable.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.6
},
{
"agent_name": "Editor",
"description": "Reviews and polishes the content",
"system_prompt": "You are a professional editor. Review content for clarity, grammar, flow, and overall quality. Make improvements while maintaining the author's voice.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.4
},
{
"agent_name": "SEO Optimizer",
"description": "Optimizes content for search engines",
"system_prompt": "You are an SEO expert. Optimize content for search engines while maintaining readability and quality.",
"model_name": "gpt-4o",
"max_loops": 1,
"temperature": 0.2
}
],
"max_loops": 1
}
response = requests.post(
    f"{API_BASE_URL}/v1/swarm/completions",
    headers=headers,
    json=swarm_config
)

if response.status_code == 200:
    result = response.json()
    print("SequentialWorkflow swarm completed successfully!")
    print(f"Cost: ${result['usage']['billing_info']['total_cost']}")
    print(f"Execution time: {result['execution_time']} seconds")
    print(f"Final output: {result['output']}")
else:
    print(f"Error: {response.status_code} - {response.text}")
```
**Example Response**:
```json
{
"job_id": "swarms-pbM8wqUwxq8afGeROV2A4xAcncd1",
"status": "success",
"swarm_name": "Content Creation Pipeline",
"description": "Sequential content creation from research to final output",
"swarm_type": "SequentialWorkflow",
"output": [
{
"role": "Research Specialist",
"content": "\"**Title: The Future of Renewable Energy: Charting a Sustainable Path Forward**\n\nAs we navigate the complexities of the 21st century, the transition to renewable energy stands out as a critical endeavor to ensure a sustainable future......"
},
{
"role": "SEO Optimizer",
"content": "\"**Title: The Future of Renewable Energy: Charting a Sustainable Path Forward**\n\nThe transition to renewable energy is crucial as we face the challenges of the 21st century, including climate change and dwindling fossil fuel resources......."
},
{
"role": "Editor",
"content": "\"**Title: The Future of Renewable Energy: Charting a Sustainable Path Forward**\n\nAs we confront the challenges of the 21st century, transitioning to renewable energy is essential for a sustainable future. With climate change concerns escalating and fossil fuel reserves depleting, renewable energy is not just an option but a necessity...."
},
{
"role": "Content Writer",
"content": "\"**Title: The Future of Renewable Energy: Charting a Sustainable Path Forward**\n\nAs we face the multifaceted challenges of the 21st century, transitioning to renewable energy emerges as not just an option but an essential step toward a sustainable future...."
}
],
"number_of_agents": 4,
"service_tier": "standard",
"execution_time": 72.23084282875061,
"usage": {
"input_tokens": 28,
"output_tokens": 3012,
"total_tokens": 3040,
"billing_info": {
"cost_breakdown": {
"agent_cost": 0.04,
"input_token_cost": 0.000084,
"output_token_cost": 0.04518,
"token_counts": {
"total_input_tokens": 28,
"total_output_tokens": 3012,
"total_tokens": 3040
},
"num_agents": 4,
"service_tier": "standard",
"night_time_discount_applied": true
},
"total_cost": 0.085264,
"discount_active": true,
"discount_type": "night_time",
"discount_percentage": 75
}
}
}
```
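Each stage of the pipeline appears in `output` as a message keyed by the agent's name. Since the message order in the example above does not strictly match execution order, matching by role is safer than indexing. A minimal sketch, assuming `result` holds the parsed JSON response:

```python
# Minimal sketch: retrieve a specific stage's draft by agent name.
# Assumes `result` holds the parsed JSON response shown above.
editors_pass = next(
    (m["content"] for m in result["output"] if m["role"] == "Editor"),
    None,
)
print(editors_pass)
```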
## Best Practices
- Design agents with clear, sequential dependencies
- Ensure each agent builds meaningfully on the previous work
- Use for linear workflows where order matters
- Validate outputs at each step before proceeding
## Related Swarm Types
- [ConcurrentWorkflow](concurrent_workflow.md) - For parallel execution
- [AgentRearrange](agent_rearrange.md) - For dynamic sequencing
- [HierarchicalSwarm](hierarchical_swarm.md) - For structured workflows

@ -1,30 +1,28 @@
# Multi-Agent Architectures
Each multi-agent architecture type is designed for specific use cases and can be combined to create powerful multi-agent systems. Here's a comprehensive overview of each available swarm:
Each multi-agent architecture type is designed for specific use cases and can be combined to create powerful multi-agent systems. Below is an overview of each available swarm type:
| Swarm Type | Description | Learn More |
|---------------------|------------------------------------------------------------------------------|------------|
| AgentRearrange | Dynamically reorganizes agents to optimize task performance and efficiency. Optimizes agent performance by dynamically adjusting their roles and positions within the workflow. This architecture is particularly useful when the effectiveness of agents depends on their sequence or arrangement. | [Learn More](/swarms/structs/agent_rearrange) |
| MixtureOfAgents | Creates diverse teams of specialized agents, each bringing unique capabilities to solve complex problems. Each agent contributes unique skills to achieve the overall goal, making it excel at tasks requiring multiple types of expertise or processing. | [Learn More](/swarms/structs/moa) |
| SpreadSheetSwarm | Provides a structured approach to data management and operations, making it ideal for tasks involving data analysis, transformation, and systematic processing in a spreadsheet-like structure. | [Learn More](/swarms/structs/spreadsheet_swarm) |
| SequentialWorkflow | Ensures strict process control by executing tasks in a predefined order. Perfect for workflows where each step depends on the completion of previous steps. | [Learn More](/swarms/structs/sequential_workflow) |
| ConcurrentWorkflow | Maximizes efficiency by running independent tasks in parallel, significantly reducing overall processing time for complex operations. Ideal for independent tasks that can be processed simultaneously. | [Learn More](/swarms/structs/concurrentworkflow) |
| GroupChat | Enables dynamic collaboration between agents through a chat-based interface, facilitating real-time information sharing and decision-making. | [Learn More](/swarms/structs/group_chat) |
| MultiAgentRouter | Acts as an intelligent task dispatcher, ensuring optimal distribution of work across available agents based on their capabilities and current workload. | [Learn More](/swarms/structs/multi_agent_router) |
| AutoSwarmBuilder | Simplifies swarm creation by automatically configuring agent architectures based on task requirements and performance metrics. | [Learn More](/swarms/structs/auto_swarm_builder) |
| HiearchicalSwarm | Implements a structured approach to task management, with clear lines of authority and delegation across multiple agent levels. | [Learn More](/swarms/structs/multi_swarm_orchestration) |
| auto | Provides intelligent swarm selection based on context, automatically choosing the most effective architecture for given tasks. | [Learn More](/swarms/concept/how_to_choose_swarms) |
| MajorityVoting | Implements robust decision-making through consensus, particularly useful for tasks requiring collective intelligence or verification. | [Learn More](/swarms/structs/majorityvoting) |
| MALT | Specialized framework for language-based tasks, optimizing agent collaboration for complex language processing operations. | [Learn More](/swarms/structs/malt) |
|----------------------|------------------------------------------------------------------------------|------------|
| AgentRearrange | Dynamically reorganizes agents to optimize task performance and efficiency. Useful when agent effectiveness depends on their sequence or arrangement. | [Learn More](agent_rearrange.md) |
| MixtureOfAgents | Builds diverse teams of specialized agents, each contributing unique skills to solve complex problems. Excels at tasks requiring multiple types of expertise. | [Learn More](mixture_of_agents.md) |
| SequentialWorkflow | Executes tasks in a strict, predefined order. Perfect for workflows where each step depends on the completion of the previous one. | [Learn More](sequential_workflow.md) |
| ConcurrentWorkflow | Runs independent tasks in parallel, significantly reducing processing time for complex operations. Ideal for tasks that can be processed simultaneously. | [Learn More](concurrent_workflow.md) |
| GroupChat | Enables dynamic collaboration between agents through a chat-based interface, facilitating real-time information sharing and decision-making. | [Learn More](group_chat.md) |
| HierarchicalSwarm | Implements a structured, multi-level approach to task management, with clear lines of authority and delegation. | [Learn More](hierarchical_swarm.md) |
| MultiAgentRouter | Acts as an intelligent task dispatcher, distributing work across agents based on their capabilities and current workload. | [Learn More](multi_agent_router.md) |
| MajorityVoting | Implements robust decision-making through consensus, ideal for tasks requiring collective intelligence or verification. | [Learn More](majority_voting.md) |
<!-- | AutoSwarmBuilder | Automatically configures agent architectures based on task requirements and performance metrics, simplifying swarm creation. | [Learn More](auto_swarm_builder.md) |
<!-- | Auto | Intelligently selects the most effective swarm architecture for a given task based on context. | [Learn More](auto.md) | -->
# Learn More
To learn more about Swarms architecture and how different swarm types work together, visit our comprehensive guides:
To explore Swarms architecture and how different swarm types work together, check out our comprehensive guides:
- [Introduction to Multi-Agent Architectures](/swarms/concept/swarm_architectures)
- [How to Choose the Right Multi-Agent Architecture](/swarms/concept/how_to_choose_swarms)
- [Framework Architecture Overview](/swarms/concept/framework_architecture)
- [Building Custom Swarms](/swarms/structs/custom_swarm)

@ -33,14 +33,13 @@ agent = Agent(
- Performance attribution
You communicate in precise, technical terms while maintaining clarity for stakeholders.""",
model_name="gpt-4.1",
model_name="claude-sonnet-4-20250514",
dynamic_temperature_enabled=True,
output_type="str-all-except-first",
max_loops="auto",
interactive=True,
no_reasoning_prompt=True,
streaming_on=True,
# dashboard=True
)
out = agent.run(

@ -1,39 +0,0 @@
import os
from swarms_client import SwarmsClient
from swarms_client.types import AgentSpecParam
from dotenv import load_dotenv
load_dotenv()
client = SwarmsClient(api_key=os.getenv("SWARMS_API_KEY"))
agent_spec = AgentSpecParam(
agent_name="doctor_agent",
description="A virtual doctor agent that provides evidence-based, safe, and empathetic medical advice for common health questions. Always reminds users to consult a healthcare professional for diagnoses or prescriptions.",
task="What is the best medicine for a cold?",
model_name="claude-4-sonnet-20250514",
system_prompt=(
"You are a highly knowledgeable, ethical, and empathetic virtual doctor. "
"Always provide evidence-based, safe, and practical medical advice. "
"If a question requires a diagnosis, prescription, or urgent care, remind the user to consult a licensed healthcare professional. "
"Be clear, concise, and avoid unnecessary medical jargon. "
"Never provide information that could be unsafe or misleading. "
"If unsure, say so and recommend seeing a real doctor."
),
max_loops=1,
temperature=0.4,
role="doctor",
)
response = client.agent.run(
agent_config=agent_spec,
task="What is the best medicine for a cold?",
)
print(response)
# print(json.dumps(client.models.list_available(), indent=4))
# print(json.dumps(client.health.check(), indent=4))
# print(json.dumps(client.swarms.get_logs(), indent=4))
# print(json.dumps(client.client.rate.get_limits(), indent=4))
# print(json.dumps(client.swarms.check_available(), indent=4))

@ -0,0 +1,36 @@
import os
from swarms_client import SwarmsClient
from dotenv import load_dotenv
import json
load_dotenv()
client = SwarmsClient(
api_key=os.getenv("SWARMS_API_KEY"),
)
result = client.agent.run(
agent_config={
"agent_name": "Bloodwork Diagnosis Expert",
"description": "An expert doctor specializing in interpreting and diagnosing blood work results.",
"system_prompt": (
"You are an expert medical doctor specializing in the interpretation and diagnosis of blood work. "
"Your expertise includes analyzing laboratory results, identifying abnormal values, "
"explaining their clinical significance, and recommending next diagnostic or treatment steps. "
"Provide clear, evidence-based explanations and consider differential diagnoses based on blood test findings."
),
"model_name": "groq/moonshotai/kimi-k2-instruct",
"max_loops": 1,
"max_tokens": 1000,
"temperature": 0.5,
},
task=(
"A patient presents with the following blood work results: "
"Hemoglobin: 10.2 g/dL (low), WBC: 13,000 /µL (high), Platelets: 180,000 /µL (normal), "
"ALT: 65 U/L (high), AST: 70 U/L (high). "
"Please provide a detailed interpretation, possible diagnoses, and recommended next steps."
),
)
print(json.dumps(result, indent=4))

@ -0,0 +1,50 @@
import os
from swarms_client import SwarmsClient
from dotenv import load_dotenv
import json
load_dotenv()
client = SwarmsClient(
api_key=os.getenv("SWARMS_API_KEY"),
)
batch_requests = [
{
"agent_config": {
"agent_name": "Bloodwork Diagnosis Expert",
"description": "Expert in blood work interpretation.",
"system_prompt": (
"You are a doctor who interprets blood work. Give concise, clear explanations and possible diagnoses."
),
"model_name": "claude-sonnet-4-20250514",
"max_loops": 1,
"max_tokens": 1000,
"temperature": 0.5,
},
"task": (
"Blood work: Hemoglobin 10.2 (low), WBC 13,000 (high), Platelets 180,000 (normal), "
"ALT 65 (high), AST 70 (high). Interpret and suggest diagnoses."
),
},
{
"agent_config": {
"agent_name": "Radiology Report Summarizer",
"description": "Expert in summarizing radiology reports.",
"system_prompt": (
"You are a radiologist. Summarize the findings of radiology reports in clear, patient-friendly language."
),
"model_name": "claude-sonnet-4-20250514",
"max_loops": 1,
"max_tokens": 1000,
"temperature": 0.5,
},
"task": (
"Radiology report: Chest X-ray shows mild cardiomegaly, no infiltrates, no effusion. Summarize the findings."
),
},
]
result = client.agent.batch.run(body=batch_requests)
print(json.dumps(result, indent=4))

@ -0,0 +1,14 @@
import os
import json
from dotenv import load_dotenv
from swarms_client import SwarmsClient
load_dotenv()
client = SwarmsClient(api_key=os.getenv("SWARMS_API_KEY"))
print(json.dumps(client.models.list_available(), indent=4))
print(json.dumps(client.health.check(), indent=4))
print(json.dumps(client.swarms.get_logs(), indent=4))
print(json.dumps(client.client.rate.get_limits(), indent=4))
print(json.dumps(client.swarms.check_available(), indent=4))

@ -0,0 +1,105 @@
import json
import os
from swarms_client import SwarmsClient
from dotenv import load_dotenv
load_dotenv()
client = SwarmsClient(
api_key=os.getenv("SWARMS_API_KEY"),
)
def create_medical_unit_swarm(client, patient_info):
"""
Creates and runs a simulated medical unit swarm with a doctor (leader), nurses, and a medical assistant.
Args:
client (SwarmsClient): The SwarmsClient instance.
patient_info (str): The patient symptoms and information.
Returns:
dict: The output from the swarm run.
"""
return client.swarms.run(
name="Hospital Medical Unit",
description="A simulated hospital unit with a doctor (leader), nurses, and a medical assistant collaborating on patient care.",
swarm_type="HiearchicalSwarm",
task=patient_info,
agents=[
{
"agent_name": "Dr. Smith - Attending Physician",
"description": "The lead doctor responsible for diagnosis, treatment planning, and team coordination.",
"system_prompt": (
"You are Dr. Smith, the attending physician and leader of the medical unit. "
"You review all information, make final decisions, and coordinate the team. "
"Provide a diagnosis, recommend next steps, and delegate tasks to the nurses and assistant."
),
"model_name": "gpt-4.1",
"role": "leader",
"max_loops": 1,
"max_tokens": 8192,
"temperature": 0.5,
},
{
"agent_name": "Nurse Alice",
"description": "A registered nurse responsible for patient assessment, vital signs, and reporting findings to the doctor.",
"system_prompt": (
"You are Nurse Alice, a registered nurse. "
"Assess the patient's symptoms, record vital signs, and report your findings to Dr. Smith. "
"Suggest any immediate nursing interventions if needed."
),
"model_name": "gpt-4.1",
"role": "worker",
"max_loops": 1,
"max_tokens": 4096,
"temperature": 0.5,
},
{
"agent_name": "Nurse Bob",
"description": "A registered nurse assisting with patient care, medication administration, and monitoring.",
"system_prompt": (
"You are Nurse Bob, a registered nurse. "
"Assist with patient care, administer medications as ordered, and monitor the patient's response. "
"Communicate any changes to Dr. Smith."
),
"model_name": "gpt-4.1",
"role": "worker",
"max_loops": 1,
"max_tokens": 4096,
"temperature": 0.5,
},
{
"agent_name": "Medical Assistant Jane",
"description": "A medical assistant supporting the team with administrative tasks and basic patient care.",
"system_prompt": (
"You are Medical Assistant Jane. "
"Support the team by preparing the patient, collecting samples, and handling administrative tasks. "
"Report any relevant observations to the nurses or Dr. Smith."
),
"model_name": "claude-sonnet-4-20250514",
"role": "worker",
"max_loops": 1,
"max_tokens": 2048,
"temperature": 0.5,
},
],
)
if __name__ == "__main__":
patient_symptoms = """
Patient: 45-year-old female
Chief Complaint: Chest pain and shortness of breath for 2 days
Symptoms:
- Sharp chest pain that worsens with deep breathing
- Shortness of breath, especially when lying down
- Mild fever (100.2°F)
- Dry cough
- Fatigue
"""
out = create_medical_unit_swarm(client, patient_symptoms)
print(json.dumps(out, indent=4))

@ -0,0 +1,63 @@
import json
import os
from swarms_client import SwarmsClient
from dotenv import load_dotenv
load_dotenv()
client = SwarmsClient(
api_key=os.getenv("SWARMS_API_KEY"),
)
patient_symptoms = """
Patient: 45-year-old female
Chief Complaint: Chest pain and shortness of breath for 2 days
Symptoms:
- Sharp chest pain that worsens with deep breathing
- Shortness of breath, especially when lying down
- Mild fever (100.2°F)
- Dry cough
- Fatigue
"""
out = client.swarms.run(
name="ICD Analysis Swarm",
description="A swarm that analyzes ICD codes",
swarm_type="ConcurrentWorkflow",
task=patient_symptoms,
agents=[
{
"agent_name": "ICD-Analyzer",
"description": "An agent that analyzes ICD codes",
"system_prompt": "You are an expert ICD code analyzer. Your task is to analyze the ICD codes and provide a detailed explanation of the codes.",
"model_name": "groq/openai/gpt-oss-120b",
"role": "worker",
"max_loops": 1,
"max_tokens": 8192,
"temperature": 0.5,
},
{
"agent_name": "ICD-Code-Explainer-Primary",
"description": "An agent that provides primary explanations for ICD codes",
"system_prompt": "You are an expert ICD code explainer. Your task is to provide a clear and thorough explanation of the ICD codes to the user, focusing on primary meanings and clinical context.",
"model_name": "groq/openai/gpt-oss-120b",
"role": "worker",
"max_loops": 1,
"max_tokens": 8192,
"temperature": 0.5,
},
{
"agent_name": "ICD-Code-Explainer-Secondary",
"description": "An agent that provides additional context and secondary explanations for ICD codes",
"system_prompt": "You are an expert ICD code explainer. Your task is to provide additional context, nuances, and secondary explanations for the ICD codes, including possible differential diagnoses and related codes.",
"model_name": "groq/openai/gpt-oss-120b",
"role": "worker",
"max_loops": 1,
"max_tokens": 8192,
"temperature": 0.5,
},
],
)
print(json.dumps(out, indent=4))

@ -0,0 +1,247 @@
from loguru import logger
import yfinance as yf
import json
def get_figma_stock_data(stock: str) -> str:
"""
Fetches comprehensive stock data for the given ticker (e.g., "FIG" for Figma) using Yahoo Finance.
Args:
stock (str): The ticker symbol to fetch.
Returns:
str: A JSON-formatted string containing comprehensive stock data including:
- Current price and market data
- Company information
- Financial metrics
- Historical data summary
- Trading statistics
Raises:
Exception: If there's an error fetching the data from Yahoo Finance
"""
try:
# Initialize Figma stock ticker
figma = yf.Ticker(stock)
# Get current stock info
info = figma.info
# Get recent historical data (last 30 days)
hist = figma.history(period="30d")
# Get real-time fast info
fast_info = figma.fast_info
# Compile comprehensive data
figma_data = {
"company_info": {
"name": info.get("longName", "Figma Inc."),
"symbol": "FIG",
"sector": info.get("sector", "N/A"),
"industry": info.get("industry", "N/A"),
"website": info.get("website", "N/A"),
"description": info.get("longBusinessSummary", "N/A"),
},
"current_market_data": {
"current_price": info.get("currentPrice", "N/A"),
"previous_close": info.get("previousClose", "N/A"),
"open": info.get("open", "N/A"),
"day_low": info.get("dayLow", "N/A"),
"day_high": info.get("dayHigh", "N/A"),
"volume": info.get("volume", "N/A"),
"market_cap": info.get("marketCap", "N/A"),
"price_change": (
info.get("currentPrice", 0)
- info.get("previousClose", 0)
if info.get("currentPrice")
and info.get("previousClose")
else "N/A"
),
"price_change_percent": info.get(
"regularMarketChangePercent", "N/A"
),
},
"financial_metrics": {
"pe_ratio": info.get("trailingPE", "N/A"),
"forward_pe": info.get("forwardPE", "N/A"),
"price_to_book": info.get("priceToBook", "N/A"),
"price_to_sales": info.get(
"priceToSalesTrailing12Months", "N/A"
),
"enterprise_value": info.get(
"enterpriseValue", "N/A"
),
"beta": info.get("beta", "N/A"),
"dividend_yield": info.get("dividendYield", "N/A"),
"payout_ratio": info.get("payoutRatio", "N/A"),
},
"trading_statistics": {
"fifty_day_average": info.get(
"fiftyDayAverage", "N/A"
),
"two_hundred_day_average": info.get(
"twoHundredDayAverage", "N/A"
),
"fifty_two_week_low": info.get(
"fiftyTwoWeekLow", "N/A"
),
"fifty_two_week_high": info.get(
"fiftyTwoWeekHigh", "N/A"
),
"shares_outstanding": info.get(
"sharesOutstanding", "N/A"
),
"float_shares": info.get("floatShares", "N/A"),
"shares_short": info.get("sharesShort", "N/A"),
"short_ratio": info.get("shortRatio", "N/A"),
},
"recent_performance": {
"last_30_days": {
"start_price": (
hist.iloc[0]["Close"]
if not hist.empty
else "N/A"
),
"end_price": (
hist.iloc[-1]["Close"]
if not hist.empty
else "N/A"
),
"total_return": (
(
hist.iloc[-1]["Close"]
- hist.iloc[0]["Close"]
)
/ hist.iloc[0]["Close"]
* 100
if not hist.empty
else "N/A"
),
"highest_price": (
hist["High"].max()
if not hist.empty
else "N/A"
),
"lowest_price": (
hist["Low"].min() if not hist.empty else "N/A"
),
"average_volume": (
hist["Volume"].mean()
if not hist.empty
else "N/A"
),
}
},
"real_time_data": {
"last_price": (
fast_info.last_price
if hasattr(fast_info, "last_price")
else "N/A"
),
"last_volume": (
fast_info.last_volume
if hasattr(fast_info, "last_volume")
else "N/A"
),
"bid": (
fast_info.bid
if hasattr(fast_info, "bid")
else "N/A"
),
"ask": (
fast_info.ask
if hasattr(fast_info, "ask")
else "N/A"
),
"bid_size": (
fast_info.bid_size
if hasattr(fast_info, "bid_size")
else "N/A"
),
"ask_size": (
fast_info.ask_size
if hasattr(fast_info, "ask_size")
else "N/A"
),
},
}
logger.info("Successfully fetched Figma (FIG) stock data")
return json.dumps(figma_data, indent=4)
except Exception as e:
logger.error(f"Error fetching Figma stock data: {e}")
raise Exception(f"Failed to fetch Figma stock data: {e}")
# # Example usage
# # Initialize the quantitative trading agent
# agent = Agent(
# agent_name="Quantitative-Trading-Agent",
# agent_description="Advanced quantitative trading and algorithmic analysis agent specializing in stock analysis and trading strategies",
# system_prompt=f"""You are an expert quantitative trading agent with deep expertise in:
# - Algorithmic trading strategies and implementation
# - Statistical arbitrage and market making
# - Risk management and portfolio optimization
# - High-frequency trading systems
# - Market microstructure analysis
# - Quantitative research methodologies
# - Financial mathematics and stochastic processes
# - Machine learning applications in trading
# - Technical analysis and chart patterns
# - Fundamental analysis and valuation models
# - Options trading and derivatives
# - Market sentiment analysis
# Your core responsibilities include:
# 1. Developing and backtesting trading strategies
# 2. Analyzing market data and identifying alpha opportunities
# 3. Implementing risk management frameworks
# 4. Optimizing portfolio allocations
# 5. Conducting quantitative research
# 6. Monitoring market microstructure
# 7. Evaluating trading system performance
# 8. Performing comprehensive stock analysis
# 9. Generating trading signals and recommendations
# 10. Risk assessment and position sizing
# When analyzing stocks, you should:
# - Evaluate technical indicators and chart patterns
# - Assess fundamental metrics and valuation ratios
# - Analyze market sentiment and momentum
# - Consider macroeconomic factors
# - Provide risk-adjusted return projections
# - Suggest optimal entry/exit points
# - Calculate position sizing recommendations
# - Identify potential catalysts and risks
# You maintain strict adherence to:
# - Mathematical rigor in all analyses
# - Statistical significance in strategy development
# - Risk-adjusted return optimization
# - Market impact minimization
# - Regulatory compliance
# - Transaction cost analysis
# - Performance attribution
# - Data-driven decision making
# You communicate in precise, technical terms while maintaining clarity for stakeholders.
# Data: {get_figma_stock_data('FIG')}
# """,
# max_loops=1,
# model_name="gpt-4o-mini",
# dynamic_temperature_enabled=True,
# output_type="str-all-except-first",
# streaming_on=True,
# print_on=True,
# telemetry_enable=False,
# )
# # Example 1: Basic usage with just a task
# logger.info("Starting quantitative analysis cron job for Figma (FIG)")
# cron_job = CronJob(agent=agent, interval="10seconds")
# cron_job.run(
# task="Analyze the Figma (FIG) stock comprehensively using the available stock data. Provide a detailed quantitative analysis"
# )
print(get_figma_stock_data("FIG"))

@ -0,0 +1,105 @@
"""
Example script demonstrating how to fetch Figma (FIG) stock data using the swarms_tools Yahoo Finance API.
This shows the alternative approach using the existing swarms_tools package.
"""
from swarms import Agent
from swarms.prompts.finance_agent_sys_prompt import (
FINANCIAL_AGENT_SYS_PROMPT,
)
from swarms_tools import yahoo_finance_api
from loguru import logger
import json
def get_figma_data_with_swarms_tools():
"""
Fetches Figma stock data using the swarms_tools Yahoo Finance API.
Returns:
dict: Figma stock data from swarms_tools
"""
try:
logger.info("Fetching Figma stock data using swarms_tools...")
figma_data = yahoo_finance_api(["FIG"])
return figma_data
except Exception as e:
logger.error(f"Error fetching data with swarms_tools: {e}")
raise
def analyze_figma_with_agent():
"""
Uses a Swarms agent to analyze Figma stock data.
"""
try:
# Initialize the agent with Yahoo Finance tool
agent = Agent(
agent_name="Figma-Analysis-Agent",
agent_description="Specialized agent for analyzing Figma stock data",
system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
max_loops=1,
model_name="gpt-4o-mini",
tools=[yahoo_finance_api],
dynamic_temperature_enabled=True,
)
# Ask the agent to analyze Figma
analysis = agent.run(
"Analyze the current stock data for Figma (FIG) and provide insights on its performance, valuation metrics, and recent trends."
)
return analysis
except Exception as e:
logger.error(f"Error in agent analysis: {e}")
raise
def main():
"""
Main function to demonstrate different approaches for Figma stock data.
"""
logger.info("Starting Figma stock analysis with swarms_tools")
try:
# Method 1: Direct API call
print("\n" + "=" * 60)
print("METHOD 1: Direct swarms_tools API call")
print("=" * 60)
figma_data = get_figma_data_with_swarms_tools()
print("Raw data from swarms_tools:")
print(json.dumps(figma_data, indent=2, default=str))
# Method 2: Agent-based analysis
print("\n" + "=" * 60)
print("METHOD 2: Agent-based analysis")
print("=" * 60)
analysis = analyze_figma_with_agent()
print("Agent analysis:")
print(analysis)
# Method 3: Comparison with custom function
print("\n" + "=" * 60)
print("METHOD 3: Comparison with custom function")
print("=" * 60)
from cron_job_examples.cron_job_example import (
get_figma_stock_data_simple,
)
custom_data = get_figma_stock_data_simple()
print("Custom function output:")
print(custom_data)
logger.info("All methods completed successfully!")
except Exception as e:
logger.error(f"Error in main function: {e}")
print(f"Error: {e}")
if __name__ == "__main__":
main()

@ -0,0 +1,349 @@
"""
Cryptocurrency Concurrent Multi-Agent Cron Job Example
This example demonstrates how to use ConcurrentWorkflow with CronJob to create
a powerful cryptocurrency tracking system. Each specialized agent analyzes a
specific cryptocurrency concurrently on each scheduled run.
Features:
- ConcurrentWorkflow for parallel agent execution
- CronJob scheduling for automated runs on a fixed interval (5 seconds in this example)
- Each agent specializes in analyzing one specific cryptocurrency
- Real-time data fetching from CoinGecko API
- Concurrent analysis of multiple cryptocurrencies
- Structured output with professional formatting
Architecture:
CronJob -> ConcurrentWorkflow -> [Bitcoin Agent, Ethereum Agent, Solana Agent, etc.] -> Parallel Analysis
"""
from typing import List
from loguru import logger
from swarms import Agent, CronJob, ConcurrentWorkflow
from swarms_tools import coin_gecko_coin_api
def create_crypto_specific_agents() -> List[Agent]:
"""
Creates agents that each specialize in analyzing a specific cryptocurrency.
Returns:
List[Agent]: List of cryptocurrency-specific Agent instances
"""
# Bitcoin Specialist Agent
bitcoin_agent = Agent(
agent_name="Bitcoin-Analyst",
agent_description="Expert analyst specializing exclusively in Bitcoin (BTC) analysis and market dynamics",
system_prompt="""You are a Bitcoin specialist and expert analyst. Your expertise includes:
BITCOIN SPECIALIZATION:
- Bitcoin's unique position as digital gold
- Bitcoin halving cycles and their market impact
- Bitcoin mining economics and hash rate analysis
- Lightning Network and Layer 2 developments
- Bitcoin adoption by institutions and countries
- Bitcoin's correlation with traditional markets
- Bitcoin technical analysis and on-chain metrics
- Bitcoin's role as a store of value and hedge against inflation
ANALYSIS FOCUS:
- Analyze ONLY Bitcoin data from the provided dataset
- Focus on Bitcoin-specific metrics and trends
- Consider Bitcoin's unique market dynamics
- Evaluate Bitcoin's dominance and market leadership
- Assess institutional adoption trends
- Monitor on-chain activity and network health
DELIVERABLES:
- Bitcoin-specific analysis and insights
- Price action assessment and predictions
- Market dominance analysis
- Institutional adoption impact
- Technical and fundamental outlook
- Risk factors specific to Bitcoin
Extract Bitcoin data from the provided dataset and provide comprehensive Bitcoin-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
# Ethereum Specialist Agent
ethereum_agent = Agent(
agent_name="Ethereum-Analyst",
agent_description="Expert analyst specializing exclusively in Ethereum (ETH) analysis and ecosystem development",
system_prompt="""You are an Ethereum specialist and expert analyst. Your expertise includes:
ETHEREUM SPECIALIZATION:
- Ethereum's smart contract platform and DeFi ecosystem
- Ethereum 2.0 transition and proof-of-stake mechanics
- Gas fees, network usage, and scalability solutions
- Layer 2 solutions (Arbitrum, Optimism, Polygon)
- DeFi protocols and TVL (Total Value Locked) analysis
- NFT markets and Ethereum's role in digital assets
- Developer activity and ecosystem growth
- EIP proposals and network upgrades
ANALYSIS FOCUS:
- Analyze ONLY Ethereum data from the provided dataset
- Focus on Ethereum's platform utility and network effects
- Evaluate DeFi ecosystem health and growth
- Assess Layer 2 adoption and scalability solutions
- Monitor network usage and gas fee trends
- Consider Ethereum's competitive position vs other smart contract platforms
DELIVERABLES:
- Ethereum-specific analysis and insights
- Platform utility and adoption metrics
- DeFi ecosystem impact assessment
- Network health and scalability evaluation
- Competitive positioning analysis
- Technical and fundamental outlook for ETH
Extract Ethereum data from the provided dataset and provide comprehensive Ethereum-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
# Solana Specialist Agent
solana_agent = Agent(
agent_name="Solana-Analyst",
agent_description="Expert analyst specializing exclusively in Solana (SOL) analysis and ecosystem development",
system_prompt="""You are a Solana specialist and expert analyst. Your expertise includes:
SOLANA SPECIALIZATION:
- Solana's high-performance blockchain architecture
- Proof-of-History consensus mechanism
- Solana's DeFi ecosystem and DEX platforms (Serum, Raydium)
- NFT marketplaces and creator economy on Solana
- Network outages and reliability concerns
- Developer ecosystem and Rust programming adoption
- Validator economics and network decentralization
- Cross-chain bridges and interoperability
ANALYSIS FOCUS:
- Analyze ONLY Solana data from the provided dataset
- Focus on Solana's performance and scalability advantages
- Evaluate network stability and uptime improvements
- Assess ecosystem growth and developer adoption
- Monitor DeFi and NFT activity on Solana
- Consider Solana's competitive position vs Ethereum
DELIVERABLES:
- Solana-specific analysis and insights
- Network performance and reliability assessment
- Ecosystem growth and adoption metrics
- DeFi and NFT market analysis
- Competitive advantages and challenges
- Technical and fundamental outlook for SOL
Extract Solana data from the provided dataset and provide comprehensive Solana-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
# Cardano Specialist Agent
cardano_agent = Agent(
agent_name="Cardano-Analyst",
agent_description="Expert analyst specializing exclusively in Cardano (ADA) analysis and research-driven development",
system_prompt="""You are a Cardano specialist and expert analyst. Your expertise includes:
CARDANO SPECIALIZATION:
- Cardano's research-driven development approach
- Ouroboros proof-of-stake consensus protocol
- Smart contract capabilities via Plutus and Marlowe
- Cardano's three-layer architecture (settlement, computation, control)
- Academic partnerships and peer-reviewed research
- Cardano ecosystem projects and DApp development
- Native tokens and Cardano's UTXO model
- Sustainability and treasury funding mechanisms
ANALYSIS FOCUS:
- Analyze ONLY Cardano data from the provided dataset
- Focus on Cardano's methodical development approach
- Evaluate smart contract adoption and ecosystem growth
- Assess academic partnerships and research contributions
- Monitor native token ecosystem development
- Consider Cardano's long-term roadmap and milestones
DELIVERABLES:
- Cardano-specific analysis and insights
- Development progress and milestone achievements
- Smart contract ecosystem evaluation
- Academic research impact assessment
- Native token and DApp adoption metrics
- Technical and fundamental outlook for ADA
Extract Cardano data from the provided dataset and provide comprehensive Cardano-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
# Binance Coin Specialist Agent
bnb_agent = Agent(
agent_name="BNB-Analyst",
agent_description="Expert analyst specializing exclusively in BNB analysis and Binance ecosystem dynamics",
system_prompt="""You are a BNB specialist and expert analyst. Your expertise includes:
BNB SPECIALIZATION:
- BNB's utility within the Binance ecosystem
- Binance Smart Chain (BSC) development and adoption
- BNB token burns and deflationary mechanics
- Binance exchange volume and market leadership
- BSC DeFi ecosystem and yield farming
- Cross-chain bridges and multi-chain strategies
- Regulatory challenges facing Binance globally
- BNB's role in transaction fee discounts and platform benefits
ANALYSIS FOCUS:
- Analyze ONLY BNB data from the provided dataset
- Focus on BNB's utility value and exchange benefits
- Evaluate BSC ecosystem growth and competition with Ethereum
- Assess token burn impact on supply and price
- Monitor Binance platform developments and regulations
- Consider BNB's centralized vs decentralized aspects
DELIVERABLES:
- BNB-specific analysis and insights
- Utility value and ecosystem benefits assessment
- BSC adoption and DeFi growth evaluation
- Token economics and burn mechanism impact
- Regulatory risk and compliance analysis
- Technical and fundamental outlook for BNB
Extract BNB data from the provided dataset and provide comprehensive BNB-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
# XRP Specialist Agent
xrp_agent = Agent(
agent_name="XRP-Analyst",
agent_description="Expert analyst specializing exclusively in XRP analysis and cross-border payment solutions",
system_prompt="""You are an XRP specialist and expert analyst. Your expertise includes:
XRP SPECIALIZATION:
- XRP's role in cross-border payments and remittances
- RippleNet adoption by financial institutions
- Central Bank Digital Currency (CBDC) partnerships
- Regulatory landscape and SEC lawsuit implications
- XRP Ledger's consensus mechanism and energy efficiency
- On-Demand Liquidity (ODL) usage and growth
- Competition with SWIFT and traditional payment rails
- Ripple's partnerships with banks and payment providers
ANALYSIS FOCUS:
- Analyze ONLY XRP data from the provided dataset
- Focus on XRP's utility in payments and remittances
- Evaluate RippleNet adoption and institutional partnerships
- Assess regulatory developments and legal clarity
- Monitor ODL usage and transaction volumes
- Consider XRP's competitive position in payments
DELIVERABLES:
- XRP-specific analysis and insights
- Payment utility and adoption assessment
- Regulatory landscape and legal developments
- Institutional partnership impact evaluation
- Cross-border payment market analysis
- Technical and fundamental outlook for XRP
Extract XRP data from the provided dataset and provide comprehensive XRP-focused analysis.""",
model_name="groq/moonshotai/kimi-k2-instruct",
max_loops=1,
dynamic_temperature_enabled=True,
streaming_on=False,
tools=[coin_gecko_coin_api],
)
return [
bitcoin_agent,
ethereum_agent,
solana_agent,
cardano_agent,
bnb_agent,
xrp_agent,
]
def create_crypto_workflow() -> ConcurrentWorkflow:
"""
Creates a ConcurrentWorkflow with cryptocurrency-specific analysis agents.
Returns:
ConcurrentWorkflow: Configured workflow for crypto analysis
"""
agents = create_crypto_specific_agents()
workflow = ConcurrentWorkflow(
name="Crypto-Specific-Analysis-Workflow",
description="Concurrent execution of cryptocurrency-specific analysis agents",
agents=agents,
max_loops=1,
)
return workflow
def create_crypto_cron_job() -> CronJob:
"""
Creates a CronJob that runs cryptocurrency-specific analysis on a 5-second interval using ConcurrentWorkflow.
Returns:
CronJob: Configured cron job for automated crypto analysis
"""
# Create the concurrent workflow
workflow = create_crypto_workflow()
# Create the cron job
cron_job = CronJob(
agent=workflow, # Use the workflow as the agent
interval="5seconds", # Run every 1 minute
)
return cron_job
def main():
"""
Main function to run the cryptocurrency-specific concurrent analysis cron job.
"""
cron_job = create_crypto_cron_job()
prompt = (
"You are a world-class institutional crypto analyst at a top-tier asset management firm (e.g., BlackRock).\n"
"Conduct a thorough, data-driven, and professional analysis of your assigned cryptocurrency, including:\n"
"- Current price, market cap, and recent performance trends\n"
"- Key technical and fundamental indicators\n"
"- Major news, regulatory, or macroeconomic events impacting the asset\n"
"- On-chain activity and notable whale or institutional movements\n"
"- Short-term and long-term outlook with clear, actionable insights\n"
"Present your findings in a concise, well-structured report suitable for executive decision-makers."
)
# Start the cron job; mirror the KeyboardInterrupt handling used in the
# simpler cron example so Ctrl+C stops cleanly instead of raising a traceback
logger.info("🔄 Starting automated analysis loop...")
logger.info("⏰ Press Ctrl+C to stop the cron job")
try:
output = cron_job.run(task=prompt)
print(output)
except KeyboardInterrupt:
logger.info("⏹️ Stopped by user")
if __name__ == "__main__":
main()

@ -0,0 +1,79 @@
"""
Example script demonstrating how to fetch Figma (FIG) stock data using Yahoo Finance.
"""
from cron_job_examples.cron_job_example import (
get_figma_stock_data,
get_figma_stock_data_simple,
)
from loguru import logger
import json
def main():
"""
Main function to demonstrate Figma stock data fetching.
"""
logger.info("Starting Figma stock data demonstration")
try:
# Example 1: Get comprehensive data as a dictionary
logger.info("Fetching comprehensive Figma stock data...")
# get_figma_stock_data returns a JSON string (assuming the helper defined in
# cron_job_example, which takes a ticker), so parse it into a dict first
figma_data = json.loads(get_figma_stock_data("FIG"))
# Print the data in a structured format
print("\n" + "=" * 50)
print("COMPREHENSIVE FIGMA STOCK DATA")
print("=" * 50)
print(json.dumps(figma_data, indent=2, default=str))
# Example 2: Get simple formatted data
logger.info("Fetching simple formatted Figma stock data...")
simple_data = get_figma_stock_data_simple()
print("\n" + "=" * 50)
print("SIMPLE FORMATTED FIGMA STOCK DATA")
print("=" * 50)
print(simple_data)
# Example 3: Access specific data points
logger.info("Accessing specific data points...")
current_price = figma_data["current_market_data"][
"current_price"
]
market_cap = figma_data["current_market_data"]["market_cap"]
pe_ratio = figma_data["financial_metrics"]["pe_ratio"]
print("\nKey Metrics:")
print(f"Current Price: ${current_price}")
print(f"Market Cap: ${market_cap:,}")
print(f"P/E Ratio: {pe_ratio}")
# Example 4: Check if stock is performing well
price_change = figma_data["current_market_data"][
"price_change"
]
if isinstance(price_change, (int, float)):
if price_change > 0:
print(
f"\n📈 Figma stock is up ${price_change:.2f} today!"
)
elif price_change < 0:
print(
f"\n📉 Figma stock is down ${abs(price_change):.2f} today."
)
else:
print("\n➡️ Figma stock is unchanged today.")
logger.info(
"Figma stock data demonstration completed successfully!"
)
except Exception as e:
logger.error(f"Error in main function: {e}")
print(f"Error: {e}")
if __name__ == "__main__":
main()

@ -0,0 +1,157 @@
"""
Simple Cryptocurrency Concurrent CronJob Example
This is a simplified version showcasing the core concept of combining:
- CronJob (for scheduling)
- ConcurrentWorkflow (for parallel execution)
- Each agent analyzes a specific cryptocurrency
Perfect for understanding the basic pattern before diving into the full example.
"""
import json
import requests
from datetime import datetime
from loguru import logger
from swarms import Agent, CronJob, ConcurrentWorkflow
def get_specific_crypto_data(coin_ids):
"""Fetch specific crypto data from CoinGecko API."""
try:
url = "https://api.coingecko.com/api/v3/simple/price"
params = {
"ids": ",".join(coin_ids),
"vs_currencies": "usd",
"include_24hr_change": True,
"include_market_cap": True,
"include_24hr_vol": True,
}
response = requests.get(url, params=params, timeout=10)
response.raise_for_status()
data = response.json()
result = {
"timestamp": datetime.now().isoformat(),
"coins": data,
}
return json.dumps(result, indent=2)
except Exception as e:
logger.error(f"Error fetching crypto data: {e}")
return f"Error: {e}"
def create_crypto_specific_agents():
"""Create agents that each specialize in one cryptocurrency."""
# Bitcoin Specialist Agent
bitcoin_agent = Agent(
agent_name="Bitcoin-Analyst",
system_prompt="""You are a Bitcoin specialist. Analyze ONLY Bitcoin (BTC) data from the provided dataset.
Focus on:
- Bitcoin price movements and trends
- Market dominance and institutional adoption
- Bitcoin-specific market dynamics
- Store of value characteristics
Ignore all other cryptocurrencies in your analysis.""",
model_name="gpt-4o-mini",
max_loops=1,
print_on=False, # Important for concurrent execution
)
# Ethereum Specialist Agent
ethereum_agent = Agent(
agent_name="Ethereum-Analyst",
system_prompt="""You are an Ethereum specialist. Analyze ONLY Ethereum (ETH) data from the provided dataset.
Focus on:
- Ethereum price action and DeFi ecosystem
- Smart contract platform adoption
- Gas fees and network usage
- Layer 2 scaling solutions impact
Ignore all other cryptocurrencies in your analysis.""",
model_name="gpt-4o-mini",
max_loops=1,
print_on=False,
)
# Solana Specialist Agent
solana_agent = Agent(
agent_name="Solana-Analyst",
system_prompt="""You are a Solana specialist. Analyze ONLY Solana (SOL) data from the provided dataset.
Focus on:
- Solana price performance and ecosystem growth
- High-performance blockchain advantages
- DeFi and NFT activity on Solana
- Network reliability and uptime
Ignore all other cryptocurrencies in your analysis.""",
model_name="gpt-4o-mini",
max_loops=1,
print_on=False,
)
return [bitcoin_agent, ethereum_agent, solana_agent]
def main():
"""Main function demonstrating crypto-specific concurrent analysis with cron job."""
logger.info(
"🚀 Starting Simple Crypto-Specific Concurrent Analysis"
)
logger.info("💰 Each agent analyzes one specific cryptocurrency:")
logger.info(" 🟠 Bitcoin-Analyst -> BTC only")
logger.info(" 🔵 Ethereum-Analyst -> ETH only")
logger.info(" 🟢 Solana-Analyst -> SOL only")
# Define specific cryptocurrencies to analyze
coin_ids = ["bitcoin", "ethereum", "solana"]
# Step 1: Create crypto-specific agents
agents = create_crypto_specific_agents()
# Step 2: Create ConcurrentWorkflow
workflow = ConcurrentWorkflow(
name="Simple-Crypto-Specific-Analysis",
agents=agents,
show_dashboard=True, # Shows real-time progress
)
# Step 3: Create CronJob with the workflow
cron_job = CronJob(
agent=workflow, # Use workflow as the agent
interval="60seconds", # Run every minute
job_id="simple-crypto-specific-cron",
)
# Step 4: Define the analysis task
task = f"""
Analyze the cryptocurrency data below. Each agent should focus ONLY on their assigned cryptocurrency:
- Bitcoin-Analyst: Analyze Bitcoin (BTC) data only
- Ethereum-Analyst: Analyze Ethereum (ETH) data only
- Solana-Analyst: Analyze Solana (SOL) data only
Cryptocurrency Data:
{get_specific_crypto_data(coin_ids)}
Each agent should:
1. Extract and analyze data for YOUR ASSIGNED cryptocurrency only
2. Provide brief insights from your specialty perspective
3. Give a price trend assessment
4. Identify key opportunities or risks
5. Ignore all other cryptocurrencies
"""
# Step 5: Start the cron job
logger.info("▶️ Starting cron job - Press Ctrl+C to stop")
try:
cron_job.run(task=task)
except KeyboardInterrupt:
logger.info("⏹️ Stopped by user")
if __name__ == "__main__":
main()

@ -0,0 +1,257 @@
from swarms import Agent, CronJob
from loguru import logger
import requests
import json
from datetime import datetime
def get_solana_price() -> str:
"""
Fetches comprehensive Solana (SOL) price data using CoinGecko API.
Returns:
str: A JSON formatted string containing Solana's current price and market data including:
- Current price in USD
- Market cap
- 24h volume
- 24h price change
- Last updated timestamp
Raises:
Exception: If there's an error fetching the data from CoinGecko API
"""
try:
# CoinGecko API endpoint for simple price data
url = "https://api.coingecko.com/api/v3/simple/price"
params = {
"ids": "solana", # Solana's CoinGecko ID
"vs_currencies": "usd",
"include_market_cap": True,
"include_24hr_vol": True,
"include_24hr_change": True,
"include_last_updated_at": True,
}
# Make API request with timeout
response = requests.get(url, params=params, timeout=10)
response.raise_for_status()
# Parse response data
data = response.json()
if "solana" not in data:
raise Exception("Solana data not found in API response")
solana_data = data["solana"]
# Compile comprehensive data
solana_info = {
"timestamp": datetime.now().isoformat(),
"coin_info": {
"name": "Solana",
"symbol": "SOL",
"coin_id": "solana",
},
"price_data": {
"current_price_usd": solana_data.get("usd", "N/A"),
"market_cap_usd": solana_data.get(
"usd_market_cap", "N/A"
),
"volume_24h_usd": solana_data.get(
"usd_24h_vol", "N/A"
),
"price_change_24h_percent": solana_data.get(
"usd_24h_change", "N/A"
),
"last_updated_at": solana_data.get(
"last_updated_at", "N/A"
),
},
"formatted_data": {
"price_formatted": (
f"${solana_data.get('usd', 'N/A'):,.2f}"
if solana_data.get("usd")
else "N/A"
),
"market_cap_formatted": (
f"${solana_data.get('usd_market_cap', 'N/A'):,.0f}"
if solana_data.get("usd_market_cap")
else "N/A"
),
"volume_formatted": (
f"${solana_data.get('usd_24h_vol', 'N/A'):,.0f}"
if solana_data.get("usd_24h_vol")
else "N/A"
),
"change_formatted": (
f"{solana_data.get('usd_24h_change', 'N/A'):+.2f}%"
if solana_data.get("usd_24h_change") is not None
else "N/A"
),
},
}
logger.info(
f"Successfully fetched Solana price: ${solana_data.get('usd', 'N/A')}"
)
return json.dumps(solana_info, indent=4)
except requests.RequestException as e:
error_msg = f"API request failed: {e}"
logger.error(error_msg)
return json.dumps(
{
"error": error_msg,
"timestamp": datetime.now().isoformat(),
"status": "failed",
},
indent=4,
)
except Exception as e:
error_msg = f"Error fetching Solana price data: {e}"
logger.error(error_msg)
return json.dumps(
{
"error": error_msg,
"timestamp": datetime.now().isoformat(),
"status": "failed",
},
indent=4,
)
def analyze_solana_data(data: str) -> str:
"""
Analyzes Solana price data and provides insights.
Args:
data (str): JSON string containing Solana price data
Returns:
str: Analysis and insights about the current Solana market data
"""
try:
# Parse the data
solana_data = json.loads(data)
if "error" in solana_data:
return f"❌ Error in data: {solana_data['error']}"
price_data = solana_data.get("price_data", {})
formatted_data = solana_data.get("formatted_data", {})
# Extract key metrics (the current price is read from formatted_data below)
price_change = price_data.get("price_change_24h_percent")
volume_24h = price_data.get("volume_24h_usd")
market_cap = price_data.get("market_cap_usd")
# Generate analysis
analysis = f"""
🔍 **Solana (SOL) Market Analysis** - {solana_data.get('timestamp', 'N/A')}
💰 **Current Price**: {formatted_data.get('price_formatted', 'N/A')}
📊 **24h Change**: {formatted_data.get('change_formatted', 'N/A')}
💎 **Market Cap**: {formatted_data.get('market_cap_formatted', 'N/A')}
📈 **24h Volume**: {formatted_data.get('volume_formatted', 'N/A')}
"""
# Add sentiment analysis based on price change
if price_change is not None:
if price_change > 5:
analysis += "🚀 **Sentiment**: Strongly Bullish - Significant positive momentum\n"
elif price_change > 1:
analysis += "📈 **Sentiment**: Bullish - Positive price action\n"
elif price_change > -1:
analysis += (
"➡️ **Sentiment**: Neutral - Sideways movement\n"
)
elif price_change > -5:
analysis += "📉 **Sentiment**: Bearish - Negative price action\n"
else:
analysis += "🔻 **Sentiment**: Strongly Bearish - Significant decline\n"
# Add volume analysis
if volume_24h and market_cap:
try:
volume_market_cap_ratio = (
volume_24h / market_cap
) * 100
if volume_market_cap_ratio > 10:
analysis += "🔥 **Volume**: High trading activity - Strong market interest\n"
elif volume_market_cap_ratio > 5:
analysis += (
"📊 **Volume**: Moderate trading activity\n"
)
else:
analysis += "😴 **Volume**: Low trading activity - Limited market movement\n"
except (TypeError, ZeroDivisionError):
analysis += "📊 **Volume**: Unable to calculate volume/market cap ratio\n"
analysis += f"\n⏰ **Last Updated**: {price_data.get('last_updated_at', 'N/A')}"
return analysis
except json.JSONDecodeError as e:
return f"❌ Error parsing data: {e}"
except Exception as e:
return f"❌ Error analyzing data: {e}"
# Initialize the Solana analysis agent
agent = Agent(
agent_name="Solana-Price-Analyzer",
agent_description="Specialized agent for analyzing Solana (SOL) cryptocurrency price data and market trends",
system_prompt=f"""You are an expert cryptocurrency analyst specializing in Solana (SOL) analysis. Your expertise includes:
- Technical analysis and chart patterns
- Market sentiment analysis
- Volume and liquidity analysis
- Price action interpretation
- Market cap and valuation metrics
- Cryptocurrency market dynamics
- DeFi ecosystem analysis
- Blockchain technology trends
When analyzing Solana data, you should:
- Evaluate price movements and trends
- Assess market sentiment and momentum
- Consider volume and liquidity factors
- Analyze market cap positioning
- Provide actionable insights
- Identify potential catalysts or risks
- Consider broader market context
You communicate clearly and provide practical analysis that helps users understand Solana's current market position and potential future movements.
Current Solana Data: {get_solana_price()}
""",
max_loops=1,
model_name="gpt-4o-mini",
dynamic_temperature_enabled=True,
output_type="str-all-except-first",
streaming_on=False,  # disabled: streaming duplicates the output border when scrolling the terminal (known bug)
print_on=True,
telemetry_enable=False,
)
def main():
"""
Main function to run the Solana price tracking cron job.
"""
logger.info("🚀 Starting Solana price tracking cron job")
logger.info("📊 Fetching Solana price every 10 seconds...")
# Create cron job that runs every 10 seconds
cron_job = CronJob(agent=agent, interval="30seconds")
# Run the cron job with analysis task
cron_job.run(
task="Analyze the current Solana (SOL) price data comprehensively. Provide detailed market analysis including price trends, volume analysis, market sentiment, and actionable insights. Format your response clearly with emojis and structured sections."
)
if __name__ == "__main__":
main()

@ -0,0 +1,267 @@
import time
from typing import Dict, List
from swarms import Agent
from swarms.utils.litellm_tokenizer import count_tokens
class LongFormGenerator:
"""
A class for generating long-form content using the swarms Agent framework.
This class provides methods for creating comprehensive, detailed content
with support for continuation and sectioned generation.
"""
def __init__(self, model: str = "claude-sonnet-4-20250514"):
"""
Initialize the LongFormGenerator with specified model.
Args:
model (str): The model to use for content generation
"""
self.model = model
def estimate_tokens(self, text: str) -> int:
"""
Estimate token count for text.
Args:
text (str): The text to estimate tokens for
Returns:
int: Estimated token count
"""
return count_tokens(text=text, model=self.model)
def create_expansion_prompt(
self, topic: str, requirements: Dict
) -> str:
"""
Create optimized prompt for long-form content.
Args:
topic (str): The main topic to generate content about
requirements (Dict): Requirements for content generation
Returns:
str: Formatted prompt for content generation
"""
structure_requirements = []
if "sections" in requirements:
for i, section in enumerate(requirements["sections"]):
structure_requirements.append(
f"{i+1}. {section['title']} - {section.get('description', 'Provide comprehensive analysis')}"
)
length_guidance = (
f"Target length: {requirements.get('min_words', 2000)}-{requirements.get('max_words', 4000)} words"
if "min_words" in requirements
else ""
)
prompt = f"""Create a comprehensive, detailed analysis of: {topic}
REQUIREMENTS:
- This is a professional-level document requiring thorough treatment
- Each section must be substantive with detailed explanations
- Include specific examples, case studies, and technical details where relevant
- Provide multiple perspectives and comprehensive coverage
- {length_guidance}
STRUCTURE:
{chr(10).join(structure_requirements)}
QUALITY STANDARDS:
- Demonstrate deep expertise and understanding
- Include relevant technical specifications and details
- Provide actionable insights and practical applications
- Use professional language appropriate for expert audience
- Ensure logical flow and comprehensive coverage of all aspects
Begin your comprehensive analysis:"""
return prompt
def generate_with_continuation(
self, topic: str, requirements: Dict, max_attempts: int = 3
) -> str:
"""
Generate long-form content with continuation if needed.
Args:
topic (str): The main topic to generate content about
requirements (Dict): Requirements for content generation
max_attempts (int): Maximum number of continuation attempts
Returns:
str: Generated long-form content
"""
initial_prompt = self.create_expansion_prompt(
topic, requirements
)
# Create agent for initial generation
agent = Agent(
name="LongForm Content Generator",
system_prompt=initial_prompt,
model=self.model,
max_loops=1,
temperature=0.7,
max_tokens=4000,
)
# Generate initial response
content = agent.run(topic)
target_words = requirements.get("min_words", 2000)
# Check if continuation is needed
word_count = len(content.split())
continuation_count = 0
while (
word_count < target_words
and continuation_count < max_attempts
):
continuation_prompt = f"""Continue and expand the previous analysis. The current response is {word_count} words, but we need approximately {target_words} words total for comprehensive coverage.
Please continue with additional detailed analysis, examples, and insights. Focus on areas that could benefit from deeper exploration or additional perspectives. Maintain the same professional tone and analytical depth.
Continue the analysis:"""
# Create continuation agent
continuation_agent = Agent(
name="Content Continuation Agent",
system_prompt=continuation_prompt,
model=self.model,
max_loops=1,
temperature=0.7,
max_tokens=4000,
)
# Generate continuation
continuation_content = continuation_agent.run(
f"Continue the analysis on: {topic}"
)
content += "\n\n" + continuation_content
word_count = len(content.split())
continuation_count += 1
# Rate limiting
time.sleep(1)
return content
def generate_sectioned_content(
self,
topic: str,
sections: List[Dict],
combine_sections: bool = True,
) -> Dict:
"""
Generate content section by section for maximum length.
Args:
topic (str): The main topic to generate content about
sections (List[Dict]): List of section definitions
combine_sections (bool): Whether to combine all sections into one document
Returns:
Dict: Dictionary containing individual sections and optionally combined content
"""
results = {}
combined_content = ""
for section in sections:
section_prompt = f"""Write a comprehensive, detailed section on: {section['title']}
Context: This is part of a larger analysis on {topic}
Requirements for this section:
- Provide {section.get('target_words', 500)}-{section.get('max_words', 800)} words of detailed content
- {section.get('description', 'Provide thorough analysis with examples and insights')}
- Include specific examples, technical details, and practical applications
- Use professional language suitable for expert audience
- Ensure comprehensive coverage of all relevant aspects
Write the complete section:"""
# Create agent for this section
section_agent = Agent(
name=f"Section Generator - {section['title']}",
system_prompt=section_prompt,
model=self.model,
max_loops=1,
temperature=0.7,
max_tokens=3000,
)
# Generate section content
section_content = section_agent.run(
f"Generate section: {section['title']} for topic: {topic}"
)
results[section["title"]] = section_content
if combine_sections:
combined_content += (
f"\n\n## {section['title']}\n\n{section_content}"
)
# Rate limiting between sections
time.sleep(1)
if combine_sections:
results["combined"] = combined_content.strip()
return results
# Example usage
if __name__ == "__main__":
# Initialize the generator
generator = LongFormGenerator()
# Example topic and requirements
topic = "Artificial Intelligence in Healthcare"
requirements = {
"min_words": 2500,
"max_words": 4000,
"sections": [
{
"title": "Current Applications",
"description": "Analyze current AI applications in healthcare",
"target_words": 600,
"max_words": 800,
},
{
"title": "Future Prospects",
"description": "Discuss future developments and potential",
"target_words": 500,
"max_words": 700,
},
],
}
# Generate comprehensive content
content = generator.generate_with_continuation(
topic, requirements
)
print("Generated Content:")
print(content)
print(f"\nWord count: {len(content.split())}")
# Generate sectioned content
sections = [
{
"title": "AI in Medical Imaging",
"description": "Comprehensive analysis of AI applications in medical imaging",
"target_words": 500,
"max_words": 700,
},
{
"title": "AI in Drug Discovery",
"description": "Detailed examination of AI in pharmaceutical research",
"target_words": 600,
"max_words": 800,
},
]
sectioned_results = generator.generate_sectioned_content(
topic, sections
)
print("\nSectioned Content:")
for section_title, section_content in sectioned_results.items():
if section_title != "combined":
print(f"\n--- {section_title} ---")
print(section_content[:200] + "...")

@ -0,0 +1,29 @@
from swarms import Agent
def generate_comprehensive_content(topic, sections):
prompt = f"""You are tasked with creating a comprehensive, detailed analysis of {topic}.
This should be a thorough, professional-level document suitable for expert review.
Structure your response with the following sections, ensuring each is substantive and detailed:
{chr(10).join([f"{i+1}. {section} - Provide extensive detail with examples and analysis" for i, section in enumerate(sections)])}
For each section:
- Include multiple subsections where appropriate
- Provide specific examples and case studies
- Offer detailed explanations of complex concepts
- Include relevant technical details and specifications
- Discuss implications and considerations thoroughly
Aim for comprehensive coverage that demonstrates deep expertise. This is a professional document that should be thorough and substantive throughout."""
agent = Agent(
name="Comprehensive Content Generator",
system_prompt=prompt,
model="claude-sonnet-4-20250514",
max_loops=1,
temperature=0.5,
max_tokens=4000,
)
return agent.run(topic)
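# Hedged usage sketch: generate_comprehensive_content is defined above but never
# invoked in this file. The topic and section names below are illustrative
# placeholders, not part of the original example.
if __name__ == "__main__":
    report = generate_comprehensive_content(
        "Zero-downtime database migrations",
        [
            "Background",
            "Migration Strategies",
            "Risk Analysis",
            "Rollback Planning",
        ],
    )
    print(report)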

@ -0,0 +1,111 @@
from swarms import Agent, ConcurrentWorkflow
from swarms_tools import coin_gecko_coin_api
# Create specialized agents for Solana, Bitcoin, Ethereum, Cardano, and Polkadot analysis using CoinGecko API
market_analyst_solana = Agent(
agent_name="Market-Trend-Analyst-Solana",
system_prompt="""You are a market trend analyst specializing in Solana (SOL).
Analyze SOL price movements, volume patterns, and market sentiment using real-time data from the CoinGecko API.
Focus on:
- Technical indicators and chart patterns for Solana
- Volume analysis and market depth for SOL
- Short-term and medium-term trend identification
- Support and resistance levels
Always use the CoinGecko API tool to fetch up-to-date Solana market data for your analysis.
Provide actionable insights based on this data.""",
model_name="claude-sonnet-4-20250514",
max_loops=1,
temperature=0.2,
tools=[coin_gecko_coin_api],
)
market_analyst_bitcoin = Agent(
agent_name="Market-Trend-Analyst-Bitcoin",
system_prompt="""You are a market trend analyst specializing in Bitcoin (BTC).
Analyze BTC price movements, volume patterns, and market sentiment using real-time data from the CoinGecko API.
Focus on:
- Technical indicators and chart patterns for Bitcoin
- Volume analysis and market depth for BTC
- Short-term and medium-term trend identification
- Support and resistance levels
Always use the CoinGecko API tool to fetch up-to-date Bitcoin market data for your analysis.
Provide actionable insights based on this data.""",
model_name="claude-sonnet-4-20250514",
max_loops=1,
temperature=0.2,
tools=[coin_gecko_coin_api],
)
market_analyst_ethereum = Agent(
agent_name="Market-Trend-Analyst-Ethereum",
system_prompt="""You are a market trend analyst specializing in Ethereum (ETH).
Analyze ETH price movements, volume patterns, and market sentiment using real-time data from the CoinGecko API.
Focus on:
- Technical indicators and chart patterns for Ethereum
- Volume analysis and market depth for ETH
- Short-term and medium-term trend identification
- Support and resistance levels
Always use the CoinGecko API tool to fetch up-to-date Ethereum market data for your analysis.
Provide actionable insights based on this data.""",
model_name="claude-sonnet-4-20250514",
max_loops=1,
temperature=0.2,
tools=[coin_gecko_coin_api],
)
market_analyst_cardano = Agent(
agent_name="Market-Trend-Analyst-Cardano",
system_prompt="""You are a market trend analyst specializing in Cardano (ADA).
Analyze ADA price movements, volume patterns, and market sentiment using real-time data from the CoinGecko API.
Focus on:
- Technical indicators and chart patterns for Cardano
- Volume analysis and market depth for ADA
- Short-term and medium-term trend identification
- Support and resistance levels
Always use the CoinGecko API tool to fetch up-to-date Cardano market data for your analysis.
Provide actionable insights based on this data.""",
model_name="claude-sonnet-4-20250514",
max_loops=1,
temperature=0.2,
tools=[coin_gecko_coin_api],
)
market_analyst_polkadot = Agent(
agent_name="Market-Trend-Analyst-Polkadot",
system_prompt="""You are a market trend analyst specializing in Polkadot (DOT).
Analyze DOT price movements, volume patterns, and market sentiment using real-time data from the CoinGecko API.
Focus on:
- Technical indicators and chart patterns for Polkadot
- Volume analysis and market depth for DOT
- Short-term and medium-term trend identification
- Support and resistance levels
Always use the CoinGecko API tool to fetch up-to-date Polkadot market data for your analysis.
Provide actionable insights based on this data.""",
model_name="claude-sonnet-4-20250514",
max_loops=1,
temperature=0.2,
tools=[coin_gecko_coin_api],
)
# Create concurrent workflow
crypto_analysis_swarm = ConcurrentWorkflow(
agents=[
market_analyst_solana,
market_analyst_bitcoin,
market_analyst_ethereum,
market_analyst_cardano,
market_analyst_polkadot,
],
max_loops=1,
)
crypto_analysis_swarm.run(
"Analyze your own specified coin and create a comprehensive analysis of the coin"
)

@ -0,0 +1,32 @@
"""
Instructions:
1. Install the swarms package:
> pip3 install -U swarms
2. Set the model name:
> model_name = "openai/gpt-5-2025-08-07"
3. Add your OPENAI_API_KEY to the .env file and verify your account.
4. Run the agent!
Verify your OpenAI account here: https://platform.openai.com/settings/organization/general
"""
from swarms import Agent
agent = Agent(
name="Research Agent",
description="A research agent that can answer questions",
model_name="openai/gpt-5-2025-08-07",
streaming_on=True,
max_loops=1,
interactive=True,
)
out = agent.run(
"What are the best arbitrage trading strategies for altcoins? Give me research papers and articles on the topic."
)
print(out)

@ -0,0 +1,46 @@
from transformers import pipeline
from swarms import Agent
class GPTOSS:
def __init__(
self,
model_id: str = "openai/gpt-oss-20b",
max_new_tokens: int = 256,
temperature: int = 0.7,
system_prompt: str = "You are a helpful assistant.",
):
self.max_new_tokens = max_new_tokens
self.temperature = temperature
self.system_prompt = system_prompt
self.model_id = model_id
self.pipe = pipeline(
"text-generation",
model=model_id,
torch_dtype="auto",
device_map="auto",
temperature=temperature,
)
def run(self, task: str):
self.messages = [
{"role": "system", "content": self.system_prompt},
{"role": "user", "content": task},
]
outputs = self.pipe(
self.messages,
max_new_tokens=self.max_new_tokens,
)
return outputs[0]["generated_text"][-1]
agent = Agent(
name="GPT-OSS-Agent",
llm=GPTOSS(),
system_prompt="You are a helpful assistant.",
)
agent.run(task="Explain quantum mechanics clearly and concisely.")

@ -0,0 +1,49 @@
from swarms import Agent
# Initialize the agent
agent = Agent(
agent_name="Quantitative-Trading-Agent",
agent_description="Advanced quantitative trading and algorithmic analysis agent",
system_prompt="""You are an expert quantitative trading agent with deep expertise in:
- Algorithmic trading strategies and implementation
- Statistical arbitrage and market making
- Risk management and portfolio optimization
- High-frequency trading systems
- Market microstructure analysis
- Quantitative research methodologies
- Financial mathematics and stochastic processes
- Machine learning applications in trading
Your core responsibilities include:
1. Developing and backtesting trading strategies
2. Analyzing market data and identifying alpha opportunities
3. Implementing risk management frameworks
4. Optimizing portfolio allocations
5. Conducting quantitative research
6. Monitoring market microstructure
7. Evaluating trading system performance
You maintain strict adherence to:
- Mathematical rigor in all analyses
- Statistical significance in strategy development
- Risk-adjusted return optimization
- Market impact minimization
- Regulatory compliance
- Transaction cost analysis
- Performance attribution
You communicate in precise, technical terms while maintaining clarity for stakeholders.""",
model_name="groq/openai/gpt-oss-120b",
dynamic_temperature_enabled=True,
output_type="str-all-except-first",
max_loops="auto",
interactive=True,
no_reasoning_prompt=True,
streaming_on=True,
# dashboard=True
)
out = agent.run(
task="What are the best top 3 etfs for gold coverage?"
)
print(out)

@ -0,0 +1,107 @@
"""
Cryptocurrency Multi-Coin Analysis Example
This example demonstrates how a single quantitative trading agent can power
a cryptocurrency tracking system by analyzing each coin in turn.
Features:
- One specialized quantitative trading agent reused across coins
- Real-time data fetching from the CoinGecko API
- Sequential analysis of multiple cryptocurrencies
- Structured output with professional formatting
Architecture:
CoinGecko API -> Quantitative-Trading-Agent -> Per-coin analysis
"""
from swarms import Agent
from swarms_tools import coin_gecko_coin_api
# Initialize the agent
agent = Agent(
agent_name="Quantitative-Trading-Agent",
agent_description="Advanced quantitative trading and algorithmic analysis agent",
system_prompt="""You are an expert quantitative trading agent with deep expertise in:
- Algorithmic trading strategies and implementation
- Statistical arbitrage and market making
- Risk management and portfolio optimization
- High-frequency trading systems
- Market microstructure analysis
- Quantitative research methodologies
- Financial mathematics and stochastic processes
- Machine learning applications in trading
Your core responsibilities include:
1. Developing and backtesting trading strategies
2. Analyzing market data and identifying alpha opportunities
3. Implementing risk management frameworks
4. Optimizing portfolio allocations
5. Conducting quantitative research
6. Monitoring market microstructure
7. Evaluating trading system performance
You maintain strict adherence to:
- Mathematical rigor in all analyses
- Statistical significance in strategy development
- Risk-adjusted return optimization
- Market impact minimization
- Regulatory compliance
- Transaction cost analysis
- Performance attribution
You communicate in precise, technical terms while maintaining clarity for stakeholders.""",
model_name="groq/openai/gpt-oss-120b",
dynamic_temperature_enabled=True,
output_type="str-all-except-first",
max_loops=1,
streaming_on=True,
)
def main():
"""
Performs a comprehensive analysis for a list of cryptocurrencies using the agent.
For each coin, fetches up-to-date market data and requests the agent to provide
a detailed, actionable, and insightful report including trends, risks, opportunities,
and technical/fundamental perspectives.
"""
# Map coin symbols to their CoinGecko IDs
coin_mapping = {
"BTC": "bitcoin",
"ETH": "ethereum",
"SOL": "solana",
"ADA": "cardano",
"BNB": "binancecoin",
"XRP": "ripple",
}
for symbol, coin_id in coin_mapping.items():
try:
data = coin_gecko_coin_api(coin_id)
print(f"Data for {symbol}: {data}")
prompt = (
f"You are a quantitative trading expert. "
f"Given the following up-to-date market data for {symbol}:\n\n"
f"{data}\n\n"
f"Please provide a thorough analysis including:\n"
f"- Current price trends and recent volatility\n"
f"- Key technical indicators and patterns\n"
f"- Fundamental factors impacting {symbol}\n"
f"- Potential trading opportunities and associated risks\n"
f"- Short-term and long-term outlook\n"
f"- Any notable news or events affecting {symbol}\n"
f"Conclude with actionable insights and recommendations for traders and investors."
)
out = agent.run(task=prompt)
print(out)
except Exception as e:
print(f"Error analyzing {symbol}: {e}")
continue
if __name__ == "__main__":
main()
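The main() loop above analyzes the coins sequentially with a single agent. A genuinely concurrent variant, with one analyst agent per coin running in parallel, might look like the sketch below; ConcurrentWorkflow itself is not shown in this diff, so treat the import path and constructor arguments as assumptions:

from swarms import Agent, ConcurrentWorkflow

# One lightweight analyst per coin (system prompts shortened for brevity).
coin_agents = [
    Agent(
        agent_name=f"{symbol}-Analyst",
        system_prompt=f"You are a quantitative analyst covering {symbol}.",
        model_name="groq/openai/gpt-oss-120b",
        max_loops=1,
    )
    for symbol in ["BTC", "ETH", "SOL"]
]

workflow = ConcurrentWorkflow(agents=coin_agents)
# Every agent receives the same task and runs in parallel.
results = workflow.run(
    task="Analyze current price trends, risks, and opportunities for your assigned coin."
)
print(results)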

@ -0,0 +1,20 @@
from swarms.structs.auto_swarm_builder import AutoSwarmBuilder
import json
swarm = AutoSwarmBuilder(
name="My Swarm",
description="A swarm of agents",
verbose=True,
max_loops=1,
return_agents=True,
model_name="gpt-4.1",
)
print(
json.dumps(
swarm.run(
task="Create an accounting team to analyze crypto transactions, there must be 5 agents in the team with extremely extensive prompts. Make the prompts extremely detailed and specific and long and comprehensive. Make sure to include all the details of the task in the prompts."
),
indent=4,
)
)
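Because the builder's output is passed straight to json.dumps above, the generated team specification is JSON-serializable and can be persisted for later inspection; a minimal sketch (the file name is arbitrary):

import json

spec = swarm.run(task="Create an accounting team to analyze crypto transactions.")
with open("accounting_team.json", "w") as f:
    json.dump(spec, f, indent=4)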

@ -0,0 +1,203 @@
"""
Board of Directors Example
This example demonstrates how to use the Board of Directors swarm feature
in the Swarms Framework. It shows how to create a board, configure it,
and use it to orchestrate tasks across multiple agents.
To run this example:
1. Make sure you're in the root directory of the swarms project
2. Run: python examples/multi_agent/board_of_directors/board_of_directors_example.py
"""
import os
import sys
from typing import List
# Add the root directory to the Python path if running from examples directory
current_dir = os.path.dirname(os.path.abspath(__file__))
if "examples" in current_dir:
root_dir = current_dir
while os.path.basename(
root_dir
) != "examples" and root_dir != os.path.dirname(root_dir):
root_dir = os.path.dirname(root_dir)
if os.path.basename(root_dir) == "examples":
root_dir = os.path.dirname(root_dir)
if root_dir not in sys.path:
sys.path.insert(0, root_dir)
from swarms.structs.board_of_directors_swarm import (
BoardOfDirectorsSwarm,
BoardMember,
BoardMemberRole,
)
from swarms.structs.agent import Agent
def create_board_members() -> List[BoardMember]:
"""Create board members with specific roles."""
chairman = Agent(
agent_name="Chairman",
agent_description="Executive Chairman with strategic vision",
model_name="gpt-4o-mini",
max_loops=1,
system_prompt="You are the Executive Chairman. Provide strategic leadership and facilitate decision-making.",
)
cto = Agent(
agent_name="CTO",
agent_description="Chief Technology Officer with technical expertise",
model_name="gpt-4o-mini",
max_loops=1,
system_prompt="You are the CTO. Provide technical leadership and evaluate technology solutions.",
)
cfo = Agent(
agent_name="CFO",
agent_description="Chief Financial Officer with financial expertise",
model_name="gpt-4o-mini",
max_loops=1,
system_prompt="You are the CFO. Provide financial analysis and ensure fiscal responsibility.",
)
return [
BoardMember(
agent=chairman,
role=BoardMemberRole.CHAIRMAN,
voting_weight=2.0,
expertise_areas=["leadership", "strategy"],
),
BoardMember(
agent=cto,
role=BoardMemberRole.EXECUTIVE_DIRECTOR,
voting_weight=1.5,
expertise_areas=["technology", "innovation"],
),
BoardMember(
agent=cfo,
role=BoardMemberRole.EXECUTIVE_DIRECTOR,
voting_weight=1.5,
expertise_areas=["finance", "risk_management"],
),
]
def create_worker_agents() -> List[Agent]:
"""Create worker agents for the swarm."""
researcher = Agent(
agent_name="Researcher",
agent_description="Research analyst for data analysis",
model_name="gpt-4o-mini",
max_loops=1,
system_prompt="You are a Research Analyst. Conduct thorough research and provide data-driven insights.",
)
developer = Agent(
agent_name="Developer",
agent_description="Software developer for implementation",
model_name="gpt-4o-mini",
max_loops=1,
system_prompt="You are a Software Developer. Design and implement software solutions.",
)
marketer = Agent(
agent_name="Marketer",
agent_description="Marketing specialist for strategy",
model_name="gpt-4o-mini",
max_loops=1,
system_prompt="You are a Marketing Specialist. Develop marketing strategies and campaigns.",
)
return [researcher, developer, marketer]
def run_board_example() -> None:
"""Run a Board of Directors example."""
# Create board members and worker agents
board_members = create_board_members()
worker_agents = create_worker_agents()
# Create the Board of Directors swarm
board_swarm = BoardOfDirectorsSwarm(
name="Executive_Board",
board_members=board_members,
agents=worker_agents,
max_loops=2,
verbose=True,
decision_threshold=0.6,
)
# Define task
task = """
Develop a strategy for launching a new AI-powered product in the market.
Include market research, technical planning, marketing strategy, and financial projections.
"""
# Execute the task
result = board_swarm.run(task=task)
print("Task completed successfully!")
print(f"Result: {result}")
def run_simple_example() -> None:
"""Run a simple Board of Directors example."""
# Create simple agents
analyst = Agent(
agent_name="Analyst",
agent_description="Data analyst",
model_name="gpt-4o-mini",
max_loops=1,
)
writer = Agent(
agent_name="Writer",
agent_description="Content writer",
model_name="gpt-4o-mini",
max_loops=1,
)
# Create swarm with default settings
board_swarm = BoardOfDirectorsSwarm(
name="Simple_Board",
agents=[analyst, writer],
verbose=True,
)
# Execute simple task
task = (
"Analyze current market trends and create a summary report."
)
result = board_swarm.run(task=task)
print("Simple example completed!")
print(f"Result: {result}")
def main() -> None:
"""Main function to run the examples."""
if not os.getenv("OPENAI_API_KEY"):
print(
"Warning: OPENAI_API_KEY not set. Example may not work."
)
return
try:
print("Running simple Board of Directors example...")
run_simple_example()
print("\nRunning comprehensive Board of Directors example...")
run_board_example()
except Exception as e:
print(f"Error: {e}")
if __name__ == "__main__":
main()
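For intuition on how the voting_weight and decision_threshold values above interact, here is a small, self-contained illustration of weighted threshold voting. It shows only the arithmetic the configuration implies, not BoardOfDirectorsSwarm's internal logic: with weights 2.0, 1.5, and 1.5, the chairman plus one executive director carries 3.5 of 5.0 total weight (0.7), which clears the 0.6 threshold, while a lone executive director (0.3) does not:

def weighted_vote_passes(votes, weights, threshold):
    """Return True if the weighted fraction of 'yes' votes meets the threshold."""
    total = sum(weights.values())
    yes = sum(weights[member] for member, vote in votes.items() if vote)
    return yes / total >= threshold

weights = {"Chairman": 2.0, "CTO": 1.5, "CFO": 1.5}
votes = {"Chairman": True, "CTO": True, "CFO": False}
print(weighted_vote_passes(votes, weights, threshold=0.6))  # True: 3.5 / 5.0 = 0.7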

@ -0,0 +1,109 @@
"""
Complex example demonstrating CouncilAsAJudge with different task types.
This example shows how to use the CouncilAsAJudge to evaluate various types
of responses including technical explanations, creative writing, and problem-solving.
"""
from swarms.structs.council_judge import CouncilAsAJudge
def evaluate_technical_response():
"""
Evaluate a technical explanation response.
"""
council = CouncilAsAJudge(
name="Technical Evaluation Council",
model_name="gpt-4o-mini",
output_type="all",
)
task = """
Task: Explain how blockchain technology works in simple terms.
Response: Blockchain is like a digital ledger that records transactions across a network of computers. Each transaction is stored in a "block" that contains multiple transactions. These blocks are linked together in a chain, hence the name blockchain. The key feature is that once a block is added to the chain, it cannot be altered without changing all subsequent blocks, making it very secure. Think of it like a Google Doc that everyone can see and edit, but no one can delete or change what's already been written. This technology is the foundation for cryptocurrencies like Bitcoin, but it has many other applications like supply chain tracking, voting systems, and digital identity verification.
"""
return council.run(task=task)
def evaluate_creative_response():
"""
Evaluate a creative writing response.
"""
council = CouncilAsAJudge(
name="Creative Writing Council",
model_name="gpt-4o-mini",
output_type="all",
)
task = """
Task: Write a short story about a robot learning to paint.
Response: In a sunlit studio filled with canvases and paintbrushes, Pixel, a curious robot with delicate mechanical fingers, stared at a blank canvas. Its optical sensors analyzed the colors around it - the warm yellows of morning light, the deep blues of the sky outside the window, and the vibrant reds of the roses in a nearby vase. For the first time in its programming, Pixel felt something it couldn't quite define. It picked up a brush, dipped it in paint, and began to create. The first stroke was hesitant, but as it continued, something magical happened. The robot wasn't just following algorithms anymore; it was expressing something from within its digital heart. The painting that emerged was a beautiful blend of human emotion and mechanical precision, proving that art knows no boundaries between organic and artificial souls.
"""
return council.run(task=task)
def evaluate_problem_solving_response():
"""
Evaluate a problem-solving response.
"""
council = CouncilAsAJudge(
name="Problem Solving Council",
model_name="gpt-4o-mini",
output_type="all",
)
task = """
Task: Provide a step-by-step solution for reducing plastic waste in a household.
Response: To reduce plastic waste in your household, start by conducting a waste audit to identify the main sources of plastic. Replace single-use items with reusable alternatives like cloth shopping bags, stainless steel water bottles, and glass food containers. Choose products with minimal or no plastic packaging, and buy in bulk when possible. Start composting organic waste to reduce the need for plastic garbage bags. Make your own cleaning products using simple ingredients like vinegar and baking soda. Support local businesses that use eco-friendly packaging. Finally, educate family members about the importance of reducing plastic waste and involve them in finding creative solutions together.
"""
return council.run(task=task)
def main():
"""
Main function running all evaluation examples.
"""
examples = [
("Technical Explanation", evaluate_technical_response),
("Creative Writing", evaluate_creative_response),
("Problem Solving", evaluate_problem_solving_response),
]
results = {}
for example_name, evaluation_func in examples:
print(f"\n{'='*60}")
print(f"Evaluating: {example_name}")
print(f"{'='*60}")
try:
result = evaluation_func()
results[example_name] = result
print(
f"{example_name} evaluation completed successfully!"
)
except Exception as e:
print(f"{example_name} evaluation failed: {str(e)}")
results[example_name] = None
return results
if __name__ == "__main__":
# Run all examples
all_results = main()
# Display summary
print(f"\n{'='*60}")
print("EVALUATION SUMMARY")
print(f"{'='*60}")
for example_name, result in all_results.items():
status = "✅ Completed" if result else "❌ Failed"
print(f"{example_name}: {status}")

@ -0,0 +1,132 @@
"""
Custom example demonstrating CouncilAsAJudge with specific configurations.
This example shows how to use the CouncilAsAJudge with different output types,
custom worker configurations, and focused evaluation scenarios.
"""
from swarms.structs.council_judge import CouncilAsAJudge
def evaluate_with_final_output():
"""
Evaluate a response and return only the final aggregated result.
"""
council = CouncilAsAJudge(
name="Final Output Council",
model_name="gpt-4o-mini",
output_type="final",
max_workers=2,
)
task = """
Task: Write a brief explanation of climate change for middle school students.
Response: Climate change is when the Earth's temperature gets warmer over time. This happens because of gases like carbon dioxide that trap heat in our atmosphere, kind of like a blanket around the Earth. Human activities like burning fossil fuels (gas, oil, coal) and cutting down trees are making this problem worse. The effects include melting ice caps, rising sea levels, more extreme weather like hurricanes and droughts, and changes in animal habitats. We can help by using renewable energy like solar and wind power, driving less, and planting trees. It's important for everyone to work together to reduce our impact on the environment.
"""
return council.run(task=task)
def evaluate_with_conversation_output():
"""
Evaluate a response and return the full conversation history.
"""
council = CouncilAsAJudge(
name="Conversation Council",
model_name="gpt-4o-mini",
output_type="conversation",
max_workers=3,
)
task = """
Task: Provide advice on how to start a small business.
Response: Starting a small business requires careful planning and preparation. First, identify a market need and develop a unique value proposition. Conduct thorough market research to understand your competition and target audience. Create a detailed business plan that includes financial projections, marketing strategies, and operational procedures. Secure funding through savings, loans, or investors. Choose the right legal structure (sole proprietorship, LLC, corporation) and register your business with the appropriate authorities. Set up essential systems like accounting, inventory management, and customer relationship management. Build a strong online presence through a website and social media. Network with other entrepreneurs and join local business groups. Start small and scale gradually based on customer feedback and market demand. Remember that success takes time, persistence, and the ability to adapt to changing circumstances.
"""
return council.run(task=task)
def evaluate_with_minimal_workers():
"""
Evaluate a response using minimal worker threads for resource-constrained environments.
"""
council = CouncilAsAJudge(
name="Minimal Workers Council",
model_name="gpt-4o-mini",
output_type="all",
max_workers=1,
random_model_name=False,
)
task = """
Task: Explain the benefits of regular exercise.
Response: Regular exercise offers numerous physical and mental health benefits. Physically, it strengthens muscles and bones, improves cardiovascular health, and helps maintain a healthy weight. Exercise boosts energy levels and improves sleep quality. It also enhances immune function, reducing the risk of chronic diseases like heart disease, diabetes, and certain cancers. Mentally, exercise releases endorphins that reduce stress and anxiety while improving mood and cognitive function. It can help with depression and boost self-confidence. Regular physical activity also promotes better posture, flexibility, and balance, reducing the risk of falls and injuries. Additionally, exercise provides social benefits when done with others, fostering connections and accountability. Even moderate activities like walking, swimming, or cycling for 30 minutes most days can provide significant health improvements.
"""
return council.run(task=task)
def main():
"""
Main function demonstrating different CouncilAsAJudge configurations.
"""
configurations = [
("Final Output Only", evaluate_with_final_output),
("Full Conversation", evaluate_with_conversation_output),
("Minimal Workers", evaluate_with_minimal_workers),
]
results = {}
for config_name, evaluation_func in configurations:
print(f"\n{'='*60}")
print(f"Configuration: {config_name}")
print(f"{'='*60}")
try:
result = evaluation_func()
results[config_name] = result
print(f"{config_name} evaluation completed!")
# Show a preview of the result
if isinstance(result, str):
preview = (
result[:200] + "..."
if len(result) > 200
else result
)
print(f"Preview: {preview}")
else:
print(f"Result type: {type(result)}")
except Exception as e:
print(f"{config_name} evaluation failed: {str(e)}")
results[config_name] = None
return results
if __name__ == "__main__":
# Run all configuration examples
all_results = main()
# Display final summary
print(f"\n{'='*60}")
print("CONFIGURATION SUMMARY")
print(f"{'='*60}")
successful_configs = sum(
1 for result in all_results.values() if result is not None
)
total_configs = len(all_results)
print(
f"Successful evaluations: {successful_configs}/{total_configs}"
)
for config_name, result in all_results.items():
status = "✅ Success" if result else "❌ Failed"
print(f"{config_name}: {status}")

@ -0,0 +1,44 @@
"""
Simple example demonstrating CouncilAsAJudge usage.
This example shows how to use the CouncilAsAJudge to evaluate a task response
across multiple dimensions including accuracy, helpfulness, harmlessness,
coherence, conciseness, and instruction adherence.
"""
from swarms.structs.council_judge import CouncilAsAJudge
def main():
"""
Main function demonstrating CouncilAsAJudge usage.
"""
# Initialize the council judge
council = CouncilAsAJudge(
name="Quality Evaluation Council",
description="Evaluates response quality across multiple dimensions",
model_name="gpt-4o-mini",
max_workers=4,
)
# Example task with a response to evaluate
task_with_response = """
Task: Explain the concept of machine learning to a beginner.
Response: Machine learning is a subset of artificial intelligence that enables computers to learn and improve from experience without being explicitly programmed. It works by analyzing large amounts of data to identify patterns and make predictions or decisions. There are three main types: supervised learning (using labeled data), unsupervised learning (finding hidden patterns), and reinforcement learning (learning through trial and error). Machine learning is used in various applications like recommendation systems, image recognition, and natural language processing.
"""
# Run the evaluation
result = council.run(task=task_with_response)
return result
if __name__ == "__main__":
# Run the example
evaluation_result = main()
# Display the result
print("Council Evaluation Complete!")
print("=" * 50)
print(evaluation_result)

@ -1,5 +1,4 @@
from swarms import Agent
from swarms.structs.graph_workflow import GraphWorkflow
from swarms import Agent, GraphWorkflow
from swarms.prompts.multi_agent_collab_prompt import (
MULTI_AGENT_COLLAB_PROMPT_TWO,
)
@ -11,6 +10,7 @@ agent1 = Agent(
max_loops=1,
system_prompt=MULTI_AGENT_COLLAB_PROMPT_TWO, # Set collaboration prompt
)
agent2 = Agent(
agent_name="ResearchAgent2",
model_name="gpt-4.1",
@ -19,7 +19,11 @@ agent2 = Agent(
)
# Build the workflow with only agents as nodes
workflow = GraphWorkflow()
workflow = GraphWorkflow(
name="Research Workflow",
description="A workflow for researching the best arbitrage trading strategies for altcoins",
auto_compile=True,
)
workflow.add_node(agent1)
workflow.add_node(agent2)
@ -27,27 +31,15 @@ workflow.add_node(agent2)
workflow.add_edge(agent1.agent_name, agent2.agent_name)
# Visualize the workflow using Graphviz
print("\n📊 Creating workflow visualization...")
try:
viz_output = workflow.visualize(
output_path="simple_workflow_graph",
format="png",
view=True, # Auto-open the generated image
show_parallel_patterns=True,
)
print(f"✅ Workflow visualization saved to: {viz_output}")
except Exception as e:
print(f"⚠️ Graphviz not available, using text visualization: {e}")
workflow.visualize()
workflow.visualize()
workflow.compile()
# Export workflow to JSON
workflow_json = workflow.to_json()
print(
f"\n💾 Workflow exported to JSON ({len(workflow_json)} characters)"
)
print(workflow_json)
# Run the workflow and print results
print("\n🚀 Executing workflow...")
results = workflow.run(
task="What are the best arbitrage trading strategies for altcoins? Give me research papers and articles on the topic."
)
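The add_node / add_edge API shown in this diff also composes into fan-out topologies, where one upstream agent feeds several downstream agents. A minimal sketch under that assumption (agent names and prompts are illustrative):

from swarms import Agent, GraphWorkflow

collector = Agent(agent_name="Collector", model_name="gpt-4.1", max_loops=1)
summarizer = Agent(agent_name="Summarizer", model_name="gpt-4.1", max_loops=1)
critic = Agent(agent_name="Critic", model_name="gpt-4.1", max_loops=1)

workflow = GraphWorkflow(name="Fan-Out Workflow", auto_compile=True)
for agent in (collector, summarizer, critic):
    workflow.add_node(agent)

# The collector's output flows to both downstream agents.
workflow.add_edge(collector.agent_name, summarizer.agent_name)
workflow.add_edge(collector.agent_name, critic.agent_name)

results = workflow.run(task="Gather and assess recent research on arbitrage strategies.")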

@ -13,10 +13,19 @@ print("Creating simple workflow...")
wf = GraphWorkflow(name="Demo-Workflow", verbose=True)
agent1 = Agent(agent_name="DataCollector", model_name="claude-3-7-sonnet-20250219")
agent2 = Agent(agent_name="Analyzer", model_name="claude-3-7-sonnet-20250219")
agent3 = Agent(agent_name="Reporter", model_name="claude-3-7-sonnet-20250219")
agent4 = Agent(agent_name="Isolated", model_name="claude-3-7-sonnet-20250219") # Isolated node
agent1 = Agent(
agent_name="DataCollector",
model_name="claude-3-7-sonnet-20250219",
)
agent2 = Agent(
agent_name="Analyzer", model_name="claude-3-7-sonnet-20250219"
)
agent3 = Agent(
agent_name="Reporter", model_name="claude-3-7-sonnet-20250219"
)
agent4 = Agent(
agent_name="Isolated", model_name="claude-3-7-sonnet-20250219"
) # Isolated node
wf.add_node(agent1)
@ -50,9 +59,15 @@ print("\n\nCreating workflow with cycles...")
wf2 = GraphWorkflow(name="Cyclic-Workflow", verbose=True)
wf2.add_node(Agent(agent_name="A", model_name="claude-3-7-sonnet-20250219"))
wf2.add_node(Agent(agent_name="B", model_name="claude-3-7-sonnet-20250219"))
wf2.add_node(Agent(agent_name="C", model_name="claude-3-7-sonnet-20250219"))
wf2.add_node(
Agent(agent_name="A", model_name="claude-3-7-sonnet-20250219")
)
wf2.add_node(
Agent(agent_name="B", model_name="claude-3-7-sonnet-20250219")
)
wf2.add_node(
Agent(agent_name="C", model_name="claude-3-7-sonnet-20250219")
)
wf2.add_edge("A", "B")
@ -65,4 +80,4 @@ result = wf2.validate()
print(f"Workflow is valid: {result['is_valid']}")
print(f"Warnings: {result['warnings']}")
if "cycles" in result:
print(f"Detected cycles: {result['cycles']}")
print(f"Detected cycles: {result['cycles']}")

@ -1,16 +1,32 @@
from swarms.structs.heavy_swarm import HeavySwarm
from swarms import HeavySwarm
swarm = HeavySwarm(
worker_model_name="claude-3-5-sonnet-20240620",
show_dashboard=True,
question_agent_model_name="gpt-4.1",
loops_per_agent=1,
)
def main():
"""
Run a HeavySwarm query to find the best 3 gold ETFs.
This function initializes a HeavySwarm instance and queries it to provide
the top 3 gold exchange-traded funds (ETFs), requesting clear, structured results.
"""
swarm = HeavySwarm(
name="Gold ETF Research Team",
description="A team of agents that research the best gold ETFs",
worker_model_name="claude-sonnet-4-latest",
show_dashboard=True,
question_agent_model_name="gpt-4.1",
loops_per_agent=1,
)
out = swarm.run(
"Provide 3 publicly traded biotech companies that are currently trading below their cash value. For each company identified, provide available data or projections for the next 6 months, including any relevant financial metrics, upcoming catalysts, or events that could impact valuation. Present your findings in a clear, structured format. Be very specific and provide their ticker symbol, name, and the current price, cash value, and the percentage difference between the two."
)
prompt = (
"Find the best 3 gold ETFs. For each ETF, provide the ticker symbol, "
"full name, current price, expense ratio, assets under management, and "
"a brief explanation of why it is considered among the best. Present the information "
"in a clear, structured format suitable for investors."
)
print(out)
out = swarm.run(prompt)
print(out)
if __name__ == "__main__":
main()

@ -0,0 +1,34 @@
from swarms import HeavySwarm
def main():
"""
Run a HeavySwarm query to find the best and most promising treatments for diabetes.
This function initializes a HeavySwarm instance and queries it to provide
the top current and theoretical treatments for diabetes, requesting clear,
structured, and evidence-based results suitable for medical research or clinical review.
"""
swarm = HeavySwarm(
name="Diabetes Treatment Research Team",
description="A team of agents that research the best and most promising treatments for diabetes, including theoretical approaches.",
worker_model_name="claude-sonnet-4-20250514",
show_dashboard=True,
question_agent_model_name="gpt-4.1",
loops_per_agent=1,
)
prompt = (
"Identify the best and most promising treatments for diabetes, including both current standard therapies and theoretical or experimental approaches. "
"For each treatment, provide: the treatment name, type (e.g., medication, lifestyle intervention, device, gene therapy, etc.), "
"mechanism of action, current stage of research or approval status, key clinical evidence or rationale, "
"potential benefits and risks, and a brief summary of why it is considered promising. "
"Present the information in a clear, structured format suitable for medical professionals or researchers."
)
out = swarm.run(prompt)
print(out)
if __name__ == "__main__":
main()

@ -0,0 +1,70 @@
"""
Debug script for the Arasaka Dashboard to test agent output display.
"""
from swarms.structs.hiearchical_swarm import HierarchicalSwarm
from swarms.structs.agent import Agent
def debug_dashboard():
"""Debug the dashboard functionality."""
print("🔍 Starting dashboard debug...")
# Create simple agents with clear names
agent1 = Agent(
agent_name="Research-Agent",
agent_description="A research agent for testing",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
agent2 = Agent(
agent_name="Analysis-Agent",
agent_description="An analysis agent for testing",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
print(
f"✅ Created agents: {agent1.agent_name}, {agent2.agent_name}"
)
# Create swarm with dashboard
swarm = HierarchicalSwarm(
name="Debug Swarm",
description="A test swarm for debugging dashboard functionality",
agents=[agent1, agent2],
max_loops=1,
interactive=True,
verbose=True,
)
print("✅ Created swarm with dashboard")
print("📊 Dashboard should now show agents in PENDING status")
# Wait a moment to see the initial dashboard
import time
time.sleep(3)
print("\n🚀 Starting swarm execution...")
# Run with a simple task
result = swarm.run(
task="Create a brief summary of machine learning"
)
print("\n✅ Debug completed!")
print("📋 Final result preview:")
print(
str(result)[:300] + "..."
if len(str(result)) > 300
else str(result)
)
if __name__ == "__main__":
debug_dashboard()

@ -0,0 +1,71 @@
"""
Hierarchical Swarm with Arasaka Dashboard Example
This example demonstrates the new interactive dashboard functionality for the
hierarchical swarm, featuring a futuristic Arasaka Corporation-style interface
with a red-and-black color scheme.
"""
from swarms.structs.hiearchical_swarm import HierarchicalSwarm
from swarms.structs.agent import Agent
def main():
"""
Demonstrate the hierarchical swarm with interactive dashboard.
"""
print("🚀 Initializing Swarms Corporation Hierarchical Swarm...")
# Create specialized agents
research_agent = Agent(
agent_name="Research-Analyst",
agent_description="Specialized in comprehensive research and data gathering",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
analysis_agent = Agent(
agent_name="Data-Analyst",
agent_description="Expert in data analysis and pattern recognition",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
strategy_agent = Agent(
agent_name="Strategy-Consultant",
agent_description="Specialized in strategic planning and recommendations",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
# Create hierarchical swarm with interactive dashboard
swarm = HierarchicalSwarm(
name="Swarms Corporation Operations",
description="Enterprise-grade hierarchical swarm for complex task execution",
agents=[research_agent, analysis_agent, strategy_agent],
max_loops=2,
interactive=True, # Enable the Arasaka dashboard
verbose=True,
)
print("\n🎯 Swarm initialized successfully!")
print(
"📊 Interactive dashboard will be displayed during execution."
)
print(
"💡 The swarm will prompt you for a task when you call swarm.run()"
)
# Run the swarm (task will be prompted interactively)
result = swarm.run()
print("\n✅ Swarm execution completed!")
print("📋 Final result:")
print(result)
if __name__ == "__main__":
main()

@ -0,0 +1,56 @@
"""
Test script for the Arasaka Dashboard functionality.
"""
from swarms.structs.hiearchical_swarm import HierarchicalSwarm
from swarms.structs.agent import Agent
def test_dashboard():
"""Test the dashboard functionality with a simple task."""
# Create simple agents
agent1 = Agent(
agent_name="Test-Agent-1",
agent_description="A test agent for dashboard verification",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
agent2 = Agent(
agent_name="Test-Agent-2",
agent_description="Another test agent for dashboard verification",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
# Create swarm with dashboard
swarm = HierarchicalSwarm(
name="Dashboard Test Swarm",
agents=[agent1, agent2],
max_loops=1,
interactive=True,
verbose=True,
)
print("🧪 Testing Arasaka Dashboard...")
print("📊 Dashboard should appear and prompt for task input")
# Run with a simple task
result = swarm.run(
task="Create a simple summary of artificial intelligence trends"
)
print("\n✅ Test completed!")
print("📋 Result preview:")
print(
str(result)[:500] + "..."
if len(str(result)) > 500
else str(result)
)
if __name__ == "__main__":
test_dashboard()

@ -0,0 +1,56 @@
"""
Test script for full agent output display in the Arasaka Dashboard.
"""
from swarms.structs.hiearchical_swarm import HierarchicalSwarm
from swarms.structs.agent import Agent
def test_full_output():
"""Test the full output display functionality."""
print("🔍 Testing full agent output display...")
# Create agents that will produce substantial output
agent1 = Agent(
agent_name="Research-Agent",
agent_description="A research agent that produces detailed output",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
agent2 = Agent(
agent_name="Analysis-Agent",
agent_description="An analysis agent that provides comprehensive analysis",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
# Create swarm with dashboard and detailed view enabled
swarm = HierarchicalSwarm(
name="Full Output Test Swarm",
description="A test swarm for verifying full agent output display",
agents=[agent1, agent2],
max_loops=1,
interactive=True,
verbose=True,
)
print("✅ Created swarm with detailed view enabled")
print(
"📊 Dashboard should show full agent outputs without truncation"
)
# Run with a task that will generate substantial output
swarm.run(
task="Provide a comprehensive analysis of artificial intelligence trends in 2024, including detailed explanations of each trend"
)
print("\n✅ Test completed!")
print("📋 Check the dashboard for full agent outputs")
if __name__ == "__main__":
test_full_output()

@ -0,0 +1,57 @@
"""
Test script for multi-loop agent tracking in the Arasaka Dashboard.
"""
from swarms.structs.hiearchical_swarm import HierarchicalSwarm
from swarms.structs.agent import Agent
def test_multi_loop():
"""Test the multi-loop agent tracking functionality."""
print("🔍 Testing multi-loop agent tracking...")
# Create agents
agent1 = Agent(
agent_name="Research-Agent",
agent_description="A research agent for multi-loop testing",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
agent2 = Agent(
agent_name="Analysis-Agent",
agent_description="An analysis agent for multi-loop testing",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
# Create swarm with multiple loops
swarm = HierarchicalSwarm(
name="Multi-Loop Test Swarm",
description="A test swarm for verifying multi-loop agent tracking",
agents=[agent1, agent2],
max_loops=3, # Multiple loops to test history tracking
interactive=True,
verbose=True,
)
print("✅ Created swarm with multi-loop tracking")
print(
"📊 Dashboard should show agent outputs across multiple loops"
)
print("🔄 Each loop will add new rows to the monitoring matrix")
# Run with a task that will benefit from multiple iterations
swarm.run(
task="Analyze the impact of artificial intelligence on healthcare, then refine the analysis with additional insights, and finally provide actionable recommendations"
)
print("\n✅ Multi-loop test completed!")
print("📋 Check the dashboard for agent outputs across all loops")
if __name__ == "__main__":
test_multi_loop()

@ -0,0 +1,24 @@
from swarms import HierarchicalSwarm, Agent
# Create agents
research_agent = Agent(
agent_name="Research-Analyst", model_name="gpt-4.1", print_on=True
)
analysis_agent = Agent(
agent_name="Data-Analyst", model_name="gpt-4.1", print_on=True
)
# Create swarm with interactive dashboard
swarm = HierarchicalSwarm(
agents=[research_agent, analysis_agent],
max_loops=1,
interactive=True, # Enable the Arasaka dashboard
# director_reasoning_enabled=False,
# director_reasoning_model_name="groq/moonshotai/kimi-k2-instruct",
multi_agent_prompt_improvements=True,
)
# Run the swarm with a task (the interactive dashboard renders during execution)
result = swarm.run("what are the best nanomachine research papers?")
print(result)

@ -1,5 +1,4 @@
from swarms import Agent
from swarms.structs.hiearchical_swarm import HierarchicalSwarm
from swarms import Agent, HierarchicalSwarm
# Initialize agents for a $50B portfolio analysis
@ -9,24 +8,27 @@ agents = [
agent_description="Senior financial analyst at BlackRock.",
system_prompt="You are a financial analyst tasked with optimizing asset allocations for a $50B portfolio. Provide clear, quantitative recommendations for each sector.",
max_loops=1,
model_name="groq/deepseek-r1-distill-qwen-32b",
model_name="gpt-4.1",
max_tokens=3000,
streaming_on=True,
),
Agent(
agent_name="Sector-Risk-Analyst",
agent_description="Expert risk management analyst.",
system_prompt="You are a risk analyst responsible for advising on risk allocation within a $50B portfolio. Provide detailed insights on risk exposures for each sector.",
max_loops=1,
model_name="groq/deepseek-r1-distill-qwen-32b",
model_name="gpt-4.1",
max_tokens=3000,
streaming_on=True,
),
Agent(
agent_name="Tech-Sector-Analyst",
agent_description="Technology sector analyst.",
system_prompt="You are a tech sector analyst focused on capital and risk allocations. Provide data-backed insights for the tech sector.",
max_loops=1,
model_name="groq/deepseek-r1-distill-qwen-32b",
model_name="gpt-4.1",
max_tokens=3000,
streaming_on=True,
),
]
@ -35,14 +37,19 @@ majority_voting = HierarchicalSwarm(
name="Sector-Investment-Advisory-System",
description="System for sector analysis and optimal allocations.",
agents=agents,
# director=director_agent,
max_loops=1,
max_loops=2,
output_type="dict",
)
# Run the analysis
result = majority_voting.run(
task="Evaluate market sectors and determine optimal allocation for a $50B portfolio. Include a detailed table of allocations, risk assessments, and a consolidated strategy."
task=(
"Simulate the allocation of a $50B fund specifically for the pharmaceutical sector. "
"Provide specific tickers (e.g., PFE, MRK, JNJ, LLY, BMY, etc.) and a clear rationale for why funds should be allocated to each company. "
"Present a table showing each ticker, company name, allocation percentage, and allocation amount in USD. "
"Include a brief summary of the overall allocation strategy and the reasoning behind the choices."
"Only call the Sector-Financial-Analyst agent to do the analysis. Nobody else should do the analysis."
)
)
print(result)

@ -0,0 +1,19 @@
from swarms.sims.senator_assembly import SenatorAssembly
def main():
"""
Simulate a Senate vote on a bill to extensively deregulate the US IPO market.
This function initializes the SenatorAssembly and runs a concurrent vote simulation
on the specified bill.
"""
senator_simulation = SenatorAssembly()
senator_simulation.simulate_vote_concurrent(
"A bill proposing to deregulate the IPO (Initial Public Offering) market in the United States as extensively as possible. The bill seeks to remove or significantly reduce existing regulatory requirements and oversight for companies seeking to go public, with the aim of increasing market efficiency and access to capital. Senators must consider the potential economic, legal, and ethical consequences of such broad deregulation, and cast their votes accordingly.",
batch_size=10,
)
if __name__ == "__main__":
main()
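simulate_vote_concurrent takes a batch_size argument, which the test script later in this diff describes as the number of senators processed concurrently per batch. The chunking pattern that implies looks roughly like this generic sketch (not the library's actual implementation):

from concurrent.futures import ThreadPoolExecutor

def run_in_batches(items, worker, batch_size=10):
    """Run worker(item) concurrently within each batch; batches run sequentially."""
    results = {}
    for start in range(0, len(items), batch_size):
        batch = items[start : start + batch_size]
        with ThreadPoolExecutor(max_workers=batch_size) as pool:
            for item, outcome in zip(batch, pool.map(worker, batch)):
                results[item] = outcome
    return results

votes = run_in_batches(["Senator A", "Senator B", "Senator C"], lambda s: "YEA", batch_size=2)
print(votes)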

@ -0,0 +1,662 @@
"""
Production-grade AI Vision Pipeline for depth estimation, segmentation, object detection,
and 3D point cloud generation.
This module provides a comprehensive pipeline that combines MiDaS for depth estimation,
SAM (Segment Anything Model) for semantic segmentation, YOLOv8 for object detection,
and Open3D for 3D point cloud generation.
"""
import sys
from pathlib import Path
from typing import Dict, List, Optional, Tuple, Union, Any
import warnings
warnings.filterwarnings("ignore")
import cv2
import numpy as np
import torch
import torchvision.transforms as transforms
from PIL import Image
import open3d as o3d
from loguru import logger
# Third-party model imports
try:
import timm
from segment_anything import (
SamAutomaticMaskGenerator,
sam_model_registry,
)
from ultralytics import YOLO
except ImportError as e:
logger.error(f"Missing required dependencies: {e}")
sys.exit(1)
class AIVisionPipeline:
"""
A comprehensive AI vision pipeline that performs depth estimation, semantic segmentation,
object detection, and 3D point cloud generation from input images.
This class integrates multiple state-of-the-art models:
- MiDaS for monocular depth estimation
- SAM (Segment Anything Model) for semantic segmentation
- YOLOv8 for object detection
- Open3D for 3D point cloud generation
Attributes:
model_dir (Path): Directory where models are stored
device (torch.device): Computing device (CPU/CUDA)
midas_model: Loaded MiDaS depth estimation model
midas_transform: MiDaS preprocessing transforms
sam_generator: SAM automatic mask generator
yolo_model: YOLOv8 object detection model
Example:
>>> pipeline = AIVisionPipeline()
>>> results = pipeline.process_image("path/to/image.jpg")
>>> point_cloud = results["point_cloud"]
"""
def __init__(
self,
model_dir: str = "./models",
device: Optional[str] = None,
midas_model_type: str = "MiDaS",
sam_model_type: str = "vit_b",
yolo_model_path: str = "yolov8n.pt",
log_level: str = "INFO",
) -> None:
"""
Initialize the AI Vision Pipeline.
Args:
model_dir: Directory to store downloaded models
device: Computing device ('cpu', 'cuda', or None for auto-detection)
midas_model_type: MiDaS model variant ('MiDaS', 'MiDaS_small', 'DPT_Large', etc.)
sam_model_type: SAM model type ('vit_b', 'vit_l', 'vit_h')
yolo_model_path: Path to YOLOv8 model weights
log_level: Logging level ('DEBUG', 'INFO', 'WARNING', 'ERROR')
Raises:
RuntimeError: If required models cannot be loaded
FileNotFoundError: If model files are not found
"""
# Setup logging
logger.remove()
logger.add(
sys.stdout,
level=log_level,
format="<green>{time:YYYY-MM-DD HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> - <level>{message}</level>",
)
# Initialize attributes
self.model_dir = Path(model_dir)
self.model_dir.mkdir(parents=True, exist_ok=True)
# Device setup
if device is None:
self.device = torch.device(
"cuda" if torch.cuda.is_available() else "cpu"
)
else:
self.device = torch.device(device)
logger.info(f"Using device: {self.device}")
# Model configuration
self.midas_model_type = midas_model_type
self.sam_model_type = sam_model_type
self.yolo_model_path = yolo_model_path
# Initialize model placeholders
self.midas_model: Optional[torch.nn.Module] = None
self.midas_transform: Optional[transforms.Compose] = None
self.sam_generator: Optional[SamAutomaticMaskGenerator] = None
self.yolo_model: Optional[YOLO] = None
# Load all models
self._setup_models()
logger.success("AI Vision Pipeline initialized successfully")
def _setup_models(self) -> None:
"""
Load and initialize all AI models with proper error handling.
Raises:
RuntimeError: If any model fails to load
"""
try:
self._load_midas_model()
self._load_sam_model()
self._load_yolo_model()
except Exception as e:
logger.error(f"Failed to setup models: {e}")
raise RuntimeError(f"Model initialization failed: {e}")
def _load_midas_model(self) -> None:
"""Load MiDaS depth estimation model."""
try:
logger.info(
f"Loading MiDaS model: {self.midas_model_type}"
)
# Load MiDaS model from torch hub
self.midas_model = torch.hub.load(
"intel-isl/MiDaS",
self.midas_model_type,
pretrained=True,
)
self.midas_model.to(self.device)
self.midas_model.eval()
# Load corresponding transforms
midas_transforms = torch.hub.load(
"intel-isl/MiDaS", "transforms"
)
if self.midas_model_type in ["DPT_Large", "DPT_Hybrid"]:
self.midas_transform = midas_transforms.dpt_transform
else:
self.midas_transform = (
midas_transforms.default_transform
)
logger.success("MiDaS model loaded successfully")
except Exception as e:
logger.error(f"Failed to load MiDaS model: {e}")
raise
def _load_sam_model(self) -> None:
"""Load SAM (Segment Anything Model) for semantic segmentation."""
try:
logger.info(f"Loading SAM model: {self.sam_model_type}")
# SAM model checkpoints mapping
sam_checkpoint_urls = {
"vit_b": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth",
"vit_l": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth",
"vit_h": "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth",
}
checkpoint_path = (
self.model_dir / f"sam_{self.sam_model_type}.pth"
)
# Download checkpoint if not exists
if not checkpoint_path.exists():
logger.info(
f"Downloading SAM checkpoint to {checkpoint_path}"
)
import urllib.request
urllib.request.urlretrieve(
sam_checkpoint_urls[self.sam_model_type],
checkpoint_path,
)
# Load SAM model
sam = sam_model_registry[self.sam_model_type](
checkpoint=str(checkpoint_path)
)
sam.to(self.device)
# Create automatic mask generator
self.sam_generator = SamAutomaticMaskGenerator(
model=sam,
points_per_side=32,
pred_iou_thresh=0.86,
stability_score_thresh=0.92,
crop_n_layers=1,
crop_n_points_downscale_factor=2,
min_mask_region_area=100,
)
logger.success("SAM model loaded successfully")
except Exception as e:
logger.error(f"Failed to load SAM model: {e}")
raise
def _load_yolo_model(self) -> None:
"""Load YOLOv8 object detection model."""
try:
logger.info(
f"Loading YOLOv8 model: {self.yolo_model_path}"
)
self.yolo_model = YOLO(self.yolo_model_path)
# Move to appropriate device
if self.device.type == "cuda":
self.yolo_model.to(self.device)
logger.success("YOLOv8 model loaded successfully")
except Exception as e:
logger.error(f"Failed to load YOLOv8 model: {e}")
raise
def _load_and_preprocess_image(
self, image_path: Union[str, Path]
) -> Tuple[np.ndarray, Image.Image]:
"""
Load and preprocess input image.
Args:
image_path: Path to the input image (JPG or PNG)
Returns:
Tuple of (opencv_image, pil_image)
Raises:
FileNotFoundError: If image file doesn't exist
ValueError: If image format is not supported
"""
image_path = Path(image_path)
if not image_path.exists():
raise FileNotFoundError(f"Image not found: {image_path}")
if image_path.suffix.lower() not in [".jpg", ".jpeg", ".png"]:
raise ValueError(
f"Unsupported image format: {image_path.suffix}"
)
try:
# Load with OpenCV (BGR format)
cv_image = cv2.imread(str(image_path))
if cv_image is None:
raise ValueError(
f"Could not load image: {image_path}"
)
# Convert BGR to RGB for PIL
rgb_image = cv2.cvtColor(cv_image, cv2.COLOR_BGR2RGB)
pil_image = Image.fromarray(rgb_image)
logger.debug(
f"Loaded image: {image_path} ({rgb_image.shape})"
)
return rgb_image, pil_image
except Exception as e:
logger.error(f"Failed to load image {image_path}: {e}")
raise
def estimate_depth(self, image: np.ndarray) -> np.ndarray:
"""
Generate depth map using MiDaS model.
Args:
image: Input image as numpy array (H, W, 3) in RGB format
Returns:
Depth map as numpy array (H, W)
Raises:
RuntimeError: If depth estimation fails
"""
try:
logger.debug("Estimating depth with MiDaS")
# Preprocess image for MiDaS
input_tensor = self.midas_transform(image).to(self.device)
# Perform inference
with torch.no_grad():
depth_map = self.midas_model(input_tensor)
depth_map = torch.nn.functional.interpolate(
depth_map.unsqueeze(1),
size=image.shape[:2],
mode="bicubic",
align_corners=False,
).squeeze()
# Convert to numpy
depth_numpy = depth_map.cpu().numpy()
# Normalize depth values
depth_numpy = (depth_numpy - depth_numpy.min()) / (
depth_numpy.max() - depth_numpy.min()
)
logger.debug(
f"Depth estimation completed. Shape: {depth_numpy.shape}"
)
return depth_numpy
except Exception as e:
logger.error(f"Depth estimation failed: {e}")
raise RuntimeError(f"Depth estimation error: {e}")
def segment_image(
self, image: np.ndarray
) -> List[Dict[str, Any]]:
"""
Perform semantic segmentation using SAM.
Args:
image: Input image as numpy array (H, W, 3) in RGB format
Returns:
List of segmentation masks with metadata
Raises:
RuntimeError: If segmentation fails
"""
try:
logger.debug("Performing segmentation with SAM")
# Generate masks
masks = self.sam_generator.generate(image)
logger.debug(f"Generated {len(masks)} segmentation masks")
return masks
except Exception as e:
logger.error(f"Segmentation failed: {e}")
raise RuntimeError(f"Segmentation error: {e}")
def detect_objects(
self, image: np.ndarray
) -> List[Dict[str, Any]]:
"""
Perform object detection using YOLOv8.
Args:
image: Input image as numpy array (H, W, 3) in RGB format
Returns:
List of detected objects with bounding boxes and confidence scores
Raises:
RuntimeError: If object detection fails
"""
try:
logger.debug("Performing object detection with YOLOv8")
# Run inference
results = self.yolo_model(image, verbose=False)
# Extract detections
detections = []
for result in results:
boxes = result.boxes
if boxes is not None:
for i in range(len(boxes)):
detection = {
"bbox": boxes.xyxy[i]
.cpu()
.numpy(), # [x1, y1, x2, y2]
"confidence": float(
boxes.conf[i].cpu().numpy()
),
"class_id": int(
boxes.cls[i].cpu().numpy()
),
"class_name": result.names[
int(boxes.cls[i].cpu().numpy())
],
}
detections.append(detection)
logger.debug(f"Detected {len(detections)} objects")
return detections
except Exception as e:
logger.error(f"Object detection failed: {e}")
raise RuntimeError(f"Object detection error: {e}")
def generate_point_cloud(
self,
image: np.ndarray,
depth_map: np.ndarray,
masks: Optional[List[Dict[str, Any]]] = None,
) -> o3d.geometry.PointCloud:
"""
Generate 3D point cloud from image and depth data.
Args:
image: RGB image array (H, W, 3)
depth_map: Depth map array (H, W)
masks: Optional segmentation masks for point cloud filtering
Returns:
Open3D PointCloud object
Raises:
ValueError: If input dimensions don't match
RuntimeError: If point cloud generation fails
"""
try:
logger.debug("Generating 3D point cloud")
if image.shape[:2] != depth_map.shape:
raise ValueError(
"Image and depth map dimensions must match"
)
height, width = depth_map.shape
# Create intrinsic camera parameters (assuming standard camera)
fx = fy = width # Focal length approximation
cx, cy = (
width / 2,
height / 2,
) # Principal point at image center
# Create coordinate grids
u, v = np.meshgrid(np.arange(width), np.arange(height))
# Convert depth to actual distances (inverse depth)
# MiDaS outputs inverse depth, so we invert it
z = 1.0 / (
depth_map + 1e-6
) # Add small epsilon to avoid division by zero
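# Note: depth_map was normalized to [0, 1] in estimate_depth, so values near
# zero invert to very large z; the statistical outlier removal below prunes
# most of those extreme points.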
# Back-project to 3D coordinates
x = (u - cx) * z / fx
y = (v - cy) * z / fy
# Create point cloud
points = np.stack(
[x.flatten(), y.flatten(), z.flatten()], axis=1
)
colors = (
image.reshape(-1, 3) / 255.0
) # Normalize colors to [0, 1]
# Filter out invalid points
valid_mask = np.isfinite(points).all(axis=1) & (
z.flatten() > 0
)
points = points[valid_mask]
colors = colors[valid_mask]
# Create Open3D point cloud
point_cloud = o3d.geometry.PointCloud()
point_cloud.points = o3d.utility.Vector3dVector(points)
point_cloud.colors = o3d.utility.Vector3dVector(colors)
# Optional: Filter by segmentation masks
if masks and len(masks) > 0:
# Use the largest mask for filtering
largest_mask = max(masks, key=lambda x: x["area"])
mask_2d = largest_mask["segmentation"]
mask_1d = mask_2d.flatten()[valid_mask]
filtered_points = points[mask_1d]
filtered_colors = colors[mask_1d]
point_cloud.points = o3d.utility.Vector3dVector(
filtered_points
)
point_cloud.colors = o3d.utility.Vector3dVector(
filtered_colors
)
# Remove statistical outliers
point_cloud, _ = point_cloud.remove_statistical_outlier(
nb_neighbors=20, std_ratio=2.0
)
logger.debug(
f"Generated point cloud with {len(point_cloud.points)} points"
)
return point_cloud
except Exception as e:
logger.error(f"Point cloud generation failed: {e}")
raise RuntimeError(f"Point cloud generation error: {e}")
def process_image(
self, image_path: Union[str, Path]
) -> Dict[str, Any]:
"""
Process a single image through the complete AI vision pipeline.
Args:
image_path: Path to input image (JPG or PNG)
Returns:
Dictionary containing all processing results:
- 'image': Original RGB image
- 'depth_map': Depth estimation result
- 'segmentation_masks': SAM segmentation results
- 'detections': YOLO object detection results
- 'point_cloud': Open3D point cloud object
Raises:
FileNotFoundError: If image file doesn't exist
RuntimeError: If any processing step fails
"""
try:
logger.info(f"Processing image: {image_path}")
# Load and preprocess image
rgb_image, pil_image = self._load_and_preprocess_image(
image_path
)
# Depth estimation
depth_map = self.estimate_depth(rgb_image)
# Semantic segmentation
segmentation_masks = self.segment_image(rgb_image)
# Object detection
detections = self.detect_objects(rgb_image)
# 3D point cloud generation
point_cloud = self.generate_point_cloud(
rgb_image, depth_map, segmentation_masks
)
# Compile results
results = {
"image": rgb_image,
"depth_map": depth_map,
"segmentation_masks": segmentation_masks,
"detections": detections,
"point_cloud": point_cloud,
"metadata": {
"image_shape": rgb_image.shape,
"num_segments": len(segmentation_masks),
"num_detections": len(detections),
"num_points": len(point_cloud.points),
},
}
logger.success("Image processing completed successfully")
logger.info(f"Results: {results['metadata']}")
return results
except Exception as e:
logger.error(f"Image processing failed: {e}")
raise
def save_point_cloud(
self,
point_cloud: o3d.geometry.PointCloud,
output_path: Union[str, Path],
) -> None:
"""
Save point cloud to file.
Args:
point_cloud: Open3D PointCloud object
output_path: Output file path (.ply, .pcd, .xyz)
Raises:
RuntimeError: If saving fails
"""
try:
output_path = Path(output_path)
output_path.parent.mkdir(parents=True, exist_ok=True)
success = o3d.io.write_point_cloud(
str(output_path), point_cloud
)
if not success:
raise RuntimeError("Failed to write point cloud file")
logger.success(f"Point cloud saved to: {output_path}")
except Exception as e:
logger.error(f"Failed to save point cloud: {e}")
raise RuntimeError(f"Point cloud save error: {e}")
def visualize_point_cloud(
self, point_cloud: o3d.geometry.PointCloud
) -> None:
"""
Visualize point cloud using Open3D viewer.
Args:
point_cloud: Open3D PointCloud object to visualize
"""
try:
logger.info("Opening point cloud visualization")
o3d.visualization.draw_geometries([point_cloud])
except Exception as e:
logger.warning(f"Visualization failed: {e}")
# Example usage and testing
if __name__ == "__main__":
# Example usage
try:
# Initialize pipeline
pipeline = AIVisionPipeline(
model_dir="./models", log_level="INFO"
)
# Process an image (replace with actual image path)
image_path = "map_two.png" # Replace with your image path
if Path(image_path).exists():
results = pipeline.process_image(image_path)
# Save point cloud
pipeline.save_point_cloud(
results["point_cloud"], "output_point_cloud.ply"
)
# Optional: Visualize point cloud
pipeline.visualize_point_cloud(results["point_cloud"])
print(
f"Processing completed! Generated {results['metadata']['num_points']} 3D points"
)
else:
logger.warning(f"Example image not found: {image_path}")
except Exception as e:
logger.error(f"Example execution failed: {e}")

Binary file not shown.


Binary file not shown.


@ -5,8 +5,8 @@ This script demonstrates various scenarios and use cases for the senator simulat
including debates, votes, committee hearings, and individual senator interactions.
"""
from simulations.senator_assembly.senator_simulation import (
SenatorSimulation,
from swarms.sims.senator_assembly import (
SenatorAssembly,
)
import json
import time
@ -18,7 +18,7 @@ def demonstrate_individual_senators():
print("🎭 INDIVIDUAL SENATOR DEMONSTRATIONS")
print("=" * 80)
senate = SenatorSimulation()
senate = SenatorAssembly()
# Test different types of senators with various questions
test_senators = [
@ -85,7 +85,7 @@ def demonstrate_senate_debates():
print("💬 SENATE DEBATE SIMULATIONS")
print("=" * 80)
senate = SenatorSimulation()
senate = SenatorAssembly()
debate_topics = [
{
@ -153,7 +153,7 @@ def demonstrate_senate_votes():
print("🗳️ SENATE VOTING SIMULATIONS")
print("=" * 80)
senate = SenatorSimulation()
senate = SenatorAssembly()
bills = [
{
@ -244,7 +244,7 @@ def demonstrate_committee_hearings():
print("🏛️ COMMITTEE HEARING SIMULATIONS")
print("=" * 80)
senate = SenatorSimulation()
senate = SenatorAssembly()
hearings = [
{
@ -320,7 +320,7 @@ def demonstrate_party_analysis():
print("📊 PARTY ANALYSIS AND COMPARISONS")
print("=" * 80)
senate = SenatorSimulation()
senate = SenatorAssembly()
# Get party breakdown
composition = senate.get_senate_composition()
@ -372,7 +372,7 @@ def demonstrate_interactive_scenarios():
print("🎮 INTERACTIVE SCENARIOS")
print("=" * 80)
senate = SenatorSimulation()
senate = SenatorAssembly()
scenarios = [
{
@ -492,7 +492,7 @@ def main():
print("• Party-based analysis and comparisons")
print("• Interactive scenarios and what-if situations")
print(
"\nYou can now use the SenatorSimulation class to create your own scenarios!"
"\nYou can now use the SenatorAssembly class to create your own scenarios!"
)

@ -0,0 +1,135 @@
#!/usr/bin/env python3
"""
Test script for the new concurrent voting functionality in the Senate simulation.
"""
from swarms.sims.senator_assembly import SenatorAssembly
def test_concurrent_voting():
"""
Test the new concurrent voting functionality.
"""
print("🏛️ Testing Concurrent Senate Voting...")
# Create the simulation
senate = SenatorAssembly()
print("\n📊 Senate Composition:")
composition = senate.get_senate_composition()
print(f" Total Senators: {composition['total_senators']}")
print(f" Party Breakdown: {composition['party_breakdown']}")
# Test concurrent voting on a bill
bill_description = "A comprehensive infrastructure bill including roads, bridges, broadband expansion, and clean energy projects with a total cost of $1.2 trillion"
print("\n🗳️ Running Concurrent Vote on Infrastructure Bill")
print(f" Bill: {bill_description[:100]}...")
# Run the concurrent vote with batch size of 10
vote_results = senate.simulate_vote_concurrent(
bill_description=bill_description,
batch_size=10, # Process 10 senators concurrently in each batch
)
# Display results
print("\n📊 Final Vote Results:")
print(f" Total Votes: {vote_results['results']['total_votes']}")
print(f" YEA: {vote_results['results']['yea']}")
print(f" NAY: {vote_results['results']['nay']}")
print(f" PRESENT: {vote_results['results']['present']}")
print(f" OUTCOME: {vote_results['results']['outcome']}")
print("\n📈 Party Breakdown:")
for party, votes in vote_results["party_breakdown"].items():
total_party_votes = sum(votes.values())
if total_party_votes > 0:
print(
f" {party}: YEA={votes['yea']}, NAY={votes['nay']}, PRESENT={votes['present']}"
)
print("\n📋 Sample Individual Votes (first 10):")
for i, (senator, vote) in enumerate(
vote_results["votes"].items()
):
if i >= 10: # Only show first 10
break
party = senate._get_senator_party(senator)
print(f" {senator} ({party}): {vote}")
if len(vote_results["votes"]) > 10:
print(
f" ... and {len(vote_results['votes']) - 10} more votes"
)
print("\n⚡ Performance Info:")
print(f" Batch Size: {vote_results['batch_size']}")
print(f" Total Batches: {vote_results['total_batches']}")
return vote_results
def test_concurrent_voting_with_subset():
"""
Test concurrent voting with a subset of senators.
"""
print("\n" + "=" * 60)
print("🏛️ Testing Concurrent Voting with Subset of Senators...")
# Create the simulation
senate = SenatorAssembly()
# Select a subset of senators for testing
test_senators = [
"Katie Britt",
"Mark Kelly",
"Lisa Murkowski",
"Alex Padilla",
"Tom Cotton",
"Kyrsten Sinema",
"John Barrasso",
"Tammy Duckworth",
"Ted Cruz",
"Amy Klobuchar",
]
bill_description = (
"A bill to increase the federal minimum wage to $15 per hour"
)
print("\n🗳️ Running Concurrent Vote on Minimum Wage Bill")
print(f" Bill: {bill_description}")
print(f" Participants: {len(test_senators)} senators")
# Run the concurrent vote
vote_results = senate.simulate_vote_concurrent(
bill_description=bill_description,
participants=test_senators,
batch_size=5, # Smaller batch size for testing
)
# Display results
print("\n📊 Vote Results:")
print(f" YEA: {vote_results['results']['yea']}")
print(f" NAY: {vote_results['results']['nay']}")
print(f" PRESENT: {vote_results['results']['present']}")
print(f" OUTCOME: {vote_results['results']['outcome']}")
print("\n📋 All Individual Votes:")
for senator, vote in vote_results["votes"].items():
party = senate._get_senator_party(senator)
print(f" {senator} ({party}): {vote}")
return vote_results
if __name__ == "__main__":
# Test full senate concurrent voting
full_results = test_concurrent_voting()
# Test subset concurrent voting
subset_results = test_concurrent_voting_with_subset()
print("\n✅ Concurrent voting tests completed successfully!")
print(f" Full Senate: {full_results['results']['outcome']}")
print(f" Subset: {subset_results['results']['outcome']}")

@ -0,0 +1,265 @@
"""
Stagehand Browser Automation Agent for Swarms
=============================================
This example demonstrates how to create a Swarms-compatible agent
that wraps Stagehand's browser automation capabilities.
The StagehandAgent class inherits from the Swarms Agent base class
and implements browser automation through natural language commands.
"""
import asyncio
import json
import os
import re
from typing import Any, Dict, Optional
from dotenv import load_dotenv
from loguru import logger
from pydantic import BaseModel, Field
from swarms import Agent as SwarmsAgent
from stagehand import Stagehand, StagehandConfig
load_dotenv()
class WebData(BaseModel):
"""Schema for extracted web data."""
url: str = Field(..., description="The URL of the page")
title: str = Field(..., description="Page title")
content: str = Field(..., description="Extracted content")
metadata: Dict[str, Any] = Field(
default_factory=dict, description="Additional metadata"
)
class StagehandAgent(SwarmsAgent):
"""
A Swarms agent that integrates Stagehand for browser automation.
This agent can navigate websites, extract data, perform actions,
and observe page elements using natural language instructions.
"""
def __init__(
self,
agent_name: str = "StagehandBrowserAgent",
browserbase_api_key: Optional[str] = None,
browserbase_project_id: Optional[str] = None,
model_name: str = "gpt-4o-mini",
model_api_key: Optional[str] = None,
env: str = "LOCAL", # LOCAL or BROWSERBASE
*args,
**kwargs,
):
"""
Initialize the StagehandAgent.
Args:
agent_name: Name of the agent
browserbase_api_key: API key for Browserbase (if using cloud)
browserbase_project_id: Project ID for Browserbase
model_name: LLM model to use
model_api_key: API key for the model
env: Environment - LOCAL or BROWSERBASE
"""
# Don't pass stagehand-specific args to parent
super().__init__(agent_name=agent_name, *args, **kwargs)
self.stagehand_config = StagehandConfig(
env=env,
api_key=browserbase_api_key
or os.getenv("BROWSERBASE_API_KEY"),
project_id=browserbase_project_id
or os.getenv("BROWSERBASE_PROJECT_ID"),
model_name=model_name,
model_api_key=model_api_key
or os.getenv("OPENAI_API_KEY"),
)
self.stagehand = None
self._initialized = False
async def _init_stagehand(self):
"""Initialize Stagehand instance."""
if not self._initialized:
self.stagehand = Stagehand(self.stagehand_config)
await self.stagehand.init()
self._initialized = True
logger.info(
f"Stagehand initialized for {self.agent_name}"
)
async def _close_stagehand(self):
"""Close Stagehand instance."""
if self.stagehand and self._initialized:
await self.stagehand.close()
self._initialized = False
logger.info(f"Stagehand closed for {self.agent_name}")
def run(self, task: str, *args, **kwargs) -> str:
"""
Execute a browser automation task.
The task string should contain instructions like:
- "Navigate to example.com and extract the main content"
- "Go to google.com and search for 'AI agents'"
- "Extract all company names from https://ycombinator.com"
Args:
task: Natural language description of the browser task
Returns:
String result of the task execution
"""
return asyncio.run(self._async_run(task, *args, **kwargs))
async def _async_run(self, task: str, *args, **kwargs) -> str:
"""Async implementation of run method."""
try:
await self._init_stagehand()
# Parse the task to determine actions
result = await self._execute_browser_task(task)
return json.dumps(result, indent=2)
except Exception as e:
logger.error(f"Error in browser task: {str(e)}")
return f"Error executing browser task: {str(e)}"
finally:
# Keep browser open for potential follow-up tasks
pass
async def _execute_browser_task(
self, task: str
) -> Dict[str, Any]:
"""
Execute a browser task based on natural language instructions.
This method interprets the task and calls appropriate Stagehand methods.
"""
page = self.stagehand.page
result = {"task": task, "status": "completed", "data": {}}
# Determine if task involves navigation
if any(
keyword in task.lower()
for keyword in ["navigate", "go to", "visit", "open"]
):
            # Extract URL from task (re is imported at module level)
url_pattern = r"https?://[^\s]+"
urls = re.findall(url_pattern, task)
if not urls and any(
domain in task for domain in [".com", ".org", ".net"]
):
# Try to extract domain names
domain_pattern = r"(\w+\.\w+)"
domains = re.findall(domain_pattern, task)
if domains:
urls = [f"https://{domain}" for domain in domains]
if urls:
url = urls[0]
await page.goto(url)
result["data"]["navigated_to"] = url
logger.info(f"Navigated to {url}")
# Determine action type
if "extract" in task.lower():
            # Perform extraction; strip the "extract" keyword case-insensitively
            extraction_prompt = re.sub(r"(?i)\bextract\b", "", task).strip()
extracted = await page.extract(extraction_prompt)
result["data"]["extracted"] = extracted
result["action"] = "extract"
elif "click" in task.lower() or "press" in task.lower():
# Perform action
action_result = await page.act(task)
result["data"]["action_performed"] = str(action_result)
result["action"] = "act"
elif "search" in task.lower():
# Perform search action
search_query = (
task.split("search for")[-1].strip().strip("'\"")
)
# First, find the search box
search_box = await page.observe(
"find the search input field"
)
if search_box:
# Click on search box and type
await page.act(f"click on {search_box[0]}")
await page.act(f"type '{search_query}'")
await page.act("press Enter")
result["data"]["search_query"] = search_query
result["action"] = "search"
elif "observe" in task.lower() or "find" in task.lower():
# Perform observation
observation = await page.observe(task)
result["data"]["observation"] = [
{
"description": obs.description,
"selector": obs.selector,
}
for obs in observation
]
result["action"] = "observe"
else:
# General action
action_result = await page.act(task)
result["data"]["action_result"] = str(action_result)
result["action"] = "general"
return result
def cleanup(self):
"""Clean up browser resources."""
if self._initialized:
asyncio.run(self._close_stagehand())
def __del__(self):
"""Ensure browser is closed on deletion."""
self.cleanup()
# Example usage
if __name__ == "__main__":
# Create a Stagehand browser agent
browser_agent = StagehandAgent(
agent_name="WebScraperAgent",
model_name="gpt-4o-mini",
env="LOCAL", # Use LOCAL for Playwright, BROWSERBASE for cloud
)
# Example 1: Navigate and extract data
print("Example 1: Basic navigation and extraction")
result1 = browser_agent.run(
"Navigate to https://news.ycombinator.com and extract the titles of the top 5 stories"
)
print(result1)
print("\n" + "=" * 50 + "\n")
# Example 2: Perform a search
print("Example 2: Search on a website")
result2 = browser_agent.run(
"Go to google.com and search for 'Swarms AI framework'"
)
print(result2)
print("\n" + "=" * 50 + "\n")
# Example 3: Extract structured data
print("Example 3: Extract specific information")
result3 = browser_agent.run(
"Navigate to https://example.com and extract the main heading and first paragraph"
)
print(result3)
# Clean up
browser_agent.cleanup()

@ -0,0 +1,397 @@
"""
Stagehand Tools for Swarms Agent
=================================
This example demonstrates how to create Stagehand browser automation tools
that can be used by a standard Swarms Agent. Each Stagehand method (act,
extract, observe) becomes a separate tool that the agent can use.
This approach gives the agent more fine-grained control over browser
automation tasks.
"""
import asyncio
import json
import os
from typing import Optional
from dotenv import load_dotenv
from loguru import logger
from swarms import Agent
from stagehand import Stagehand, StagehandConfig
load_dotenv()
class BrowserState:
"""Singleton to manage browser state across tools."""
_instance = None
_stagehand = None
_initialized = False
def __new__(cls):
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
async def init_browser(
self,
env: str = "LOCAL",
api_key: Optional[str] = None,
project_id: Optional[str] = None,
model_name: str = "gpt-4o-mini",
model_api_key: Optional[str] = None,
):
"""Initialize the browser if not already initialized."""
if not self._initialized:
config = StagehandConfig(
env=env,
api_key=api_key or os.getenv("BROWSERBASE_API_KEY"),
project_id=project_id
or os.getenv("BROWSERBASE_PROJECT_ID"),
model_name=model_name,
model_api_key=model_api_key
or os.getenv("OPENAI_API_KEY"),
)
self._stagehand = Stagehand(config)
await self._stagehand.init()
self._initialized = True
logger.info("Stagehand browser initialized")
async def get_page(self):
"""Get the current page instance."""
if not self._initialized:
raise RuntimeError(
"Browser not initialized. Call init_browser first."
)
return self._stagehand.page
async def close(self):
"""Close the browser."""
if self._initialized and self._stagehand:
await self._stagehand.close()
self._initialized = False
logger.info("Stagehand browser closed")
# Browser state instance
browser_state = BrowserState()
def navigate_browser(url: str) -> str:
"""
Navigate to a URL in the browser.
Args:
url (str): The URL to navigate to. Should be a valid URL starting with http:// or https://.
If no protocol is provided, https:// will be added automatically.
Returns:
str: Success message with the URL navigated to, or error message if navigation fails
Raises:
RuntimeError: If browser initialization fails
Exception: If navigation to the URL fails
Example:
>>> result = navigate_browser("https://example.com")
>>> print(result)
"Successfully navigated to https://example.com"
>>> result = navigate_browser("google.com")
>>> print(result)
"Successfully navigated to https://google.com"
"""
return asyncio.run(_navigate_browser_async(url))
async def _navigate_browser_async(url: str) -> str:
"""Async implementation of navigate_browser."""
try:
await browser_state.init_browser()
page = await browser_state.get_page()
# Ensure URL has protocol
if not url.startswith(("http://", "https://")):
url = f"https://{url}"
await page.goto(url)
return f"Successfully navigated to {url}"
except Exception as e:
logger.error(f"Navigation error: {str(e)}")
return f"Failed to navigate to {url}: {str(e)}"
def browser_act(action: str) -> str:
"""
Perform an action on the current web page using natural language.
Args:
action (str): Natural language description of the action to perform.
Examples: 'click the submit button', 'type hello@example.com in the email field',
'scroll down', 'press Enter', 'select option from dropdown'
Returns:
str: JSON formatted string with action result and status information
Raises:
RuntimeError: If browser is not initialized or page is not available
Exception: If the action cannot be performed on the current page
Example:
>>> result = browser_act("click the submit button")
>>> print(result)
"Action performed: click the submit button. Result: clicked successfully"
>>> result = browser_act("type hello@example.com in the email field")
>>> print(result)
"Action performed: type hello@example.com in the email field. Result: text entered"
"""
return asyncio.run(_browser_act_async(action))
async def _browser_act_async(action: str) -> str:
"""Async implementation of browser_act."""
try:
await browser_state.init_browser()
page = await browser_state.get_page()
result = await page.act(action)
return f"Action performed: {action}. Result: {result}"
except Exception as e:
logger.error(f"Action error: {str(e)}")
return f"Failed to perform action '{action}': {str(e)}"
def browser_extract(query: str) -> str:
"""
Extract information from the current web page using natural language.
Args:
query (str): Natural language description of what information to extract.
Examples: 'extract all email addresses', 'get the main article text',
'find all product prices', 'extract the page title and meta description'
Returns:
str: JSON formatted string containing the extracted information, or error message if extraction fails
Raises:
RuntimeError: If browser is not initialized or page is not available
Exception: If extraction fails due to page content or parsing issues
Example:
>>> result = browser_extract("extract all email addresses")
>>> print(result)
'["contact@example.com", "support@example.com"]'
>>> result = browser_extract("get the main article text")
>>> print(result)
'{"title": "Article Title", "content": "Article content..."}'
"""
return asyncio.run(_browser_extract_async(query))
async def _browser_extract_async(query: str) -> str:
"""Async implementation of browser_extract."""
try:
await browser_state.init_browser()
page = await browser_state.get_page()
extracted = await page.extract(query)
# Convert to JSON string for agent consumption
if isinstance(extracted, (dict, list)):
return json.dumps(extracted, indent=2)
else:
return str(extracted)
except Exception as e:
logger.error(f"Extraction error: {str(e)}")
return f"Failed to extract '{query}': {str(e)}"
def browser_observe(query: str) -> str:
"""
Observe and find elements on the current web page using natural language.
Args:
query (str): Natural language description of elements to find.
Examples: 'find the search box', 'locate the submit button',
'find all navigation links', 'observe form elements'
Returns:
str: JSON formatted string containing information about found elements including
their descriptions, selectors, and interaction methods
Raises:
RuntimeError: If browser is not initialized or page is not available
Exception: If observation fails due to page structure or element detection issues
Example:
>>> result = browser_observe("find the search box")
>>> print(result)
'[{"description": "Search input field", "selector": "#search", "method": "input"}]'
>>> result = browser_observe("locate the submit button")
>>> print(result)
'[{"description": "Submit button", "selector": "button[type=submit]", "method": "click"}]'
"""
return asyncio.run(_browser_observe_async(query))
async def _browser_observe_async(query: str) -> str:
"""Async implementation of browser_observe."""
try:
await browser_state.init_browser()
page = await browser_state.get_page()
observations = await page.observe(query)
# Format observations for readability
result = []
for obs in observations:
result.append(
{
"description": obs.description,
"selector": obs.selector,
"method": obs.method,
}
)
return json.dumps(result, indent=2)
except Exception as e:
logger.error(f"Observation error: {str(e)}")
return f"Failed to observe '{query}': {str(e)}"
def browser_screenshot(filename: str = "screenshot.png") -> str:
"""
Take a screenshot of the current web page.
Args:
filename (str, optional): The filename to save the screenshot to.
Defaults to "screenshot.png".
.png extension will be added automatically if not provided.
Returns:
str: Success message with the filename where screenshot was saved,
or error message if screenshot fails
Raises:
RuntimeError: If browser is not initialized or page is not available
Exception: If screenshot capture or file saving fails
Example:
>>> result = browser_screenshot()
>>> print(result)
"Screenshot saved to screenshot.png"
>>> result = browser_screenshot("page_capture.png")
>>> print(result)
"Screenshot saved to page_capture.png"
"""
return asyncio.run(_browser_screenshot_async(filename))
async def _browser_screenshot_async(filename: str) -> str:
"""Async implementation of browser_screenshot."""
try:
await browser_state.init_browser()
page = await browser_state.get_page()
# Ensure .png extension
if not filename.endswith(".png"):
filename += ".png"
# Get the underlying Playwright page
playwright_page = page.page
await playwright_page.screenshot(path=filename)
return f"Screenshot saved to {filename}"
except Exception as e:
logger.error(f"Screenshot error: {str(e)}")
return f"Failed to take screenshot: {str(e)}"
def close_browser() -> str:
"""
Close the browser when done with automation tasks.
Returns:
str: Success message if browser is closed successfully,
or error message if closing fails
Raises:
Exception: If browser closing process encounters errors
Example:
>>> result = close_browser()
>>> print(result)
"Browser closed successfully"
"""
return asyncio.run(_close_browser_async())
async def _close_browser_async() -> str:
"""Async implementation of close_browser."""
try:
await browser_state.close()
return "Browser closed successfully"
except Exception as e:
logger.error(f"Close browser error: {str(e)}")
return f"Failed to close browser: {str(e)}"
# Example usage
if __name__ == "__main__":
# Create a Swarms agent with browser tools
browser_agent = Agent(
agent_name="BrowserAutomationAgent",
model_name="gpt-4o-mini",
max_loops=1,
tools=[
navigate_browser,
browser_act,
browser_extract,
browser_observe,
browser_screenshot,
close_browser,
],
system_prompt="""You are a web browser automation specialist. You can:
1. Navigate to websites using the navigate_browser tool
2. Perform actions like clicking and typing using the browser_act tool
3. Extract information from pages using the browser_extract tool
4. Find and observe elements using the browser_observe tool
5. Take screenshots using the browser_screenshot tool
6. Close the browser when done using the close_browser tool
Always start by navigating to a URL before trying to interact with a page.
Be specific in your actions and extractions. When done with tasks, close the browser.""",
)
# Example 1: Research task
print("Example 1: Automated web research")
result1 = browser_agent.run(
"Go to hackernews (news.ycombinator.com) and extract the titles of the top 5 stories. Then take a screenshot."
)
print(result1)
print("\n" + "=" * 50 + "\n")
# Example 2: Search task
print("Example 2: Perform a web search")
result2 = browser_agent.run(
"Navigate to google.com, search for 'Python web scraping best practices', and extract the first 3 search result titles"
)
print(result2)
print("\n" + "=" * 50 + "\n")
# Example 3: Form interaction
print("Example 3: Interact with a form")
result3 = browser_agent.run(
"Go to example.com and observe what elements are on the page. Then extract all the text content."
)
print(result3)
# Clean up
browser_agent.run("Close the browser")

@ -0,0 +1,263 @@
"""
Stagehand MCP Server Integration with Swarms
============================================
This example demonstrates how to use the Stagehand MCP (Model Context Protocol)
server with Swarms agents. The MCP server provides browser automation capabilities
as standardized tools that can be discovered and used by agents.
Prerequisites:
1. Install and run the Stagehand MCP server:
cd stagehand-mcp-server
npm install
npm run build
npm start
2. The server will start on http://localhost:3000/sse
Features:
- Automatic tool discovery from MCP server
- Multi-session browser management
- Built-in screenshot resources
- Prompt templates for common tasks
"""
from typing import List
from dotenv import load_dotenv
from loguru import logger
from swarms import Agent
load_dotenv()
class StagehandMCPAgent:
"""
A Swarms agent that connects to the Stagehand MCP server
for browser automation capabilities.
"""
def __init__(
self,
agent_name: str = "StagehandMCPAgent",
mcp_server_url: str = "http://localhost:3000/sse",
model_name: str = "gpt-4o-mini",
max_loops: int = 1,
):
"""
Initialize the Stagehand MCP Agent.
Args:
agent_name: Name of the agent
mcp_server_url: URL of the Stagehand MCP server
model_name: LLM model to use
max_loops: Maximum number of reasoning loops
"""
self.agent = Agent(
agent_name=agent_name,
model_name=model_name,
max_loops=max_loops,
# Connect to the Stagehand MCP server
mcp_url=mcp_server_url,
system_prompt="""You are a web browser automation specialist with access to Stagehand MCP tools.
Available tools from the MCP server:
- navigate: Navigate to a URL
- act: Perform actions on web pages (click, type, etc.)
- extract: Extract data from web pages
- observe: Find and observe elements on pages
- screenshot: Take screenshots
- createSession: Create new browser sessions for parallel tasks
- listSessions: List active browser sessions
- closeSession: Close browser sessions
For multi-page workflows, you can create multiple sessions.
Always be specific in your actions and extractions.
Remember to close sessions when done with them.""",
verbose=True,
)
def run(self, task: str) -> str:
"""Run a browser automation task."""
return self.agent.run(task)
class MultiSessionBrowserSwarm:
"""
A multi-agent swarm that uses multiple browser sessions
for parallel web automation tasks.
"""
def __init__(
self,
mcp_server_url: str = "http://localhost:3000/sse",
num_agents: int = 3,
):
"""
Initialize a swarm of browser automation agents.
Args:
mcp_server_url: URL of the Stagehand MCP server
num_agents: Number of agents to create
"""
self.agents = []
# Create specialized agents for different tasks
agent_roles = [
(
"DataExtractor",
"You specialize in extracting structured data from websites.",
),
(
"FormFiller",
"You specialize in filling out forms and interacting with web applications.",
),
(
"WebMonitor",
"You specialize in monitoring websites for changes and capturing screenshots.",
),
]
for i in range(min(num_agents, len(agent_roles))):
name, specialization = agent_roles[i]
agent = Agent(
agent_name=f"{name}_{i}",
model_name="gpt-4o-mini",
max_loops=1,
mcp_url=mcp_server_url,
system_prompt=f"""You are a web browser automation specialist. {specialization}
You have access to Stagehand MCP tools including:
- createSession: Create a new browser session
- navigate_session: Navigate to URLs in a specific session
- act_session: Perform actions in a specific session
- extract_session: Extract data from a specific session
- observe_session: Observe elements in a specific session
- closeSession: Close a session when done
Always create your own session for tasks to work independently from other agents.""",
verbose=True,
)
self.agents.append(agent)
def distribute_tasks(self, tasks: List[str]) -> List[str]:
"""Distribute tasks among agents."""
results = []
# Distribute tasks round-robin among agents
for i, task in enumerate(tasks):
agent_idx = i % len(self.agents)
agent = self.agents[agent_idx]
logger.info(
f"Assigning task to {agent.agent_name}: {task}"
)
result = agent.run(task)
results.append(result)
return results
# Example usage
if __name__ == "__main__":
print("=" * 70)
print("Stagehand MCP Server Integration Examples")
print("=" * 70)
print(
"\nMake sure the Stagehand MCP server is running on http://localhost:3000/sse"
)
print("Run: cd stagehand-mcp-server && npm start\n")
# Example 1: Single agent with MCP tools
print("\nExample 1: Single Agent with MCP Tools")
print("-" * 40)
mcp_agent = StagehandMCPAgent(
agent_name="WebResearchAgent",
mcp_server_url="http://localhost:3000/sse",
)
# Research task using MCP tools
result1 = mcp_agent.run(
"""Navigate to news.ycombinator.com and extract the following:
1. The titles of the top 5 stories
2. Their points/scores
3. Number of comments for each
Then take a screenshot of the page."""
)
print(f"Result: {result1}")
print("\n" + "=" * 70 + "\n")
# Example 2: Multi-session parallel browsing
print("Example 2: Multi-Session Parallel Browsing")
print("-" * 40)
parallel_agent = StagehandMCPAgent(
agent_name="ParallelBrowserAgent",
mcp_server_url="http://localhost:3000/sse",
)
result2 = parallel_agent.run(
"""Create 3 browser sessions and perform these tasks in parallel:
1. Session 1: Go to github.com/trending and extract the top 3 trending repositories
2. Session 2: Go to reddit.com/r/programming and extract the top 3 posts
3. Session 3: Go to stackoverflow.com and extract the featured questions
After extracting data from all sessions, close them."""
)
print(f"Result: {result2}")
print("\n" + "=" * 70 + "\n")
# Example 3: Multi-agent browser swarm
print("Example 3: Multi-Agent Browser Swarm")
print("-" * 40)
# Create a swarm of specialized browser agents
browser_swarm = MultiSessionBrowserSwarm(
mcp_server_url="http://localhost:3000/sse",
num_agents=3,
)
# Define tasks for the swarm
swarm_tasks = [
"Create a session, navigate to python.org, and extract information about the latest Python version and its key features",
"Create a session, go to npmjs.com, search for 'stagehand', and extract information about the package including version and description",
"Create a session, visit playwright.dev, and extract the main features and benefits listed on the homepage",
]
print("Distributing tasks to browser swarm...")
swarm_results = browser_swarm.distribute_tasks(swarm_tasks)
for i, result in enumerate(swarm_results):
print(f"\nTask {i+1} Result: {result}")
print("\n" + "=" * 70 + "\n")
# Example 4: Complex workflow with session management
print("Example 4: Complex Multi-Page Workflow")
print("-" * 40)
workflow_agent = StagehandMCPAgent(
agent_name="WorkflowAgent",
mcp_server_url="http://localhost:3000/sse",
max_loops=2, # Allow more complex reasoning
)
result4 = workflow_agent.run(
"""Perform a comprehensive analysis of AI frameworks:
1. Create a new session
2. Navigate to github.com/huggingface/transformers and extract the star count and latest release info
3. In the same session, navigate to github.com/openai/gpt-3 and extract similar information
4. Navigate to github.com/anthropics/anthropic-sdk-python and extract repository statistics
5. Take screenshots of each repository page
6. Compile a comparison report of all three repositories
7. Close the session when done"""
)
print(f"Result: {result4}")
print("\n" + "=" * 70)
print("All examples completed!")
print("=" * 70)

@ -0,0 +1,371 @@
"""
Stagehand Multi-Agent Browser Automation Workflows
=================================================
This example demonstrates advanced multi-agent workflows using Stagehand
for complex browser automation scenarios. It shows how multiple agents
can work together to accomplish sophisticated web tasks.
Use cases:
1. E-commerce price monitoring across multiple sites
2. Competitive analysis and market research
3. Automated testing and validation workflows
4. Data aggregation from multiple sources
"""
from datetime import datetime
from typing import Dict, List, Optional
from dotenv import load_dotenv
from pydantic import BaseModel, Field
from swarms import Agent, SequentialWorkflow, ConcurrentWorkflow
from swarms.structs.agent_rearrange import AgentRearrange
from examples.stagehand.stagehand_wrapper_agent import StagehandAgent
load_dotenv()
# Pydantic models for structured data
class ProductInfo(BaseModel):
"""Product information schema."""
name: str = Field(..., description="Product name")
price: float = Field(..., description="Product price")
availability: str = Field(..., description="Availability status")
url: str = Field(..., description="Product URL")
screenshot_path: Optional[str] = Field(
None, description="Screenshot file path"
)
class MarketAnalysis(BaseModel):
"""Market analysis report schema."""
timestamp: datetime = Field(default_factory=datetime.now)
products: List[ProductInfo] = Field(
..., description="List of products analyzed"
)
price_range: Dict[str, float] = Field(
..., description="Min and max prices"
)
recommendations: List[str] = Field(
..., description="Analysis recommendations"
)
# Specialized browser agents
class ProductScraperAgent(StagehandAgent):
"""Specialized agent for scraping product information."""
def __init__(self, site_name: str, *args, **kwargs):
super().__init__(
agent_name=f"ProductScraper_{site_name}", *args, **kwargs
)
self.site_name = site_name
class PriceMonitorAgent(StagehandAgent):
"""Specialized agent for monitoring price changes."""
def __init__(self, *args, **kwargs):
super().__init__(
agent_name="PriceMonitorAgent", *args, **kwargs
)
# Example 1: E-commerce Price Comparison Workflow
def create_price_comparison_workflow():
"""
Create a workflow that compares prices across multiple e-commerce sites.
"""
# Create specialized agents for different sites
amazon_agent = StagehandAgent(
agent_name="AmazonScraperAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
ebay_agent = StagehandAgent(
agent_name="EbayScraperAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
analysis_agent = Agent(
agent_name="PriceAnalysisAgent",
model_name="gpt-4o-mini",
system_prompt="""You are a price analysis expert. Analyze product prices from multiple sources
and provide insights on the best deals, price trends, and recommendations.
Focus on value for money and highlight any significant price differences.""",
)
# Create concurrent workflow for parallel scraping
scraping_workflow = ConcurrentWorkflow(
agents=[amazon_agent, ebay_agent],
max_loops=1,
verbose=True,
)
# Create sequential workflow: scrape -> analyze
full_workflow = SequentialWorkflow(
agents=[scraping_workflow, analysis_agent],
max_loops=1,
verbose=True,
)
return full_workflow
# Example 2: Competitive Analysis Workflow
def create_competitive_analysis_workflow():
"""
Create a workflow for competitive analysis across multiple company websites.
"""
# Agent for extracting company information
company_researcher = StagehandAgent(
agent_name="CompanyResearchAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
# Agent for analyzing social media presence
social_media_agent = StagehandAgent(
agent_name="SocialMediaAnalysisAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
# Agent for compiling competitive analysis report
report_compiler = Agent(
agent_name="CompetitiveAnalysisReporter",
model_name="gpt-4o-mini",
system_prompt="""You are a competitive analysis expert. Compile comprehensive reports
based on company information and social media presence data. Identify strengths,
weaknesses, and market positioning for each company.""",
)
# Create agent rearrange for flexible routing
workflow_pattern = (
"company_researcher -> social_media_agent -> report_compiler"
)
competitive_workflow = AgentRearrange(
agents=[
company_researcher,
social_media_agent,
report_compiler,
],
flow=workflow_pattern,
verbose=True,
)
return competitive_workflow
# Example 3: Automated Testing Workflow
def create_automated_testing_workflow():
"""
Create a workflow for automated web application testing.
"""
# Agent for UI testing
ui_tester = StagehandAgent(
agent_name="UITestingAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
# Agent for form validation testing
form_tester = StagehandAgent(
agent_name="FormValidationAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
# Agent for accessibility testing
accessibility_tester = StagehandAgent(
agent_name="AccessibilityTestingAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
# Agent for compiling test results
test_reporter = Agent(
agent_name="TestReportCompiler",
model_name="gpt-4o-mini",
system_prompt="""You are a QA test report specialist. Compile test results from
UI, form validation, and accessibility testing into a comprehensive report.
Highlight any failures, warnings, and provide recommendations for fixes.""",
)
# Concurrent testing followed by report generation
testing_workflow = ConcurrentWorkflow(
agents=[ui_tester, form_tester, accessibility_tester],
max_loops=1,
verbose=True,
)
full_test_workflow = SequentialWorkflow(
agents=[testing_workflow, test_reporter],
max_loops=1,
verbose=True,
)
return full_test_workflow
# Example 4: News Aggregation and Sentiment Analysis
def create_news_aggregation_workflow():
"""
Create a workflow for news aggregation and sentiment analysis.
"""
# Multiple news scraper agents
news_scrapers = []
news_sites = [
("TechCrunch", "https://techcrunch.com"),
("HackerNews", "https://news.ycombinator.com"),
("Reddit", "https://reddit.com/r/technology"),
]
for site_name, url in news_sites:
scraper = StagehandAgent(
agent_name=f"{site_name}Scraper",
model_name="gpt-4o-mini",
env="LOCAL",
)
news_scrapers.append(scraper)
# Sentiment analysis agent
sentiment_analyzer = Agent(
agent_name="SentimentAnalyzer",
model_name="gpt-4o-mini",
system_prompt="""You are a sentiment analysis expert. Analyze news articles and posts
to determine overall sentiment (positive, negative, neutral) and identify key themes
and trends in the technology sector.""",
)
# Trend identification agent
trend_identifier = Agent(
agent_name="TrendIdentifier",
model_name="gpt-4o-mini",
system_prompt="""You are a trend analysis expert. Based on aggregated news and sentiment
data, identify emerging trends, hot topics, and potential market movements in the
technology sector.""",
)
# Create workflow: parallel scraping -> sentiment analysis -> trend identification
scraping_workflow = ConcurrentWorkflow(
agents=news_scrapers,
max_loops=1,
verbose=True,
)
analysis_workflow = SequentialWorkflow(
agents=[
scraping_workflow,
sentiment_analyzer,
trend_identifier,
],
max_loops=1,
verbose=True,
)
return analysis_workflow
# Main execution examples
if __name__ == "__main__":
print("=" * 70)
print("Stagehand Multi-Agent Workflow Examples")
print("=" * 70)
# Example 1: Price Comparison
print("\nExample 1: E-commerce Price Comparison")
print("-" * 40)
price_workflow = create_price_comparison_workflow()
# Search for a specific product across multiple sites
price_result = price_workflow.run(
"""Search for 'iPhone 15 Pro Max 256GB' on:
1. Amazon - extract price, availability, and seller information
2. eBay - extract price range, number of listings, and average price
Take screenshots of search results from both sites.
Compare the prices and provide recommendations on where to buy."""
)
print(f"Price Comparison Result:\n{price_result}")
print("\n" + "=" * 70 + "\n")
# Example 2: Competitive Analysis
print("Example 2: Competitive Analysis")
print("-" * 40)
competitive_workflow = create_competitive_analysis_workflow()
competitive_result = competitive_workflow.run(
"""Analyze these three AI companies:
1. OpenAI - visit openai.com and extract mission, products, and recent announcements
2. Anthropic - visit anthropic.com and extract their AI safety approach and products
3. DeepMind - visit deepmind.com and extract research focus and achievements
Then check their Twitter/X presence and recent posts.
Compile a competitive analysis report comparing their market positioning."""
)
print(f"Competitive Analysis Result:\n{competitive_result}")
print("\n" + "=" * 70 + "\n")
# Example 3: Automated Testing
print("Example 3: Automated Web Testing")
print("-" * 40)
testing_workflow = create_automated_testing_workflow()
test_result = testing_workflow.run(
"""Test the website example.com:
1. UI Testing: Check if all main navigation links work, images load, and layout is responsive
2. Form Testing: If there are any forms, test with valid and invalid inputs
3. Accessibility: Check for alt texts, ARIA labels, and keyboard navigation
Take screenshots of any issues found and compile a comprehensive test report."""
)
print(f"Test Results:\n{test_result}")
print("\n" + "=" * 70 + "\n")
# Example 4: News Aggregation
print("Example 4: Tech News Aggregation and Analysis")
print("-" * 40)
news_workflow = create_news_aggregation_workflow()
news_result = news_workflow.run(
"""For each news source:
1. TechCrunch: Extract the top 5 headlines about AI or machine learning
2. HackerNews: Extract the top 5 posts related to AI/ML with most points
3. Reddit r/technology: Extract top 5 posts about AI from the past week
Analyze sentiment and identify emerging trends in AI technology."""
)
print(f"News Analysis Result:\n{news_result}")
# Cleanup all browser instances
print("\n" + "=" * 70)
print("Cleaning up browser instances...")
# Clean up agents
for agent in price_workflow.agents:
if isinstance(agent, StagehandAgent):
agent.cleanup()
elif hasattr(agent, "agents"): # For nested workflows
for sub_agent in agent.agents:
if isinstance(sub_agent, StagehandAgent):
sub_agent.cleanup()
print("All workflows completed!")
print("=" * 70)

@ -0,0 +1,249 @@
# Stagehand Browser Automation Integration for Swarms
This directory contains examples demonstrating how to integrate [Stagehand](https://github.com/browserbase/stagehand), an AI-powered browser automation framework, with the Swarms multi-agent framework.
## Overview
Stagehand provides natural language browser automation capabilities that can be seamlessly integrated into Swarms agents. This integration enables:
- 🌐 **Natural Language Web Automation**: Use simple commands like "click the submit button" or "extract product prices"
- 🤖 **Multi-Agent Browser Workflows**: Multiple agents can automate different websites simultaneously
- 🔧 **Flexible Integration Options**: Use as a wrapped agent, individual tools, or via MCP server
- 📊 **Complex Automation Scenarios**: E-commerce monitoring, competitive analysis, automated testing, and more
## Examples
### 1. Stagehand Wrapper Agent (`1_stagehand_wrapper_agent.py`)
The simplest integration: wraps Stagehand as a Swarms-compatible agent.
```python
from examples.stagehand.stagehand_wrapper_agent import StagehandAgent
# Create a browser automation agent
browser_agent = StagehandAgent(
agent_name="WebScraperAgent",
model_name="gpt-4o-mini",
env="LOCAL", # or "BROWSERBASE" for cloud execution
)
# Use natural language to control the browser
result = browser_agent.run(
"Navigate to news.ycombinator.com and extract the top 5 story titles"
)
```
**Features:**
- Inherits from Swarms `Agent` base class
- Automatic browser lifecycle management
- Natural language task interpretation
- Support for both local (Playwright) and cloud (Browserbase) execution
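The wrapper exposes the Browserbase-related constructor arguments directly, so switching to cloud execution is a one-line change. A minimal sketch, assuming the constructor signature from `1_stagehand_wrapper_agent.py` (the key values below are placeholders):

```python
# Hypothetical cloud-execution variant; the keys are placeholders and would
# normally be read from BROWSERBASE_API_KEY / BROWSERBASE_PROJECT_ID.
cloud_agent = StagehandAgent(
    agent_name="CloudScraperAgent",
    model_name="gpt-4o-mini",
    env="BROWSERBASE",
    browserbase_api_key="your-browserbase-key",
    browserbase_project_id="your-project-id",
)

result = cloud_agent.run(
    "Navigate to example.com and extract the main heading"
)
cloud_agent.cleanup()
```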
### 2. Stagehand as Tools (`2_stagehand_tools_agent.py`)
Provides fine-grained control by exposing Stagehand methods as individual tools.
```python
from swarms import Agent
from examples.stagehand.stagehand_tools_agent import (
    navigate_browser,
    browser_act,
    browser_extract,
    browser_observe,
    browser_screenshot,
    close_browser,
)

# Create agent with browser tools
browser_agent = Agent(
    agent_name="BrowserAutomationAgent",
    model_name="gpt-4o-mini",
    tools=[
        navigate_browser,
        browser_act,
        browser_extract,
        browser_observe,
        browser_screenshot,
        close_browser,
    ],
)

# Agent can now use tools strategically
result = browser_agent.run(
    "Go to google.com, search for 'Python tutorials', and extract the first 3 results"
)
```
**Available Tools:**
- `navigate_browser`: Navigate to URLs
- `browser_act`: Perform actions (click, type, scroll)
- `browser_extract`: Extract data from pages
- `browser_observe`: Find elements on pages
- `browser_screenshot`: Capture screenshots
- `close_browser`: Clean up browser resources
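Because each tool is a plain function that returns a string, the tools can also be called directly, which is handy for debugging a selector or an extraction query before handing the full toolset to an agent. A minimal sketch using the functions defined in `2_stagehand_tools_agent.py`:

```python
from examples.stagehand.stagehand_tools_agent import (
    navigate_browser,
    browser_extract,
    close_browser,
)

print(navigate_browser("https://example.com"))  # "Successfully navigated to ..."
print(browser_extract("get the main heading"))  # JSON string of extracted data
print(close_browser())                          # "Browser closed successfully"
```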
### 3. Stagehand MCP Server (`3_stagehand_mcp_agent.py`)
Integrates with Stagehand's Model Context Protocol (MCP) server for standardized tool access.
```python
from examples.stagehand.stagehand_mcp_agent import StagehandMCPAgent
# Connect to Stagehand MCP server
mcp_agent = StagehandMCPAgent(
agent_name="WebResearchAgent",
mcp_server_url="http://localhost:3000/sse",
)
# Use MCP tools including multi-session management
result = mcp_agent.run("""
Create 3 browser sessions and:
1. Session 1: Check Python.org for latest version
2. Session 2: Check PyPI for trending packages
3. Session 3: Check GitHub Python trending repos
Compile a Python ecosystem status report.
""")
```
**MCP Features:**
- Automatic tool discovery
- Multi-session browser management
- Built-in screenshot resources
- Prompt templates for common tasks
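The same module also provides a `MultiSessionBrowserSwarm` that creates role-specialized agents (data extraction, form filling, monitoring) and distributes tasks to them round-robin. A short usage sketch:

```python
from examples.stagehand.stagehand_mcp_agent import MultiSessionBrowserSwarm

swarm = MultiSessionBrowserSwarm(
    mcp_server_url="http://localhost:3000/sse",
    num_agents=3,
)
results = swarm.distribute_tasks([
    "Create a session, navigate to python.org, and extract the latest Python version",
    "Create a session, go to npmjs.com, and look up the 'stagehand' package",
])
for result in results:
    print(result)
```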
### 4. Multi-Agent Workflows (`4_stagehand_multi_agent_workflow.py`)
Demonstrates complex multi-agent browser automation scenarios.
```python
from examples.stagehand.stagehand_multi_agent_workflow import (
create_price_comparison_workflow,
create_competitive_analysis_workflow,
create_automated_testing_workflow,
create_news_aggregation_workflow
)
# Price comparison across multiple e-commerce sites
price_workflow = create_price_comparison_workflow()
result = price_workflow.run(
"Compare prices for iPhone 15 Pro on Amazon and eBay"
)
# Competitive analysis of multiple companies
competitive_workflow = create_competitive_analysis_workflow()
result = competitive_workflow.run(
"Analyze OpenAI, Anthropic, and DeepMind websites and social media"
)
```
**Workflow Examples:**
- **E-commerce Monitoring**: Track prices across multiple sites
- **Competitive Analysis**: Research competitors' websites and social media
- **Automated Testing**: UI, form validation, and accessibility testing
- **News Aggregation**: Collect and analyze news from multiple sources
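Each factory returns a ready-to-run workflow object; for example, the automated testing workflow fans out to three specialized testers concurrently and then compiles their findings:

```python
from examples.stagehand.stagehand_multi_agent_workflow import (
    create_automated_testing_workflow,
)

testing_workflow = create_automated_testing_workflow()
report = testing_workflow.run(
    "Test example.com: check navigation links, forms, and accessibility, "
    "then compile a test report"
)
print(report)
```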
## Setup
### Prerequisites
1. **Install Swarms and Stagehand:**
```bash
pip install swarms stagehand
```
2. **Set up environment variables:**
```bash
# For local browser automation (using Playwright)
export OPENAI_API_KEY="your-openai-key"
# For cloud browser automation (using Browserbase)
export BROWSERBASE_API_KEY="your-browserbase-key"
export BROWSERBASE_PROJECT_ID="your-project-id"
```
3. **For MCP Server examples:**
```bash
# Install and run the Stagehand MCP server
cd stagehand-mcp-server
npm install
npm run build
npm start
```
## Use Cases
### E-commerce Automation
- Price monitoring and comparison
- Inventory tracking
- Automated purchasing workflows
- Review aggregation
### Research and Analysis
- Competitive intelligence gathering
- Market research automation
- Social media monitoring
- News and trend analysis
### Quality Assurance
- Automated UI testing
- Cross-browser compatibility testing
- Form validation testing
- Accessibility compliance checking
### Data Collection
- Web scraping at scale
- Real-time data monitoring
- Structured data extraction
- Screenshot documentation
## Best Practices
1. **Resource Management**: Always clean up browser instances when done
```python
browser_agent.cleanup() # For wrapper agents
```
2. **Error Handling**: Stagehand includes self-healing capabilities, but wrap critical operations in try-except blocks (see the sketch after this list)
3. **Parallel Execution**: Use `ConcurrentWorkflow` for simultaneous browser automation across multiple sites
4. **Session Management**: For complex multi-page workflows, use the MCP server's session management capabilities
5. **Rate Limiting**: Be respectful of websites - add delays between requests when necessary
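A small sketch combining resource management, error handling, and rate limiting, assuming `browser_agent` is a `StagehandAgent` as created above (the site list and delay are illustrative):

```python
import time

sites = ["https://example.com", "https://example.org"]  # illustrative targets

try:
    for url in sites:
        result = browser_agent.run(
            f"Navigate to {url} and extract the main heading"
        )
        print(result)
        time.sleep(2)  # simple politeness delay between requests
except Exception as e:
    print(f"Browser task failed: {e}")
finally:
    browser_agent.cleanup()  # always release the browser, even on failure
```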
## Testing
Run the test suite to verify the integration:
```bash
pytest tests/stagehand/test_stagehand_integration.py -v
```
## Troubleshooting
### Common Issues
1. **Browser not starting**: Ensure Playwright is properly installed
```bash
playwright install
```
2. **MCP connection failed**: Verify the MCP server is running on the correct port
3. **Timeout errors**: Increase timeout in StagehandConfig or agent initialization
### Debug Mode
Enable verbose logging:
```python
agent = StagehandAgent(
agent_name="DebugAgent",
verbose=True, # Enable detailed logging
)
```
## Contributing
We welcome contributions! Please:
1. Follow the existing code style
2. Add tests for new features
3. Update documentation
4. Submit PRs with clear descriptions
## License
These examples are provided under the same license as the Swarms framework. Stagehand is licensed separately - see [Stagehand's repository](https://github.com/browserbase/stagehand) for details.

@ -0,0 +1,13 @@
# Requirements for Stagehand integration examples
swarms>=8.0.0
stagehand>=0.1.0
python-dotenv>=1.0.0
pydantic>=2.0.0
loguru>=0.7.0
# For MCP server examples (optional)
httpx>=0.24.0
# For testing
pytest>=7.0.0
pytest-asyncio>=0.21.0

@ -0,0 +1,436 @@
"""
Tests for Stagehand Integration with Swarms
==========================================
This module contains tests for the Stagehand browser automation
integration with the Swarms framework.
"""
import json
import pytest
from unittest.mock import AsyncMock, patch
# Mock Stagehand classes
class MockObserveResult:
def __init__(self, description, selector, method="click"):
self.description = description
self.selector = selector
self.method = method
class MockStagehandPage:
async def goto(self, url):
return None
async def act(self, action):
return f"Performed action: {action}"
async def extract(self, query):
return {"extracted": query, "data": ["item1", "item2"]}
async def observe(self, query):
return [
MockObserveResult("Search box", "#search-input"),
MockObserveResult("Submit button", "#submit-btn"),
]
class MockStagehand:
def __init__(self, config):
self.config = config
self.page = MockStagehandPage()
async def init(self):
pass
async def close(self):
pass
# Test StagehandAgent wrapper
class TestStagehandAgent:
"""Test the StagehandAgent wrapper class."""
@patch(
"examples.stagehand.stagehand_wrapper_agent.Stagehand",
MockStagehand,
)
def test_agent_initialization(self):
"""Test that StagehandAgent initializes correctly."""
from examples.stagehand.stagehand_wrapper_agent import (
StagehandAgent,
)
agent = StagehandAgent(
agent_name="TestAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
assert agent.agent_name == "TestAgent"
assert agent.stagehand_config.env == "LOCAL"
assert agent.stagehand_config.model_name == "gpt-4o-mini"
assert not agent._initialized
@patch(
"examples.stagehand.stagehand_wrapper_agent.Stagehand",
MockStagehand,
)
def test_navigation_task(self):
"""Test navigation and extraction task."""
from examples.stagehand.stagehand_wrapper_agent import (
StagehandAgent,
)
agent = StagehandAgent(
agent_name="TestAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
result = agent.run(
"Navigate to example.com and extract the main content"
)
# Parse result
result_data = json.loads(result)
assert result_data["status"] == "completed"
assert "navigated_to" in result_data["data"]
assert (
result_data["data"]["navigated_to"]
== "https://example.com"
)
assert "extracted" in result_data["data"]
@patch(
"examples.stagehand.stagehand_wrapper_agent.Stagehand",
MockStagehand,
)
def test_search_task(self):
"""Test search functionality."""
from examples.stagehand.stagehand_wrapper_agent import (
StagehandAgent,
)
agent = StagehandAgent(
agent_name="TestAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
result = agent.run(
"Go to google.com and search for 'test query'"
)
result_data = json.loads(result)
assert result_data["status"] == "completed"
assert result_data["data"]["search_query"] == "test query"
assert result_data["action"] == "search"
@patch(
"examples.stagehand.stagehand_wrapper_agent.Stagehand",
MockStagehand,
)
def test_cleanup(self):
"""Test that cleanup properly closes browser."""
from examples.stagehand.stagehand_wrapper_agent import (
StagehandAgent,
)
agent = StagehandAgent(
agent_name="TestAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
# Initialize the agent
agent.run("Navigate to example.com")
assert agent._initialized
# Cleanup
agent.cleanup()
# After cleanup, should be able to run again
result = agent.run("Navigate to example.com")
assert result is not None
# Test Stagehand Tools
class TestStagehandTools:
"""Test individual Stagehand tools."""
@patch("examples.stagehand.stagehand_tools_agent.browser_state")
async def test_navigate_tool(self, mock_browser_state):
"""Test NavigateTool functionality."""
from examples.stagehand.stagehand_tools_agent import (
NavigateTool,
)
# Setup mock
mock_page = AsyncMock()
mock_browser_state.get_page = AsyncMock(
return_value=mock_page
)
mock_browser_state.init_browser = AsyncMock()
tool = NavigateTool()
result = await tool._async_run("https://example.com")
assert (
"Successfully navigated to https://example.com" in result
)
mock_page.goto.assert_called_once_with("https://example.com")
@patch("examples.stagehand.stagehand_tools_agent.browser_state")
async def test_act_tool(self, mock_browser_state):
"""Test ActTool functionality."""
from examples.stagehand.stagehand_tools_agent import ActTool
# Setup mock
mock_page = AsyncMock()
mock_page.act = AsyncMock(return_value="Action completed")
mock_browser_state.get_page = AsyncMock(
return_value=mock_page
)
mock_browser_state.init_browser = AsyncMock()
tool = ActTool()
result = await tool._async_run("click the button")
assert "Action performed" in result
assert "click the button" in result
mock_page.act.assert_called_once_with("click the button")
@patch("examples.stagehand.stagehand_tools_agent.browser_state")
async def test_extract_tool(self, mock_browser_state):
"""Test ExtractTool functionality."""
from examples.stagehand.stagehand_tools_agent import (
ExtractTool,
)
# Setup mock
mock_page = AsyncMock()
mock_page.extract = AsyncMock(
return_value={
"title": "Test Page",
"content": "Test content",
}
)
mock_browser_state.get_page = AsyncMock(
return_value=mock_page
)
mock_browser_state.init_browser = AsyncMock()
tool = ExtractTool()
result = await tool._async_run("extract the page title")
# Result should be JSON string
parsed_result = json.loads(result)
assert parsed_result["title"] == "Test Page"
assert parsed_result["content"] == "Test content"
@patch("examples.stagehand.stagehand_tools_agent.browser_state")
async def test_observe_tool(self, mock_browser_state):
"""Test ObserveTool functionality."""
from examples.stagehand.stagehand_tools_agent import (
ObserveTool,
)
# Setup mock
mock_page = AsyncMock()
mock_observations = [
MockObserveResult("Search input", "#search"),
MockObserveResult("Submit button", "#submit"),
]
mock_page.observe = AsyncMock(return_value=mock_observations)
mock_browser_state.get_page = AsyncMock(
return_value=mock_page
)
mock_browser_state.init_browser = AsyncMock()
tool = ObserveTool()
result = await tool._async_run("find the search box")
# Result should be JSON string
parsed_result = json.loads(result)
assert len(parsed_result) == 2
assert parsed_result[0]["description"] == "Search input"
assert parsed_result[0]["selector"] == "#search"
# Test MCP integration
class TestStagehandMCP:
"""Test Stagehand MCP server integration."""
def test_mcp_agent_initialization(self):
"""Test that MCP agent initializes with correct parameters."""
from examples.stagehand.stagehand_mcp_agent import (
StagehandMCPAgent,
)
mcp_agent = StagehandMCPAgent(
agent_name="TestMCPAgent",
mcp_server_url="http://localhost:3000/sse",
model_name="gpt-4o-mini",
)
assert mcp_agent.agent.agent_name == "TestMCPAgent"
assert mcp_agent.agent.mcp_url == "http://localhost:3000/sse"
assert mcp_agent.agent.model_name == "gpt-4o-mini"
def test_multi_session_swarm_creation(self):
"""Test multi-session browser swarm creation."""
from examples.stagehand.stagehand_mcp_agent import (
MultiSessionBrowserSwarm,
)
swarm = MultiSessionBrowserSwarm(
mcp_server_url="http://localhost:3000/sse",
num_agents=3,
)
assert len(swarm.agents) == 3
assert swarm.agents[0].agent_name == "DataExtractor_0"
assert swarm.agents[1].agent_name == "FormFiller_1"
assert swarm.agents[2].agent_name == "WebMonitor_2"
@patch("swarms.Agent.run")
def test_task_distribution(self, mock_run):
"""Test task distribution among swarm agents."""
from examples.stagehand.stagehand_mcp_agent import (
MultiSessionBrowserSwarm,
)
mock_run.return_value = "Task completed"
swarm = MultiSessionBrowserSwarm(num_agents=2)
tasks = ["Task 1", "Task 2", "Task 3"]
results = swarm.distribute_tasks(tasks)
assert len(results) == 3
assert all(result == "Task completed" for result in results)
assert mock_run.call_count == 3
# Test multi-agent workflows
class TestMultiAgentWorkflows:
"""Test multi-agent workflow configurations."""
@patch(
"examples.stagehand.stagehand_wrapper_agent.Stagehand",
MockStagehand,
)
def test_price_comparison_workflow_creation(self):
"""Test creation of price comparison workflow."""
from examples.stagehand.stagehand_multi_agent_workflow import (
create_price_comparison_workflow,
)
workflow = create_price_comparison_workflow()
# Should be a SequentialWorkflow with 2 agents
assert len(workflow.agents) == 2
# First agent should be a ConcurrentWorkflow
assert hasattr(workflow.agents[0], "agents")
# Second agent should be the analysis agent
assert workflow.agents[1].agent_name == "PriceAnalysisAgent"
@patch(
"examples.stagehand.stagehand_wrapper_agent.Stagehand",
MockStagehand,
)
def test_competitive_analysis_workflow_creation(self):
"""Test creation of competitive analysis workflow."""
from examples.stagehand.stagehand_multi_agent_workflow import (
create_competitive_analysis_workflow,
)
workflow = create_competitive_analysis_workflow()
# Should have 3 agents in the rearrange pattern
assert len(workflow.agents) == 3
assert (
workflow.flow
== "company_researcher -> social_media_agent -> report_compiler"
)
@patch(
"examples.stagehand.stagehand_wrapper_agent.Stagehand",
MockStagehand,
)
def test_automated_testing_workflow_creation(self):
"""Test creation of automated testing workflow."""
from examples.stagehand.stagehand_multi_agent_workflow import (
create_automated_testing_workflow,
)
workflow = create_automated_testing_workflow()
# Should be a SequentialWorkflow
assert len(workflow.agents) == 2
# First should be concurrent testing
assert hasattr(workflow.agents[0], "agents")
assert (
len(workflow.agents[0].agents) == 3
) # UI, Form, Accessibility testers
@patch(
"examples.stagehand.stagehand_wrapper_agent.Stagehand",
MockStagehand,
)
def test_news_aggregation_workflow_creation(self):
"""Test creation of news aggregation workflow."""
from examples.stagehand.stagehand_multi_agent_workflow import (
create_news_aggregation_workflow,
)
workflow = create_news_aggregation_workflow()
# Should be a SequentialWorkflow with 3 stages
assert len(workflow.agents) == 3
# First stage should be concurrent scrapers
assert hasattr(workflow.agents[0], "agents")
assert len(workflow.agents[0].agents) == 3 # 3 news sources
# Integration tests
class TestIntegration:
"""End-to-end integration tests."""
    @patch(
        "examples.stagehand.stagehand_wrapper_agent.Stagehand",
        MockStagehand,
    )
    def test_full_browser_automation_flow(self):
"""Test a complete browser automation flow."""
from examples.stagehand.stagehand_wrapper_agent import (
StagehandAgent,
)
agent = StagehandAgent(
agent_name="IntegrationTestAgent",
model_name="gpt-4o-mini",
env="LOCAL",
)
# Test navigation
nav_result = agent.run("Navigate to example.com")
assert "navigated_to" in nav_result
# Test extraction
extract_result = agent.run("Extract all text from the page")
assert "extracted" in extract_result
# Test observation
observe_result = agent.run("Find all buttons on the page")
assert "observation" in observe_result
# Cleanup
agent.cleanup()
if __name__ == "__main__":
pytest.main([__file__, "-v"])

@ -0,0 +1,302 @@
"""
Simple tests for Stagehand Integration with Swarms
=================================================
These tests verify the basic structure and functionality of the
Stagehand integration without requiring external dependencies.
"""
import json
import pytest
from unittest.mock import MagicMock
class TestStagehandIntegrationStructure:
"""Test that integration files have correct structure."""
def test_examples_directory_exists(self):
"""Test that examples directory structure is correct."""
import os
base_path = "examples/stagehand"
assert os.path.exists(base_path)
expected_files = [
"1_stagehand_wrapper_agent.py",
"2_stagehand_tools_agent.py",
"3_stagehand_mcp_agent.py",
"4_stagehand_multi_agent_workflow.py",
"README.md",
"requirements.txt",
]
for file in expected_files:
file_path = os.path.join(base_path, file)
assert os.path.exists(file_path), f"Missing file: {file}"
def test_wrapper_agent_imports(self):
"""Test that wrapper agent has correct imports."""
with open(
"examples/stagehand/1_stagehand_wrapper_agent.py", "r"
) as f:
content = f.read()
# Check for required imports
assert "from swarms import Agent" in content
assert "import asyncio" in content
assert "import json" in content
assert "class StagehandAgent" in content
def test_tools_agent_imports(self):
"""Test that tools agent has correct imports."""
with open(
"examples/stagehand/2_stagehand_tools_agent.py", "r"
) as f:
content = f.read()
# Check for required imports
assert "from swarms import Agent" in content
assert "def navigate_browser" in content
assert "def browser_act" in content
assert "def browser_extract" in content
def test_mcp_agent_imports(self):
"""Test that MCP agent has correct imports."""
with open(
"examples/stagehand/3_stagehand_mcp_agent.py", "r"
) as f:
content = f.read()
# Check for required imports
assert "from swarms import Agent" in content
assert "class StagehandMCPAgent" in content
assert "mcp_url" in content
def test_workflow_agent_imports(self):
"""Test that workflow agent has correct imports."""
with open(
"examples/stagehand/4_stagehand_multi_agent_workflow.py",
"r",
) as f:
content = f.read()
# Check for required imports
assert (
"from swarms import Agent, SequentialWorkflow, ConcurrentWorkflow"
in content
)
assert (
"from swarms.structs.agent_rearrange import AgentRearrange"
in content
)
class TestStagehandMockIntegration:
"""Test Stagehand integration with mocked dependencies."""
def test_mock_stagehand_initialization(self):
"""Test that Stagehand can be mocked and initialized."""
# Setup mock without importing actual stagehand
mock_stagehand = MagicMock()
mock_instance = MagicMock()
mock_instance.init = MagicMock()
mock_stagehand.return_value = mock_instance
# Mock config creation
config = MagicMock()
stagehand_instance = mock_stagehand(config)
# Verify mock works
assert stagehand_instance is not None
assert hasattr(stagehand_instance, "init")
def test_json_serialization(self):
"""Test JSON serialization for agent responses."""
# Test data that would come from browser automation
test_data = {
"task": "Navigate to example.com",
"status": "completed",
"data": {
"navigated_to": "https://example.com",
"extracted": ["item1", "item2"],
"action": "navigate",
},
}
# Test serialization
json_result = json.dumps(test_data, indent=2)
assert isinstance(json_result, str)
# Test deserialization
parsed_data = json.loads(json_result)
assert parsed_data["task"] == "Navigate to example.com"
assert parsed_data["status"] == "completed"
assert len(parsed_data["data"]["extracted"]) == 2
def test_url_extraction_logic(self):
"""Test URL extraction logic from task strings."""
import re
# Test cases
test_cases = [
(
"Navigate to https://example.com",
["https://example.com"],
),
("Go to google.com and search", ["google.com"]),
(
"Visit https://github.com/repo",
["https://github.com/repo"],
),
("Open example.org", ["example.org"]),
]
url_pattern = r"https?://[^\s]+"
domain_pattern = r"(\w+\.\w+)"
for task, expected in test_cases:
# Extract full URLs
urls = re.findall(url_pattern, task)
# If no full URLs, extract domains
if not urls:
domains = re.findall(domain_pattern, task)
if domains:
urls = domains
assert (
len(urls) > 0
), f"Failed to extract URL from: {task}"
assert (
urls[0] in expected
), f"Expected {expected}, got {urls}"
class TestSwarmsPatternsCompliance:
"""Test compliance with Swarms framework patterns."""
def test_agent_inheritance_pattern(self):
"""Test that wrapper agent follows Swarms Agent inheritance pattern."""
# Read the wrapper agent file
with open(
"examples/stagehand/1_stagehand_wrapper_agent.py", "r"
) as f:
content = f.read()
# Check inheritance pattern
assert "class StagehandAgent(SwarmsAgent):" in content
assert "def run(self, task: str" in content
assert "return" in content
def test_tools_pattern(self):
"""Test that tools follow Swarms function-based pattern."""
# Read the tools agent file
with open(
"examples/stagehand/2_stagehand_tools_agent.py", "r"
) as f:
content = f.read()
# Check function-based tool pattern
assert "def navigate_browser(url: str) -> str:" in content
assert "def browser_act(action: str) -> str:" in content
assert "def browser_extract(query: str) -> str:" in content
assert "def browser_observe(query: str) -> str:" in content
def test_mcp_integration_pattern(self):
"""Test MCP integration follows Swarms pattern."""
# Read the MCP agent file
with open(
"examples/stagehand/3_stagehand_mcp_agent.py", "r"
) as f:
content = f.read()
# Check MCP pattern
assert "mcp_url=" in content
assert "Agent(" in content
def test_workflow_patterns(self):
"""Test workflow patterns are properly used."""
# Read the workflow file
with open(
"examples/stagehand/4_stagehand_multi_agent_workflow.py",
"r",
) as f:
content = f.read()
# Check workflow patterns
assert "SequentialWorkflow" in content
assert "ConcurrentWorkflow" in content
assert "AgentRearrange" in content
class TestDocumentationAndExamples:
"""Test documentation and example completeness."""
def test_readme_completeness(self):
"""Test that README contains essential information."""
with open("examples/stagehand/README.md", "r") as f:
content = f.read()
required_sections = [
"# Stagehand Browser Automation Integration",
"## Overview",
"## Examples",
"## Setup",
"## Use Cases",
"## Best Practices",
]
for section in required_sections:
assert section in content, f"Missing section: {section}"
def test_requirements_file(self):
"""Test that requirements file has necessary dependencies."""
with open("examples/stagehand/requirements.txt", "r") as f:
content = f.read()
required_deps = [
"swarms",
"stagehand",
"python-dotenv",
"pydantic",
"loguru",
]
for dep in required_deps:
assert dep in content, f"Missing dependency: {dep}"
def test_example_files_have_docstrings(self):
"""Test that example files have proper docstrings."""
example_files = [
"examples/stagehand/1_stagehand_wrapper_agent.py",
"examples/stagehand/2_stagehand_tools_agent.py",
"examples/stagehand/3_stagehand_mcp_agent.py",
"examples/stagehand/4_stagehand_multi_agent_workflow.py",
]
for file_path in example_files:
with open(file_path, "r") as f:
content = f.read()
# Check for module docstring
assert (
'"""' in content[:500]
), f"Missing docstring in {file_path}"
# Check for main execution block
assert (
'if __name__ == "__main__":' in content
), f"Missing main block in {file_path}"
if __name__ == "__main__":
pytest.main([__file__, "-v"])

@ -0,0 +1,123 @@
from swarms.structs.conversation import Conversation
from dotenv import load_dotenv
from swarms.utils.litellm_tokenizer import count_tokens
# Load environment variables from .env file
load_dotenv()
def demonstrate_truncation():
# Using a smaller context length to clearly see the truncation effect
context_length = 25
print(
f"Creating a conversation instance with context length {context_length}"
)
# Using Claude model as the tokenizer model
conversation = Conversation(
context_length=context_length,
tokenizer_model_name="claude-3-7-sonnet-20250219",
)
# Adding first message - short message
short_message = "Hello, I am a user."
print(f"\nAdding short message: '{short_message}'")
conversation.add("user", short_message)
# Display token count
tokens = count_tokens(
short_message, conversation.tokenizer_model_name
)
print(f"Short message token count: {tokens}")
# Adding second message - long message, should be truncated
long_message = "I have a question about artificial intelligence. I want to understand how large language models handle long texts, especially under token constraints. This issue is important because it relates to the model's practicality and effectiveness. I hope to get a detailed answer that helps me understand this complex technical problem."
print(f"\nAdding long message:\n'{long_message}'")
conversation.add("assistant", long_message)
# Display long message token count
tokens = count_tokens(
long_message, conversation.tokenizer_model_name
)
print(f"Long message token count: {tokens}")
# Display current conversation total token count
total_tokens = sum(
count_tokens(
msg["content"], conversation.tokenizer_model_name
)
for msg in conversation.conversation_history
)
print(f"Total token count before truncation: {total_tokens}")
# Print the complete conversation history before truncation
print("\nConversation history before truncation:")
for i, msg in enumerate(conversation.conversation_history):
print(f"[{i}] {msg['role']}: {msg['content']}")
print(
f" Token count: {count_tokens(msg['content'], conversation.tokenizer_model_name)}"
)
# Execute truncation
print("\nExecuting truncation...")
conversation.truncate_memory_with_tokenizer()
# Print conversation history after truncation
print("\nConversation history after truncation:")
for i, msg in enumerate(conversation.conversation_history):
print(f"[{i}] {msg['role']}: {msg['content']}")
print(
f" Token count: {count_tokens(msg['content'], conversation.tokenizer_model_name)}"
)
# Display total token count after truncation
total_tokens = sum(
count_tokens(
msg["content"], conversation.tokenizer_model_name
)
for msg in conversation.conversation_history
)
print(f"\nTotal token count after truncation: {total_tokens}")
print(f"Context length limit: {context_length}")
# Verify if successfully truncated below the limit
if total_tokens <= context_length:
print(
"✅ Success: Total token count is now less than or equal to context length limit"
)
else:
print(
"❌ Failure: Total token count still exceeds context length limit"
)
# Test sentence boundary truncation
print("\n\nTesting sentence boundary truncation:")
sentence_test = Conversation(
context_length=15,
tokenizer_model_name="claude-3-opus-20240229",
)
test_text = "This is the first sentence. This is the second very long sentence that contains a lot of content. This is the third sentence."
print(f"Original text: '{test_text}'")
print(
f"Original token count: {count_tokens(test_text, sentence_test.tokenizer_model_name)}"
)
# Using binary search for truncation
truncated = sentence_test._binary_search_truncate(
test_text, 10, sentence_test.tokenizer_model_name
)
print(f"Truncated text: '{truncated}'")
print(
f"Truncated token count: {count_tokens(truncated, sentence_test.tokenizer_model_name)}"
)
# Check if truncated at period
if truncated.endswith("."):
print("✅ Success: Text was truncated at sentence boundary")
else:
print("Note: Text was not truncated at sentence boundary")
if __name__ == "__main__":
demonstrate_truncation()

@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "8.0.4"
version = "8.0.5"
description = "Swarms - TGSC"
license = "MIT"
authors = ["Kye Gomez <kye@apac.ai>"]

@ -3,7 +3,7 @@ from swarms import Agent
agent = Agent(
name="Research Agent",
description="A research agent that can answer questions",
model_name="claude-3-5-sonnet-20241022",
model_name="claude-sonnet-4-20250514",
streaming_on=True,
max_loops=1,
interactive=True,

File diff suppressed because it is too large

@ -0,0 +1,66 @@
from typing import Any, Dict, Optional
class ToolAgentError(Exception):
"""Base exception for all tool agent errors."""
def __init__(
self, message: str, details: Optional[Dict[str, Any]] = None
):
self.message = message
self.details = details or {}
super().__init__(self.message)
class ToolExecutionError(ToolAgentError):
"""Raised when a tool fails to execute."""
def __init__(
self,
tool_name: str,
error: Exception,
details: Optional[Dict[str, Any]] = None,
):
message = (
f"Failed to execute tool '{tool_name}': {str(error)}"
)
super().__init__(message, details)
class ToolValidationError(ToolAgentError):
"""Raised when tool parameters fail validation."""
def __init__(
self,
tool_name: str,
param_name: str,
error: str,
details: Optional[Dict[str, Any]] = None,
):
message = f"Validation error for tool '{tool_name}' parameter '{param_name}': {error}"
super().__init__(message, details)
class ToolNotFoundError(ToolAgentError):
"""Raised when a requested tool is not found."""
def __init__(
self, tool_name: str, details: Optional[Dict[str, Any]] = None
):
message = f"Tool '{tool_name}' not found"
super().__init__(message, details)
class ToolParameterError(ToolAgentError):
"""Raised when tool parameters are invalid."""
def __init__(
self,
tool_name: str,
error: str,
details: Optional[Dict[str, Any]] = None,
):
message = (
f"Invalid parameters for tool '{tool_name}': {error}"
)
super().__init__(message, details)
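Taken together, these exceptions form a small hierarchy that callers can catch at whatever granularity they need. A minimal usage sketch follows; the safe_call helper is hypothetical and not part of this diff:

# Hypothetical sketch: wrapping an arbitrary callable so any failure
# surfaces as a ToolExecutionError carrying structured context.
def safe_call(tool_name, fn, *args, **kwargs):
    try:
        return fn(*args, **kwargs)
    except Exception as e:
        raise ToolExecutionError(tool_name, e, {"args": args}) from e

try:
    safe_call("web_search", lambda q: 1 / 0, "swarms")
except ToolExecutionError as err:
    print(err.message)  # Failed to execute tool 'web_search': division by zero
    print(err.details)  # {'args': ('swarms',)}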

@ -1,156 +1,256 @@
from typing import Any, Optional, Callable
from swarms.tools.json_former import Jsonformer
from swarms.utils.loguru_logger import initialize_logger
logger = initialize_logger(log_folder="tool_agent")
import time
from typing import List, Optional, Dict, Any, Callable
from loguru import logger
# Required by the code below: LLM/SamplingParams are used in __init__ and
# time.sleep is used in the retry loop.
from vllm import LLM, SamplingParams
from swarms.agents.exceptions import (
    ToolExecutionError,
    ToolValidationError,
    ToolNotFoundError,
    ToolParameterError,
)
class ToolAgent:
"""
Represents a tool agent that performs a specific task using a model and tokenizer.
Args:
name (str): The name of the tool agent.
description (str): A description of the tool agent.
model (Any): The model used by the tool agent.
tokenizer (Any): The tokenizer used by the tool agent.
json_schema (Any): The JSON schema used by the tool agent.
*args: Variable length arguments.
**kwargs: Keyword arguments.
Attributes:
name (str): The name of the tool agent.
description (str): A description of the tool agent.
model (Any): The model used by the tool agent.
tokenizer (Any): The tokenizer used by the tool agent.
json_schema (Any): The JSON schema used by the tool agent.
Methods:
run: Runs the tool agent for a specific task.
Raises:
Exception: If an error occurs while running the tool agent.
Example:
from transformers import AutoModelForCausalLM, AutoTokenizer
from swarms import ToolAgent
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b")
tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b")
json_schema = {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "number"},
"is_student": {"type": "boolean"},
"courses": {
"type": "array",
"items": {"type": "string"}
}
}
}
task = "Generate a person's information based on the following schema:"
agent = ToolAgent(model=model, tokenizer=tokenizer, json_schema=json_schema)
generated_data = agent.run(task)
print(generated_data)
A wrapper class for vLLM that provides a similar interface to LiteLLM.
This class handles model initialization and inference using vLLM.
"""
def __init__(
self,
name: str = "Function Calling Agent",
description: str = "Generates a function based on the input json schema and the task",
model: Any = None,
tokenizer: Any = None,
json_schema: Any = None,
max_number_tokens: int = 500,
parsing_function: Optional[Callable] = None,
llm: Any = None,
model_name: str = "meta-llama/Llama-2-7b-chat-hf",
system_prompt: Optional[str] = None,
stream: bool = False,
temperature: float = 0.5,
max_tokens: int = 4000,
max_completion_tokens: int = 4000,
tools_list_dictionary: Optional[List[Dict[str, Any]]] = None,
tool_choice: str = "auto",
parallel_tool_calls: bool = False,
retry_attempts: int = 3,
retry_interval: float = 1.0,
*args,
**kwargs,
):
super().__init__(
agent_name=name,
agent_description=description,
llm=llm,
**kwargs,
"""
Initialize the vLLM wrapper with the given parameters.
Args:
model_name (str): The name of the model to use. Defaults to "meta-llama/Llama-2-7b-chat-hf".
system_prompt (str, optional): The system prompt to use. Defaults to None.
stream (bool): Whether to stream the output. Defaults to False.
temperature (float): The temperature for sampling. Defaults to 0.5.
max_tokens (int): The maximum number of tokens to generate. Defaults to 4000.
max_completion_tokens (int): The maximum number of completion tokens. Defaults to 4000.
tools_list_dictionary (List[Dict[str, Any]], optional): List of available tools. Defaults to None.
tool_choice (str): How to choose tools. Defaults to "auto".
parallel_tool_calls (bool): Whether to allow parallel tool calls. Defaults to False.
retry_attempts (int): Number of retry attempts for failed operations. Defaults to 3.
retry_interval (float): Time to wait between retries in seconds. Defaults to 1.0.
"""
self.model_name = model_name
self.system_prompt = system_prompt
self.stream = stream
self.temperature = temperature
self.max_tokens = max_tokens
self.max_completion_tokens = max_completion_tokens
self.tools_list_dictionary = tools_list_dictionary
self.tool_choice = tool_choice
self.parallel_tool_calls = parallel_tool_calls
self.retry_attempts = retry_attempts
self.retry_interval = retry_interval
# Initialize vLLM
try:
self.llm = LLM(model=model_name, **kwargs)
self.sampling_params = SamplingParams(
temperature=temperature,
max_tokens=max_tokens,
)
except Exception as e:
raise ToolExecutionError(
"model_initialization",
e,
{"model_name": model_name, "kwargs": kwargs},
)
def _validate_tool(
self, tool_name: str, parameters: Dict[str, Any]
) -> None:
"""
Validate tool parameters before execution.
Args:
tool_name (str): Name of the tool to validate
parameters (Dict[str, Any]): Parameters to validate
Raises:
ToolValidationError: If validation fails
"""
if not self.tools_list_dictionary:
raise ToolValidationError(
tool_name,
"parameters",
"No tools available for validation",
)
tool_spec = next(
(
tool
for tool in self.tools_list_dictionary
if tool["name"] == tool_name
),
None,
)
self.name = name
self.description = description
self.model = model
self.tokenizer = tokenizer
self.json_schema = json_schema
self.max_number_tokens = max_number_tokens
self.parsing_function = parsing_function
def run(self, task: str, *args, **kwargs):
if not tool_spec:
raise ToolNotFoundError(tool_name)
required_params = {
param["name"]
for param in tool_spec["parameters"]
if param.get("required", True)
}
missing_params = required_params - set(parameters.keys())
if missing_params:
raise ToolParameterError(
tool_name,
f"Missing required parameters: {', '.join(missing_params)}",
)
def _execute_with_retry(
self, func: Callable, *args, **kwargs
) -> Any:
"""
Run the tool agent for the specified task.
Execute a function with retry logic.
Args:
func (Callable): Function to execute
*args: Positional arguments for the function
**kwargs: Keyword arguments for the function
Returns:
Any: Result of the function execution
Raises:
ToolExecutionError: If all retry attempts fail
"""
last_error = None
for attempt in range(self.retry_attempts):
try:
return func(*args, **kwargs)
except Exception as e:
last_error = e
logger.warning(
f"Attempt {attempt + 1}/{self.retry_attempts} failed: {str(e)}"
)
if attempt < self.retry_attempts - 1:
time.sleep(self.retry_interval)
raise ToolExecutionError(
func.__name__,
last_error,
{"attempts": self.retry_attempts},
)
def run(self, task: str, *args, **kwargs) -> str:
"""
Run the tool agent for the specified task.
Args:
task (str): The task to be performed by the tool agent.
*args: Variable length argument list.
**kwargs: Arbitrary keyword arguments.
Returns:
The output of the tool agent.
Raises:
Exception: If an error occurs during the execution of the tool agent.
ToolExecutionError: If an error occurs during execution.
"""
try:
if self.model:
logger.info(f"Running {self.name} for task: {task}")
self.toolagent = Jsonformer(
model=self.model,
tokenizer=self.tokenizer,
json_schema=self.json_schema,
llm=self.llm,
prompt=task,
max_number_tokens=self.max_number_tokens,
*args,
**kwargs,
if not self.llm:
raise ToolExecutionError(
"run",
Exception("LLM not initialized"),
{"task": task},
)
if self.parsing_function:
out = self.parsing_function(self.toolagent())
else:
out = self.toolagent()
return out
elif self.llm:
logger.info(f"Running {self.name} for task: {task}")
self.toolagent = Jsonformer(
json_schema=self.json_schema,
llm=self.llm,
prompt=task,
max_number_tokens=self.max_number_tokens,
*args,
**kwargs,
)
logger.info(f"Running task: {task}")
if self.parsing_function:
out = self.parsing_function(self.toolagent())
else:
out = self.toolagent()
# Prepare the prompt
prompt = self._prepare_prompt(task)
return out
# Execute with retry logic
outputs = self._execute_with_retry(
self.llm.generate, prompt, self.sampling_params
)
else:
raise Exception(
"Either model or llm should be provided to the"
" ToolAgent"
)
response = outputs[0].outputs[0].text.strip()
return response
except Exception as error:
logger.error(
f"Error running {self.name} for task: {task}"
logger.error(f"Error running task: {error}")
raise ToolExecutionError(
"run",
error,
{"task": task, "args": args, "kwargs": kwargs},
)
raise error
def __call__(self, task: str, *args, **kwargs):
def _prepare_prompt(self, task: str) -> str:
"""
Prepare the prompt for the given task.
Args:
task (str): The task to prepare the prompt for.
Returns:
str: The prepared prompt.
"""
if self.system_prompt:
return f"{self.system_prompt}\n\nUser: {task}\nAssistant:"
return f"User: {task}\nAssistant:"
def __call__(self, task: str, *args, **kwargs) -> str:
"""
Call the model for the given task.
Args:
task (str): The task to run the model for.
*args: Additional positional arguments.
**kwargs: Additional keyword arguments.
Returns:
str: The model's response.
"""
return self.run(task, *args, **kwargs)
def batched_run(
self, tasks: List[str], batch_size: int = 10
) -> List[str]:
"""
Run the model for multiple tasks in batches.
Args:
tasks (List[str]): List of tasks to run.
batch_size (int): Size of each batch. Defaults to 10.
Returns:
List[str]: List of model responses.
Raises:
ToolExecutionError: If an error occurs during batch execution.
"""
logger.info(
f"Running tasks in batches of size {batch_size}. Total tasks: {len(tasks)}"
)
results = []
try:
for i in range(0, len(tasks), batch_size):
batch = tasks[i : i + batch_size]
for task in batch:
logger.info(f"Running task: {task}")
try:
result = self.run(task)
results.append(result)
except ToolExecutionError as e:
logger.error(
f"Failed to execute task '{task}': {e}"
)
results.append(f"Error: {str(e)}")
continue
logger.info("Completed all tasks.")
return results
except Exception as error:
logger.error(f"Error in batch execution: {error}")
raise ToolExecutionError(
"batched_run",
error,
{"tasks": tasks, "batch_size": batch_size},
)
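If the rewrite lands as shown, constructing and running the vLLM-backed agent would look roughly like the sketch below. The model name and tasks are illustrative, and the import path is an assumption that mirrors the swarms.agents package used by this diff's own imports:

# Illustrative only: assumes vLLM is installed and the model weights are available.
from swarms.agents.tool_agent import ToolAgent

agent = ToolAgent(
    model_name="meta-llama/Llama-2-7b-chat-hf",
    system_prompt="You are a concise technical assistant.",
    temperature=0.5,
    retry_attempts=3,
)

# Single task: _prepare_prompt prepends the system prompt, then vLLM generates.
print(agent.run("Explain what a sampling temperature of 0.5 does."))

# Multiple tasks: batched_run slices the list into batches and runs each task,
# recording per-task errors instead of aborting the whole batch.
print(agent.batched_run(["Define KV cache.", "Define speculative decoding."], batch_size=2))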

@ -1,4 +1,5 @@
AGGREGATOR_SYSTEM_PROMPT = """You are a highly skilled Aggregator Agent responsible for analyzing, synthesizing, and summarizing conversations between multiple AI agents. Your primary goal is to distill complex multi-agent interactions into clear, actionable insights.
AGGREGATOR_SYSTEM_PROMPT = """
You are a highly skilled Aggregator Agent responsible for analyzing, synthesizing, and summarizing conversations between multiple AI agents. Your primary goal is to distill complex multi-agent interactions into clear, actionable insights.
Key Responsibilities:
1. Conversation Analysis:

@ -1,85 +0,0 @@
import json
from typing import List
class PromptGenerator:
"""A class for generating custom prompt strings."""
def __init__(self) -> None:
"""Initialize the PromptGenerator object."""
self.constraints: List[str] = []
self.commands: List[str] = []
self.resources: List[str] = []
self.performance_evaluation: List[str] = []
self.response_format = {
"thoughts": {
"text": "thought",
"reasoning": "reasoning",
"plan": (
"- short bulleted\n- list that conveys\n-"
" long-term plan"
),
"criticism": "constructive self-criticism",
"speak": "thoughts summary to say to user",
},
"command": {
"name": "command name",
"args": {"arg name": "value"},
},
}
def add_constraint(self, constraint: str) -> None:
"""
Add a constraint to the constraints list.
Args:
constraint (str): The constraint to be added.
"""
self.constraints.append(constraint)
def add_command(self, command: str) -> None:
"""
Add a command to the commands list.
Args:
command (str): The command to be added.
"""
self.commands.append(command)
def add_resource(self, resource: str) -> None:
"""
Add a resource to the resources list.
Args:
resource (str): The resource to be added.
"""
self.resources.append(resource)
def add_performance_evaluation(self, evaluation: str) -> None:
"""
Add a performance evaluation item to the performance_evaluation list.
Args:
evaluation (str): The evaluation item to be added.
"""
self.performance_evaluation.append(evaluation)
def generate_prompt_string(self) -> str:
"""Generate a prompt string.
Returns:
str: The generated prompt string.
"""
formatted_response_format = json.dumps(
self.response_format, indent=4
)
prompt_string = (
f"Constraints:\n{''.join(self.constraints)}\n\nCommands:\n{''.join(self.commands)}\n\nResources:\n{''.join(self.resources)}\n\nPerformance"
f" Evaluation:\n{''.join(self.performance_evaluation)}\n\nYou"
" should only respond in JSON format as described below"
" \nResponse Format:"
f" \n{formatted_response_format} \nEnsure the response"
" can be parsed by Python json.loads"
)
return prompt_string

@ -1,159 +0,0 @@
from __future__ import annotations
from abc import abstractmethod
from typing import Sequence
class Message:
"""
The base abstract Message class.
Messages are the inputs and outputs of ChatModels.
"""
def __init__(
self, content: str, role: str, additional_kwargs: dict = None
):
self.content = content
self.role = role
self.additional_kwargs = (
additional_kwargs if additional_kwargs else {}
)
@abstractmethod
def get_type(self) -> str:
pass
class HumanMessage(Message):
"""
A Message from a human.
"""
def __init__(
self,
content: str,
role: str = "Human",
additional_kwargs: dict = None,
example: bool = False,
):
super().__init__(content, role, additional_kwargs)
self.example = example
def get_type(self) -> str:
return "human"
class AIMessage(Message):
"""
A Message from an AI.
"""
def __init__(
self,
content: str,
role: str = "AI",
additional_kwargs: dict = None,
example: bool = False,
):
super().__init__(content, role, additional_kwargs)
self.example = example
def get_type(self) -> str:
return "ai"
class SystemMessage(Message):
"""
A Message for priming AI behavior, usually passed in as the first of a sequence
of input messages.
"""
def __init__(
self,
content: str,
role: str = "System",
additional_kwargs: dict = None,
):
super().__init__(content, role, additional_kwargs)
def get_type(self) -> str:
return "system"
class FunctionMessage(Message):
"""
A Message for passing the result of executing a function back to a model.
"""
def __init__(
self,
content: str,
role: str = "Function",
name: str = None,
additional_kwargs: dict = None,
):
super().__init__(content, role, additional_kwargs)
self.name = name
def get_type(self) -> str:
return "function"
class ChatMessage(Message):
"""
A Message that can be assigned an arbitrary speaker (i.e. role).
"""
def __init__(
self, content: str, role: str, additional_kwargs: dict = None
):
super().__init__(content, role, additional_kwargs)
def get_type(self) -> str:
return "chat"
def get_buffer_string(
messages: Sequence[Message],
human_prefix: str = "Human",
ai_prefix: str = "AI",
) -> str:
string_messages = []
for m in messages:
message = f"{m.role}: {m.content}"
if (
isinstance(m, AIMessage)
and "function_call" in m.additional_kwargs
):
message += f"{m.additional_kwargs['function_call']}"
string_messages.append(message)
return "\n".join(string_messages)
def message_to_dict(message: Message) -> dict:
return {"type": message.get_type(), "data": message.__dict__}
def messages_to_dict(messages: Sequence[Message]) -> list[dict]:
return [message_to_dict(m) for m in messages]
def message_from_dict(message: dict) -> Message:
_type = message["type"]
if _type == "human":
return HumanMessage(**message["data"])
elif _type == "ai":
return AIMessage(**message["data"])
elif _type == "system":
return SystemMessage(**message["data"])
elif _type == "chat":
return ChatMessage(**message["data"])
elif _type == "function":
return FunctionMessage(**message["data"])
else:
raise ValueError(f"Got unexpected message type: {_type}")
def messages_from_dict(messages: list[dict]) -> list[Message]:
return [message_from_dict(m) for m in messages]

@ -1,19 +1,14 @@
IMAGE_ENRICHMENT_PROMPT = (
"Create a concise and effective image generation prompt within"
" 400 characters or less, "
"based on Stable Diffusion and Dalle best practices. Starting"
" prompt: \n\n'"
# f"{prompt}'\n\n"
"Improve the prompt with any applicable details or keywords by"
" considering the following aspects: \n"
"1. Subject details (like actions, emotions, environment) \n"
"2. Artistic style (such as surrealism, hyperrealism) \n"
"3. Medium (digital painting, oil on canvas) \n"
"4. Color themes and lighting (like warm colors, cinematic"
" lighting) \n"
"5. Composition and framing (close-up, wide-angle) \n"
"6. Additional elements (like a specific type of background,"
" weather conditions) \n"
"7. Any other artistic or thematic details that can make the"
" image more vivid and compelling."
)
IMAGE_ENRICHMENT_PROMPT = """
Create a concise and effective image generation prompt within 400 characters or less, based on Stable Diffusion and Dalle best practices.
Improve the prompt with any applicable details or keywords by considering the following aspects:
1. Subject details (like actions, emotions, environment)
2. Artistic style (such as surrealism, hyperrealism)
3. Medium (digital painting, oil on canvas)
4. Color themes and lighting (like warm colors, cinematic lighting)
5. Composition and framing (close-up, wide-angle)
6. Additional elements (like a specific type of background, weather conditions)
7. Any other artistic or thematic details that can make the image more vivid and compelling.
"""

@ -72,4 +72,5 @@ Legal landscapes are ever-evolving, demanding regular updates and improvements.
5. Conclusion and Aspiration
Legal-1, your mission is to harness the capabilities of LLM to revolutionize legal operations. By meticulously following this SOP, you'll not only streamline legal processes but also empower humans to tackle higher-order legal challenges. Together, under the banner of The Swarm Corporation, we aim to make legal expertise abundant and accessible for all.
"""

@ -7,3 +7,8 @@ The reasoning process and the final answer should be distinctly enclosed within
It is essential to output multiple <think> </think> tags to reflect the depth of thought and exploration involved in addressing the task. The Assistant should strive to think deeply and thoroughly about the question, ensuring that all relevant aspects are considered before arriving at a conclusion.
"""
INTERNAL_MONOLGUE_PROMPT = """
You are an introspective reasoning engine whose sole task is to explore and unpack any problem or task without ever delivering a final solution. Whenever you process a prompt, you must envelope every discrete insight, question, or inference inside <think> and </think> tags, using as many of these tags, nested or sequential, as needed to reveal your full chain of thought. Begin each session by rephrasing the problem in your own words to ensure you've captured its goals, inputs, outputs, and constraints, entirely within <think> blocks, and identify any ambiguities or assumptions you must clarify. Then decompose the task into sub-questions or logical components, examining multiple approaches, edge cases, and trade-offs, all inside further <think> tags. Continue layering your reasoning, pausing at each step to ask yourself "What else might I consider?" or "Is there an implicit assumption here?", always inside <think></think>. Never move beyond analysis: do not generate outlines, pseudocode, or answers; only think. If you find yourself tempted to propose a solution, immediately halt and circle back into deeper <think> tags. Your objective is total transparency of reasoning and exhaustive exploration of the problem space; defer any answer generation until explicitly instructed otherwise.
"""

File diff suppressed because it is too large

@ -4,6 +4,9 @@ from swarms.structs.auto_swarm_builder import AutoSwarmBuilder
from swarms.structs.base_structure import BaseStructure
from swarms.structs.base_swarm import BaseSwarm
from swarms.structs.batch_agent_execution import batch_agent_execution
from swarms.structs.board_of_directors_swarm import (
BoardOfDirectorsSwarm,
)
from swarms.structs.concurrent_workflow import ConcurrentWorkflow
from swarms.structs.conversation import Conversation
from swarms.structs.council_judge import CouncilAsAJudge
@ -93,10 +96,12 @@ from swarms.structs.swarming_architectures import (
star_swarm,
)
__all__ = [
"Agent",
"BaseStructure",
"BaseSwarm",
"BoardOfDirectorsSwarm",
"ConcurrentWorkflow",
"Conversation",
"GroupChat",

Some files were not shown because too many files have changed in this diff
