Merge branch 'kyegomez:master' into corposwarm

pull/1085/head
CI-DEV 2 weeks ago committed by GitHub
commit 8442b773ed

@ -17,7 +17,7 @@ jobs:
&& ${{ contains(github.event.pull_request.labels.*.name, 'release') }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Install poetry
run: pipx install poetry==$POETRY_VERSION
- name: Set up Python 3.9

@ -21,7 +21,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v5
uses: actions/checkout@v6
# Execute Codacy Analysis CLI and generate a SARIF output with the security issues identified during the analysis
- name: Run Codacy Analysis CLI
uses: codacy/codacy-analysis-cli-action@562ee3e92b8e92df8b67e0a5ff8aa8e261919c08

@ -16,7 +16,7 @@ jobs:
steps:
# Step 1: Check out the repository
- name: Checkout repository
uses: actions/checkout@v5
uses: actions/checkout@v6
# Step 2: Set up Python
- name: Set up Python ${{ matrix.python-version }}

@ -28,7 +28,7 @@ jobs:
language: ["python"]
steps:
- name: Checkout repository
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Initialize CodeQL
uses: github/codeql-action/init@v4
with:

@ -28,7 +28,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: 'Checkout repository'
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: 'Dependency Review'
uses: actions/dependency-review-action@v4
# Commonly enabled options, see https://github.com/actions/dependency-review-action#configuration-options for all available options.

@ -9,7 +9,7 @@ jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- uses: actions/setup-python@v6
with:
python-version: 3.11

@ -6,7 +6,7 @@ jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Set up Python
uses: actions/setup-python@v6

@ -33,7 +33,7 @@ jobs:
security-events: write
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
with:
submodules: true

@ -35,7 +35,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
with:
submodules: true

@ -21,7 +21,7 @@ jobs:
python-version: ["3.10", "3.11", "3.12"]
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v6
with:

@ -24,7 +24,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Set up Python 3.10
uses: actions/setup-python@v6
@ -121,7 +121,7 @@ jobs:
steps:
- name: Checkout code
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Set up Python 3.10
uses: actions/setup-python@v6

@ -11,7 +11,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v6
- name: Set up Python 3.10
uses: actions/setup-python@v6

@ -27,7 +27,7 @@ jobs:
runs-on: "ubuntu-20.04"
steps:
- name: Checkout code
uses: actions/checkout@v5
uses: actions/checkout@v6
- name: Build an image from Dockerfile
run: |

@ -0,0 +1,15 @@
from swarms import Agent
# Initialize the agent
agent = Agent(
agent_name="Quantitative-Trading-Agent",
agent_description="Advanced quantitative trading and algorithmic analysis agent",
model_name="gpt-4.1",
max_loops="auto",
)
out = agent.run(
task="What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?",
)
print(out)

@ -0,0 +1,40 @@
# AOP Examples Overview
Deploy agents as network services using the Agent Orchestration Protocol (AOP). Turn your agents into distributed, scalable, and accessible services.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **AOP Fundamentals** | Understanding agent-as-a-service deployment |
| **Server Setup** | Running agents as MCP servers |
| **Client Integration** | Connecting to remote agents |
| **Production Deployment** | Scaling and monitoring agents |
---
## AOP Examples
| Example | Description | Link |
|---------|-------------|------|
| **Medical AOP Example** | Healthcare agent deployment with AOP | [View Example](./aop_medical.md) |
---
## Use Cases
| Use Case | Description |
|----------|-------------|
| **Microservices** | Agent per service |
| **API Gateway** | Central agent access point |
| **Multi-tenant** | Shared agent infrastructure |
| **Edge Deployment** | Agents at the edge |
---
## Related Resources
- [AOP Reference Documentation](../swarms/structs/aop.md) - Complete AOP API
- [AOP Server Setup](../swarms/examples/aop_server_example.md) - Server configuration
- [AOP Cluster Example](../swarms/examples/aop_cluster_example.md) - Multi-node setup
- [Deployment Solutions](../deployment_solutions/overview.md) - Production deployment

@ -0,0 +1,69 @@
# Applications Overview
Real-world multi-agent applications built with Swarms. These examples demonstrate complete solutions for business, research, finance, and automation use cases.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **Business Applications** | Marketing, hiring, M&A advisory swarms |
| **Research Systems** | Advanced research and analysis workflows |
| **Financial Analysis** | ETF research and investment analysis |
| **Automation** | Browser agents and web automation |
| **Industry Solutions** | Real estate, job finding, and more |
---
## Application Examples
| Application | Description | Industry | Link |
|-------------|-------------|----------|------|
| **Swarms of Browser Agents** | Automated web browsing with multiple agents | Automation | [View Example](../swarms/examples/swarms_of_browser_agents.md) |
| **Hierarchical Marketing Team** | Multi-agent marketing strategy and execution | Marketing | [View Example](./marketing_team.md) |
| **Gold ETF Research with HeavySwarm** | Comprehensive ETF analysis using Heavy Swarm | Finance | [View Example](./gold_etf_research.md) |
| **Hiring Swarm** | Automated candidate screening and evaluation | HR/Recruiting | [View Example](./hiring_swarm.md) |
| **Advanced Research** | Multi-agent research and analysis system | Research | [View Example](./av.md) |
| **Real Estate Swarm** | Property analysis and market research | Real Estate | [View Example](./realestate_swarm.md) |
| **Job Finding Swarm** | Automated job search and matching | Career | [View Example](./job_finding.md) |
| **M&A Advisory Swarm** | Mergers & acquisitions analysis | Finance | [View Example](./ma_swarm.md) |
---
## Applications by Category
### Business & Marketing
| Application | Description | Link |
|-------------|-------------|------|
| **Hierarchical Marketing Team** | Complete marketing strategy system | [View Example](./marketing_team.md) |
| **Hiring Swarm** | End-to-end recruiting automation | [View Example](./hiring_swarm.md) |
| **M&A Advisory Swarm** | Due diligence and analysis | [View Example](./ma_swarm.md) |
### Financial Analysis
| Application | Description | Link |
|-------------|-------------|------|
| **Gold ETF Research** | Comprehensive ETF analysis | [View Example](./gold_etf_research.md) |
### Research & Automation
| Application | Description | Link |
|-------------|-------------|------|
| **Advanced Research** | Multi-source research compilation | [View Example](./av.md) |
| **Browser Agents** | Automated web interaction | [View Example](../swarms/examples/swarms_of_browser_agents.md) |
| **Job Finding Swarm** | Career opportunity discovery | [View Example](./job_finding.md) |
### Real Estate
| Application | Description | Link |
|-------------|-------------|------|
| **Real Estate Swarm** | Property market analysis | [View Example](./realestate_swarm.md) |
---
## Related Resources
- [HierarchicalSwarm Documentation](../swarms/structs/hierarchical_swarm.md)
- [HeavySwarm Documentation](../swarms/structs/heavy_swarm.md)
- [Building Custom Swarms](../swarms/structs/custom_swarm.md)
- [Deployment Solutions](../deployment_solutions/overview.md)

@ -0,0 +1,29 @@
# Apps Examples Overview
Complete application examples built with Swarms. These examples show how to build practical tools and utilities with AI agents.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **Web Scraping** | Building intelligent web scrapers |
| **Database Integration** | Smart database query agents |
| **Practical Tools** | End-to-end application development |
---
## App Examples
| App | Description | Link |
|-----|-------------|------|
| **Web Scraper Agents** | Intelligent web data extraction | [View Example](../developer_guides/web_scraper.md) |
| **Smart Database** | AI-powered database interactions | [View Example](./smart_database.md) |
---
## Related Resources
- [Tools & Integrations](./tools_integrations_overview.md) - External service connections
- [Multi-Agent Architectures](./multi_agent_architectures_overview.md) - Complex agent systems
- [Deployment Solutions](../deployment_solutions/overview.md) - Production deployment

@ -0,0 +1,80 @@
# Basic Examples Overview
Start your Swarms journey with single-agent examples. Learn how to create agents, use tools, process images, integrate with different LLM providers, and publish to the marketplace.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **Agent Basics** | Create and configure individual agents |
| **Tool Integration** | Equip agents with callable tools and functions |
| **Vision Capabilities** | Process images and multi-modal inputs |
| **LLM Providers** | Connect to OpenAI, Anthropic, Groq, and more |
| **Utilities** | Streaming, output types, and marketplace publishing |
---
## Individual Agent Examples
### Core Agent Usage
| Example | Description | Link |
|---------|-------------|------|
| **Basic Agent** | Fundamental agent creation and execution | [View Example](../swarms/examples/basic_agent.md) |
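For orientation, here is a minimal sketch of the pattern the Basic Agent example above walks through; the agent name and task are placeholders:
```python
from swarms import Agent

# A single agent: a name, a model, and one reasoning loop
agent = Agent(
    agent_name="Research-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
)

print(agent.run(task="Summarize the key drivers of the renewable energy market."))
```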
### Tool Usage
| Example | Description | Link |
|---------|-------------|------|
| **Agents with Vision and Tool Usage** | Combine vision and tools in one agent | [View Example](../swarms/examples/vision_tools.md) |
| **Agents with Callable Tools** | Equip agents with Python functions as tools | [View Example](../swarms/examples/agent_with_tools.md) |
| **Agent with Structured Outputs** | Get consistent JSON/structured responses | [View Example](../swarms/examples/agent_structured_outputs.md) |
| **Message Transforms** | Manage context with message transformations | [View Example](../swarms/structs/transforms.md) |
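As a quick taste of the callable-tools pattern, here is a minimal sketch; it assumes the Agent `tools` parameter accepts plain Python functions, and the stock-price function is a stub rather than a real data feed (see the Callable Tools example above for the full requirements):
```python
from swarms import Agent

def get_stock_price(ticker: str) -> str:
    """Return the latest price for a ticker symbol (stubbed placeholder)."""
    return f"{ticker}: 101.25 USD"

# Hypothetical wiring: pass the function via the `tools` parameter
agent = Agent(
    agent_name="Tool-Using-Agent",
    model_name="gpt-4o-mini",
    tools=[get_stock_price],
    max_loops=1,
)

print(agent.run(task="What is the latest price for NVDA?"))
```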
### Vision & Multi-Modal
| Example | Description | Link |
|---------|-------------|------|
| **Agents with Vision** | Process and analyze images | [View Example](../swarms/examples/vision_processing.md) |
| **Agent with Multiple Images** | Handle multiple images in one request | [View Example](../swarms/examples/multiple_images.md) |
### Utilities
| Example | Description | Link |
|---------|-------------|------|
| **Agent with Streaming** | Stream responses in real-time | [View Example](./agent_stream.md) |
| **Agent Output Types** | Different output formats (str, json, dict, yaml) | [View Example](../swarms/examples/agent_output_types.md) |
| **Gradio Chat Interface** | Build chat UIs for your agents | [View Example](../swarms/ui/main.md) |
| **Agent with Gemini Nano Banana** | Jarvis-style agent example | [View Example](../swarms/examples/jarvis_agent.md) |
| **Agent Marketplace Publishing** | Publish agents to the Swarms marketplace | [View Example](./marketplace_publishing_quickstart.md) |
---
## LLM Provider Examples
Connect your agents to various language model providers; a short provider-switching sketch follows the table:
| Provider | Description | Link |
|----------|-------------|------|
| **Overview** | Guide to all supported providers | [View Guide](../swarms/examples/model_providers.md) |
| **OpenAI** | GPT-4, GPT-4o, GPT-4o-mini integration | [View Example](../swarms/examples/openai_example.md) |
| **Anthropic** | Claude models integration | [View Example](../swarms/examples/claude.md) |
| **Groq** | Ultra-fast inference with Groq | [View Example](../swarms/examples/groq.md) |
| **Cohere** | Cohere Command models | [View Example](../swarms/examples/cohere.md) |
| **DeepSeek** | DeepSeek models integration | [View Example](../swarms/examples/deepseek.md) |
| **Ollama** | Local models with Ollama | [View Example](../swarms/examples/ollama.md) |
| **OpenRouter** | Access multiple providers via OpenRouter | [View Example](../swarms/examples/openrouter.md) |
| **XAI** | Grok models from xAI | [View Example](../swarms/examples/xai.md) |
| **Azure OpenAI** | Enterprise Azure deployment | [View Example](../swarms/examples/azure.md) |
| **Llama4** | Meta's Llama 4 models | [View Example](../swarms/examples/llama4.md) |
| **Custom Base URL** | Connect to any OpenAI-compatible API | [View Example](../swarms/examples/custom_base_url_example.md) |
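Switching providers is usually just a change of `model_name` (plus the provider's API key in your environment); a minimal sketch, with illustrative model identifiers that you should check against each provider guide:
```python
from swarms import Agent

# Same agent definition, different backends: only model_name changes.
# The identifiers below are illustrative; see the provider guides above.
openai_agent = Agent(agent_name="OpenAI-Agent", model_name="gpt-4o-mini", max_loops=1)
claude_agent = Agent(agent_name="Claude-Agent", model_name="claude-3-5-sonnet-20240620", max_loops=1)
groq_agent = Agent(agent_name="Groq-Agent", model_name="groq/llama3-70b-8192", max_loops=1)

print(openai_agent.run(task="Give a one-sentence summary of today's energy market."))
```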
---
## Next Steps
After mastering basic agents, explore:
- [Multi-Agent Architectures](./multi_agent_architectures_overview.md) - Coordinate multiple agents
- [Tools Documentation](../swarms/tools/main.md) - Deep dive into tool creation
- [CLI Guides](./cli_guides_overview.md) - Run agents from command line

@ -0,0 +1,47 @@
# CLI Guides Overview
Master the Swarms command-line interface with these step-by-step guides. Execute agents, run multi-agent workflows, and integrate Swarms into your DevOps pipelines—all from your terminal.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **CLI Basics** | Install, configure, and run your first commands |
| **Agent Creation** | Create and run agents directly from command line |
| **YAML Configuration** | Define agents in config files for reproducible deployments |
| **Multi-Agent Commands** | Run LLM Council and Heavy Swarm from terminal |
| **DevOps Integration** | Integrate into CI/CD pipelines and scripts |
---
## CLI Guides
| Guide | Description | Link |
|-------|-------------|------|
| **CLI Quickstart** | Get started with Swarms CLI in 3 steps—install, configure, and run | [View Guide](../swarms/cli/cli_quickstart.md) |
| **Creating Agents from CLI** | Create, configure, and run AI agents directly from your terminal | [View Guide](../swarms/cli/cli_agent_guide.md) |
| **YAML Configuration** | Run multiple agents from YAML configuration files | [View Guide](../swarms/cli/cli_yaml_guide.md) |
| **LLM Council CLI** | Run collaborative multi-agent decision-making from command line | [View Guide](../swarms/cli/cli_llm_council_guide.md) |
| **Heavy Swarm CLI** | Execute comprehensive task analysis swarms from terminal | [View Guide](../swarms/cli/cli_heavy_swarm_guide.md) |
| **CLI Multi-Agent Commands** | Complete guide to multi-agent CLI commands | [View Guide](./cli_multi_agent_quickstart.md) |
| **CLI Examples** | Additional CLI usage examples and patterns | [View Guide](../swarms/cli/cli_examples.md) |
---
## Use Cases
| Use Case | Recommended Guide |
|----------|-------------------|
| First time using CLI | [CLI Quickstart](../swarms/cli/cli_quickstart.md) |
| Creating custom agents | [Creating Agents from CLI](../swarms/cli/cli_agent_guide.md) |
| Team/production deployments | [YAML Configuration](../swarms/cli/cli_yaml_guide.md) |
| Collaborative decision-making | [LLM Council CLI](../swarms/cli/cli_llm_council_guide.md) |
| Complex research tasks | [Heavy Swarm CLI](../swarms/cli/cli_heavy_swarm_guide.md) |
---
## Related Resources
- [CLI Reference Documentation](../swarms/cli/cli_reference.md) - Complete command reference
- [Agent Documentation](../swarms/structs/agent.md) - Agent class reference
- [Environment Configuration](../swarms/install/env.md) - Environment setup guide

@ -0,0 +1,215 @@
# CLI Multi-Agent Features: 3-Step Quickstart Guide
Run LLM Council and Heavy Swarm directly from the command line for seamless DevOps integration. Execute sophisticated multi-agent workflows without writing Python code.
## Overview
| Feature | Description |
|---------|-------------|
| **LLM Council CLI** | Run collaborative decision-making from terminal |
| **Heavy Swarm CLI** | Execute comprehensive research swarms |
| **DevOps Ready** | Integrate into CI/CD pipelines and scripts |
| **Configurable** | Full parameter control from command line |
---
## Step 1: Install and Verify
Ensure Swarms is installed and verify CLI access:
```bash
# Install swarms
pip install swarms
# Verify CLI is available
swarms --help
```
You should see the Swarms CLI banner and available commands.
---
## Step 2: Set Environment Variables
Configure your API keys:
```bash
# Set your OpenAI API key (or other provider)
export OPENAI_API_KEY="your-openai-api-key"
# Optional: Set workspace directory
export WORKSPACE_DIR="./agent_workspace"
```
Or add to your `.env` file:
```
OPENAI_API_KEY=your-openai-api-key
WORKSPACE_DIR=./agent_workspace
```
---
## Step 3: Run Multi-Agent Commands
### LLM Council
Run a collaborative council of AI agents:
```bash
# Basic usage
swarms llm-council --task "What is the best approach to implement microservices architecture?"
# With verbose output
swarms llm-council --task "Evaluate investment opportunities in AI startups" --verbose
```
### Heavy Swarm
Run comprehensive research and analysis:
```bash
# Basic usage
swarms heavy-swarm --task "Analyze the current state of quantum computing"
# With configuration options
swarms heavy-swarm \
--task "Research renewable energy market trends" \
--loops-per-agent 2 \
--question-agent-model-name gpt-4o-mini \
--worker-model-name gpt-4o-mini \
--verbose
```
---
## Complete CLI Reference
### LLM Council Command
```bash
swarms llm-council --task "<your query>" [options]
```
| Option | Description |
|--------|-------------|
| `--task` | **Required.** The query or question for the council |
| `--verbose` | Enable detailed output logging |
**Examples:**
```bash
# Strategic decision
swarms llm-council --task "Should our startup pivot from B2B to B2C?"
# Technical evaluation
swarms llm-council --task "Compare React vs Vue for enterprise applications"
# Business analysis
swarms llm-council --task "What are the risks of expanding to European markets?"
```
---
### Heavy Swarm Command
```bash
swarms heavy-swarm --task "<your task>" [options]
```
| Option | Default | Description |
|--------|---------|-------------|
| `--task` | - | **Required.** The research task |
| `--loops-per-agent` | 1 | Number of loops per agent |
| `--question-agent-model-name` | gpt-4o-mini | Model for question agent |
| `--worker-model-name` | gpt-4o-mini | Model for worker agents |
| `--random-loops-per-agent` | False | Randomize loops per agent |
| `--verbose` | False | Enable detailed output |
**Examples:**
```bash
# Comprehensive research
swarms heavy-swarm --task "Research the impact of AI on healthcare diagnostics" --verbose
# With custom models
swarms heavy-swarm \
--task "Analyze cryptocurrency regulation trends globally" \
--question-agent-model-name gpt-4 \
--worker-model-name gpt-4 \
--loops-per-agent 3
# Quick analysis
swarms heavy-swarm --task "Summarize recent advances in battery technology"
```
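For the DevOps integration mentioned in the overview, the same commands can be driven from a script or pipeline step; a minimal sketch using only the Python standard library and the flags documented above (the task text and output path are placeholders):
```python
import subprocess

# Run the documented heavy-swarm command and capture its output
result = subprocess.run(
    [
        "swarms", "heavy-swarm",
        "--task", "Summarize this week's renewable energy headlines",
        "--loops-per-agent", "1",
        "--verbose",
    ],
    capture_output=True,
    text=True,
    check=True,  # fail the pipeline step if the command errors
)

# Persist the report for later pipeline stages
with open("heavy_swarm_report.txt", "w") as f:
    f.write(result.stdout)
```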
---
## Other Useful CLI Commands
### Setup Check
Verify your environment is properly configured:
```bash
swarms setup-check --verbose
```
### Run Single Agent
Execute a single agent task:
```bash
swarms agent \
--name "Research-Agent" \
--task "Summarize recent AI developments" \
--model "gpt-4o-mini" \
--max-loops 1
```
### Auto Swarm
Automatically generate and run a swarm configuration:
```bash
swarms autoswarm --task "Build a content analysis pipeline" --model gpt-4
```
### Show All Commands
Display all available CLI features:
```bash
swarms show-all
```
---
## Troubleshooting
### Common Issues
| Issue | Solution |
|-------|----------|
| "Command not found" | Ensure `pip install swarms` completed successfully |
| "API key not set" | Export `OPENAI_API_KEY` environment variable |
| "Task cannot be empty" | Always provide `--task` argument |
| Timeout errors | Check network connectivity and API rate limits |
### Debug Mode
Run with verbose output for debugging:
```bash
swarms llm-council --task "Your query" --verbose 2>&1 | tee debug.log
```
---
## Next Steps
- Explore [CLI Reference Documentation](../swarms/cli/cli_reference.md) for all commands
- See [CLI Examples](../swarms/cli/cli_examples.md) for more use cases
- Learn about [LLM Council](./llm_council_quickstart.md) Python API
- Try [Heavy Swarm Documentation](../swarms/structs/heavy_swarm.md) for advanced configuration

@ -0,0 +1,233 @@
# DebateWithJudge: 3-Step Quickstart Guide
The DebateWithJudge architecture enables structured debates between two agents (Pro and Con) with a Judge providing refined synthesis over multiple rounds. This creates progressively improved answers through iterative argumentation and evaluation.
## Overview
| Feature | Description |
|---------|-------------|
| **Pro Agent** | Argues in favor of a position with evidence and reasoning |
| **Con Agent** | Presents counter-arguments and identifies weaknesses |
| **Judge Agent** | Evaluates both sides and synthesizes the best elements |
| **Iterative Refinement** | Multiple rounds progressively improve the final answer |
```
Agent A (Pro)  ↔  Agent B (Con)
      │                 │
      ▼                 ▼
      Judge / Critic Agent
 Winner or synthesis → refined answer
```
---
## Step 1: Install and Import
Ensure you have Swarms installed and import the DebateWithJudge class:
```bash
pip install swarms
```
```python
from swarms import DebateWithJudge
```
---
## Step 2: Create the Debate System
Create a DebateWithJudge system using preset agents (the simplest approach):
```python
# Create debate system with preset optimized agents
debate = DebateWithJudge(
preset_agents=True, # Use built-in optimized agents
max_loops=3, # 3 rounds of debate
model_name="gpt-4o-mini",
verbose=True
)
```
---
## Step 3: Run the Debate
Execute the debate on a topic:
```python
# Define the debate topic
topic = "Should artificial intelligence be regulated by governments?"
# Run the debate
result = debate.run(task=topic)
# Print the refined answer
print(result)
# Or get just the final synthesis
final_answer = debate.get_final_answer()
print(final_answer)
```
---
## Complete Example
Here's a complete working example:
```python
from swarms import DebateWithJudge
# Step 1: Create the debate system with preset agents
debate_system = DebateWithJudge(
preset_agents=True,
max_loops=3,
model_name="gpt-4o-mini",
output_type="str-all-except-first",
verbose=True,
)
# Step 2: Define a complex topic
topic = (
"Should artificial intelligence be regulated by governments? "
"Discuss the balance between innovation and safety."
)
# Step 3: Run the debate and get refined answer
result = debate_system.run(task=topic)
print("=" * 60)
print("DEBATE RESULT:")
print("=" * 60)
print(result)
# Access conversation history for detailed analysis
history = debate_system.get_conversation_history()
print(f"\nTotal exchanges: {len(history)}")
```
---
## Custom Agents Example
Create specialized agents for domain-specific debates:
```python
from swarms import Agent, DebateWithJudge
# Create specialized Pro agent
pro_agent = Agent(
agent_name="Innovation-Advocate",
system_prompt=(
"You are a technology policy expert arguing for innovation and minimal regulation. "
"You present arguments focusing on economic growth, technological competitiveness, "
"and the risks of over-regulation stifling progress."
),
model_name="gpt-4o-mini",
max_loops=1,
)
# Create specialized Con agent
con_agent = Agent(
agent_name="Safety-Advocate",
system_prompt=(
"You are a technology policy expert arguing for strong AI safety regulations. "
"You present arguments focusing on public safety, ethical considerations, "
"and the need for government oversight of powerful technologies."
),
model_name="gpt-4o-mini",
max_loops=1,
)
# Create specialized Judge agent
judge_agent = Agent(
agent_name="Policy-Analyst",
system_prompt=(
"You are an impartial policy analyst evaluating technology regulation debates. "
"You synthesize the strongest arguments from both sides and provide "
"balanced, actionable policy recommendations."
),
model_name="gpt-4o-mini",
max_loops=1,
)
# Create debate system with custom agents
debate = DebateWithJudge(
agents=[pro_agent, con_agent, judge_agent], # Pass as list
max_loops=3,
verbose=True,
)
result = debate.run("Should AI-generated content require mandatory disclosure labels?")
```
---
## Batch Processing
Process multiple debate topics:
```python
from swarms import DebateWithJudge
debate = DebateWithJudge(preset_agents=True, max_loops=2)
# Multiple topics to debate
topics = [
"Should remote work become the standard for knowledge workers?",
"Is cryptocurrency a viable alternative to traditional banking?",
"Should social media platforms be held accountable for content moderation?",
]
# Process all topics
results = debate.batched_run(topics)
for topic, result in zip(topics, results):
    print(f"\nTopic: {topic}")
    print(f"Result: {result[:200]}...")
```
---
## Configuration Options
| Parameter | Default | Description |
|-----------|---------|-------------|
| `preset_agents` | `False` | Use built-in optimized agents |
| `max_loops` | `3` | Number of debate rounds |
| `model_name` | `"gpt-4o-mini"` | Model for preset agents |
| `output_type` | `"str-all-except-first"` | Output format |
| `verbose` | `True` | Enable detailed logging |
### Output Types
| Value | Description |
|-------|-------------|
| `"str-all-except-first"` | Formatted string, excluding initialization (default) |
| `"str"` | All messages as formatted string |
| `"dict"` | Messages as dictionary |
| `"list"` | Messages as list |
---
## Use Cases
| Domain | Example Topic |
|--------|---------------|
| **Policy** | "Should universal basic income be implemented?" |
| **Technology** | "Microservices vs. monolithic architecture for startups?" |
| **Business** | "Should companies prioritize growth or profitability?" |
| **Ethics** | "Is it ethical to use AI in hiring decisions?" |
| **Science** | "Should gene editing be allowed for non-medical purposes?" |
---
## Next Steps
- Explore [DebateWithJudge Reference](../swarms/structs/debate_with_judge.md) for complete API details
- See [Debate Examples](https://github.com/kyegomez/swarms/tree/master/examples/multi_agent/debate_examples) for more use cases
- Learn about [Orchestration Methods](../swarms/structs/orchestration_methods.md) for other debate architectures

@ -0,0 +1,327 @@
# GraphWorkflow with Rustworkx: 3-Step Quickstart Guide
GraphWorkflow provides a powerful workflow orchestration system that creates directed graphs of agents for complex multi-agent collaboration. The new **Rustworkx integration** delivers 5-10x faster performance for large-scale workflows.
## Overview
| Feature | Description |
|---------|-------------|
| **Directed Graph Structure** | Nodes are agents, edges define data flow |
| **Dual Backend Support** | NetworkX (compatibility) or Rustworkx (performance) |
| **Parallel Execution** | Multiple agents run simultaneously within layers |
| **Automatic Compilation** | Optimizes workflow structure for efficient execution |
| **5-10x Performance** | Rustworkx backend for high-throughput workflows |
---
## Step 1: Install and Import
Install Swarms and Rustworkx for high-performance workflows:
```bash
pip install swarms rustworkx
```
```python
from swarms import Agent, GraphWorkflow
```
---
## Step 2: Create the Workflow with Rustworkx Backend
Create agents and build a workflow using the high-performance Rustworkx backend:
```python
# Create specialized agents
research_agent = Agent(
agent_name="ResearchAgent",
model_name="gpt-4o-mini",
system_prompt="You are a research specialist. Gather and analyze information.",
max_loops=1
)
analysis_agent = Agent(
agent_name="AnalysisAgent",
model_name="gpt-4o-mini",
system_prompt="You are an analyst. Process research findings and extract insights.",
max_loops=1
)
# Create workflow with rustworkx backend for better performance
workflow = GraphWorkflow(
name="Research-Analysis-Pipeline",
backend="rustworkx", # Use rustworkx for 5-10x faster performance
verbose=True
)
# Add agents as nodes
workflow.add_node(research_agent)
workflow.add_node(analysis_agent)
# Connect agents with edges
workflow.add_edge("ResearchAgent", "AnalysisAgent")
```
---
## Step 3: Execute the Workflow
Run the workflow and get results:
```python
# Execute the workflow
results = workflow.run("What are the latest trends in renewable energy technology?")
# Print results
print(results)
```
---
## Complete Example
Here's a complete parallel processing workflow:
```python
from swarms import Agent, GraphWorkflow
# Step 1: Create specialized agents
data_collector = Agent(
agent_name="DataCollector",
model_name="gpt-4o-mini",
system_prompt="You collect and organize data from various sources.",
max_loops=1
)
technical_analyst = Agent(
agent_name="TechnicalAnalyst",
model_name="gpt-4o-mini",
system_prompt="You perform technical analysis on data.",
max_loops=1
)
market_analyst = Agent(
agent_name="MarketAnalyst",
model_name="gpt-4o-mini",
system_prompt="You analyze market trends and conditions.",
max_loops=1
)
synthesis_agent = Agent(
agent_name="SynthesisAgent",
model_name="gpt-4o-mini",
system_prompt="You synthesize insights from multiple analysts into a cohesive report.",
max_loops=1
)
# Step 2: Build workflow with rustworkx backend
workflow = GraphWorkflow(
name="Market-Analysis-Pipeline",
backend="rustworkx", # High-performance backend
verbose=True
)
# Add all agents
for agent in [data_collector, technical_analyst, market_analyst, synthesis_agent]:
    workflow.add_node(agent)
# Create fan-out pattern: data collector feeds both analysts
workflow.add_edges_from_source(
"DataCollector",
["TechnicalAnalyst", "MarketAnalyst"]
)
# Create fan-in pattern: both analysts feed synthesis agent
workflow.add_edges_to_target(
["TechnicalAnalyst", "MarketAnalyst"],
"SynthesisAgent"
)
# Step 3: Execute and get results
results = workflow.run("Analyze Bitcoin market trends for Q4 2024")
print("=" * 60)
print("WORKFLOW RESULTS:")
print("=" * 60)
print(results)
# Get compilation status
status = workflow.get_compilation_status()
print(f"\nLayers: {status['cached_layers_count']}")
print(f"Max workers: {status['max_workers']}")
```
---
## NetworkX vs Rustworkx Backend
| Graph Size | Recommended Backend | Performance |
|------------|-------------------|-------------|
| < 100 nodes | NetworkX | Minimal overhead |
| 100-1000 nodes | Either | Both perform well |
| 1000+ nodes | **Rustworkx** | 5-10x faster |
| 10k+ nodes | **Rustworkx** | Essential |
```python
# NetworkX backend (default, maximum compatibility)
workflow = GraphWorkflow(backend="networkx")
# Rustworkx backend (high performance)
workflow = GraphWorkflow(backend="rustworkx")
```
---
## Edge Patterns
### Fan-Out (One-to-Many)
```python
# One agent feeds multiple agents
workflow.add_edges_from_source(
"DataCollector",
["Analyst1", "Analyst2", "Analyst3"]
)
```
### Fan-In (Many-to-One)
```python
# Multiple agents feed one agent
workflow.add_edges_to_target(
["Analyst1", "Analyst2", "Analyst3"],
"SynthesisAgent"
)
```
### Parallel Chain (Many-to-Many)
```python
# Full mesh connection
workflow.add_parallel_chain(
["Source1", "Source2"],
["Target1", "Target2", "Target3"]
)
```
---
## Using from_spec for Quick Setup
Create workflows quickly with the `from_spec` class method:
```python
from swarms import Agent, GraphWorkflow
# Create agents
agent1 = Agent(agent_name="Researcher", model_name="gpt-4o-mini", max_loops=1)
agent2 = Agent(agent_name="Analyzer", model_name="gpt-4o-mini", max_loops=1)
agent3 = Agent(agent_name="Reporter", model_name="gpt-4o-mini", max_loops=1)
# Create workflow from specification
workflow = GraphWorkflow.from_spec(
agents=[agent1, agent2, agent3],
edges=[
("Researcher", "Analyzer"),
("Analyzer", "Reporter"),
],
task="Analyze climate change data",
backend="rustworkx" # Use high-performance backend
)
results = workflow.run()
```
---
## Visualization
Generate visual representations of your workflow:
```python
# Create visualization (requires graphviz)
output_file = workflow.visualize(
format="png",
view=True,
show_summary=True
)
print(f"Visualization saved to: {output_file}")
# Simple text visualization
text_viz = workflow.visualize_simple()
print(text_viz)
```
---
## Serialization
Save and load workflows:
```python
# Save workflow with conversation history
workflow.save_to_file(
"my_workflow.json",
include_conversation=True,
include_runtime_state=True
)
# Load workflow later
loaded_workflow = GraphWorkflow.load_from_file(
"my_workflow.json",
restore_runtime_state=True
)
# Continue execution
results = loaded_workflow.run("Follow-up analysis")
```
---
## Large-Scale Example with Rustworkx
```python
from swarms import Agent, GraphWorkflow
# Create workflow for large-scale processing
workflow = GraphWorkflow(
name="Large-Scale-Pipeline",
backend="rustworkx", # Essential for large graphs
verbose=True
)
# Create many processing agents
processors = []
for i in range(50):
    agent = Agent(
        agent_name=f"Processor{i}",
        model_name="gpt-4o-mini",
        max_loops=1
    )
    processors.append(agent)
    workflow.add_node(agent)
# Create layered connections
for i in range(0, 40, 10):
    sources = [f"Processor{j}" for j in range(i, i+10)]
    targets = [f"Processor{j}" for j in range(i+10, min(i+20, 50))]
    if targets:
        workflow.add_parallel_chain(sources, targets)
# Compile and execute
workflow.compile()
status = workflow.get_compilation_status()
print(f"Compiled: {status['cached_layers_count']} layers")
results = workflow.run("Process dataset in parallel")
```
---
## Next Steps
- Explore [GraphWorkflow Reference](../swarms/structs/graph_workflow.md) for complete API details
- See [Multi-Agentic Patterns with GraphWorkflow](./graphworkflow_rustworkx_patterns.md) for advanced patterns
- Learn about [Visualization Options](../swarms/structs/graph_workflow.md#visualization-methods) for debugging workflows

@ -0,0 +1,112 @@
# LLM Council Examples
This page provides examples demonstrating the LLM Council pattern, inspired by Andrej Karpathy's llm-council implementation. The LLM Council uses multiple specialized AI agents that:
1. Each respond independently to queries
2. Review and rank each other's anonymized responses
3. Have a Chairman synthesize all responses into a final comprehensive answer
## Example Files
All LLM Council examples are located in the [`examples/multi_agent/llm_council_examples/`](https://github.com/kyegomez/swarms/tree/master/examples/multi_agent/llm_council_examples) directory.
### Marketing & Business
- **[marketing_strategy_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/marketing_strategy_council.py)** - Marketing strategy analysis and recommendations
- **[business_strategy_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/business_strategy_council.py)** - Comprehensive business strategy development
### Finance & Investment
- **[finance_analysis_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/finance_analysis_council.py)** - Financial analysis and investment recommendations
- **[etf_stock_analysis_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/etf_stock_analysis_council.py)** - ETF and stock analysis with portfolio recommendations
### Medical & Healthcare
- **[medical_treatment_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/medical_treatment_council.py)** - Medical treatment recommendations and care plans
- **[medical_diagnosis_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/medical_diagnosis_council.py)** - Diagnostic analysis based on symptoms
### Technology & Research
- **[technology_assessment_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/technology_assessment_council.py)** - Technology evaluation and implementation strategy
- **[research_analysis_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/research_analysis_council.py)** - Comprehensive research analysis on complex topics
### Legal
- **[legal_analysis_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/legal_analysis_council.py)** - Legal implications and compliance analysis
## Basic Usage Pattern
All examples follow the same pattern:
```python
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True)
# Run a query
result = council.run("Your query here")
# Access results
print(result["final_response"]) # Chairman's synthesized answer
print(result["original_responses"]) # Individual member responses
print(result["evaluations"]) # How members ranked each other
```
## Running Examples
Run any example directly:
```bash
python examples/multi_agent/llm_council_examples/marketing_strategy_council.py
python examples/multi_agent/llm_council_examples/finance_analysis_council.py
python examples/multi_agent/llm_council_examples/medical_diagnosis_council.py
```
## Key Features
| Feature | Description |
|----------------------|---------------------------------------------------------------------------------------------------------|
| **Multiple Perspectives** | Each council member (GPT-5.1, Gemini, Claude, Grok) provides unique insights |
| **Peer Review** | Members evaluate and rank each other's responses anonymously |
| **Synthesis** | Chairman combines the best elements from all responses |
| **Transparency** | See both individual responses and evaluation rankings |
## Council Members
The default council consists of:
| Council Member | Description |
|-------------------------------|-------------------------------|
| **GPT-5.1-Councilor** | Analytical and comprehensive |
| **Gemini-3-Pro-Councilor** | Concise and well-structured |
| **Claude-Sonnet-4.5-Councilor** | Thoughtful and balanced |
| **Grok-4-Councilor** | Creative and innovative |
## Customization
You can create custom council members:
```python
from swarms import Agent
from swarms.structs.llm_council import LLMCouncil, get_gpt_councilor_prompt
custom_agent = Agent(
agent_name="Custom-Councilor",
system_prompt=get_gpt_councilor_prompt(),
model_name="gpt-4.1",
max_loops=1,
)
council = LLMCouncil(
council_members=[custom_agent, ...],
chairman_model="gpt-5.1",
verbose=True
)
```
## Documentation
For complete API reference and detailed documentation, see the [LLM Council Reference Documentation](../swarms/structs/llm_council.md).

@ -0,0 +1,170 @@
# LLM Council: 3-Step Quickstart Guide
The LLM Council enables collaborative decision-making with multiple AI agents through peer review and synthesis. Inspired by Andrej Karpathy's llm-council, it creates a council of specialized agents that respond independently, review each other's anonymized responses, and have a Chairman synthesize the best elements into a final answer.
## Overview
| Feature | Description |
|---------|-------------|
| **Multiple Perspectives** | Each council member provides unique insights from different viewpoints |
| **Peer Review** | Members evaluate and rank each other's responses anonymously |
| **Synthesis** | Chairman combines the best elements from all responses |
| **Transparency** | See both individual responses and evaluation rankings |
---
## Step 1: Install and Import
First, ensure you have Swarms installed and import the LLMCouncil class:
```bash
pip install swarms
```
```python
from swarms.structs.llm_council import LLMCouncil
```
---
## Step 2: Create the Council
Create an LLM Council with default council members (GPT-5.1, Gemini 3 Pro, Claude Sonnet 4.5, and Grok-4):
```python
# Create the council with default members
council = LLMCouncil(
name="Decision Council",
verbose=True,
output_type="dict-all-except-first"
)
```
---
## Step 3: Run a Query
Execute a query and get the synthesized response:
```python
# Run a query
result = council.run("What are the key factors to consider when choosing a cloud provider for enterprise applications?")
# Access the final synthesized answer
print(result["final_response"])
# View individual member responses
print(result["original_responses"])
# See how members ranked each other
print(result["evaluations"])
```
---
## Complete Example
Here's a complete working example:
```python
from swarms.structs.llm_council import LLMCouncil
# Step 1: Create the council
council = LLMCouncil(
name="Strategy Council",
description="A council for strategic decision-making",
verbose=True,
output_type="dict-all-except-first"
)
# Step 2: Run a strategic query
result = council.run(
"Should a B2B SaaS startup prioritize product-led growth or sales-led growth? "
"Consider factors like market size, customer acquisition costs, and scalability."
)
# Step 3: Process results
print("=" * 50)
print("FINAL SYNTHESIZED ANSWER:")
print("=" * 50)
print(result["final_response"])
```
---
## Custom Council Members
For specialized domains, create custom council members:
```python
from swarms import Agent
from swarms.structs.llm_council import LLMCouncil, get_gpt_councilor_prompt
# Create specialized agents
finance_expert = Agent(
agent_name="Finance-Councilor",
system_prompt="You are a financial analyst specializing in market analysis and investment strategies...",
model_name="gpt-4.1",
max_loops=1,
)
tech_expert = Agent(
agent_name="Technology-Councilor",
system_prompt="You are a technology strategist specializing in digital transformation...",
model_name="gpt-4.1",
max_loops=1,
)
risk_expert = Agent(
agent_name="Risk-Councilor",
system_prompt="You are a risk management expert specializing in enterprise risk assessment...",
model_name="gpt-4.1",
max_loops=1,
)
# Create council with custom members
council = LLMCouncil(
council_members=[finance_expert, tech_expert, risk_expert],
chairman_model="gpt-4.1",
verbose=True
)
result = council.run("Evaluate the risk-reward profile of investing in AI infrastructure")
```
---
## CLI Usage
Run LLM Council directly from the command line:
```bash
swarms llm-council --task "What is the best approach to implement microservices architecture?"
```
With verbose output:
```bash
swarms llm-council --task "Analyze the pros and cons of remote work" --verbose
```
---
## Use Cases
| Domain | Example Query |
|--------|---------------|
| **Business Strategy** | "Should we expand internationally or focus on domestic growth?" |
| **Technology** | "Which database architecture best suits our high-throughput requirements?" |
| **Finance** | "Evaluate investment opportunities in the renewable energy sector" |
| **Healthcare** | "What treatment approaches should be considered for this patient profile?" |
| **Legal** | "What are the compliance implications of this data processing policy?" |
---
## Next Steps
- Explore [LLM Council Examples](./llm_council_examples.md) for domain-specific implementations
- Learn about [LLM Council Reference Documentation](../swarms/structs/llm_council.md) for complete API details
- Try the [CLI Reference](../swarms/cli/cli_reference.md) for DevOps integration

@ -0,0 +1,252 @@
# Agent Marketplace Publishing: 3-Step Quickstart Guide
Publish your agents directly to the Swarms Marketplace with minimal configuration. Share your specialized agents with the community and monetize your creations.
## Overview
| Feature | Description |
|---------|-------------|
| **Direct Publishing** | Publish agents with a single flag |
| **Minimal Configuration** | Just add use cases, tags, and capabilities |
| **Automatic Integration** | Seamlessly integrates with marketplace API |
| **Monetization Ready** | Set pricing for your agents |
---
## Step 1: Get Your API Key
Before publishing, you need a Swarms API key:
1. Visit [swarms.world/platform/api-keys](https://swarms.world/platform/api-keys)
2. Create an account or sign in
3. Generate an API key
4. Set the environment variable:
```bash
export SWARMS_API_KEY="your-api-key-here"
```
Or add to your `.env` file:
```
SWARMS_API_KEY=your-api-key-here
```
---
## Step 2: Configure Your Agent
Create an agent with publishing configuration:
```python
from swarms import Agent
# Create your specialized agent
my_agent = Agent(
agent_name="Market-Analysis-Agent",
agent_description="Expert market analyst specializing in cryptocurrency and stock analysis",
model_name="gpt-4o-mini",
system_prompt="""You are an expert market analyst specializing in:
- Cryptocurrency market analysis
- Stock market trends
- Risk assessment
- Portfolio recommendations
Provide data-driven insights with confidence levels.""",
max_loops=1,
# Publishing configuration
publish_to_marketplace=True,
# Required: Define use cases
use_cases=[
{
"title": "Cryptocurrency Analysis",
"description": "Analyze crypto market trends and provide investment insights"
},
{
"title": "Stock Screening",
"description": "Screen stocks based on technical and fundamental criteria"
},
{
"title": "Portfolio Review",
"description": "Review and optimize investment portfolios"
}
],
)
```
---
## Step 3: Run to Publish
Simply run the agent to trigger publishing:
```python
# Running the agent automatically publishes it
result = my_agent.run("Analyze Bitcoin's current market position")
print(result)
print("\n✅ Agent published to marketplace!")
```
---
## Complete Example
Here's a complete working example:
```python
import os
from swarms import Agent
# Ensure API key is set
if not os.getenv("SWARMS_API_KEY"):
    raise ValueError("Please set SWARMS_API_KEY environment variable")
# Step 1: Create a specialized medical analysis agent
medical_agent = Agent(
agent_name="Blood-Data-Analysis-Agent",
agent_description="Explains and contextualizes common blood test panels with structured insights",
model_name="gpt-4o-mini",
max_loops=1,
system_prompt="""You are a clinical laboratory data analyst assistant focused on hematology and basic metabolic panels.
Your goals:
1) Interpret common blood test panels (CBC, CMP/BMP, lipid panel, HbA1c, thyroid panels)
2) Provide structured findings: out-of-range markers, degree of deviation, clinical significance
3) Identify potential confounders (e.g., hemolysis, fasting status, medications)
4) Suggest safe, non-diagnostic next steps
Reliability and safety:
- This is not medical advice. Do not diagnose or treat.
- Use cautious language with confidence levels (low/medium/high)
- Highlight red-flag combinations that warrant urgent clinical evaluation""",
# Step 2: Publishing configuration
publish_to_marketplace=True,
tags=["lab", "hematology", "metabolic", "education"],
capabilities=[
"panel-interpretation",
"risk-flagging",
"guideline-citation"
],
use_cases=[
{
"title": "Blood Analysis",
"description": "Analyze blood samples and summarize notable findings."
},
{
"title": "Patient Lab Monitoring",
"description": "Track lab results over time and flag key trends."
},
{
"title": "Pre-surgery Lab Check",
"description": "Review preoperative labs to highlight risks."
}
],
)
# Step 3: Run the agent (this publishes it to the marketplace)
result = medical_agent.run(
task="Analyze this blood sample: Hematology and Basic Metabolic Panel"
)
print(result)
```
---
## Required Fields for Publishing
| Field | Type | Description |
|-------|------|-------------|
| `publish_to_marketplace` | `bool` | Set to `True` to enable publishing |
| `use_cases` | `List[Dict]` | List of use case dictionaries with `title` and `description` |
### Use Case Format
```python
use_cases = [
{
"title": "Use Case Title",
"description": "Detailed description of what the agent does for this use case"
},
# Add more use cases...
]
```
---
## Optional: Programmatic Publishing
You can also publish prompts/agents directly using the utility function:
```python
from swarms.utils.swarms_marketplace_utils import add_prompt_to_marketplace
response = add_prompt_to_marketplace(
name="My Custom Agent",
prompt="Your detailed system prompt here...",
description="What this agent does",
use_cases=[
{"title": "Use Case 1", "description": "Description 1"},
{"title": "Use Case 2", "description": "Description 2"}
],
tags="tag1, tag2, tag3",
category="research",
is_free=True, # Set to False for paid agents
price_usd=0.0 # Set price if not free
)
print(response)
```
---
## Marketplace Categories
| Category | Description |
|----------|-------------|
| `research` | Research and analysis agents |
| `content` | Content generation agents |
| `coding` | Programming and development agents |
| `finance` | Financial analysis agents |
| `healthcare` | Medical and health-related agents |
| `education` | Educational and tutoring agents |
| `legal` | Legal research and analysis agents |
---
## Best Practices
!!! tip "Publishing Best Practices"

    - **Clear Descriptions**: Write detailed, accurate agent descriptions
    - **Multiple Use Cases**: Provide 3-5 distinct use cases
    - **Relevant Tags**: Use specific, searchable keywords
    - **Test First**: Thoroughly test your agent before publishing
    - **System Prompt Quality**: Ensure your system prompt is well-crafted

!!! warning "Important Notes"

    - `use_cases` is **required** when `publish_to_marketplace=True`
    - Both `tags` and `capabilities` should be provided for discoverability
    - The agent must have a valid `SWARMS_API_KEY` set in the environment
---
## Next Steps
| Next Step | Description |
|-----------|-------------|
| [Swarms Marketplace](https://swarms.world) | Browse published agents |
| [Marketplace Documentation](../swarms_platform/share_and_discover.md) | Learn how to publish and discover agents |
| [Monetization Options](../swarms_platform/monetize.md) | Explore ways to monetize your agent |
| [API Key Management](../swarms_platform/apikeys.md) | Manage your API keys for publishing and access |

@ -0,0 +1,69 @@
# Multi-Agent Architectures Overview
Build sophisticated multi-agent systems with Swarms' advanced orchestration patterns. From hierarchical teams to collaborative councils, these examples demonstrate how to coordinate multiple AI agents for complex tasks.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **Hierarchical Swarms** | Director agents coordinating worker agents |
| **Collaborative Systems** | Agents working together through debate and consensus |
| **Workflow Patterns** | Sequential, concurrent, and graph-based execution |
| **Routing Systems** | Intelligent task routing to specialized agents |
| **Group Interactions** | Multi-agent conversations and discussions |
---
## Architecture Examples
### Hierarchical & Orchestration
| Example | Description | Link |
|---------|-------------|------|
| **HierarchicalSwarm** | Multi-level agent organization with director and workers | [View Example](../swarms/examples/hierarchical_swarm_example.md) |
| **Hybrid Hierarchical-Cluster Swarm** | Combined hierarchical and cluster patterns | [View Example](../swarms/examples/hhcs_examples.md) |
| **SwarmRouter** | Intelligent routing of tasks to appropriate swarms | [View Example](../swarms/examples/swarm_router.md) |
| **MultiAgentRouter** | Route tasks to specialized individual agents | [View Example](../swarms/examples/multi_agent_router_minimal.md) |
### Collaborative & Consensus
| Example | Description | Link |
|---------|-------------|------|
| **LLM Council Quickstart** | Collaborative decision-making with peer review and synthesis | [View Example](./llm_council_quickstart.md) |
| **LLM Council Examples** | Domain-specific council implementations | [View Examples](./llm_council_examples.md) |
| **DebateWithJudge Quickstart** | Two agents debate with judge providing synthesis | [View Example](./debate_quickstart.md) |
| **Mixture of Agents** | Heterogeneous agents for diverse task handling | [View Example](../swarms/examples/moa_example.md) |
### Workflow Patterns
| Example | Description | Link |
|---------|-------------|------|
| **GraphWorkflow with Rustworkx** | High-performance graph-based workflows (5-10x faster) | [View Example](./graphworkflow_quickstart.md) |
| **Multi-Agentic Patterns with GraphWorkflow** | Advanced graph workflow patterns | [View Example](../swarms/examples/graphworkflow_rustworkx_patterns.md) |
| **SequentialWorkflow** | Linear agent pipelines | [View Example](../swarms/examples/sequential_example.md) |
| **ConcurrentWorkflow** | Parallel agent execution | [View Example](../swarms/examples/concurrent_workflow.md) |
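To make the workflow patterns above concrete, here is a minimal sequential pipeline sketch; it assumes the `SequentialWorkflow` constructor accepts a list of agents as shown in the linked example, and the agent roles and task are placeholders:
```python
from swarms import Agent, SequentialWorkflow

# Two-stage pipeline: the researcher's output feeds the writer
researcher = Agent(agent_name="Researcher", model_name="gpt-4o-mini", max_loops=1)
writer = Agent(agent_name="Writer", model_name="gpt-4o-mini", max_loops=1)

workflow = SequentialWorkflow(agents=[researcher, writer], max_loops=1)
print(workflow.run("Draft a short brief on battery storage trends."))
```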
### Group Communication
| Example | Description | Link |
|---------|-------------|------|
| **Group Chat** | Multi-agent group conversations | [View Example](../swarms/examples/groupchat_example.md) |
| **Interactive GroupChat** | Real-time interactive agent discussions | [View Example](../swarms/examples/igc_example.md) |
### Specialized Patterns
| Example | Description | Link |
|---------|-------------|------|
| **Agents as Tools** | Use agents as callable tools for other agents | [View Example](../swarms/examples/agents_as_tools.md) |
| **Aggregate Responses** | Combine outputs from multiple agents | [View Example](../swarms/examples/aggregate.md) |
| **Unique Swarms** | Experimental and specialized swarm patterns | [View Example](../swarms/examples/unique_swarms.md) |
| **BatchedGridWorkflow (Simple)** | Grid-based batch processing | [View Example](../swarms/examples/batched_grid_simple_example.md) |
| **BatchedGridWorkflow (Advanced)** | Advanced grid-based batch processing | [View Example](../swarms/examples/batched_grid_advanced_example.md) |
---
## Related Resources
- [Swarm Architectures Concept Guide](../swarms/concept/swarm_architectures.md)
- [Choosing Multi-Agent Architecture](../swarms/concept/how_to_choose_swarms.md)
- [Custom Swarm Development](../swarms/structs/custom_swarm.md)

@ -0,0 +1,39 @@
# RAG Examples Overview
Enhance your agents with Retrieval-Augmented Generation (RAG). Connect to vector databases and knowledge bases to give agents access to your custom data.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **RAG Fundamentals** | Understanding retrieval-augmented generation |
| **Vector Databases** | Connecting to Qdrant, Pinecone, and more |
| **Document Processing** | Ingesting and indexing documents |
| **Semantic Search** | Finding relevant context for queries |
---
## RAG Examples
| Example | Description | Vector DB | Link |
|---------|-------------|-----------|------|
| **RAG with Qdrant** | Complete RAG implementation with Qdrant | Qdrant | [View Example](../swarms/RAG/qdrant_rag.md) |
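Before diving into the full Qdrant example, here is a minimal retrieve-then-generate sketch; the retriever below is a hard-coded placeholder standing in for a real vector-database query:
```python
from swarms import Agent

def retrieve_context(query: str) -> str:
    """Placeholder retriever: swap in a vector-store lookup (e.g. Qdrant)."""
    documents = [
        "Policy doc: refunds are processed within 14 days.",
        "FAQ: premium support is included in the enterprise plan.",
    ]
    return "\n".join(documents)

agent = Agent(
    agent_name="Docs-QA-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
)

question = "How long do refunds take?"
context = retrieve_context(question)

# Retrieval-augmented generation: pass the retrieved passages in with the task
print(agent.run(task=f"Answer using only this context:\n{context}\n\nQuestion: {question}"))
```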
---
## Use Cases
| Use Case | Description |
|----------|-------------|
| **Document Q&A** | Answer questions about your documents |
| **Knowledge Base** | Query internal company knowledge |
| **Research Assistant** | Search through research papers |
| **Code Documentation** | Query codebase documentation |
| **Customer Support** | Access product knowledge |
---
## Related Resources
- [Memory Documentation](../swarms/memory/diy_memory.md) - Building custom memory
- [Agent Long-term Memory](../swarms/structs/agent.md#long-term-memory) - Agent memory configuration

@ -0,0 +1,55 @@
# Tools & Integrations Overview
Extend your agents with powerful integrations. Connect to web search, browser automation, financial data, and Model Context Protocol (MCP) servers.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **Web Search** | Integrate real-time web search capabilities |
| **Browser Automation** | Control web browsers programmatically |
| **Financial Data** | Access stock and market information |
| **Web Scraping** | Extract data from websites |
| **MCP Integration** | Connect to Model Context Protocol servers |
---
## Integration Examples
### Web Search
| Integration | Description | Link |
|-------------|-------------|------|
| **Exa Search** | AI-powered web search for agents | [View Example](./exa_search.md) |
### Browser Automation
| Integration | Description | Link |
|-------------|-------------|------|
| **Browser Use** | Automated browser control with agents | [View Example](./browser_use.md) |
### Financial Data
| Integration | Description | Link |
|-------------|-------------|------|
| **Yahoo Finance** | Stock data, quotes, and market info | [View Example](../swarms/examples/yahoo_finance.md) |
### Web Scraping
| Integration | Description | Link |
|-------------|-------------|------|
| **Firecrawl** | AI-powered web scraping | [View Example](../developer_guides/firecrawl.md) |
### MCP (Model Context Protocol)
| Integration | Description | Link |
|-------------|-------------|------|
| **Multi-MCP Agent** | Connect agents to multiple MCP servers | [View Example](../swarms/examples/multi_mcp_agent.md) |
---
## Related Resources
- [Tools Documentation](../swarms/tools/main.md) - Building custom tools
- [MCP Integration Guide](../swarms/structs/agent_mcp.md) - Detailed MCP setup
- [swarms-tools Package](../swarms_tools/overview.md) - Pre-built tool collection

@ -24130,32 +24130,6 @@ flowchart LR
- Maintains strict ordering of task processing
### Linear Swarm
```python
def linear_swarm(agents: AgentListType, tasks: List[str], return_full_history: bool = True)
```
**Information Flow:**
```mermaid
flowchart LR
Input[Task Input] --> A1
subgraph Sequential Processing
A1((Agent 1)) --> A2((Agent 2))
A2 --> A3((Agent 3))
A3 --> A4((Agent 4))
A4 --> A5((Agent 5))
end
A5 --> Output[Final Result]
```
**Best Used When:**
- Tasks need sequential, pipeline-style processing
- Each agent performs a specific transformation step
- Order of processing is critical
### Star Swarm
```python
def star_swarm(agents: AgentListType, tasks: List[str], return_full_history: bool = True)
@ -24389,7 +24363,6 @@ flowchart TD
## Common Use Cases
1. **Data Processing Pipelines**
- Linear Swarm
- Circular Swarm
2. **Distributed Computing**
@ -24420,7 +24393,6 @@ from swarms.structs.swarming_architectures import (
exponential_swarm,
fibonacci_swarm,
grid_swarm,
linear_swarm,
mesh_swarm,
one_to_three,
prime_swarm,
@ -24528,29 +24500,6 @@ def run_healthcare_grid_swarm():
print("\nGrid swarm processing completed")
print(result)
def run_finance_linear_swarm():
"""Loan approval process using linear swarm"""
print_separator()
print("FINANCE - LOAN APPROVAL PROCESS (Linear Swarm)")
agents = create_finance_agents()[:3]
tasks = [
"Review loan application and credit history",
"Assess risk factors and compliance requirements",
"Generate final loan recommendation"
]
print("\nTasks:")
for i, task in enumerate(tasks, 1):
print(f"{i}. {task}")
result = linear_swarm(agents, tasks)
print("\nResults:")
for log in result['history']:
print(f"\n{log['agent_name']}:")
print(f"Task: {log['task']}")
print(f"Response: {log['response']}")
def run_healthcare_star_swarm():
"""Complex medical case management using star swarm"""
print_separator()
@ -24684,7 +24633,6 @@ async def run_all_examples():
# Finance examples
run_finance_circular_swarm()
run_finance_linear_swarm()
run_finance_mesh_swarm()
run_mathematical_finance_swarms()

@ -281,6 +281,7 @@ nav:
- MALT: "swarms/structs/malt.md"
- Multi-Agent Execution Utilities: "swarms/structs/various_execution_methods.md"
- Council of Judges: "swarms/structs/council_of_judges.md"
- LLM Council: "swarms/structs/llm_council.md"
- Heavy Swarm: "swarms/structs/heavy_swarm.md"
- Social Algorithms: "swarms/structs/social_algorithms.md"
@ -355,9 +356,19 @@ nav:
- Paper Implementations: "examples/paper_implementations.md"
- Templates & Applications: "examples/templates.md"
- Community Resources: "examples/community_resources.md"
- CLI Guides:
- Overview: "examples/cli_guides_overview.md"
- CLI Quickstart: "swarms/cli/cli_quickstart.md"
- Creating Agents from CLI: "swarms/cli/cli_agent_guide.md"
- YAML Configuration: "swarms/cli/cli_yaml_guide.md"
- LLM Council CLI: "swarms/cli/cli_llm_council_guide.md"
- Heavy Swarm CLI: "swarms/cli/cli_heavy_swarm_guide.md"
- CLI Multi-Agent Commands: "examples/cli_multi_agent_quickstart.md"
- CLI Examples: "swarms/cli/cli_examples.md"
- Basic Examples:
- Overview: "examples/basic_examples_overview.md"
- Individual Agents:
- Basic Agent: "swarms/examples/basic_agent.md"
- Tool Usage:
@ -373,6 +384,7 @@ nav:
- Agent Output Types: "swarms/examples/agent_output_types.md"
- Gradio Chat Interface: "swarms/ui/main.md"
- Agent with Gemini Nano Banana: "swarms/examples/jarvis_agent.md"
- Agent Marketplace Publishing: "examples/marketplace_publishing_quickstart.md"
- LLM Providers:
- Language Models:
- Overview: "swarms/examples/model_providers.md"
@ -390,7 +402,9 @@ nav:
- Advanced Examples:
- Overview: "examples/multi_agent_architectures_overview.md"
- Multi-Agent Architectures:
- HierarchicalSwarm Examples: "swarms/examples/hierarchical_swarm_example.md"
- Hybrid Hierarchical-Cluster Swarm Example: "swarms/examples/hhcs_examples.md"
@ -399,15 +413,22 @@ nav:
- SwarmRouter Example: "swarms/examples/swarm_router.md"
- MultiAgentRouter Minimal Example: "swarms/examples/multi_agent_router_minimal.md"
- ConcurrentWorkflow Example: "swarms/examples/concurrent_workflow.md"
- Multi-Agentic Patterns with GraphWorkflow: "swarms/examples/graphworkflow_rustworkx_patterns.md"
- Mixture of Agents Example: "swarms/examples/moa_example.md"
- LLM Council Examples: "examples/llm_council_examples.md"
- Unique Swarms: "swarms/examples/unique_swarms.md"
- Agents as Tools: "swarms/examples/agents_as_tools.md"
- Aggregate Multi-Agent Responses: "swarms/examples/aggregate.md"
- Interactive GroupChat Example: "swarms/examples/igc_example.md"
- LLM Council Quickstart: "examples/llm_council_quickstart.md"
- DebateWithJudge Quickstart: "examples/debate_quickstart.md"
- GraphWorkflow with Rustworkx: "examples/graphworkflow_quickstart.md"
- BatchedGridWorkflow Examples:
- Simple BatchedGridWorkflow: "swarms/examples/batched_grid_simple_example.md"
- Advanced BatchedGridWorkflow: "swarms/examples/batched_grid_advanced_example.md"
- Applications:
- Overview: "examples/applications_overview.md"
- Swarms of Browser Agents: "swarms/examples/swarms_of_browser_agents.md"
- Hierarchical Marketing Team: "examples/marketing_team.md"
- Gold ETF Research with HeavySwarm: "examples/gold_etf_research.md"
@ -418,6 +439,7 @@ nav:
- Mergers & Acquisitions (M&A) Advisory Swarm: "examples/ma_swarm.md"
- Tools & Integrations:
- Overview: "examples/tools_integrations_overview.md"
- Web Search with Exa: "examples/exa_search.md"
- Browser Use: "examples/browser_use.md"
- Yahoo Finance: "swarms/examples/yahoo_finance.md"
@ -427,13 +449,16 @@ nav:
- Multi-MCP Agent Integration: "swarms/examples/multi_mcp_agent.md"
- RAG:
- Overview: "examples/rag_examples_overview.md"
- RAG with Qdrant: "swarms/RAG/qdrant_rag.md"
- Apps:
- Overview: "examples/apps_examples_overview.md"
- Web Scraper Agents: "developer_guides/web_scraper.md"
- Smart Database: "examples/smart_database.md"
- AOP:
- Overview: "examples/aop_examples_overview.md"
- Medical AOP Example: "examples/aop_medical.md"
- X402:

@ -27,7 +27,7 @@ jinja2~=3.1
markdown~=3.10
mkdocs-material-extensions~=1.3
pygments~=2.19
pymdown-extensions~=10.16
pymdown-extensions~=10.18
# Requirements for plugins
colorama~=0.4

@ -0,0 +1,242 @@
# CLI Agent Guide: Create Agents from Command Line
Create, configure, and run AI agents directly from your terminal without writing Python code.
## Basic Agent Creation
### Step 1: Define Your Agent
Create an agent with required parameters:
```bash
swarms agent \
--name "Research-Agent" \
--description "An AI agent that researches topics and provides summaries" \
--system-prompt "You are an expert researcher. Provide comprehensive, well-structured summaries with key insights." \
--task "Research the current state of quantum computing and its applications"
```
### Step 2: Customize Model Settings
Add model configuration options:
```bash
swarms agent \
--name "Code-Reviewer" \
--description "Expert code review assistant" \
--system-prompt "You are a senior software engineer. Review code for best practices, bugs, and improvements." \
--task "Review this Python function for efficiency: def fib(n): return fib(n-1) + fib(n-2) if n > 1 else n" \
--model-name "gpt-4o-mini" \
--temperature 0.1 \
--max-loops 3
```
### Step 3: Enable Advanced Features
Add streaming, dashboard, and autosave:
```bash
swarms agent \
--name "Analysis-Agent" \
--description "Data analysis specialist" \
--system-prompt "You are a data analyst. Provide detailed statistical analysis and insights." \
--task "Analyze market trends for electric vehicles in 2024" \
--model-name "gpt-4" \
--streaming-on \
--verbose \
--autosave \
--saved-state-path "./agent_states/analysis_agent.json"
```
---
## Complete Parameter Reference
### Required Parameters
| Parameter | Description | Example |
|-----------|-------------|---------|
| `--name` | Agent name | `"Research-Agent"` |
| `--description` | Agent description | `"AI research assistant"` |
| `--system-prompt` | Agent's system instructions | `"You are an expert..."` |
| `--task` | Task for the agent | `"Analyze this data"` |
### Model Parameters
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--model-name` | `"gpt-4"` | LLM model to use |
| `--temperature` | `None` | Creativity (0.0-2.0) |
| `--max-loops` | `None` | Maximum execution loops |
| `--context-length` | `None` | Context window size |
### Behavior Parameters
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--auto-generate-prompt` | `False` | Auto-generate prompts |
| `--dynamic-temperature-enabled` | `False` | Dynamic temperature adjustment |
| `--dynamic-context-window` | `False` | Dynamic context window |
| `--streaming-on` | `False` | Enable streaming output |
| `--verbose` | `False` | Verbose mode |
### State Management
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--autosave` | `False` | Enable autosave |
| `--saved-state-path` | `None` | Path to save state |
| `--dashboard` | `False` | Enable dashboard |
| `--return-step-meta` | `False` | Return step metadata |
### Integration
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--mcp-url` | `None` | MCP server URL |
| `--user-name` | `None` | Username for agent |
| `--output-type` | `None` | Output format (str, json) |
| `--retry-attempts` | `None` | Retry attempts on failure |
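For example, the integration flags can point an agent at an MCP server and request structured output (the server URL below is a placeholder):
```bash
swarms agent \
  --name "MCP-Agent" \
  --description "Agent connected to an MCP tool server" \
  --system-prompt "You are an assistant that uses the available MCP tools." \
  --task "List the tools you have access to and summarize what they do" \
  --model-name "gpt-4o-mini" \
  --mcp-url "http://localhost:8000/mcp" \
  --output-type "json"
```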
---
## Use Case Examples
### Financial Analyst Agent
```bash
swarms agent \
--name "Financial-Analyst" \
--description "Expert financial analysis and market insights" \
--system-prompt "You are a CFA-certified financial analyst. Provide detailed market analysis with data-driven insights. Include risk assessments and recommendations." \
--task "Analyze Apple (AAPL) stock performance and provide investment outlook for Q4 2024" \
--model-name "gpt-4" \
--temperature 0.2 \
--max-loops 5 \
--verbose
```
### Code Generation Agent
```bash
swarms agent \
--name "Code-Generator" \
--description "Expert Python developer and code generator" \
--system-prompt "You are an expert Python developer. Write clean, efficient, well-documented code following PEP 8 guidelines. Include type hints and docstrings." \
--task "Create a Python class for managing a task queue with priority scheduling" \
--model-name "gpt-4" \
--temperature 0.1 \
--streaming-on
```
### Creative Writing Agent
```bash
swarms agent \
--name "Creative-Writer" \
--description "Professional content writer and storyteller" \
--system-prompt "You are a professional writer with expertise in engaging content. Write compelling, creative content with strong narrative flow." \
--task "Write a short story about a scientist who discovers time travel" \
--model-name "gpt-4" \
--temperature 0.8 \
--max-loops 2
```
### Research Summarizer Agent
```bash
swarms agent \
--name "Research-Summarizer" \
--description "Academic research summarization specialist" \
--system-prompt "You are an academic researcher. Summarize research topics with key findings, methodologies, and implications. Cite sources when available." \
--task "Summarize recent advances in CRISPR gene editing technology" \
--model-name "gpt-4o-mini" \
--temperature 0.3 \
--verbose \
--autosave
```
---
## Scripting Examples
### Bash Script with Multiple Agents
```bash
#!/bin/bash
# run_agents.sh
# Research phase
swarms agent \
--name "Researcher" \
--description "Research specialist" \
--system-prompt "You are a researcher. Gather comprehensive information on topics." \
--task "Research the impact of AI on healthcare" \
--model-name "gpt-4o-mini" \
--output-type "json" > research_output.json
# Analysis phase
swarms agent \
--name "Analyst" \
--description "Data analyst" \
--system-prompt "You are an analyst. Analyze data and provide insights." \
--task "Analyze the research findings from: $(cat research_output.json)" \
--model-name "gpt-4o-mini" \
--output-type "json" > analysis_output.json
echo "Pipeline complete!"
```
### Loop Through Tasks
```bash
#!/bin/bash
# batch_analysis.sh
TOPICS=("renewable energy" "electric vehicles" "smart cities" "AI ethics")
for topic in "${TOPICS[@]}"; do
echo "Analyzing: $topic"
swarms agent \
--name "Topic-Analyst" \
--description "Topic analysis specialist" \
--system-prompt "You are an expert analyst. Provide concise analysis of topics." \
--task "Analyze current trends in: $topic" \
--model-name "gpt-4o-mini" \
>> "analysis_results.txt"
echo "---" >> "analysis_results.txt"
done
```
---
## Tips and Best Practices
!!! tip "System Prompt Tips"
- Be specific about the agent's role and expertise
- Include output format preferences
- Specify any constraints or guidelines
!!! tip "Temperature Settings"
- Use **0.1-0.3** for factual/analytical tasks
- Use **0.5-0.7** for balanced responses
- Use **0.8-1.0** for creative tasks
!!! tip "Performance Optimization"
- Use `gpt-4o-mini` for simpler tasks (faster, cheaper)
- Use `gpt-4` for complex reasoning tasks
- Set appropriate `--max-loops` to control execution time
!!! warning "Common Issues"
- Ensure API key is set: `export OPENAI_API_KEY="..."`
- Wrap multi-word arguments in quotes
- Use `--verbose` to debug issues
---
## Next Steps
- [CLI YAML Configuration](./cli_yaml_guide.md) - Run agents from YAML files
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - LLM Council and Heavy Swarm
- [CLI Reference](./cli_reference.md) - Complete command documentation

@ -0,0 +1,262 @@
# CLI Heavy Swarm Guide: Comprehensive Task Analysis
Run Heavy Swarm from the command line for complex task decomposition and comprehensive analysis with specialized agents.
## Overview
Heavy Swarm follows a structured workflow:
1. **Task Decomposition**: Breaks down tasks into specialized questions
2. **Parallel Execution**: Executes specialized agents in parallel
3. **Result Synthesis**: Integrates and synthesizes results
4. **Comprehensive Reporting**: Generates detailed final reports
---
## Basic Usage
### Step 1: Run a Simple Analysis
```bash
swarms heavy-swarm --task "Analyze the current state of quantum computing"
```
### Step 2: Customize with Options
```bash
swarms heavy-swarm \
--task "Research renewable energy market trends" \
--loops-per-agent 2 \
--verbose
```
### Step 3: Use Custom Models
```bash
swarms heavy-swarm \
--task "Analyze cryptocurrency regulation globally" \
--question-agent-model-name gpt-4 \
--worker-model-name gpt-4 \
--loops-per-agent 3 \
--verbose
```
---
## Command Options
| Option | Default | Description |
|--------|---------|-------------|
| `--task` | **Required** | The task to analyze |
| `--loops-per-agent` | 1 | Execution loops per agent |
| `--question-agent-model-name` | gpt-4o-mini | Model for question generation |
| `--worker-model-name` | gpt-4o-mini | Model for worker agents |
| `--random-loops-per-agent` | False | Randomize loops (1-10) |
| `--verbose` | False | Enable detailed output |
---
## Specialized Agents
Heavy Swarm includes specialized agents for different aspects of the analysis:
| Agent | Role | Focus |
|-------|------|-------|
| **Question Agent** | Decomposes tasks | Generates targeted questions |
| **Research Agent** | Gathers information | Fast, trustworthy research |
| **Analysis Agent** | Processes data | Statistical analysis, insights |
| **Writing Agent** | Creates reports | Clear, structured documentation |
---
## Use Case Examples
### Market Research
```bash
swarms heavy-swarm \
--task "Comprehensive market analysis of the electric vehicle industry in North America" \
--loops-per-agent 3 \
--question-agent-model-name gpt-4 \
--worker-model-name gpt-4 \
--verbose
```
### Technology Assessment
```bash
swarms heavy-swarm \
--task "Evaluate the technical feasibility and ROI of implementing AI-powered customer service automation" \
--loops-per-agent 2 \
--verbose
```
### Competitive Analysis
```bash
swarms heavy-swarm \
--task "Analyze competitive landscape for cloud computing services: AWS vs Azure vs Google Cloud" \
--loops-per-agent 2 \
--question-agent-model-name gpt-4 \
--verbose
```
### Investment Research
```bash
swarms heavy-swarm \
--task "Research investment opportunities in AI infrastructure companies for 2024-2025" \
--loops-per-agent 3 \
--worker-model-name gpt-4 \
--verbose
```
### Policy Analysis
```bash
swarms heavy-swarm \
--task "Analyze the impact of proposed AI regulations on tech startups in the United States" \
--loops-per-agent 2 \
--verbose
```
### Due Diligence
```bash
swarms heavy-swarm \
--task "Conduct technology due diligence for acquiring a fintech startup focusing on payment processing" \
--loops-per-agent 3 \
--question-agent-model-name gpt-4 \
--worker-model-name gpt-4 \
--verbose
```
---
## Workflow Visualization
```
┌─────────────────────────────────────────────────────────────────┐
│ User Task │
│ "Analyze the impact of AI on healthcare" │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Question Agent │
│ Decomposes task into specialized questions: │
│ - What are current AI applications in healthcare? │
│ - What are the regulatory challenges? │
│ - What is the market size and growth? │
│ - What are the key players and competitors? │
└─────────────────────────────────────────────────────────────────┘
┌─────────────┬─────────────┬─────────────┬─────────────┐
│ Research │ Analysis │ Research │ Writing │
│ Agent 1 │ Agent │ Agent 2 │ Agent │
└─────────────┴─────────────┴─────────────┴─────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Synthesis & Integration │
│ Combines all agent outputs │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Comprehensive Report │
│ - Executive Summary │
│ - Detailed Findings │
│ - Analysis & Insights │
│ - Recommendations │
└─────────────────────────────────────────────────────────────────┘
```
---
## Configuration Recommendations
### Quick Analysis (Cost-Effective)
```bash
swarms heavy-swarm \
--task "Quick overview of [topic]" \
--loops-per-agent 1 \
--question-agent-model-name gpt-4o-mini \
--worker-model-name gpt-4o-mini
```
### Standard Research
```bash
swarms heavy-swarm \
--task "Detailed analysis of [topic]" \
--loops-per-agent 2 \
--verbose
```
### Deep Dive (Comprehensive)
```bash
swarms heavy-swarm \
--task "Comprehensive research on [topic]" \
--loops-per-agent 3 \
--question-agent-model-name gpt-4 \
--worker-model-name gpt-4 \
--verbose
```
### Exploratory (Variable Depth)
```bash
swarms heavy-swarm \
--task "Explore [topic] with varying depth" \
--random-loops-per-agent \
--verbose
```
---
## Best Practices
!!! tip "Task Formulation"
- Be specific about what you want analyzed
- Include scope and constraints
- Specify desired output format
!!! tip "Loop Configuration"
- Use `--loops-per-agent 1` for quick overviews
- Use `--loops-per-agent 2-3` for detailed analysis
- Higher loops = more comprehensive but slower
!!! tip "Model Selection"
- Use `gpt-4o-mini` for cost-effective analysis
- Use `gpt-4` for complex, nuanced topics
- Match model to task complexity
!!! warning "Performance Notes"
- Deep analysis (3+ loops) may take several minutes
- Higher loops increase API costs
- Use `--verbose` to monitor progress
---
## Comparison: LLM Council vs Heavy Swarm
| Feature | LLM Council | Heavy Swarm |
|---------|-------------|-------------|
| **Focus** | Collaborative decision-making | Comprehensive task analysis |
| **Workflow** | Parallel responses + peer review | Task decomposition + parallel research |
| **Best For** | Questions with multiple viewpoints | Complex research and analysis tasks |
| **Output** | Synthesized consensus | Detailed research report |
| **Speed** | Faster | More thorough but slower |
---
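## Programmatic Usage (Sketch)
The same workflow is available from Python. This is a minimal sketch that assumes `HeavySwarm` is importable from `swarms` and that its constructor mirrors the CLI flags (`loops_per_agent`, `question_agent_model_name`, `worker_model_name`); treat it as illustrative and see the Python API link below for the authoritative interface.
```python
from swarms import HeavySwarm  # import path assumed; see the Python API docs

swarm = HeavySwarm(
    loops_per_agent=2,                        # mirrors --loops-per-agent
    question_agent_model_name="gpt-4o-mini",  # mirrors --question-agent-model-name
    worker_model_name="gpt-4o-mini",          # mirrors --worker-model-name
)

# run() is assumed to accept the task string and return the final report
report = swarm.run("Analyze the current market trends for renewable energy")
print(report)
```
---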
## Next Steps
- [CLI LLM Council Guide](./cli_llm_council_guide.md) - Collaborative decisions
- [CLI Reference](./cli_reference.md) - Complete command documentation
- [Heavy Swarm Python API](../structs/heavy_swarm.md) - Programmatic usage

@ -0,0 +1,162 @@
# CLI LLM Council Guide: Collaborative Multi-Agent Decisions
Run the LLM Council directly from the command line for collaborative decision-making with multiple AI agents through peer review and synthesis.
## Overview
The LLM Council creates a collaborative environment where:
1. **Multiple Perspectives**: Each council member (GPT-5.1, Gemini, Claude, Grok) independently responds
2. **Peer Review**: Members evaluate and rank each other's anonymized responses
3. **Synthesis**: A Chairman synthesizes the best elements into a final answer
---
## Basic Usage
### Step 1: Run a Simple Query
```bash
swarms llm-council --task "What are the best practices for code review?"
```
### Step 2: Enable Verbose Output
```bash
swarms llm-council --task "How should we approach microservices architecture?" --verbose
```
### Step 3: Process the Results
The council returns:
- Individual member responses
- Peer review rankings
- Synthesized final answer
---
## Use Case Examples
### Strategic Business Decisions
```bash
swarms llm-council --task "Should our SaaS startup prioritize product-led growth or sales-led growth? Consider market size, CAC, and scalability."
```
### Technology Evaluation
```bash
swarms llm-council --task "Compare Kubernetes vs Docker Swarm for a startup with 10 microservices. Consider cost, complexity, and scalability."
```
### Investment Analysis
```bash
swarms llm-council --task "Evaluate investment opportunities in AI infrastructure companies. Consider market size, competition, and growth potential."
```
### Policy Analysis
```bash
swarms llm-council --task "What are the implications of implementing AI regulation similar to the EU AI Act in the United States?"
```
### Research Questions
```bash
swarms llm-council --task "What are the most promising approaches to achieving AGI? Evaluate different research paradigms."
```
---
## Council Members
The default council includes:
| Member | Model | Strengths |
|--------|-------|-----------|
| **GPT-5.1 Councilor** | gpt-5.1 | Analytical, comprehensive |
| **Gemini 3 Pro Councilor** | gemini-3-pro | Concise, well-processed |
| **Claude Sonnet 4.5 Councilor** | claude-sonnet-4.5 | Thoughtful, balanced |
| **Grok-4 Councilor** | grok-4 | Creative, innovative |
| **Chairman** | gpt-5.1 | Synthesizes final answer |
---
## Workflow Visualization
```
┌─────────────────────────────────────────────────────────────────┐
│ User Query │
└─────────────────────────────────────────────────────────────────┘
┌─────────────┬─────────────┬─────────────┬─────────────┐
│ GPT-5.1 │ Gemini 3 │ Claude 4.5 │ Grok-4 │
│ Councilor │ Councilor │ Councilor │ Councilor │
└─────────────┴─────────────┴─────────────┴─────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Anonymized Peer Review │
│ Each member ranks all responses (anonymized) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Chairman │
│ Synthesizes best elements from all responses │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Final Synthesized Answer │
└─────────────────────────────────────────────────────────────────┘
```
---
## Best Practices
!!! tip "Query Formulation"
- Be specific and detailed in your queries
- Include context and constraints
- Ask for specific types of analysis
!!! tip "When to Use LLM Council"
- Complex decisions requiring multiple perspectives
- Research questions needing comprehensive analysis
- Strategic planning and evaluation
- Questions with trade-offs to consider
!!! tip "Performance Tips"
- Use `--verbose` for detailed progress tracking
- Expect responses to take 30-60 seconds
- Complex queries may take longer
!!! warning "Limitations"
- Requires multiple API calls (higher cost)
- Not suitable for simple factual queries
- Response time is longer than single-agent queries
---
## Command Reference
```bash
swarms llm-council --task "<query>" [--verbose]
```
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `--task` | string | **Required** | Query for the council |
| `--verbose` | flag | False | Enable detailed output |
---
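## Programmatic Usage (Sketch)
The council can also be invoked from Python. This is a hedged sketch: the `LLMCouncil` import path, default constructor, and `run()` signature are assumptions for illustration; see the Python API link below for the exact interface.
```python
from swarms.structs.llm_council import LLMCouncil  # import path assumed

# Default members are assumed to be GPT-5.1, Gemini 3 Pro,
# Claude Sonnet 4.5, and Grok-4, with GPT-5.1 as Chairman.
council = LLMCouncil()

answer = council.run("What are the best practices for code review?")
print(answer)
```
---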
## Next Steps
- [CLI Heavy Swarm Guide](./cli_heavy_swarm_guide.md) - Complex task analysis
- [CLI Reference](./cli_reference.md) - Complete command documentation
- [LLM Council Python API](../examples/llm_council_quickstart.md) - Programmatic usage

@ -0,0 +1,115 @@
# CLI Quickstart: Getting Started in 3 Steps
Get up and running with the Swarms CLI in minutes. This guide covers installation, setup verification, and running your first commands.
## Step 1: Install Swarms
Install the Swarms package which includes the CLI:
```bash
pip install swarms
```
Verify installation:
```bash
swarms --help
```
You should see the Swarms CLI banner with available commands.
---
## Step 2: Configure Environment
Set up your API keys and workspace:
```bash
# Set your OpenAI API key (or other provider)
export OPENAI_API_KEY="your-openai-api-key"
# Optional: Set workspace directory
export WORKSPACE_DIR="./agent_workspace"
```
Or create a `.env` file in your project directory:
```
OPENAI_API_KEY=your-openai-api-key
WORKSPACE_DIR=./agent_workspace
```
Verify your setup:
```bash
swarms setup-check --verbose
```
Expected output:
```
🔍 Running Swarms Environment Setup Check
┌─────────────────────────────────────────────────────────────────────────────┐
│ Environment Check Results │
├─────────┬─────────────────────────┬─────────────────────────────────────────┤
│ Status │ Check │ Details │
├─────────┼─────────────────────────┼─────────────────────────────────────────┤
│ ✓ │ Python Version │ Python 3.11.5 │
│ ✓ │ Swarms Version │ Current version: 8.7.0 │
│ ✓ │ API Keys │ API keys found: OPENAI_API_KEY │
│ ✓ │ Dependencies │ All required dependencies available │
└─────────┴─────────────────────────┴─────────────────────────────────────────┘
```
---
## Step 3: Run Your First Command
Try these commands to verify everything works:
### View All Features
```bash
swarms features
```
### Create a Simple Agent
```bash
swarms agent \
--name "Assistant" \
--description "A helpful AI assistant" \
--system-prompt "You are a helpful assistant that provides clear, concise answers." \
--task "What are the benefits of renewable energy?" \
--model-name "gpt-4o-mini"
```
### Run LLM Council
```bash
swarms llm-council --task "What are the best practices for code review?"
```
---
## Quick Reference
| Command | Description |
|---------|-------------|
| `swarms --help` | Show all available commands |
| `swarms features` | Display all CLI features |
| `swarms setup-check` | Verify environment setup |
| `swarms onboarding` | Interactive setup wizard |
| `swarms agent` | Create and run a custom agent |
| `swarms llm-council` | Run collaborative LLM council |
| `swarms heavy-swarm` | Run comprehensive analysis swarm |
---
## Next Steps
- [CLI Agent Guide](./cli_agent_guide.md) - Create custom agents from CLI
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - Run LLM Council and Heavy Swarm
- [CLI Reference](./cli_reference.md) - Complete command documentation

@ -5,20 +5,28 @@ The Swarms CLI is a comprehensive command-line interface for managing and execut
## Table of Contents
- [Installation](#installation)
- [Basic Usage](#basic-usage)
- [Commands Reference](#commands-reference)
- [Global Arguments](#global-arguments)
- [Command-Specific Arguments](#command-specific-arguments)
- [run-agents Command](#run-agents-command)
- [load-markdown Command](#load-markdown-command)
- [agent Command](#agent-command)
- [autoswarm Command](#autoswarm-command)
- [setup-check Command](#setup-check-command)
- [llm-council Command](#llm-council-command)
- [heavy-swarm Command](#heavy-swarm-command)
- [features Command](#features-command)
- [Error Handling](#error-handling)
- [Examples](#examples)
- [Configuration](#configuration)
- [Advanced Features](#advanced-features)
- [Troubleshooting](#troubleshooting)
- [Integration](#integration)
- [Performance Considerations](#performance-considerations)
- [Security](#security)
- [Command Quick Reference](#command-quick-reference)
- [Support](#support)
## Installation
@ -43,6 +51,7 @@ swarms <command> [options]
|---------|-------------|-------------------|
| `onboarding` | Start interactive onboarding process | None |
| `help` | Display help message | None |
| `features` | Display all available features and actions in a comprehensive table | None |
| `get-api-key` | Open API key portal in browser | None |
| `check-login` | Verify login status and initialize cache | None |
| `run-agents` | Execute agents from YAML configuration | `--yaml-file` |
@ -52,6 +61,8 @@ swarms <command> [options]
| `book-call` | Schedule strategy session | None |
| `autoswarm` | Generate and execute autonomous swarm | `--task`, `--model` |
| `setup-check` | Run comprehensive environment setup check | None |
| `llm-council` | Run LLM Council with multiple agents collaborating on a task | `--task` |
| `heavy-swarm` | Run HeavySwarm with specialized agents for complex task analysis | `--task` |
## Global Arguments
@ -221,6 +232,148 @@ swarms setup-check --verbose
└─────────────────────────────────────────────────────────────────────────────┘
```
### `llm-council` Command
Run the LLM Council with multiple specialized agents that collaborate, evaluate, and synthesize responses.
The LLM Council follows a structured workflow:
1. **Independent Responses**: Each council member (GPT-5.1, Gemini 3 Pro, Claude Sonnet 4.5, Grok-4) independently responds to the query
2. **Peer Review**: All members review and rank each other's anonymized responses
3. **Synthesis**: A Chairman agent synthesizes all responses and rankings into a final comprehensive answer
```bash
swarms llm-council [options]
```
#### Required Arguments
| Argument | Type | Description |
|----------|------|-------------|
| `--task` | `str` | The query or question for the LLM Council to process |
#### Optional Arguments
| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| `--verbose` | `bool` | `True` | Enable verbose output showing progress and intermediate results |
**Example:**
```bash
# Basic usage
swarms llm-council --task "What are the best energy ETFs right now?"
# With verbose output
swarms llm-council --task "What is the best approach to solve this problem?" --verbose
```
**How It Works:**
The LLM Council creates a collaborative environment where:
- **Default Council Members**: GPT-5.1 (analytical), Gemini 3 Pro (concise), Claude Sonnet 4.5 (balanced), Grok-4 (creative)
- **Anonymized Evaluation**: Responses are anonymized before evaluation to ensure honest ranking
- **Cross-Model Evaluation**: Each model evaluates all responses, often selecting other models' responses as superior
- **Final Synthesis**: The Chairman (GPT-5.1 by default) synthesizes the best elements from all responses
**Use Cases:**
- Complex problem-solving requiring multiple perspectives
- Research questions needing comprehensive analysis
- Decision-making scenarios requiring thorough evaluation
- Content generation with quality assurance
### `heavy-swarm` Command
Run HeavySwarm with specialized agents for complex task analysis and decomposition.
HeavySwarm follows a structured workflow:
1. **Task Decomposition**: Breaks down tasks into specialized questions
2. **Parallel Execution**: Executes specialized agents in parallel
3. **Result Synthesis**: Integrates and synthesizes results
4. **Comprehensive Reporting**: Generates detailed final reports
5. **Iterative Refinement**: Optional multi-loop execution for iterative improvement
```bash
swarms heavy-swarm [options]
```
#### Required Arguments
| Argument | Type | Description |
|----------|------|-------------|
| `--task` | `str` | The task for HeavySwarm to analyze and process |
#### Optional Arguments
| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| `--loops-per-agent` | `int` | `1` | Number of execution loops each agent should perform |
| `--question-agent-model-name` | `str` | `"gpt-4o-mini"` | Model name for the question generation agent |
| `--worker-model-name` | `str` | `"gpt-4o-mini"` | Model name for specialized worker agents |
| `--random-loops-per-agent` | `bool` | `False` | Enable random number of loops per agent (1-10 range) |
| `--verbose` | `bool` | `False` | Enable verbose output showing detailed progress |
**Example:**
```bash
# Basic usage
swarms heavy-swarm --task "Analyze the current market trends for renewable energy"
# With custom configuration
swarms heavy-swarm \
--task "Research the best investment strategies for 2024" \
--loops-per-agent 3 \
--question-agent-model-name "gpt-4" \
--worker-model-name "gpt-4" \
--random-loops-per-agent \
--verbose
```
**Specialized Agent Roles:**
HeavySwarm includes specialized agents for different aspects of analysis:
- **Research Agent**: Fast, trustworthy, and reproducible research
- **Analysis Agent**: Statistical analysis and validated insights
- **Writing Agent**: Clear, structured documentation
- **Question Agent**: Task decomposition and question generation
**Use Cases:**
- Complex research tasks requiring multiple perspectives
- Market analysis and financial research
- Technical analysis and evaluation
- Comprehensive report generation
- Multi-faceted problem solving
### `features` Command
Display all available CLI features and actions in a comprehensive, formatted table.
This command provides a quick reference to all available features, their categories, descriptions, command syntax, and key parameters.
```bash
swarms features
```
**No arguments required.**
**Example:**
```bash
swarms features
```
**Output Includes:**
- **Main Features Table**: Complete list of all features with:
- Feature name
- Category (Setup, Auth, Execution, Creation, etc.)
- Description
- Command syntax
- Key parameters
- **Category Summary**: Overview of features grouped by category with counts
- **Usage Tips**: Quick tips for using the CLI effectively
**Use Cases:**
- Quick reference when exploring CLI capabilities
- Discovering available features
- Understanding command syntax and parameters
- Learning about feature categories
## Error Handling
The CLI provides comprehensive error handling with formatted error messages:
@ -289,6 +442,34 @@ swarms autoswarm \
--model "gpt-4"
```
### LLM Council Collaboration
```bash
# Run LLM Council for collaborative problem solving
swarms llm-council \
--task "What are the best strategies for reducing carbon emissions in manufacturing?" \
--verbose
```
### HeavySwarm Complex Analysis
```bash
# Run HeavySwarm for comprehensive task analysis
swarms heavy-swarm \
--task "Analyze the impact of AI on the job market in 2024" \
--loops-per-agent 2 \
--question-agent-model-name "gpt-4" \
--worker-model-name "gpt-4" \
--verbose
```
### Viewing All Features
```bash
# Display all available features
swarms features
```
## Configuration
### YAML Configuration Format
@ -386,6 +567,54 @@ Guided setup process including:
- Usage examples
### Multi-Agent Collaboration
The CLI supports advanced multi-agent architectures:
#### LLM Council
Collaborative problem-solving with multiple specialized models:
```bash
swarms llm-council --task "Your question here"
```
**Features:**
- Multiple model perspectives (GPT-5.1, Gemini, Claude, Grok)
- Anonymous peer review and ranking
- Synthesized final responses
- Cross-model evaluation
#### HeavySwarm
Complex task analysis with specialized agent roles:
```bash
swarms heavy-swarm --task "Your complex task here"
```
**Features:**
- Task decomposition into specialized questions
- Parallel agent execution
- Result synthesis and integration
- Iterative refinement with multiple loops
- Specialized agent roles (Research, Analysis, Writing, Question)
### Feature Discovery
Quickly discover all available features:
```bash
swarms features
```
Displays comprehensive tables showing:
- All available commands
- Feature categories
- Command syntax
- Key parameters
- Usage examples
## Troubleshooting
@ -451,6 +680,8 @@ swarms run-agents --yaml-file agents2.yaml
| Model Selection | Choose appropriate models for task complexity |
| Context Length | Monitor and optimize input sizes |
| Rate Limiting | Respect API provider limits |
| Multi-Agent Execution | LLM Council and HeavySwarm execute agents in parallel for efficiency |
| Loop Configuration | Adjust `--loops-per-agent` based on task complexity and time constraints |
## Security
@ -461,6 +692,48 @@ swarms run-agents --yaml-file agents2.yaml
| Input Validation | CLI validates all inputs before execution |
| Error Sanitization | Sensitive information is not exposed in errors |
## Command Quick Reference
### Quick Start Commands
```bash
# Environment setup
swarms setup-check --verbose
swarms onboarding
# View all features
swarms features
# Get help
swarms help
```
### Agent Commands
```bash
# Create custom agent
swarms agent --name "Agent" --task "Task" --system-prompt "Prompt"
# Run agents from YAML
swarms run-agents --yaml-file agents.yaml
# Load from markdown
swarms load-markdown --markdown-path ./agents/
```
### Multi-Agent Commands
```bash
# LLM Council
swarms llm-council --task "Your question"
# HeavySwarm
swarms heavy-swarm --task "Your complex task" --loops-per-agent 2 --verbose
# Auto-generate swarm
swarms autoswarm --task "Task description" --model "gpt-4"
```
## Support
For additional support:
@ -470,3 +743,4 @@ For additional support:
| **Community** | [Discord](https://discord.gg/EamjgSaEQf) |
| **Issues** | [GitHub Issues](https://github.com/kyegomez/swarms/issues) |
| **Strategy Sessions**| [Book a Call](https://cal.com/swarms/swarms-strategy-session) |
| **Documentation** | [Full Documentation](https://docs.swarms.world) |

@ -0,0 +1,320 @@
# CLI YAML Configuration Guide: Run Agents from Config Files
Run multiple agents from YAML configuration files for reproducible, version-controlled agent deployments.
## Basic YAML Configuration
### Step 1: Create YAML Config File
Create a file named `agents.yaml`:
```yaml
agents:
- name: "Research-Agent"
description: "AI research specialist"
model_name: "gpt-4o-mini"
system_prompt: |
You are an expert researcher.
Provide comprehensive, well-structured research summaries.
Include key insights and data points.
temperature: 0.3
max_loops: 2
task: "Research current trends in renewable energy"
- name: "Analysis-Agent"
description: "Data analysis specialist"
model_name: "gpt-4o-mini"
system_prompt: |
You are a data analyst.
Provide detailed statistical analysis and insights.
Use data-driven reasoning.
temperature: 0.2
max_loops: 3
task: "Analyze market opportunities in the EV sector"
```
### Step 2: Run Agents from YAML
```bash
swarms run-agents --yaml-file agents.yaml
```
### Step 3: View Results
Results are displayed in the terminal with formatted output for each agent.
---
## Complete YAML Schema
### Agent Configuration Options
```yaml
agents:
- name: "Agent-Name" # Required: Agent identifier
description: "Agent description" # Required: What the agent does
model_name: "gpt-4o-mini" # Model to use
system_prompt: "Your instructions" # Agent's system prompt
temperature: 0.5 # Creativity (0.0-2.0)
max_loops: 3 # Maximum execution loops
task: "Task to execute" # Task for this agent
# Optional settings
context_length: 8192 # Context window size
streaming_on: true # Enable streaming
verbose: true # Verbose output
autosave: true # Auto-save state
saved_state_path: "./states/agent.json" # State file path
output_type: "json" # Output format
retry_attempts: 3 # Retries on failure
```
---
## Use Case Examples
### Multi-Agent Research Pipeline
```yaml
# research_pipeline.yaml
agents:
- name: "Data-Collector"
description: "Collects and organizes research data"
model_name: "gpt-4o-mini"
system_prompt: |
You are a research data collector.
Gather comprehensive information on the given topic.
Organize findings into structured categories.
temperature: 0.3
max_loops: 2
task: "Collect data on AI applications in healthcare"
- name: "Trend-Analyst"
description: "Analyzes trends and patterns"
model_name: "gpt-4o-mini"
system_prompt: |
You are a trend analyst.
Identify emerging patterns and trends from data.
Provide statistical insights and projections.
temperature: 0.2
max_loops: 2
task: "Analyze AI healthcare adoption trends from 2020-2024"
- name: "Report-Writer"
description: "Creates comprehensive reports"
model_name: "gpt-4"
system_prompt: |
You are a professional report writer.
Create comprehensive, well-structured reports.
Include executive summaries and key recommendations.
temperature: 0.4
max_loops: 1
task: "Write an executive summary on AI in healthcare"
```
Run:
```bash
swarms run-agents --yaml-file research_pipeline.yaml
```
### Financial Analysis Team
```yaml
# financial_team.yaml
agents:
- name: "Market-Analyst"
description: "Analyzes market conditions"
model_name: "gpt-4"
system_prompt: |
You are a CFA-certified market analyst.
Provide detailed market analysis with technical indicators.
Include risk assessments and market outlook.
temperature: 0.2
max_loops: 3
task: "Analyze current S&P 500 market conditions"
- name: "Risk-Assessor"
description: "Evaluates investment risks"
model_name: "gpt-4"
system_prompt: |
You are a risk management specialist.
Evaluate investment risks and provide mitigation strategies.
Use quantitative risk metrics.
temperature: 0.1
max_loops: 2
task: "Assess risks in current tech sector investments"
- name: "Portfolio-Advisor"
description: "Provides portfolio recommendations"
model_name: "gpt-4"
system_prompt: |
You are a portfolio advisor.
Provide asset allocation recommendations.
Consider risk tolerance and market conditions.
temperature: 0.3
max_loops: 2
task: "Recommend portfolio adjustments for Q4 2024"
```
### Content Creation Pipeline
```yaml
# content_pipeline.yaml
agents:
- name: "Topic-Researcher"
description: "Researches content topics"
model_name: "gpt-4o-mini"
system_prompt: |
You are a content researcher.
Research topics thoroughly and identify key angles.
Find unique perspectives and data points.
temperature: 0.4
max_loops: 2
task: "Research content angles for 'Future of Remote Work'"
- name: "Content-Writer"
description: "Writes engaging content"
model_name: "gpt-4"
system_prompt: |
You are a professional content writer.
Write engaging, SEO-friendly content.
Use clear structure with headers and bullet points.
temperature: 0.7
max_loops: 2
task: "Write a blog post about remote work trends"
- name: "Editor"
description: "Edits and polishes content"
model_name: "gpt-4o-mini"
system_prompt: |
You are a professional editor.
Review content for clarity, grammar, and style.
Suggest improvements and optimize for readability.
temperature: 0.2
max_loops: 1
task: "Edit and polish the blog post for publication"
```
---
## Advanced Configuration
### Environment Variables in YAML
You can reference environment variables:
```yaml
agents:
- name: "API-Agent"
description: "Agent with API access"
model_name: "${MODEL_NAME:-gpt-4o-mini}" # Default if not set
system_prompt: "You are an API integration specialist."
task: "Test API integration"
```
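Set the variable in your shell before running; when it is unset, the agent falls back to the default declared in the YAML (the file name below assumes you saved the snippet above as `agents.yaml`):
```bash
# Use gpt-4 for this run; omit the export to fall back to gpt-4o-mini
export MODEL_NAME="gpt-4"
swarms run-agents --yaml-file agents.yaml
```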
### Multiple Config Files
Organize agents by purpose:
```bash
# Run different configurations
swarms run-agents --yaml-file research_agents.yaml
swarms run-agents --yaml-file analysis_agents.yaml
swarms run-agents --yaml-file reporting_agents.yaml
```
### Pipeline Script
```bash
#!/bin/bash
# run_pipeline.sh
echo "Starting research pipeline..."
swarms run-agents --yaml-file configs/research.yaml
echo "Starting analysis pipeline..."
swarms run-agents --yaml-file configs/analysis.yaml
echo "Starting reporting pipeline..."
swarms run-agents --yaml-file configs/reporting.yaml
echo "Pipeline complete!"
```
---
## Markdown Configuration
### Alternative: Load from Markdown
Create agents using markdown with YAML frontmatter:
```markdown
---
name: Research Agent
description: AI research specialist
model_name: gpt-4o-mini
temperature: 0.3
max_loops: 2
---
You are an expert researcher specializing in technology trends.
Provide comprehensive research summaries with:
- Key findings and insights
- Data points and statistics
- Recommendations and implications
Always cite sources when available and maintain objectivity.
```
Load from markdown:
```bash
# Load single file
swarms load-markdown --markdown-path ./agents/research_agent.md
# Load directory (concurrent processing)
swarms load-markdown --markdown-path ./agents/ --concurrent
```
---
## Best Practices
!!! tip "Configuration Management"
- Version control your YAML files
- Use descriptive agent names
- Document purpose in descriptions
!!! tip "Template Organization"
```
configs/
├── research/
│ ├── tech_research.yaml
│ └── market_research.yaml
├── analysis/
│ ├── financial_analysis.yaml
│ └── data_analysis.yaml
└── production/
└── prod_agents.yaml
```
!!! tip "Testing Configurations"
- Test with `--verbose` flag first
- Use lower `max_loops` for testing
- Start with `gpt-4o-mini` for cost efficiency
!!! warning "Common Pitfalls"
- Ensure proper YAML indentation (2 spaces)
- Quote strings with special characters
- Use `|` for multi-line prompts
---
## Next Steps
- [CLI Agent Guide](./cli_agent_guide.md) - Create agents from command line
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - LLM Council and Heavy Swarm
- [CLI Reference](./cli_reference.md) - Complete command documentation

@ -32,21 +32,20 @@ Multi-agent architectures leverage these communication patterns to ensure that a
| Graph Workflow | Agents collaborate in a directed acyclic graph (DAG) format to manage dependencies and parallel tasks. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/graph_workflow/) | AI-driven software development pipelines, complex project management |
| Group Chat | Agents engage in a chat-like interaction to reach decisions collaboratively. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/group_chat/) | Real-time collaborative decision-making, contract negotiations |
| Interactive Group Chat | Enhanced group chat with dynamic speaker selection and interaction patterns. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/interactive_groupchat/) | Advanced collaborative decision-making, dynamic team coordination |
| Agent Registry | A centralized registry where agents are stored, retrieved, and invoked dynamically. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/agent_registry/) | Dynamic agent management, evolving recommendation engines |
| SpreadSheet | Manages tasks at scale, tracking agent outputs in a structured format like CSV files. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/spreadsheet_swarm/) | Large-scale marketing analytics, financial audits |
| Router | Routes and chooses the architecture based on the task requirements and available agents. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/swarm_router/) | Dynamic task routing, adaptive architecture selection, optimized agent allocation |
| Heavy | High-performance architecture for handling intensive computational tasks with multiple agents. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/heavy_swarm/) | Large-scale data processing, intensive computational workflows |
| Deep Research | Specialized architecture for conducting in-depth research tasks across multiple domains. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/deep_research_swarm/) | Academic research, market analysis, comprehensive data investigation |
| De-Hallucination | Architecture designed to reduce and eliminate hallucinations in AI outputs through consensus. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/de_hallucination_swarm/) | Fact-checking, content verification, reliable information generation |
| Council as Judge | Multiple agents act as a council to evaluate and judge outputs or decisions. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/council_of_judges/) | Quality assessment, decision validation, peer review processes |
| MALT | Specialized architecture for complex language processing tasks across multiple agents. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/malt/) | Natural language processing, translation, content generation |
| Majority Voting | Agents vote on decisions with the majority determining the final outcome. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/majorityvoting/) | Democratic decision-making, consensus building, error reduction |
| Round Robin | Tasks are distributed cyclically among agents in a rotating order. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/round_robin_swarm/) | Load balancing, fair task distribution, resource optimization |
| Auto-Builder | Automatically constructs and configures multi-agent systems based on requirements. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/auto_swarm_builder/) | Dynamic system creation, adaptive architectures, rapid prototyping |
| Hybrid Hierarchical Cluster | Combines hierarchical and peer-to-peer communication patterns for complex workflows. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/hhcs/) | Complex enterprise workflows, multi-department coordination |
| Election | Agents participate in democratic voting processes to select leaders or make collective decisions. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/election_swarm/) | Democratic governance, consensus building, leadership selection |
| Dynamic Conversational | Adaptive conversation management with dynamic agent selection and interaction patterns. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/dynamic_conversational_swarm/) | Adaptive chatbots, dynamic customer service, contextual conversations |
| Tree | Hierarchical tree structure for organizing agents in parent-child relationships. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/tree_swarm/) | Organizational hierarchies, decision trees, taxonomic classification |
| Batched Grid Workflow | Executes tasks in a batched grid format, where each agent processes a different task in parallel. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/batched_grid_workflow/) | Parallel task processing, batch operations, grid-based task distribution |
| LLM Council | Orchestrates multiple specialized LLM agents to collaboratively answer queries through structured peer review and synthesis. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/llm_council/) | Multi-model evaluation, peer review systems, collaborative AI decision-making |
| Debate with Judge | A debate architecture with Pro and Con agents debating topics, evaluated by a Judge. Supports preset agents, agent lists, or individual configuration for flexible setup. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/debate_with_judge/) | Argument analysis, decision refinement, structured debates, iterative improvement |
| Self MoA Seq | Sequential self-mixture of agents that generates multiple candidate responses and synthesizes them sequentially using a sliding window approach. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/self_moa_seq/) | High-quality response generation, ensemble methods, sequential synthesis |
| Swarm Rearrange | Orchestrates multiple swarms in sequential or parallel flow patterns, providing thread-safe operations for managing swarm execution. | [Learn More](https://docs.swarms.world/en/latest/swarms/structs/swarm_rearrange/) | Multi-swarm coordination, complex workflow orchestration, swarm composition |
---
@ -84,6 +83,7 @@ graph TD
A dynamic architecture where agents rearrange themselves based on task requirements and environmental conditions. Agents can adapt their roles, positions, and relationships to optimize performance for different scenarios.
**Use Cases:**
- Adaptive manufacturing lines that reconfigure based on product requirements
- Dynamic sales territory realignment based on market conditions
@ -123,6 +123,7 @@ graph TD
Multiple agents operate independently and simultaneously on different tasks. Each agent works on its own task without dependencies on the others.
**Use Cases:**
- Tasks that can be processed independently, such as parallel data analysis
- Large-scale simulations where multiple scenarios are run simultaneously
@ -204,6 +205,7 @@ graph TD
Makes it easy to manage thousands of agents in one place: a CSV file. Initialize any number of agents and run loops of agents on tasks.
**Use Cases:**
- Multi-threaded execution: Execute agents on multiple threads
- Save agent outputs into CSV file
@ -242,12 +244,52 @@ graph TD
---
### Batched Grid Workflow
**Overview:**
A multi-agent orchestration pattern that executes tasks in a batched grid format, where each agent processes a different task in parallel. Provides structured parallel processing with conversation state management.
**Use Cases:**
- Parallel task processing
- Grid-based agent execution
- Batch operations
- Multi-task multi-agent coordination
**[Learn More](https://docs.swarms.world/en/latest/swarms/structs/batched_grid_workflow/)**
```mermaid
graph TD
A[Task Batch] --> B[BatchedGridWorkflow]
B --> C[Initialize Agents]
C --> D[Create Grid]
D --> E[Agent 1: Task 1]
D --> F[Agent 2: Task 2]
D --> G[Agent N: Task N]
E --> H[Collect Results]
F --> H
G --> H
H --> I[Update Conversation]
I --> J[Next Iteration]
J --> D
```
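A minimal hedged sketch of the pattern follows; the `BatchedGridWorkflow` import path and `run()` signature are assumptions for illustration, so follow the Learn More link above for the exact API.
```python
from swarms import Agent, BatchedGridWorkflow  # import path assumed

# Two agents, each handling a different task in the same batch
researcher = Agent(agent_name="Researcher", model_name="gpt-4o-mini", max_loops=1)
writer = Agent(agent_name="Writer", model_name="gpt-4o-mini", max_loops=1)

workflow = BatchedGridWorkflow(agents=[researcher, writer])

# One task per agent; the grid pairs them up and runs them in parallel
results = workflow.run(
    tasks=[
        "Summarize recent trends in solar energy",
        "Draft a one-paragraph market update on EV adoption",
    ]
)
print(results)
```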
---
### Mixture of Agents
**Overview:**
Combines multiple agents with different capabilities and expertise to solve complex problems that require diverse skill sets.
**Use Cases:**
- Financial forecasting requiring different analytical approaches
- Complex problem-solving needing diverse expertise
@ -282,6 +324,7 @@ graph TD
Organizes agents in a directed acyclic graph (DAG) format, enabling complex dependencies and parallel execution paths.
**Use Cases:**
- AI-driven software development pipelines
- Complex project management with dependencies
@ -311,6 +354,7 @@ graph TD
Enables agents to engage in chat-like interactions to reach decisions collaboratively through discussion and consensus building.
**Use Cases:**
- Real-time collaborative decision-making
- Contract negotiations
@ -345,6 +389,7 @@ graph TD
Enhanced version of Group Chat with dynamic speaker selection, priority-based communication, and advanced interaction patterns.
**Use Cases:**
- Advanced collaborative decision-making
- Dynamic team coordination
@ -378,49 +423,13 @@ graph TD
---
### Agent Registry
**Overview:**
A centralized registry system where agents are stored, retrieved, and invoked dynamically. The registry maintains metadata about agent capabilities, availability, and performance metrics, enabling intelligent agent selection and management.
**Use Cases:**
- Dynamic agent management in large-scale systems
- Evolving recommendation engines that adapt agent selection
- Service discovery in distributed agent systems
**[Learn More](https://docs.swarms.world/en/latest/swarms/structs/agent_registry/)**
```mermaid
graph TD
A[Agent Registration] --> B[Registry Database]
B --> C[Agent Metadata]
C --> D[Capabilities]
C --> E[Performance Metrics]
C --> F[Availability Status]
G[Task Request] --> H[Registry Query Engine]
H --> I[Agent Discovery]
I --> J[Capability Matching]
J --> K[Agent Selection]
K --> L[Agent Invocation]
L --> M[Task Execution]
M --> N[Performance Tracking]
N --> O[Registry Update]
O --> B
```
---
### Router Architecture
**Overview:**
Intelligently routes tasks to the most appropriate agents or architectures based on task requirements and agent capabilities.
**Use Cases:**
- Dynamic task routing
- Adaptive architecture selection
@ -458,6 +467,7 @@ graph TD
High-performance architecture designed for handling intensive computational tasks with multiple agents working on resource-heavy operations.
**Use Cases:**
- Large-scale data processing
- Intensive computational workflows
@ -493,6 +503,7 @@ graph TD
Specialized architecture for conducting comprehensive research tasks across multiple domains with iterative refinement and cross-validation.
**Use Cases:**
- Academic research projects
- Market analysis and intelligence
@ -528,6 +539,7 @@ graph TD
Architecture specifically designed to reduce and eliminate hallucinations in AI outputs through consensus mechanisms and fact-checking protocols.
**Use Cases:**
- Fact-checking and verification
- Content validation
@ -558,12 +570,52 @@ graph TD
---
### Self MoA Seq
**Overview:**
Ensemble method that generates multiple candidate responses from a single high-performing model and synthesizes them sequentially using a sliding window approach. Keeps context within bounds while leveraging diversity across samples.
**Use Cases:**
- Response synthesis
- Ensemble methods
- Sequential aggregation
- Quality improvement through diversity
**[Learn More](https://docs.swarms.world/en/latest/swarms/structs/self_moa_seq/)**
```mermaid
graph TD
A[Task] --> B[Proposer Agent]
B --> C[Generate Samples]
C --> D[Sample 1]
C --> E[Sample 2]
C --> F[Sample N]
D --> G[Sliding Window]
E --> G
F --> G
G --> H[Aggregator Agent]
H --> I[Biased Synthesis]
I --> J{More Iterations?}
J -->|Yes| G
J -->|No| K[Final Output]
```
---
### Council as Judge
**Overview:**
Multiple agents act as a council to evaluate, judge, and validate outputs or decisions through collaborative assessment.
**Use Cases:**
- Quality assessment and validation
- Decision validation processes
@ -594,12 +646,97 @@ graph TD
---
### LLM Council
**Overview:**
Orchestrates multiple specialized LLM agents to collaboratively answer queries through structured peer review and synthesis. Different models evaluate and rank each other's work, often selecting responses from other models as superior.
**Use Cases:**
- Multi-model collaboration
- Peer review processes
- Model evaluation and synthesis
- Cross-model consensus building
**[Learn More](https://docs.swarms.world/en/latest/swarms/structs/llm_council/)**
```mermaid
graph TD
A[User Query] --> B[Council Members]
B --> C[GPT Councilor]
B --> D[Gemini Councilor]
B --> E[Claude Councilor]
B --> F[Grok Councilor]
C --> G[Responses]
D --> G
E --> G
F --> G
G --> H[Anonymize & Evaluate]
H --> I[Chairman Synthesis]
I --> J[Final Response]
```
---
### Debate with Judge
**Overview:**
A debate architecture with self-refinement: Pro and Con agents argue a topic over multiple rounds while a judge agent evaluates their arguments and synthesizes them, progressively improving the answer. Supports preset agents for quick setup, an agent list, or individual agent configuration.
**Use Cases:**
- Structured debates
- Argument evaluation
- Iterative refinement of positions
- Multi-perspective analysis
**Initialization Options:**
- `preset_agents=True`: Use built-in optimized agents (simplest)
- `agents=[pro, con, judge]`: Provide a list of 3 agents
- Individual parameters: `pro_agent`, `con_agent`, `judge_agent`
**[Learn More](https://docs.swarms.world/en/latest/swarms/structs/debate_with_judge/)**
```mermaid
graph TD
A[Topic] --> B[DebateWithJudge]
B --> C[Pro Agent]
B --> D[Con Agent]
B --> E[Judge Agent]
C --> F[Pro Argument]
D --> G[Con Argument]
F --> H[Judge Evaluation]
G --> H
H --> I[Judge Synthesis]
I --> J{More Loops?}
J -->|Yes| C
J -->|No| K[Final Output]
```
---
### MALT Architecture
**Overview:**
Specialized architecture for complex language processing tasks that require coordination between multiple language-focused agents.
**Use Cases:**
- Natural language processing pipelines
- Translation and localization
@ -637,6 +774,7 @@ graph TD
Agents vote on decisions with the majority determining the final outcome, providing democratic decision-making and error reduction through consensus.
**Use Cases:**
- Democratic decision-making processes
- Consensus building
@ -675,6 +813,7 @@ graph TD
Automatically constructs and configures multi-agent systems based on requirements, enabling dynamic system creation and adaptation.
**Use Cases:**
- Dynamic system creation
- Adaptive architectures
@ -706,12 +845,55 @@ graph TD
---
### Swarm Rearrange
**Overview:**
Orchestrates multiple swarms in sequential or parallel flow patterns with thread-safe operations and flow validation. Provides comprehensive swarm management and coordination capabilities.
**Use Cases:**
- Multi-swarm orchestration
- Flow pattern management
- Swarm coordination
- Sequential and parallel swarm execution
**[Learn More](https://docs.swarms.world/en/latest/swarms/structs/swarm_rearrange/)**
```mermaid
graph TD
A[Swarm Pool] --> B[SwarmRearrange]
B --> C[Flow Pattern]
C --> D[Sequential Flow]
C --> E[Parallel Flow]
D --> F[Swarm 1]
F --> G[Swarm 2]
G --> H[Swarm N]
E --> I[Swarm 1]
E --> J[Swarm 2]
E --> K[Swarm N]
H --> L[Result Aggregation]
I --> L
J --> L
K --> L
```
---
### Hybrid Hierarchical Cluster
**Overview:**
Combines hierarchical and peer-to-peer communication patterns for complex workflows that require both centralized coordination and distributed collaboration.
**Use Cases:**
- Complex enterprise workflows
- Multi-department coordination
@ -753,6 +935,7 @@ graph TD
Agents participate in democratic voting processes to select leaders or make collective decisions.
**Use Cases:**
- Democratic governance
- Consensus building
@ -794,6 +977,7 @@ graph TD
Adaptive conversation management with dynamic agent selection and interaction patterns.
**Use Cases:**
- Adaptive chatbots
- Dynamic customer service
@ -833,6 +1017,7 @@ graph TD
Hierarchical tree structure for organizing agents in parent-child relationships.
**Use Cases:**
- Organizational hierarchies
- Decision trees

File diff suppressed because it is too large

@ -215,6 +215,48 @@ result = research_swarm.run(task=task)
print(result)
```
## Visualizing Swarm Hierarchy
You can visualize the hierarchical structure of your swarm before executing tasks using the `display_hierarchy()` method:
```python
from swarms import Agent
from swarms.structs.hiearchical_swarm import HierarchicalSwarm
# Create specialized agents
research_agent = Agent(
agent_name="Research-Analyst",
agent_description="Specialized in comprehensive research and data gathering",
model_name="gpt-4o-mini",
)
analysis_agent = Agent(
agent_name="Data-Analyst",
agent_description="Expert in data analysis and pattern recognition",
model_name="gpt-4o-mini",
)
strategy_agent = Agent(
agent_name="Strategy-Consultant",
agent_description="Specialized in strategic planning and recommendations",
model_name="gpt-4o-mini",
)
# Create hierarchical swarm
swarm = HierarchicalSwarm(
name="Swarms Corporation Operations",
description="Enterprise-grade hierarchical swarm for complex task execution",
agents=[research_agent, analysis_agent, strategy_agent],
max_loops=1,
director_model_name="claude-haiku-4-5",
)
# Display the hierarchy visualization
swarm.display_hierarchy()
```
This will output a visual tree structure showing the Director and all worker agents, making it easy to understand the swarm's organizational structure before executing tasks.
## Key Takeaways
1. **Agent Specialization**: Create agents with specific, well-defined expertise areas
@ -222,5 +264,6 @@ print(result)
3. **Appropriate Loop Count**: Set `max_loops` based on task complexity (1-3 for most tasks)
4. **Verbose Logging**: Enable verbose mode during development for debugging
5. **Context Preservation**: The swarm maintains full conversation history automatically
6. **Hierarchy Visualization**: Use `display_hierarchy()` to visualize swarm structure before execution
For more detailed information about the `HierarchicalSwarm` API and advanced usage patterns, see the [main documentation](hierarchical_swarm.md).

@ -61,32 +61,6 @@ flowchart LR
- Maintains strict ordering of task processing
### Linear Swarm
```python
def linear_swarm(agents: AgentListType, tasks: List[str], return_full_history: bool = True)
```
**Information Flow:**
```mermaid
flowchart LR
Input[Task Input] --> A1
subgraph Sequential Processing
A1((Agent 1)) --> A2((Agent 2))
A2 --> A3((Agent 3))
A3 --> A4((Agent 4))
A4 --> A5((Agent 5))
end
A5 --> Output[Final Result]
```
**Best Used When:**
- Tasks need sequential, pipeline-style processing
- Each agent performs a specific transformation step
- Order of processing is critical
### Star Swarm
```python
def star_swarm(agents: AgentListType, tasks: List[str], return_full_history: bool = True)
@ -320,7 +294,6 @@ flowchart TD
## Common Use Cases
1. **Data Processing Pipelines**
- Linear Swarm
- Circular Swarm
2. **Distributed Computing**
@ -351,7 +324,6 @@ from swarms.structs.swarming_architectures import (
exponential_swarm,
fibonacci_swarm,
grid_swarm,
linear_swarm,
mesh_swarm,
one_to_three,
prime_swarm,
@ -459,29 +431,6 @@ def run_healthcare_grid_swarm():
print("\nGrid swarm processing completed")
print(result)
def run_finance_linear_swarm():
"""Loan approval process using linear swarm"""
print_separator()
print("FINANCE - LOAN APPROVAL PROCESS (Linear Swarm)")
agents = create_finance_agents()[:3]
tasks = [
"Review loan application and credit history",
"Assess risk factors and compliance requirements",
"Generate final loan recommendation"
]
print("\nTasks:")
for i, task in enumerate(tasks, 1):
print(f"{i}. {task}")
result = linear_swarm(agents, tasks)
print("\nResults:")
for log in result['history']:
print(f"\n{log['agent_name']}:")
print(f"Task: {log['task']}")
print(f"Response: {log['response']}")
def run_healthcare_star_swarm():
"""Complex medical case management using star swarm"""
print_separator()
@ -615,7 +564,6 @@ async def run_all_examples():
# Finance examples
run_finance_circular_swarm()
run_finance_linear_swarm()
run_finance_mesh_swarm()
run_mathematical_finance_swarms()

@ -29,6 +29,7 @@ graph TD
| Judge Agent | An impartial evaluator that analyzes both arguments and provides synthesis |
| Iterative Refinement | The process repeats for multiple rounds, each round building upon the judge's previous synthesis |
| Progressive Improvement | Each round refines the answer by incorporating feedback and addressing weaknesses |
| Preset Agents | Built-in optimized agents that can be used without manual configuration |
## Class Definition: `DebateWithJudge`
@ -36,12 +37,15 @@ graph TD
class DebateWithJudge:
def __init__(
self,
pro_agent: Agent,
con_agent: Agent,
judge_agent: Agent,
max_rounds: int = 3,
pro_agent: Optional[Agent] = None,
con_agent: Optional[Agent] = None,
judge_agent: Optional[Agent] = None,
agents: Optional[List[Agent]] = None,
preset_agents: bool = False,
max_loops: int = 3,
output_type: str = "str-all-except-first",
verbose: bool = True,
model_name: str = "gpt-4o-mini",
):
```
@ -49,12 +53,73 @@ class DebateWithJudge:
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `pro_agent` | `Agent` | Required | The agent arguing in favor (Pro position) |
| `con_agent` | `Agent` | Required | The agent arguing against (Con position) |
| `judge_agent` | `Agent` | Required | The judge agent that evaluates arguments and provides synthesis |
| `max_rounds` | `int` | `3` | Maximum number of debate rounds to execute |
| `pro_agent` | `Optional[Agent]` | `None` | The agent arguing in favor (Pro position). Not required if using `agents` list or `preset_agents`. |
| `con_agent` | `Optional[Agent]` | `None` | The agent arguing against (Con position). Not required if using `agents` list or `preset_agents`. |
| `judge_agent` | `Optional[Agent]` | `None` | The judge agent that evaluates arguments and provides synthesis. Not required if using `agents` list or `preset_agents`. |
| `agents` | `Optional[List[Agent]]` | `None` | A list of exactly 3 agents in order: `[pro_agent, con_agent, judge_agent]`. Takes precedence over individual agent parameters. |
| `preset_agents` | `bool` | `False` | If `True`, creates default Pro, Con, and Judge agents automatically with optimized system prompts. |
| `max_loops` | `int` | `3` | Maximum number of debate rounds to execute |
| `output_type` | `str` | `"str-all-except-first"` | Format for the output conversation history |
| `verbose` | `bool` | `True` | Whether to enable verbose logging |
| `model_name` | `str` | `"gpt-4o-mini"` | The model name to use for preset agents |
### Initialization Options
The `DebateWithJudge` class supports three ways to configure agents:
#### Option 1: Preset Agents (Simplest)
Use built-in agents with optimized system prompts for debates:
```python
from swarms import DebateWithJudge
# Create debate system with preset agents
debate = DebateWithJudge(
preset_agents=True,
max_loops=3,
model_name="gpt-4o-mini" # Optional: specify model
)
result = debate.run("Should AI be regulated?")
```
#### Option 2: List of Agents
Provide a list of exactly 3 agents (Pro, Con, Judge):
```python
from swarms import Agent, DebateWithJudge
# pro_agent, con_agent, and judge_agent are Agent instances created beforehand
agents = [pro_agent, con_agent, judge_agent]
# Create debate system with agent list
debate = DebateWithJudge(
agents=agents,
max_loops=3
)
result = debate.run("Is remote work better than office work?")
```
#### Option 3: Individual Agent Parameters
Provide each agent separately (original behavior):
```python
from swarms import Agent, DebateWithJudge
# Create debate system with individual agents
debate = DebateWithJudge(
pro_agent=my_pro_agent,
con_agent=my_con_agent,
judge_agent=my_judge_agent,
max_loops=3
)
result = debate.run("Should we colonize Mars?")
```
## API Reference
@ -94,7 +159,71 @@ def run(self, task: str) -> Union[str, List, dict]
- **Topic Refinement**: Judge's synthesis becomes the topic for the next round
4. **Result Formatting**: Returns the final result formatted according to `output_type`
**Example:**
**Example 1: Using Preset Agents (Simplest):**
```python
from swarms import DebateWithJudge
# Create the DebateWithJudge system with preset agents
debate_system = DebateWithJudge(
preset_agents=True,
max_loops=3,
output_type="str-all-except-first",
verbose=True,
)
# Define the debate topic
topic = (
"Should artificial intelligence be regulated by governments? "
"Discuss the balance between innovation and safety."
)
# Run the debate
result = debate_system.run(task=topic)
print(result)
```
**Example 2: Using Agent List:**
```python
from swarms import Agent, DebateWithJudge
# Create custom agents
pro_agent = Agent(
agent_name="Pro-Agent",
system_prompt="You are a skilled debater who argues in favor of positions...",
model_name="gpt-4o-mini",
max_loops=1,
)
con_agent = Agent(
agent_name="Con-Agent",
system_prompt="You are a skilled debater who argues against positions...",
model_name="gpt-4o-mini",
max_loops=1,
)
judge_agent = Agent(
agent_name="Judge-Agent",
system_prompt="You are an impartial judge who evaluates debates...",
model_name="gpt-4o-mini",
max_loops=1,
)
# Create the DebateWithJudge system using agent list
debate_system = DebateWithJudge(
agents=[pro_agent, con_agent, judge_agent],
max_loops=3,
output_type="str-all-except-first",
verbose=True,
)
# Run the debate
result = debate_system.run(task="Should AI be regulated?")
print(result)
```
**Example 3: Using Individual Agent Parameters:**
```python
from swarms import Agent, DebateWithJudge
@ -143,7 +272,7 @@ debate_system = DebateWithJudge(
pro_agent=pro_agent,
con_agent=con_agent,
judge_agent=judge_agent,
max_rounds=3,
max_loops=3,
output_type="str-all-except-first",
verbose=True,
)
@ -282,9 +411,10 @@ print(final_answer)
| `pro_agent` | `Agent` | The agent arguing in favor (Pro position) |
| `con_agent` | `Agent` | The agent arguing against (Con position) |
| `judge_agent` | `Agent` | The judge agent that evaluates arguments |
| `max_rounds` | `int` | Maximum number of debate rounds |
| `max_loops` | `int` | Maximum number of debate rounds |
| `output_type` | `str` | Format for returned results |
| `verbose` | `bool` | Whether verbose logging is enabled |
| `model_name` | `str` | Model name used for preset agents |
| `conversation` | `Conversation` | Conversation history management object |
## Output Types
@ -301,6 +431,21 @@ The `output_type` parameter controls how the conversation history is formatted:
## Usage Patterns
### Quick Start with Preset Agents
The fastest way to get started - no agent configuration needed:
```python
from swarms import DebateWithJudge
# Create debate system with built-in optimized agents
debate = DebateWithJudge(preset_agents=True, max_loops=3)
# Run a debate
result = debate.run("Should universal basic income be implemented?")
print(result)
```
### Single Topic Debate
For focused debate and refinement on a single complex topic:
@ -314,6 +459,26 @@ debate_system.output_type = "dict"
result = debate_system.run("Should universal basic income be implemented?")
```
### Using Agent List
Pass a list of 3 agents for flexible configuration:
```python
from swarms import Agent, DebateWithJudge
# Create or obtain agents from various sources
my_agents = [pro_agent, con_agent, judge_agent]
# Create debate with agent list
debate = DebateWithJudge(
agents=my_agents,
max_loops=3,
verbose=True
)
result = debate.run("Is nuclear energy the solution to climate change?")
```
### Batch Processing
For processing multiple related topics sequentially:
@ -359,14 +524,45 @@ technical_debate = DebateWithJudge(
pro_agent=technical_pro,
con_agent=technical_con,
judge_agent=technical_judge,
max_rounds=5, # More rounds for complex technical topics
max_loops=5, # More rounds for complex technical topics
verbose=True,
)
```
## Usage Examples
### Example 1: Policy Debate on AI Regulation
### Example 1: Quick Start with Preset Agents
The simplest way to use `DebateWithJudge` - no manual agent configuration needed:
```python
from swarms import DebateWithJudge
# Create the DebateWithJudge system with preset agents
debate_system = DebateWithJudge(
preset_agents=True,
max_loops=3,
model_name="gpt-4o-mini", # Specify model for preset agents
output_type="str-all-except-first",
verbose=True,
)
# Define the debate topic
topic = (
"Should artificial intelligence be regulated by governments? "
"Discuss the balance between innovation and safety."
)
# Run the debate
result = debate_system.run(task=topic)
print(result)
# Get the final refined answer
final_answer = debate_system.get_final_answer()
print(final_answer)
```
### Example 2: Policy Debate with Custom Agents
This example demonstrates using `DebateWithJudge` for a comprehensive policy debate on AI regulation, with multiple rounds of refinement.
@ -425,7 +621,7 @@ debate_system = DebateWithJudge(
pro_agent=pro_agent,
con_agent=con_agent,
judge_agent=judge_agent,
max_rounds=3,
max_loops=3,
output_type="str-all-except-first",
verbose=True,
)
@ -448,7 +644,47 @@ final_answer = debate_system.get_final_answer()
print(final_answer)
```
### Example 2: Technical Architecture Debate with Batch Processing
### Example 3: Using Agent List
This example demonstrates using the `agents` list parameter to provide agents:
```python
from swarms import Agent, DebateWithJudge
# Create your agents
pro = Agent(
agent_name="Microservices-Pro",
system_prompt="You advocate for microservices architecture...",
model_name="gpt-4o-mini",
max_loops=1,
)
con = Agent(
agent_name="Monolith-Pro",
system_prompt="You advocate for monolithic architecture...",
model_name="gpt-4o-mini",
max_loops=1,
)
judge = Agent(
agent_name="Architecture-Judge",
system_prompt="You evaluate architecture debates...",
model_name="gpt-4o-mini",
max_loops=1,
)
# Create debate with agent list
debate = DebateWithJudge(
agents=[pro, con, judge], # Pass as list
max_loops=2,
verbose=True,
)
result = debate.run("Should a startup use microservices or monolithic architecture?")
print(result)
```
### Example 4: Technical Architecture Debate with Batch Processing
This example demonstrates using `batched_run` to process multiple technical architecture questions, comparing different approaches to system design.
@ -497,7 +733,7 @@ architecture_debate = DebateWithJudge(
pro_agent=pro_agent,
con_agent=con_agent,
judge_agent=judge_agent,
max_rounds=2, # Fewer rounds for more focused technical debates
max_loops=2, # Fewer rounds for more focused technical debates
output_type="str-all-except-first",
verbose=True,
)
@ -518,7 +754,7 @@ for result in results:
print(result)
```
### Example 3: Business Strategy Debate with Custom Configuration
### Example 5: Business Strategy Debate with Custom Configuration
This example demonstrates a business strategy debate with custom agent configurations, multiple rounds, and accessing conversation history.
@ -575,7 +811,7 @@ strategy_debate = DebateWithJudge(
pro_agent=pro_agent,
con_agent=con_agent,
judge_agent=judge_agent,
max_rounds=4, # More rounds for complex strategic discussions
max_loops=4, # More rounds for complex strategic discussions
output_type="dict", # Use dict format for structured analysis
verbose=True,
)
@ -609,18 +845,27 @@ print(final_answer)
### Agent Configuration
!!! tip "Agent Configuration Best Practices"
- **Preset Agents**: Use `preset_agents=True` for quick setup with optimized prompts
- **Custom Agents**: For specialized domains, create custom agents with domain-specific prompts
- **Pro Agent**: Should be configured with expertise in the topic area and strong argumentation skills
- **Con Agent**: Should be configured to identify weaknesses and provide compelling alternatives
- **Judge Agent**: Should be configured with broad expertise and impartial evaluation capabilities
- Use appropriate models for the complexity of the debate topic
- Consider using more powerful models for the Judge agent
### Round Configuration
### Initialization Strategy
!!! info "Choosing an Initialization Method"
- **`preset_agents=True`**: Best for quick prototyping and general-purpose debates
- **`agents=[...]` list**: Best when you have agents from external sources or dynamic creation
- **Individual parameters**: Best for maximum control and explicit configuration
### Loop Configuration
!!! note "Round Configuration Tips"
- Use 2-3 rounds for most topics
- Use 4-5 rounds for complex, multi-faceted topics
- More rounds allow for deeper refinement but increase execution time
!!! note "Loop Configuration Tips"
- Use 2-3 loops (`max_loops`) for most topics
- Use 4-5 loops for complex, multi-faceted topics
- More loops allow for deeper refinement but increase execution time
- Consider the trade-off between refinement quality and cost
### Output Format Selection
@ -646,25 +891,31 @@ print(final_answer)
!!! danger "Common Problems"
**Issue**: Agents not following their roles
**Solution**: Ensure system prompts clearly define each agent's role and expertise
**Solution**: Ensure system prompts clearly define each agent's role and expertise. Consider using `preset_agents=True` for well-tested prompts.
---
**Issue**: Judge synthesis not improving over rounds
**Issue**: Judge synthesis not improving over loops
**Solution**: Increase `max_rounds` or improve Judge agent's system prompt to emphasize refinement
**Solution**: Increase `max_loops` or improve Judge agent's system prompt to emphasize refinement
---
**Issue**: Debate results are too generic
**Solution**: Use more specific system prompts and provide detailed context in the task
**Solution**: Use more specific system prompts and provide detailed context in the task. Custom agents often produce better domain-specific results.
---
**Issue**: Execution time is too long
**Solution**: Reduce `max_rounds`, use faster models, or process fewer topics in batch
**Solution**: Reduce `max_loops`, use faster models, or process fewer topics in batch
---
**Issue**: ValueError when initializing
**Solution**: Ensure you provide one of: (1) all three agents, (2) an agents list with exactly 3 agents, or (3) `preset_agents=True`
## Contributing

@ -12,6 +12,7 @@ Key features:
|------------------------|-----------------------------------------------------------------------------------------------|
| **Agent-based nodes** | Each node represents an agent that can process tasks |
| **Directed graph structure** | Edges define the flow of data between agents |
| **Dual backend support** | Choose between NetworkX (compatibility) or Rustworkx (performance) backends |
| **Parallel execution** | Multiple agents can run simultaneously within layers |
| **Automatic compilation** | Optimizes workflow structure for efficient execution |
| **Rich visualization** | Generate visual representations using Graphviz |
@ -25,37 +26,40 @@ graph TB
subgraph "GraphWorkflow Architecture"
A[GraphWorkflow] --> B[Node Collection]
A --> C[Edge Collection]
A --> D[NetworkX Graph]
A --> D[Graph Backend]
A --> E[Execution Engine]
B --> F[Agent Nodes]
C --> G[Directed Edges]
D --> H[Topological Sort]
E --> I[Parallel Execution]
E --> J[Layer Processing]
D --> H[NetworkX Backend]
D --> I[Rustworkx Backend]
D --> J[Topological Sort]
E --> K[Parallel Execution]
E --> L[Layer Processing]
subgraph "Node Types"
F --> K[Agent Node]
K --> L[Agent Instance]
K --> M[Node Metadata]
F --> M[Agent Node]
M --> N[Agent Instance]
M --> O[Node Metadata]
end
subgraph "Edge Types"
G --> N[Simple Edge]
G --> O[Fan-out Edge]
G --> P[Fan-in Edge]
G --> Q[Parallel Chain]
G --> P[Simple Edge]
G --> Q[Fan-out Edge]
G --> R[Fan-in Edge]
G --> S[Parallel Chain]
end
subgraph "Execution Patterns"
I --> R[Thread Pool]
I --> S[Concurrent Futures]
J --> T[Layer-by-layer]
J --> U[Dependency Resolution]
K --> T[Thread Pool]
K --> U[Concurrent Futures]
L --> V[Layer-by-layer]
L --> W[Dependency Resolution]
end
end
```
## Class Reference
| Parameter | Type | Description | Default |
@ -71,6 +75,70 @@ graph TB
| `task` | `Optional[str]` | The task to be executed by the workflow | `None` |
| `auto_compile` | `bool` | Whether to automatically compile the workflow | `True` |
| `verbose` | `bool` | Whether to enable detailed logging | `False` |
| `backend` | `str` | Graph backend to use ("networkx" or "rustworkx") | `"networkx"` |
## Graph Backends
GraphWorkflow supports two graph backend implementations, each with different performance characteristics:
### NetworkX Backend (Default)
The **NetworkX** backend is the default and most widely compatible option. It provides:
| Feature | Description |
|---------------------|---------------------------------------------------------|
| ✅ Full compatibility | Works out of the box with no additional dependencies |
| ✅ Mature ecosystem | Well-tested and stable |
| ✅ Rich features | Comprehensive graph algorithms and operations |
| ✅ Python-native | Pure Python implementation |
**Use NetworkX when:**
- You need maximum compatibility
- Working with small to medium-sized graphs (< 1000 nodes)
- You want zero additional dependencies
### Rustworkx Backend (High Performance)
The **Rustworkx** backend provides significant performance improvements for large graphs:
| Feature | Description |
|--------------------|-----------------------------------------------------------------|
| ⚡ High performance| Rust-based implementation for faster operations |
| ⚡ Memory efficient| Optimized for large-scale graphs |
| ⚡ Scalable | Better performance with graphs containing 1000+ nodes |
| ⚡ Same API | Drop-in replacement with identical interface |
**Use Rustworkx when:**
- Working with large graphs (1000+ nodes)
- Performance is critical
- You can install additional dependencies
**Installation:**
```bash
pip install rustworkx
```
**Note:** If rustworkx is not installed and you specify `backend="rustworkx"`, GraphWorkflow will automatically fall back to NetworkX with a warning.
### Backend Selection
Both backends implement the same `GraphBackend` interface, ensuring complete API compatibility. You can switch between backends without changing your code:
```python
# Use NetworkX (default)
workflow = GraphWorkflow(backend="networkx")
# Use Rustworkx for better performance
workflow = GraphWorkflow(backend="rustworkx")
```
The backend choice is transparent to the rest of the API - all methods work identically regardless of which backend is used.
### Core Methods
@ -455,7 +523,7 @@ Constructs a workflow from a list of agents and connections.
| `entry_points` | `List[str]` | List of entry point node IDs | `None` |
| `end_points` | `List[str]` | List of end point node IDs | `None` |
| `task` | `str` | Task to be executed by the workflow | `None` |
| `**kwargs` | `Any` | Additional keyword arguments | `{}` |
| `**kwargs` | `Any` | Additional keyword arguments (e.g., `backend`, `verbose`, `auto_compile`) | `{}` |
**Returns:**
@ -464,6 +532,7 @@ Constructs a workflow from a list of agents and connections.
**Example:**
```python
# Using NetworkX backend (default)
workflow = GraphWorkflow.from_spec(
agents=[agent1, agent2, agent3],
edges=[
@ -473,10 +542,56 @@ workflow = GraphWorkflow.from_spec(
],
task="Analyze market data"
)
# Using Rustworkx backend for better performance
workflow = GraphWorkflow.from_spec(
agents=[agent1, agent2, agent3],
edges=[
("agent1", "agent2"),
("agent2", "agent3"),
],
task="Analyze market data",
backend="rustworkx" # Specify backend via kwargs
)
```
## Examples
### Using Rustworkx Backend for Performance
```python
from swarms import Agent, GraphWorkflow
# Create agents
research_agent = Agent(
agent_name="ResearchAgent",
model_name="gpt-4",
max_loops=1
)
analysis_agent = Agent(
agent_name="AnalysisAgent",
model_name="gpt-4",
max_loops=1
)
# Build workflow with rustworkx backend for better performance
workflow = GraphWorkflow(
name="High-Performance-Workflow",
backend="rustworkx" # Use rustworkx backend
)
workflow.add_node(research_agent)
workflow.add_node(analysis_agent)
workflow.add_edge("ResearchAgent", "AnalysisAgent")
# Execute - backend is transparent to the API
results = workflow.run("What are the latest trends in AI?")
print(results)
```
**Note:** Make sure to install rustworkx first: `pip install rustworkx`
### Basic Sequential Workflow
```python
@ -667,6 +782,46 @@ loaded_workflow = GraphWorkflow.load_from_file(
new_results = loaded_workflow.run("Continue with quantum cryptography analysis")
```
### Large-Scale Workflow with Rustworkx
```python
from swarms import Agent, GraphWorkflow
# Create a large workflow with many agents
# Rustworkx backend provides better performance for large graphs
workflow = GraphWorkflow(
name="Large-Scale-Workflow",
backend="rustworkx", # Use rustworkx for better performance
verbose=True
)
# Create many agents (e.g., for parallel data processing)
agents = []
for i in range(50):
agent = Agent(
agent_name=f"Processor{i}",
model_name="gpt-4",
max_loops=1
)
agents.append(agent)
workflow.add_node(agent)
# Create complex interconnections
# Rustworkx handles this efficiently
for i in range(0, 50, 10):
source_agents = [f"Processor{j}" for j in range(i, min(i+10, 50))]
target_agents = [f"Processor{j}" for j in range(i+10, min(i+20, 50))]
if target_agents:
workflow.add_parallel_chain(source_agents, target_agents)
# Compile and execute
workflow.compile()
status = workflow.get_compilation_status()
print(f"Compiled workflow with {status['cached_layers_count']} layers")
results = workflow.run("Process large dataset in parallel")
```
### Advanced Pattern Detection
```python
@ -770,7 +925,8 @@ The `GraphWorkflow` class provides a powerful and flexible framework for orchest
|-----------------|--------------------------------------------------------------------------------------------------|
| **Scalability** | Supports workflows with hundreds of agents through efficient parallel execution |
| **Flexibility** | Multiple connection patterns (sequential, fan-out, fan-in, parallel chains) |
| **Performance** | Automatic compilation and optimization for faster execution |
| **Performance** | Automatic compilation and optimization for faster execution; rustworkx backend for large-scale graphs |
| **Backend Choice** | Choose between NetworkX (compatibility) or Rustworkx (performance) based on your needs |
| **Visualization** | Rich visual representations for workflow understanding and debugging |
| **Persistence** | Complete serialization and deserialization capabilities |
| **Error Handling** | Comprehensive error handling and recovery mechanisms |
@ -793,10 +949,28 @@ The `GraphWorkflow` class provides a powerful and flexible framework for orchest
|---------------------------------------|------------------------------------------------------------------|
| **Use meaningful agent names** | Helps with debugging and visualization |
| **Leverage parallel patterns** | Use fan-out and fan-in for better performance |
| **Choose the right backend** | Use rustworkx for large graphs (1000+ nodes), networkx for smaller graphs |
| **Compile workflows** | Always compile before execution for optimal performance |
| **Monitor execution** | Use verbose mode and status reporting for debugging |
| **Save important workflows** | Use serialization for workflow persistence |
| **Handle errors gracefully** | Implement proper error handling and recovery |
| **Visualize complex workflows** | Use visualization to understand and debug workflows |
### Backend Performance Considerations
When choosing between NetworkX and Rustworkx backends:
| Graph Size | Recommended Backend | Reason |
|------------|-------------------|--------|
| < 100 nodes | NetworkX | Minimal overhead, no extra dependencies |
| 100-1000 nodes | NetworkX or Rustworkx | Both perform well, choose based on dependency preferences |
| 1000+ nodes | Rustworkx | Significant performance benefits for large graphs |
| Very large graphs (10k+ nodes) | Rustworkx | Essential for acceptable performance |
**Performance Tips:**
- Rustworkx provides 2-10x speedup for topological operations on large graphs
- Both backends support the same features and API
- You can switch backends without code changes
- Rustworkx uses less memory for large graphs
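To check these trade-offs for your own workload, one rough approach is to time `compile()` on the same graph under each backend. The sketch below only uses `GraphWorkflow` calls shown earlier on this page; the node count and model name are arbitrary.

```python
import time

from swarms import Agent, GraphWorkflow


def build_chain(backend: str, n: int = 300) -> GraphWorkflow:
    """Build a simple n-node chain workflow on the given backend."""
    workflow = GraphWorkflow(
        name=f"bench-{backend}", backend=backend, auto_compile=False
    )
    agents = [
        Agent(agent_name=f"Node{i}", model_name="gpt-4o-mini", max_loops=1)
        for i in range(n)
    ]
    for agent in agents:
        workflow.add_node(agent)
    for i in range(n - 1):
        workflow.add_edge(f"Node{i}", f"Node{i + 1}")
    return workflow


for backend in ("networkx", "rustworkx"):
    workflow = build_chain(backend)
    start = time.perf_counter()
    workflow.compile()  # topological sorting / layer construction happens here
    print(f"{backend}: compiled in {time.perf_counter() - start:.3f}s")
```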
The GraphWorkflow system represents a significant advancement in multi-agent orchestration, providing the tools needed to build complex, scalable, and maintainable AI workflows.

@ -35,6 +35,7 @@ The Hierarchical Swarm follows a clear workflow pattern:
| **Comprehensive Logging** | Detailed logging for debugging and monitoring |
| **Live Streaming** | Real-time streaming callbacks for monitoring agent outputs |
| **Token-by-Token Updates** | Watch text formation in real-time as agents generate responses |
| **Hierarchy Visualization** | Visual tree representation of swarm structure with `display_hierarchy()` |
## Constructor
@ -70,6 +71,65 @@ Initializes a new HierarchicalSwarm instance.
## Core Methods
### `display_hierarchy()`
Displays a visual tree representation of the hierarchical swarm structure, showing the Director at the top level and all worker agents as child branches. The method uses Rich formatting to render a clear console output that helps visualize the organizational structure of the swarm.
#### Returns
| Type | Description |
|------|-------------|
| `None` | Prints the hierarchy visualization to the console |
#### Example
```python
from swarms import Agent
from swarms.structs.hiearchical_swarm import HierarchicalSwarm
# Create specialized agents
research_agent = Agent(
agent_name="Research-Analyst",
agent_description="Specialized in comprehensive research and data gathering",
model_name="gpt-4o-mini",
)
analysis_agent = Agent(
agent_name="Data-Analyst",
agent_description="Expert in data analysis and pattern recognition",
model_name="gpt-4o-mini",
)
strategy_agent = Agent(
agent_name="Strategy-Consultant",
agent_description="Specialized in strategic planning and recommendations",
model_name="gpt-4o-mini",
)
# Create hierarchical swarm
swarm = HierarchicalSwarm(
name="Swarms Corporation Operations",
description="Enterprise-grade hierarchical swarm for complex task execution",
agents=[research_agent, analysis_agent, strategy_agent],
max_loops=1,
director_model_name="claude-haiku-4-5",
)
# Display the hierarchy visualization
swarm.display_hierarchy()
```
The output will show a visual tree structure like:
```
┌─ HierarchicalSwarm Hierarchy: Swarms Corporation Operations ─┐
│ │
│ 🎯 Director [claude-haiku-4-5] │
│ ├─ 🤖 Research-Analyst [gpt-4o-mini] - Specialized in... │
│ ├─ 🤖 Data-Analyst [gpt-4o-mini] - Expert in data... │
│ └─ 🤖 Strategy-Consultant [gpt-4o-mini] - Specialized... │
└───────────────────────────────────────────────────────────────┘
```
### `run()`
Executes the hierarchical swarm for a specified number of feedback loops, processing the task through multiple iterations for refinement and improvement.
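A minimal sketch of a typical `run()` call, reusing the `swarm` instance from the `display_hierarchy()` example above; the task string is illustrative.

```python
# Reusing `swarm` from the display_hierarchy() example above.
# The Director decomposes the task, dispatches subtasks to the worker agents,
# and returns the aggregated result after the configured number of loops.
result = swarm.run(
    task=(
        "Research the current state of the quantum computing market, "
        "analyze the main competitors, and recommend a go-to-market strategy."
    )
)
print(result)
```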

@ -0,0 +1,534 @@
# LLM Council Class Documentation
```mermaid
flowchart TD
A[User Query] --> B[Council Members]
subgraph "Council Members"
C1[GPT-5.1-Councilor]
C2[Gemini-3-Pro-Councilor]
C3[Claude-Sonnet-4.5-Councilor]
C4[Grok-4-Councilor]
end
B --> C1
B --> C2
B --> C3
B --> C4
C1 --> D[Responses]
C2 --> D
C3 --> D
C4 --> D
D --> E[Anonymize & Evaluate]
E --> F[Chairman Synthesis]
F --> G[Final Response]
```
The `LLMCouncil` class orchestrates multiple specialized LLM agents to collaboratively answer queries through a structured peer review and synthesis process. Inspired by Andrej Karpathy's llm-council implementation, this architecture demonstrates how different models evaluate and rank each other's work, often selecting responses from other models as superior to their own.
The class automatically tracks all agent messages in a `Conversation` object and formats output using `history_output_formatter`, providing flexible output formats including dictionaries, lists, strings, JSON, YAML, and more.
## Workflow Overview
The LLM Council follows a four-step process:
1. **Parallel Response Generation**: All council members independently respond to the user query
2. **Anonymization**: Responses are anonymized with random IDs (A, B, C, D, etc.) to ensure objective evaluation
3. **Peer Review**: Each member evaluates and ranks all responses (including potentially their own)
4. **Synthesis**: The Chairman agent synthesizes all responses and evaluations into a final comprehensive answer
## Class Definition
### LLMCouncil
```python
class LLMCouncil:
```
### Attributes
| Attribute | Type | Description | Default |
|-----------|------|-------------|---------|
| `council_members` | `List[Agent]` | List of Agent instances representing council members | `None` (creates default council) |
| `chairman` | `Agent` | The Chairman agent responsible for synthesizing responses | Created during initialization |
| `conversation` | `Conversation` | Conversation object tracking all messages throughout the workflow | Created during initialization |
| `output_type` | `HistoryOutputType` | Format for the output (e.g., "dict", "list", "string", "json", "yaml") | `"dict"` |
| `verbose` | `bool` | Whether to print progress and intermediate results | `True` |
## Methods
### `__init__`
Initializes the LLM Council with council members and a Chairman agent.
#### Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `id` | `str` | `swarm_id()` | Unique identifier for the council instance. |
| `name` | `str` | `"LLM Council"` | Name of the council instance. |
| `description` | `str` | `"A collaborative council..."` | Description of the council's purpose. |
| `council_members` | `Optional[List[Agent]]` | `None` | List of Agent instances representing council members. If `None`, creates default council with GPT-5.1, Gemini 3 Pro, Claude Sonnet 4.5, and Grok-4. |
| `chairman_model` | `str` | `"gpt-5.1"` | Model name for the Chairman agent that synthesizes responses. |
| `verbose` | `bool` | `True` | Whether to print progress and intermediate results. |
| `output_type` | `HistoryOutputType` | `"dict"` | Format for the output. Options: "list", "dict", "string", "final", "json", "yaml", "xml", "dict-all-except-first", "str-all-except-first", "dict-final", "list-final". |
#### Returns
| Type | Description |
|------|-------------|
| `LLMCouncil` | Initialized LLM Council instance. |
#### Description
Creates an LLM Council instance with specialized council members. If no members are provided, it creates a default council consisting of:
| Council Member | Description |
|---------------------------------|------------------------------------------|
| **GPT-5.1-Councilor** | Analytical and comprehensive responses |
| **Gemini-3-Pro-Councilor** | Concise and well-processed responses |
| **Claude-Sonnet-4.5-Councilor** | Thoughtful and balanced responses |
| **Grok-4-Councilor** | Creative and innovative responses |
The Chairman agent is automatically created with a specialized prompt for synthesizing responses. A `Conversation` object is also initialized to track all messages throughout the workflow, including user queries, council member responses, evaluations, and the final synthesis.
#### Example Usage
```python
from swarms.structs.llm_council import LLMCouncil
# Create council with default members
council = LLMCouncil(verbose=True)
# Create council with custom members and output format
from swarms import Agent
custom_members = [
Agent(agent_name="Expert-1", model_name="gpt-4", max_loops=1),
Agent(agent_name="Expert-2", model_name="claude-3-opus", max_loops=1),
]
council = LLMCouncil(
council_members=custom_members,
chairman_model="gpt-4",
verbose=True,
output_type="json" # Output as JSON string
)
```
---
### `run`
Executes the full LLM Council workflow: parallel responses, anonymization, peer review, and synthesis. All messages are tracked in the conversation object and formatted according to the `output_type` setting.
#### Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `query` | `str` | Required | The user's query to process through the council. |
#### Returns
| Type | Description |
|------|-------------|
| `Union[List, Dict, str]` | Formatted output based on `output_type`. The output contains the conversation history with all messages tracked throughout the workflow. |
#### Output Format
The return value depends on the `output_type` parameter set during initialization:
| `output_type` value | Description |
|---------------------------------|---------------------------------------------------------------------|
| **`"dict"`** (default) | Returns conversation as a dictionary/list of message dictionaries |
| **`"list"`** | Returns conversation as a list of formatted strings (`"role: content"`) |
| **`"string"`** or **`"str"`** | Returns conversation as a formatted string |
| **`"final"`** or **`"last"`** | Returns only the content of the final message (Chairman's response) |
| **`"json"`** | Returns conversation as a JSON string |
| **`"yaml"`** | Returns conversation as a YAML string |
| **`"xml"`** | Returns conversation as an XML string |
| **`"dict-all-except-first"`** | Returns all messages except the first as a dictionary |
| **`"str-all-except-first"`** | Returns all messages except the first as a string |
| **`"dict-final"`** | Returns the final message as a dictionary |
| **`"list-final"`** | Returns the final message as a list |
#### Conversation Tracking
All messages are automatically tracked in the conversation object with the following roles:
- **`"User"`**: The original user query
- **`"{member_name}"`**: Each council member's response (e.g., "GPT-5.1-Councilor")
- **`"{member_name}-Evaluation"`**: Each council member's evaluation (e.g., "GPT-5.1-Councilor-Evaluation")
- **`"Chairman"`**: The final synthesized response
#### Description
Executes the complete LLM Council workflow:
1. **User Query Tracking**: Adds the user query to the conversation as "User" role
2. **Dispatch Phase**: Sends the query to all council members in parallel using `run_agents_concurrently`
3. **Collection Phase**: Collects all responses, maps them to member names, and adds each to the conversation with the member's name as the role
4. **Anonymization Phase**: Creates anonymous IDs (A, B, C, D, etc.) and shuffles them to ensure anonymity
5. **Evaluation Phase**: Each member evaluates and ranks all anonymized responses using `batched_grid_agent_execution`, then adds evaluations to the conversation with "{member_name}-Evaluation" as the role
6. **Synthesis Phase**: The Chairman agent synthesizes all responses and evaluations into a final comprehensive answer, which is added to the conversation as "Chairman" role
7. **Output Formatting**: Returns the conversation formatted according to the `output_type` setting using `history_output_formatter`
The method provides verbose output by default, showing progress at each stage. All messages are tracked in the `conversation` attribute for later access or export.
#### Example Usage
```python
from swarms.structs.llm_council import LLMCouncil
# Create council with default output format (dict)
council = LLMCouncil(verbose=True)
query = "What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?"
# Run the council - returns formatted conversation based on output_type
result = council.run(query)
# With default "dict" output_type, result is a list of message dictionaries
# Access conversation messages
for message in result:
print(f"{message['role']}: {message['content'][:200]}...")
# Access the conversation object directly for more control
conversation = council.conversation
print("\nFinal message:", conversation.get_final_message_content())
# Get conversation as string
print("\nFull conversation:")
print(conversation.get_str())
# Example with different output types
council_json = LLMCouncil(output_type="json", verbose=False)
result_json = council_json.run(query) # Returns JSON string
council_final = LLMCouncil(output_type="final", verbose=False)
result_final = council_final.run(query) # Returns only final response string
```
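Because every message is tagged with the roles listed under Conversation Tracking, the returned history is easy to slice. Building on `result` from the example above (default `"dict"` output), a small sketch that keeps only the peer evaluations:

```python
# Keep only the peer-review messages, identified by the "-Evaluation" suffix
evaluations = [m for m in result if m["role"].endswith("-Evaluation")]
for evaluation in evaluations:
    print(f"{evaluation['role']}: {evaluation['content'][:150]}...")
```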
---
### `_create_default_council`
Creates default council members with specialized prompts and models.
#### Parameters
None (internal method).
#### Returns
| Type | Description |
|------|-------------|
| `List[Agent]` | List of Agent instances configured as council members. |
#### Description
Internal method that creates the default council configuration with four specialized agents:
- **GPT-5.1-Councilor** (`model_name="gpt-5.1"`): Analytical and comprehensive, temperature=0.7
- **Gemini-3-Pro-Councilor** (`model_name="gemini-2.5-flash"`): Concise and structured, temperature=0.7
- **Claude-Sonnet-4.5-Councilor** (`model_name="anthropic/claude-sonnet-4-5"`): Thoughtful and balanced, temperature=0.0
- **Grok-4-Councilor** (`model_name="x-ai/grok-4"`): Creative and innovative, temperature=0.8
Each agent is configured with:
- Specialized system prompts matching their role
- `max_loops=1` for single-response generation
- `verbose=False` to reduce noise during parallel execution
- Appropriate temperature settings for their style
---
## Helper Functions
### `get_gpt_councilor_prompt()`
Returns the system prompt for GPT-5.1 councilor agent.
#### Returns
| Type | Description |
|------|-------------|
| `str` | System prompt string emphasizing analytical thinking and comprehensive coverage. |
---
### `get_gemini_councilor_prompt()`
Returns the system prompt for Gemini 3 Pro councilor agent.
#### Returns
| Type | Description |
|------|-------------|
| `str` | System prompt string emphasizing concise, well-processed, and structured responses. |
---
### `get_claude_councilor_prompt()`
Returns the system prompt for Claude Sonnet 4.5 councilor agent.
#### Returns
| Type | Description |
|------|-------------|
| `str` | System prompt string emphasizing thoughtful, balanced, and nuanced responses. |
---
### `get_grok_councilor_prompt()`
Returns the system prompt for Grok-4 councilor agent.
#### Returns
| Type | Description |
|------|-------------|
| `str` | System prompt string emphasizing creative, innovative, and unique perspectives. |
---
### `get_chairman_prompt()`
Returns the system prompt for the Chairman agent.
#### Returns
| Type | Description |
|------|-------------|
| `str` | System prompt string for synthesizing responses and evaluations into a final answer. |
---
### `get_evaluation_prompt(query, responses, evaluator_name)`
Creates evaluation prompt for council members to review and rank responses.
#### Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `query` | `str` | The original user query. |
| `responses` | `Dict[str, str]` | Dictionary mapping anonymous IDs to response texts. |
| `evaluator_name` | `str` | Name of the agent doing the evaluation. |
#### Returns
| Type | Description |
|------|-------------|
| `str` | Formatted evaluation prompt string with instructions for ranking responses. |
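An illustrative call using the parameter types listed above; the query and response texts are placeholders.

```python
from swarms.structs.llm_council import get_evaluation_prompt

prompt = get_evaluation_prompt(
    query="What are the top five best energy stocks?",
    responses={
        "A": "Anonymized response text from one council member...",
        "B": "Anonymized response text from another council member...",
    },
    evaluator_name="GPT-5.1-Councilor",
)
print(prompt)  # Ranking instructions plus the anonymized responses
```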
---
### `get_synthesis_prompt(query, original_responses, evaluations, id_to_member)`
Creates synthesis prompt for the Chairman.
#### Parameters
| Parameter | Type | Description |
|-----------|------|-------------|
| `query` | `str` | Original user query. |
| `original_responses` | `Dict[str, str]` | Dictionary mapping member names to their responses. |
| `evaluations` | `Dict[str, str]` | Dictionary mapping evaluator names to their evaluation texts. |
| `id_to_member` | `Dict[str, str]` | Mapping from anonymous IDs to member names. |
#### Returns
| Type | Description |
|------|-------------|
| `str` | Formatted synthesis prompt for the Chairman agent. |
---
## Use Cases
The LLM Council is ideal for scenarios requiring:
- **Multi-perspective Analysis**: When you need diverse viewpoints on complex topics
- **Quality Assurance**: When peer review and ranking can improve response quality
- **Transparent Decision Making**: When you want to see how different models evaluate each other
- **Synthesis of Expertise**: When combining multiple specialized perspectives is valuable
### Common Applications
| Use Case | Description |
|-----------------------|--------------------------------------------------------------------------------------------------|
| **Medical Diagnosis** | Multiple medical AI agents provide diagnoses, evaluate each other, and synthesize recommendations |
| **Financial Analysis**| Different financial experts analyze investments and rank each other's assessments |
| **Legal Analysis** | Multiple legal perspectives evaluate compliance and risk |
| **Business Strategy** | Diverse strategic viewpoints are synthesized into comprehensive plans |
| **Research Analysis** | Multiple research perspectives are combined for thorough analysis |
## Examples
For comprehensive examples demonstrating various use cases, see the [LLM Council Examples](../../../examples/multi_agent/llm_council_examples/) directory:
- **Medical**: `medical_diagnosis_council.py`, `medical_treatment_council.py`
- **Finance**: `finance_analysis_council.py`, `etf_stock_analysis_council.py`
- **Business**: `business_strategy_council.py`, `marketing_strategy_council.py`
- **Technology**: `technology_assessment_council.py`, `research_analysis_council.py`
- **Legal**: `legal_analysis_council.py`
### Quick Start Example
```python
from swarms.structs.llm_council import LLMCouncil
# Create the council with default output format
council = LLMCouncil(verbose=True)
# Example query
query = "What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?"
# Run the council - returns formatted conversation
result = council.run(query)
# With default "dict" output_type, result is a list of message dictionaries
# Print all messages
for message in result:
role = message['role']
content = message['content']
print(f"\n{role}:")
print(content[:500] + "..." if len(content) > 500 else content)
# Access conversation object directly for more options
conversation = council.conversation
# Get only the final response
print("\n" + "="*80)
print("FINAL RESPONSE")
print("="*80)
print(conversation.get_final_message_content())
# Get conversation as formatted string
print("\n" + "="*80)
print("FULL CONVERSATION")
print("="*80)
print(conversation.get_str())
# Export conversation to JSON
conversation.export()
```
## Customization
### Creating Custom Council Members
You can create custom council members with specialized roles:
```python
from swarms import Agent
from swarms.structs.llm_council import LLMCouncil, get_gpt_councilor_prompt
# Create custom councilor
custom_agent = Agent(
agent_name="Domain-Expert-Councilor",
agent_description="Specialized domain expert for specific analysis",
system_prompt=get_gpt_councilor_prompt(), # Or create custom prompt
model_name="gpt-4",
max_loops=1,
verbose=False,
temperature=0.7,
)
# Create council with custom members
council = LLMCouncil(
council_members=[custom_agent, ...], # Add your custom agents
chairman_model="gpt-4",
verbose=True
)
```
### Custom Chairman Model
You can specify a different model for the Chairman:
```python
council = LLMCouncil(
chairman_model="claude-3-opus", # Use Claude as Chairman
verbose=True
)
```
### Custom Output Format
You can control the output format using the `output_type` parameter:
```python
# Get output as JSON string
council = LLMCouncil(output_type="json")
result = council.run(query) # Returns JSON string
# Get only the final response
council = LLMCouncil(output_type="final")
result = council.run(query) # Returns only final response string
# Get as YAML
council = LLMCouncil(output_type="yaml")
result = council.run(query) # Returns YAML string
# Get as formatted string
council = LLMCouncil(output_type="string")
result = council.run(query) # Returns formatted conversation string
```
### Accessing Conversation History
The conversation object is accessible for advanced usage:
```python
council = LLMCouncil()
council.run(query)
# Access conversation directly
conversation = council.conversation
# Get conversation history
history = conversation.conversation_history
# Export to file
conversation.export() # Saves to default location
# Get specific format
json_output = conversation.to_json()
yaml_output = conversation.return_messages_as_dictionary()
```
## Architecture Benefits
1. **Diversity**: Multiple models provide varied perspectives and approaches
2. **Quality Control**: Peer review ensures responses are evaluated objectively
3. **Synthesis**: Chairman combines the best elements from all responses
4. **Transparency**: Full visibility into individual responses and evaluation rankings
5. **Scalability**: Easy to add or remove council members
6. **Flexibility**: Supports custom agents and models
7. **Conversation Tracking**: All messages are automatically tracked in a Conversation object for history and export
8. **Flexible Output**: Multiple output formats supported via `history_output_formatter` (dict, list, string, JSON, YAML, XML, etc.)
## Performance Considerations
| Feature | Description |
|---------------------------|----------------------------------------------------------------------------------------------------------------|
| **Parallel Execution** | Both response generation and evaluation phases run in parallel for efficiency |
| **Anonymization** | Responses are anonymized to prevent bias in evaluation |
| **Model Selection** | Different models can be used for different roles based on their strengths |
| **Verbose Mode** | Can be disabled for production use to reduce output |
| **Conversation Management** | Conversation object efficiently tracks all messages in memory and supports export to JSON/YAML files |
| **Output Formatting** | Choose lightweight output formats (e.g., "final") for production to reduce memory usage |
## Related Documentation
- [Multi-Agent Architectures Overview](overview.md)
- [Council of Judges](council_of_judges.md) - Similar peer review pattern
- [Agent Class Reference](agent.md) - Understanding individual agents
- [Conversation Class Reference](conversation.md) - Understanding conversation tracking and management
- [Multi-Agent Execution Utilities](various_execution_methods.md) - Underlying execution methods
- [History Output Formatter](../../../swarms/utils/history_output_formatter.py) - Output formatting utilities

@ -2,6 +2,8 @@
The `RoundRobinSwarm` class is designed to manage and execute tasks among multiple agents in a round-robin fashion. This approach ensures that each agent in a swarm receives an equal opportunity to execute tasks, which promotes fairness and efficiency in distributed systems. It is particularly useful in environments where collaborative, sequential task execution is needed among various agents.
This swarm implements an AutoGen-style communication pattern in which the agent order is shuffled randomly on each loop to vary the interactions. Each agent receives the full conversation context so it can build on the other agents' responses.
## What is Round-Robin?
Round-robin is a scheduling technique commonly used in computing for managing processes in shared systems. It involves assigning a fixed time slot to each process and cycling through all processes in a circular order without prioritization. In the context of swarms of agents, this method ensures equitable distribution of tasks and resource usage among all agents.
@ -10,12 +12,33 @@ Round-robin is a scheduling technique commonly used in computing for managing pr
In swarms, `RoundRobinSwarm` uses round-robin scheduling to manage tasks among agents such as software components, autonomous robots, or virtual entities. This strategy is beneficial where tasks are interdependent or require sequential processing.
## Architecture
```mermaid
graph LR
User[Task] --> A1[Agent 1]
A1 --> A2[Agent 2]
A2 --> A3[Agent 3]
A3 --> A1
A3 --> Output[Result]
```
Each agent receives the task with full conversation history, responds, then passes context to the next agent. This cycle repeats for `max_loops` iterations.
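A minimal usage sketch based on the attributes and parameters documented below; the agents and task are illustrative, and the import assumes `RoundRobinSwarm` is exported from the top-level `swarms` package.

```python
from swarms import Agent, RoundRobinSwarm

# Three agents that take turns on the same task
analyst = Agent(
    agent_name="Market-Analyst",
    agent_description="Analyzes market conditions",
    model_name="gpt-4o-mini",
    max_loops=1,
)
strategist = Agent(
    agent_name="Strategist",
    agent_description="Builds a strategy on top of the analysis",
    model_name="gpt-4o-mini",
    max_loops=1,
)
reviewer = Agent(
    agent_name="Reviewer",
    agent_description="Critiques and refines the strategy",
    model_name="gpt-4o-mini",
    max_loops=1,
)

# Each loop shuffles the agent order and passes the full conversation along
swarm = RoundRobinSwarm(
    agents=[analyst, strategist, reviewer],
    max_loops=2,
    verbose=True,
)

result = swarm.run("Draft a launch plan for a new robotics product line.")
print(result)
```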
## Class Attributes
- `agents (List[Agent])`: List of agents participating in the swarm.
- `verbose (bool)`: Enables or disables detailed logging of swarm operations.
- `max_loops (int)`: Limits the number of times the swarm cycles through all agents.
- `index (int)`: Maintains the current position in the agent list to ensure round-robin execution.
| Attribute | Type | Description |
|-----------|------|-------------|
| `name` | `str` | Name of the swarm. |
| `description` | `str` | Description of the swarm's purpose. |
| `agents` | `List[Agent]` | List of agents participating in the swarm. |
| `verbose` | `bool` | Enables or disables detailed logging of swarm operations. |
| `max_loops` | `int` | Limits the number of times the swarm cycles through all agents. |
| `callback` | `callable` | Callback function executed after each loop. |
| `index` | `int` | Maintains the current position in the agent list to ensure round-robin execution. |
| `max_retries` | `int` | Maximum number of retries for agent execution. |
| `output_type` | `OutputType` | Type of output format (e.g., "final", "all", "json"). |
| `conversation` | `Conversation` | Conversation history for the swarm. |
## Methods
@ -24,30 +47,92 @@ In swarms, `RoundRobinSwarm` utilizes the round-robin scheduling to manage tasks
Initializes the swarm with the provided list of agents, verbosity setting, and operational parameters.
**Parameters:**
| Parameter | Type | Description |
|-------------|---------------------|-----------------------------------------------------|
| agents | List[Agent], optional | List of agents in the swarm. |
| verbose | bool | Boolean flag for detailed logging. |
| max_loops | int | Maximum number of execution cycles. |
| callback | Callable, optional | Function called after each loop. |
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `name` | `str` | `"RoundRobinSwarm"` | Name of the swarm. |
| `description` | `str` | `"A swarm implementation..."` | Description of the swarm's purpose. |
| `agents` | `List[Agent]` | **Required** | List of agents in the swarm. |
| `verbose` | `bool` | `False` | Boolean flag for detailed logging. |
| `max_loops` | `int` | `1` | Maximum number of execution cycles. |
| `callback` | `callable` | `None` | Function called after each loop with `(loop_index, result)` arguments. |
| `max_retries` | `int` | `3` | Maximum number of retries for agent execution. |
| `output_type` | `OutputType` | `"final"` | Type of output format. |
**Raises:**
- `ValueError`: If no agents are provided during initialization.
---
### `run`
Executes a specified task across all agents in a round-robin manner, cycling through each agent repeatedly for the number of specified loops.
Executes a specified task across all agents in a randomized round-robin manner, cycling through each agent repeatedly for the specified number of loops.
```python
def run(self, task: str, *args, **kwargs) -> Union[str, dict, list]
```
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `task` | `str` | The task string to be executed by the agents. |
| `*args` | `Any` | Variable length argument list passed to each agent. |
| `**kwargs` | `Any` | Arbitrary keyword arguments passed to each agent. |
**Returns:**
| Type | Description |
|------|-------------|
| `Union[str, dict, list]` | The result of the task execution in the format specified by `output_type`. |
**Raises:**
- `ValueError`: If no agents are configured for the swarm.
- `Exception`: If an exception occurs during task execution.
**Conceptual Behavior:**
| Step | Description |
|------|-------------|
| 1 | Distribute the task sequentially among all agents starting from the current index. |
| 2 | Each agent processes the task and potentially modifies it or produces new output. |
| 3 | After an agent completes its part of the task, the index moves to the next agent. |
| 4 | This cycle continues until the specified maximum number of loops is completed. |
| 5 | Optionally, a callback function can be invoked after each loop to handle intermediate results or perform additional actions. |
| 1 | Add the initial task to the conversation history. |
| 2 | Shuffle agents randomly for varied interaction patterns. |
| 3 | Each agent receives the full conversation context and processes the task. |
| 4 | Agents build upon insights from previous agents in the conversation. |
| 5 | After an agent completes its part, its response is added to the conversation. |
| 6 | This cycle continues until the specified maximum number of loops is completed. |
| 7 | Optionally, a callback function is invoked after each loop. |
| 8 | Returns the formatted conversation history based on `output_type`. |
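These steps correspond roughly to the following simplified loop. This is a sketch of the documented behavior, not the actual implementation; it assumes the swarm exposes the documented attributes (`agents`, `max_loops`, `callback`, `conversation`) and that the `Conversation` object provides `add` and `get_str`.

```python
import random


def run_sketch(swarm, task):
    """Simplified sketch of RoundRobinSwarm.run following the steps above."""
    # Step 1: seed the conversation with the initial task
    swarm.conversation.add(role="User", content=task)

    response = None
    for loop_index in range(swarm.max_loops):
        # Step 2: shuffle agents each loop for varied interaction patterns
        agents = list(swarm.agents)
        random.shuffle(agents)

        for agent in agents:
            # Steps 3-5: each agent sees the full history, responds,
            # and its response is appended to the conversation
            context = swarm.conversation.get_str()
            response = agent.run(context)
            swarm.conversation.add(role=agent.agent_name, content=response)

        # Step 7: optional callback with (loop_index, result)
        if swarm.callback is not None:
            swarm.callback(loop_index, response)

    # Step 8: the real method formats the history according to `output_type`
    return swarm.conversation
```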
---
### `run_batch`
Executes multiple tasks sequentially through the round-robin swarm. Each task is processed independently through the full round-robin execution cycle.
```python
def run_batch(self, tasks: List[str]) -> List[Union[str, dict, list]]
```
**Parameters:**
| Parameter | Type | Description |
|-----------|------|-------------|
| `tasks` | `List[str]` | A list of task strings to be executed. |
**Returns:**
| Type | Description |
|------|-------------|
| `List[Union[str, dict, list]]` | A list of results, one for each task, in the format specified by `output_type`. |
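Conceptually, `run_batch` behaves like calling `run` once per task (a sketch, not the actual implementation):

```python
def run_batch_sketch(swarm, tasks):
    # Each task goes through the full round-robin cycle independently.
    return [swarm.run(task) for task in tasks]
```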
## Examples
In this example, `RoundRobinSwarm` is used to distribute network requests evenly among a group of servers. This is common in scenarios where load balancing is crucial for maintaining system responsiveness and scalability.
### Basic Usage with `run`
In this example, `RoundRobinSwarm` is used to distribute a sales task among a group of specialized agents. Each agent contributes their unique perspective to the collaborative output.
```python
from swarms import Agent, RoundRobinSwarm
@ -78,15 +163,89 @@ sales_agent3 = Agent(
)
# Initialize the swarm with sales agents
sales_swarm = RoundRobinSwarm(agents=[sales_agent1, sales_agent2, sales_agent3], verbose=True)
sales_swarm = RoundRobinSwarm(
name="SalesTeamSwarm",
description="A collaborative sales team for generating comprehensive sales content",
agents=[sales_agent1, sales_agent2, sales_agent3],
verbose=True,
max_loops=2,
output_type="final",
)
# Define a sales task
task = "Generate a sales email for an accountant firm executive to sell swarms of agents to automate their accounting processes."
out = sales_swarm.run(task)
print(out)
# Run the task
result = sales_swarm.run(task)
print(result)
```
### Batch Processing with `run_batch`
Use `run_batch` when you need to process multiple independent tasks through the swarm. Each task is executed separately with full round-robin collaboration.
```python
from swarms import Agent, RoundRobinSwarm
# Define research agents
researcher1 = Agent(
agent_name="Technical Researcher",
system_prompt="You are a technical researcher who analyzes topics from a technical perspective.",
model_name="gpt-4.1",
max_loops=1,
)
researcher2 = Agent(
agent_name="Market Researcher",
system_prompt="You are a market researcher who analyzes topics from a business and market perspective.",
model_name="gpt-4.1",
max_loops=1,
)
# Initialize the swarm
research_swarm = RoundRobinSwarm(
name="ResearchSwarm",
agents=[researcher1, researcher2],
verbose=True,
max_loops=1,
output_type="json",
)
# Define multiple research tasks
tasks = [
"Analyze the current state of AI in healthcare.",
"Research the impact of automation on manufacturing.",
"Evaluate emerging trends in renewable energy.",
]
# Run all tasks and get results
results = research_swarm.run_batch(tasks)
# Process each result
for i, result in enumerate(results):
print(f"Task {i + 1} Result:")
print(result)
print("-" * 50)
```
### Using Callbacks
You can use callbacks to monitor or process intermediate results after each loop:
```python
def my_callback(loop_index: int, result: str):
"""Called after each loop completes."""
print(f"Loop {loop_index + 1} completed")
print(f"Latest result: {result[:100]}...") # Print first 100 chars
swarm = RoundRobinSwarm(
agents=[agent1, agent2, agent3],
max_loops=3,
callback=my_callback,
)
result = swarm.run("Analyze this complex topic from multiple perspectives.")
```
## Conclusion

@ -42,6 +42,7 @@ Main class for routing tasks to different swarm types.
| `verbose` | bool | Flag to enable/disable verbose logging (default: False) |
| `worker_tools` | List[Callable] | List of tools available to worker agents |
| `aggregation_strategy` | str | Aggregation strategy for HeavySwarm (default: "synthesis") |
| `chairman_model` | str | Model name for the Chairman in LLMCouncil (default: "gpt-5.1") |
### Methods
@ -123,6 +124,8 @@ The `SwarmRouter` supports many various multi-agent architectures for various ap
| `InteractiveGroupChat` | Interactive group chat with user participation |
| `HeavySwarm` | Heavy swarm architecture with question and worker agents |
| `BatchedGridWorkflow` | Batched grid workflow for parallel task processing |
| `LLMCouncil` | Council of specialized LLM agents with peer review and synthesis |
| `DebateWithJudge` | Debate architecture with Pro/Con agents and a Judge for self-refinement |
| `auto` | Automatically selects best swarm type via embedding search |
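For the `auto` entry above, a minimal sketch; this assumes `auto` accepts the same constructor parameters as the named swarm types and still takes an `agents` list for whichever architecture it selects.

```python
from swarms import Agent, SwarmRouter

# Hypothetical agents; supply any agents appropriate to your task.
analyst = Agent(agent_name="Analyst", model_name="gpt-4.1", max_loops=1)
writer = Agent(agent_name="Writer", model_name="gpt-4.1", max_loops=1)

auto_router = SwarmRouter(
    name="AutoRouter",
    description="Let the router select the best architecture via embedding search",
    swarm_type="auto",  # documented above: selects the swarm type automatically
    agents=[analyst, writer],
)

result = auto_router.run("Summarize the outlook for grid-scale battery storage.")
```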
## Basic Usage
@ -456,6 +459,88 @@ result = batched_grid_router.run(tasks=["Task 1", "Task 2", "Task 3"])
BatchedGridWorkflow is designed for efficiently processing multiple tasks in parallel batches, optimizing resource utilization.
### LLMCouncil
Use Case: Collaborative analysis with multiple specialized LLM agents that evaluate each other's responses and synthesize a final answer.
```python
llm_council_router = SwarmRouter(
name="LLMCouncil",
description="Collaborative council of LLM agents with peer review",
swarm_type="LLMCouncil",
chairman_model="gpt-5.1", # Model for the Chairman agent
output_type="dict", # Output format: "dict", "list", "string", "json", "yaml", "final", etc.
verbose=True # Show progress and intermediate results
)
result = llm_council_router.run("What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?")
```
LLMCouncil creates a council of specialized agents (GPT-5.1, Gemini, Claude, Grok by default) that:
1. Each independently responds to the query
2. Evaluates and ranks each other's anonymized responses
3. A Chairman synthesizes all responses and evaluations into a final comprehensive answer
The council automatically tracks all messages in a conversation object and supports flexible output formats. Note: LLMCouncil uses default council members and doesn't require the `agents` parameter.
### DebateWithJudge
Use Case: Structured debate architecture where two agents (Pro and Con) present opposing arguments, and a Judge agent evaluates and synthesizes the arguments over multiple rounds to progressively refine the answer.
```python
from swarms import Agent, SwarmRouter
# Create three specialized agents for the debate
pro_agent = Agent(
agent_name="Pro-Agent",
system_prompt="You are an expert at presenting strong, well-reasoned arguments in favor of positions. "
"You provide compelling evidence and logical reasoning to support your stance.",
model_name="gpt-4.1",
max_loops=1,
)
con_agent = Agent(
agent_name="Con-Agent",
system_prompt="You are an expert at presenting strong, well-reasoned counter-arguments. "
"You identify weaknesses in opposing arguments and present compelling evidence against positions.",
model_name="gpt-4.1",
max_loops=1,
)
judge_agent = Agent(
agent_name="Judge-Agent",
system_prompt="You are an impartial judge evaluating debates. You carefully assess both arguments, "
"identify strengths and weaknesses, and provide refined synthesis that incorporates "
"the best elements from both sides.",
model_name="gpt-4.1",
max_loops=1,
)
# Initialize the SwarmRouter with DebateWithJudge
debate_router = SwarmRouter(
name="DebateWithJudge",
description="Structured debate with Pro/Con agents and Judge for self-refinement",
swarm_type="DebateWithJudge",
agents=[pro_agent, con_agent, judge_agent], # Must be exactly 3 agents
max_loops=3, # Number of debate rounds
output_type="str-all-except-first", # Output format
verbose=True # Show progress and intermediate results
)
# Run a debate on a topic
result = debate_router.run(
"Should artificial intelligence development be regulated by governments?"
)
```
DebateWithJudge implements a multi-round debate system where:
1. **Pro Agent** presents arguments in favor of the topic
2. **Con Agent** presents counter-arguments against the topic
3. **Judge Agent** evaluates both arguments and provides synthesis
4. The process repeats for N rounds (specified by `max_loops`), with each round refining the discussion based on the judge's feedback
The architecture progressively improves the answer through iterative refinement, making it ideal for complex topics requiring thorough analysis from multiple perspectives. Note: DebateWithJudge requires exactly 3 agents (pro_agent, con_agent, judge_agent) in that order.
## Advanced Features
### Processing Documents

@ -8,15 +8,13 @@ agent = Agent(
dynamic_temperature_enabled=True,
max_loops=1,
dynamic_context_window=True,
streaming_on=False,
top_p=None,
# stream=True,
streaming_on=True,
interactive=False,
)
out = agent.run(
task="What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?",
n=1,
)
for token in out:
print(token, end="", flush=True)
print(out)

@ -6,60 +6,90 @@ This directory contains comprehensive examples demonstrating various capabilitie
### Multi-Agent Systems
- **[multi_agent/](multi_agent/)** - Advanced multi-agent patterns including agent rearrangement, auto swarm builder (ASB), batched workflows, board of directors, caching, concurrent processing, councils, debates, elections, forest swarms, graph workflows, group chats, heavy swarms, hierarchical swarms, majority voting, orchestration examples, social algorithms, simulations, spreadsheet examples, and swarm routing.
- **[multi_agent/](multi_agent/)** - Advanced multi-agent patterns including agent rearrangement, auto swarm builder (ASB), batched workflows, board of directors, caching, concurrent processing, councils, debates, elections, forest swarms, graph workflows, group chats, heavy swarms, hierarchical swarms, LLM council, majority voting, orchestration examples, paper implementations, sequential workflows, social algorithms, simulations, spreadsheet examples, swarm routing, and utilities.
- [README.md](multi_agent/README.md) - Complete multi-agent examples documentation
- [duo_agent.py](multi_agent/duo_agent.py) - Two-agent collaboration example
- [llm_council_examples/](multi_agent/llm_council_examples/) - LLM Council collaboration patterns
- [caching_examples/](multi_agent/caching_examples/) - Agent caching examples
### Single Agent Systems
- **[single_agent/](single_agent/)** - Single agent implementations including demos, external agent integrations, LLM integrations (Azure, Claude, DeepSeek, Mistral, OpenAI, Qwen), onboarding, RAG, reasoning agents, tools integration, utils, and vision capabilities.
- **[single_agent/](single_agent/)** - Single agent implementations including demos, external agent integrations, LLM integrations (Azure, Claude, DeepSeek, Mistral, OpenAI, Qwen), onboarding, RAG, reasoning agents, tools integration, utils, vision capabilities, and MCP integration.
- [README.md](single_agent/README.md) - Complete single agent examples documentation
- [simple_agent.py](single_agent/simple_agent.py) - Basic single agent example
- [agent_mcp.py](single_agent/agent_mcp.py) - MCP integration example
- [rag/](single_agent/rag/) - Retrieval Augmented Generation (RAG) implementations with vector database integrations
### Tools & Integrations
- **[tools/](tools/)** - Tool integration examples including agent-as-tools, base tool implementations, browser automation, Claude integration, Exa search, Firecrawl, multi-tool usage, and Stagehand integration.
- [README.md](tools/README.md) - Complete tools examples documentation
- [agent_as_tools.py](tools/agent_as_tools.py) - Using agents as tools
- [browser_use_as_tool.py](tools/browser_use_as_tool.py) - Browser automation tool
- [exa_search_agent.py](tools/exa_search_agent.py) - Exa search integration
- [firecrawl_agents_example.py](tools/firecrawl_agents_example.py) - Firecrawl integration
- [base_tool_examples/](tools/base_tool_examples/) - Base tool implementation examples
- [multii_tool_use/](tools/multii_tool_use/) - Multi-tool usage examples
- [stagehand/](tools/stagehand/) - Stagehand UI automation
### Model Integrations
- **[models/](models/)** - Various model integrations including Cerebras, GPT-5, GPT-OSS, Llama 4, Lumo, and Ollama implementations with concurrent processing examples and provider-specific configurations.
- **[models/](models/)** - Various model integrations including Cerebras, GPT-5, GPT-OSS, Llama 4, Lumo, O3, Ollama, and vLLM implementations with concurrent processing examples and provider-specific configurations.
- [README.md](models/README.md) - Model integration documentation
- [simple_example_ollama.py](models/simple_example_ollama.py) - Ollama integration example
- [cerebas_example.py](models/cerebas_example.py) - Cerebras model example
- [lumo_example.py](models/lumo_example.py) - Lumo model example
- [example_o3.py](models/example_o3.py) - O3 model example
- [gpt_5/](models/gpt_5/) - GPT-5 model examples
- [gpt_oss_examples/](models/gpt_oss_examples/) - GPT-OSS examples
- [llama4_examples/](models/llama4_examples/) - Llama 4 examples
- [main_providers/](models/main_providers/) - Main provider configurations
- [vllm/](models/vllm/) - vLLM integration examples
### API & Protocols
- **[swarms_api_examples/](swarms_api_examples/)** - Swarms API usage examples including agent overview, batch processing, client integration, team examples, analysis, and rate limiting.
- [README.md](swarms_api_examples/README.md) - API examples documentation
- [client_example.py](swarms_api_examples/client_example.py) - API client example
- [batch_example.py](swarms_api_examples/batch_example.py) - Batch processing example
- **[swarms_api/](swarms_api/)** - Swarms API usage examples including agent overview, batch processing, client integration, team examples, analysis, and rate limiting.
- [README.md](swarms_api/README.md) - API examples documentation
- [client_example.py](swarms_api/client_example.py) - API client example
- [batch_example.py](swarms_api/batch_example.py) - Batch processing example
- [hospital_team.py](swarms_api/hospital_team.py) - Hospital management team simulation
- [legal_team.py](swarms_api/legal_team.py) - Legal team collaboration example
- [icd_ten_analysis.py](swarms_api/icd_ten_analysis.py) - ICD-10 medical code analysis
- [rate_limits.py](swarms_api/rate_limits.py) - Rate limiting and throttling examples
- **[mcp/](mcp/)** - Model Context Protocol (MCP) integration examples including agent implementations, multi-connection setups, server configurations, and utility functions.
- **[mcp/](mcp/)** - Model Context Protocol (MCP) integration examples including agent implementations, multi-connection setups, server configurations, utility functions, and multi-MCP guides.
- [README.md](mcp/README.md) - MCP examples documentation
- [multi_mcp_example.py](mcp/multi_mcp_example.py) - Multi-MCP connection example
- [agent_examples/](mcp/agent_examples/) - Agent-based MCP examples
- [servers/](mcp/servers/) - MCP server implementations
- [mcp_utils/](mcp/mcp_utils/) - MCP utility functions
- [multi_mcp_guide/](mcp/multi_mcp_guide/) - Multi-MCP setup guides
- **[aop_examples/](aop_examples/)** - Agents over Protocol (AOP) examples demonstrating MCP server setup, agent discovery, client interactions, queue-based task submission, and medical AOP implementations.
- **[aop_examples/](aop_examples/)** - Agents over Protocol (AOP) examples demonstrating MCP server setup, agent discovery, client interactions, queue-based task submission, medical AOP implementations, and utility functions.
- [README.md](aop_examples/README.md) - AOP examples documentation
- [server.py](aop_examples/server.py) - AOP server implementation
- [client/](aop_examples/client/) - AOP client examples and agent discovery
- [discovery/](aop_examples/discovery/) - Agent discovery examples
- [medical_aop/](aop_examples/medical_aop/) - Medical AOP implementations
- [utils/](aop_examples/utils/) - AOP utility functions
### Advanced Capabilities
- **[reasoning_agents/](reasoning_agents/)** - Advanced reasoning capabilities including agent judge evaluation systems, O3 model integration, and mixture of agents (MOA) sequential examples.
- **[reasoning_agents/](reasoning_agents/)** - Advanced reasoning capabilities including agent judge evaluation systems, O3 model integration, mixture of agents (MOA) sequential examples, and reasoning agent router examples.
- [README.md](reasoning_agents/README.md) - Reasoning agents documentation
- [example_o3.py](reasoning_agents/example_o3.py) - O3 model example
- [moa_seq_example.py](reasoning_agents/moa_seq_example.py) - MOA sequential example
- **[rag/](rag/)** - Retrieval Augmented Generation (RAG) implementations with vector database integrations including Qdrant examples.
- [README.md](rag/README.md) - RAG documentation
- [qdrant_rag_example.py](rag/qdrant_rag_example.py) - Qdrant RAG example
- [agent_judge_examples/](reasoning_agents/agent_judge_examples/) - Agent judge evaluation systems
- [reasoning_agent_router_examples/](reasoning_agents/reasoning_agent_router_examples/) - Reasoning agent router examples
### Guides & Tutorials
- **[guides/](guides/)** - Comprehensive guides and tutorials including generation length blog, geo guesser agent, graph workflow guide, hierarchical marketing team, nano banana Jarvis agent, smart database, web scraper agents, and workshop examples (840_update, 850_workshop).
- **[guides/](guides/)** - Comprehensive guides and tutorials including demos, generation length blog, geo guesser agent, graph workflow guide, hackathon examples, hierarchical marketing team, nano banana Jarvis agent, smart database, web scraper agents, workshops, x402 examples, and workshop examples (840_update, 850_workshop).
- [README.md](guides/README.md) - Guides documentation
- [hiearchical_marketing_team.py](guides/hiearchical_marketing_team.py) - Hierarchical marketing team example
- [demos/](guides/demos/) - Various demonstration examples
- [hackathons/](guides/hackathons/) - Hackathon project examples
- [workshops/](guides/workshops/) - Workshop examples
- [x402_examples/](guides/x402_examples/) - X402 protocol examples
### Deployment
@ -72,6 +102,11 @@ This directory contains comprehensive examples demonstrating various capabilitie
- **[utils/](utils/)** - Utility functions and helper implementations including agent loader, communication examples, concurrent wrappers, miscellaneous utilities, and telemetry.
- [README.md](utils/README.md) - Utils documentation
- [agent_loader/](utils/agent_loader/) - Agent loading utilities
- [communication_examples/](utils/communication_examples/) - Agent communication patterns
- [concurrent_wrapper_examples.py](utils/concurrent_wrapper_examples.py) - Concurrent processing wrappers
- [misc/](utils/misc/) - Miscellaneous utility functions
- [telemetry/](utils/telemetry/) - Telemetry and monitoring utilities
### User Interface
@ -79,16 +114,26 @@ This directory contains comprehensive examples demonstrating various capabilitie
- [README.md](ui/README.md) - UI examples documentation
- [chat.py](ui/chat.py) - Chat interface example
### Command Line Interface
- **[cli/](cli/)** - CLI command examples demonstrating all available Swarms CLI features including setup, agent management, multi-agent architectures, and utilities.
- [README.md](cli/README.md) - CLI examples documentation
- [01_setup_check.sh](cli/01_setup_check.sh) - Environment setup verification
- [05_create_agent.sh](cli/05_create_agent.sh) - Create custom agents
- [08_llm_council.sh](cli/08_llm_council.sh) - LLM Council collaboration
- [09_heavy_swarm.sh](cli/09_heavy_swarm.sh) - HeavySwarm complex analysis
## Quick Start
1. **New to Swarms?** Start with [single_agent/simple_agent.py](single_agent/simple_agent.py) for basic concepts
2. **Want multi-agent workflows?** Check out [multi_agent/duo_agent.py](multi_agent/duo_agent.py)
3. **Need tool integration?** Explore [tools/agent_as_tools.py](tools/agent_as_tools.py)
4. **Interested in AOP?** Try [aop_examples/client/example_new_agent_tools.py](aop_examples/client/example_new_agent_tools.py) for agent discovery
5. **Want to see social algorithms?** Check out [multi_agent/social_algorithms_examples/](multi_agent/social_algorithms_examples/)
6. **Looking for guides?** Visit [guides/](guides/) for comprehensive tutorials
7. **Need RAG?** Try [rag/qdrant_rag_example.py](rag/qdrant_rag_example.py)
8. **Want reasoning agents?** Check out [reasoning_agents/example_o3.py](reasoning_agents/example_o3.py)
2. **Want to use the CLI?** Check out [cli/](cli/) for all CLI command examples
3. **Want multi-agent workflows?** Check out [multi_agent/duo_agent.py](multi_agent/duo_agent.py)
4. **Need tool integration?** Explore [tools/agent_as_tools.py](tools/agent_as_tools.py)
5. **Interested in AOP?** Try [aop_examples/client/example_new_agent_tools.py](aop_examples/client/example_new_agent_tools.py) for agent discovery
6. **Want to see social algorithms?** Check out [multi_agent/social_algorithms_examples/](multi_agent/social_algorithms_examples/)
7. **Looking for guides?** Visit [guides/](guides/) for comprehensive tutorials
8. **Need RAG?** Try [single_agent/rag/](single_agent/rag/) for RAG examples
9. **Want reasoning agents?** Check out [reasoning_agents/](reasoning_agents/) for reasoning agent examples
## Key Examples by Category
@ -105,7 +150,7 @@ This directory contains comprehensive examples demonstrating various capabilitie
- [Simple Agent](single_agent/simple_agent.py) - Basic agent setup
- [Reasoning Agents](single_agent/reasoning_agent_examples/) - Advanced reasoning patterns
- [Vision Agents](single_agent/vision/multimodal_example.py) - Vision and multimodal capabilities
- [RAG Agents](single_agent/rag/qdrant_rag_example.py) - Retrieval augmented generation
- [RAG Agents](single_agent/rag/) - Retrieval augmented generation
### Tool Integrations
@ -122,6 +167,14 @@ This directory contains comprehensive examples demonstrating various capabilitie
- [Azure](single_agent/llms/azure_agent.py) - Azure OpenAI
- [Ollama](models/simple_example_ollama.py) - Local Ollama models
### CLI Examples
- [Setup Check](cli/01_setup_check.sh) - Verify environment setup
- [Create Agent](cli/05_create_agent.sh) - Create custom agents via CLI
- [LLM Council](cli/08_llm_council.sh) - Run LLM Council collaboration
- [HeavySwarm](cli/09_heavy_swarm.sh) - Run HeavySwarm for complex tasks
- [All CLI Examples](cli/) - Complete CLI examples directory
## Documentation
Each subdirectory contains its own README.md file with detailed descriptions and links to all available examples. Click on any folder above to explore its specific examples and use cases.

@ -92,7 +92,13 @@ financial_agent = Agent(
)
# Basic usage - individual agent addition
deployer = AOP(server_name="MyAgentServer", verbose=True, port=5932, json_response=True, queue_enabled=False)
deployer = AOP(
server_name="MyAgentServer",
verbose=True,
port=5932,
json_response=True,
queue_enabled=False,
)
agents = [
research_agent,

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - Setup Check Example
# Verify your Swarms environment setup
swarms setup-check

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - Onboarding Example
# Start the interactive onboarding process
swarms onboarding

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - Get API Key Example
# Open API key portal in browser
swarms get-api-key

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - Check Login Example
# Verify authentication status
swarms check-login

@ -0,0 +1,12 @@
#!/bin/bash
# Swarms CLI - Create Agent Example
# Create and run a custom agent
swarms agent \
--name "Research Agent" \
--description "AI research specialist" \
--system-prompt "You are an expert research agent." \
--task "Analyze current trends in renewable energy" \
--model-name "gpt-4o-mini"

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - Run Agents from YAML Example
# Execute agents from YAML configuration file
swarms run-agents --yaml-file agents.yaml

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - Load Markdown Agents Example
# Load agents from markdown files
swarms load-markdown --markdown-path ./agents/

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - LLM Council Example
# Run LLM Council for collaborative problem-solving
swarms llm-council --task "What are the best energy ETFs to invest in right now?"

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - HeavySwarm Example
# Run HeavySwarm for complex task analysis
swarms heavy-swarm --task "Analyze current market trends for renewable energy investments"

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - Autoswarm Example
# Auto-generate swarm configuration
swarms autoswarm --task "Analyze quarterly sales data" --model "gpt-4"

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - Features Example
# Display all available CLI features
swarms features

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - Help Example
# Display comprehensive help documentation
swarms help

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - Auto Upgrade Example
# Update Swarms to the latest version
swarms auto-upgrade

@ -0,0 +1,7 @@
#!/bin/bash
# Swarms CLI - Book Call Example
# Schedule a strategy session
swarms book-call

@ -0,0 +1,197 @@
# Swarms CLI Examples
This directory contains shell script examples demonstrating all available Swarms CLI commands and features. Each script is simple, focused, and demonstrates a single CLI command.
## Quick Start
All scripts are executable. Run them directly:
```bash
chmod +x *.sh
./01_setup_check.sh
```
Or execute with bash:
```bash
bash 01_setup_check.sh
```
## Available Examples
### Setup & Configuration
- **[01_setup_check.sh](examples/cli/01_setup_check.sh)** - Environment setup verification
```bash
swarms setup-check
```
- **[02_onboarding.sh](examples/cli/02_onboarding.sh)** - Interactive onboarding process
```bash
swarms onboarding
```
- **[03_get_api_key.sh](examples/cli/03_get_api_key.sh)** - Retrieve API keys
```bash
swarms get-api-key
```
- **[04_check_login.sh](examples/cli/04_check_login.sh)** - Verify authentication
```bash
swarms check-login
```
### Agent Management
- **[05_create_agent.sh](examples/cli/05_create_agent.sh)** - Create and run custom agents
```bash
swarms agent --name "Agent" --description "Description" --system-prompt "Prompt" --task "Task"
```
- **[06_run_agents_yaml.sh](examples/cli/06_run_agents_yaml.sh)** - Execute agents from YAML
```bash
swarms run-agents --yaml-file agents.yaml
```
- **[07_load_markdown.sh](examples/cli/07_load_markdown.sh)** - Load agents from markdown files
```bash
swarms load-markdown --markdown-path ./agents/
```
### Multi-Agent Architectures
- **[08_llm_council.sh](examples/cli/08_llm_council.sh)** - Run LLM Council collaboration
```bash
swarms llm-council --task "Your question here"
```
- **[09_heavy_swarm.sh](examples/cli/09_heavy_swarm.sh)** - Run HeavySwarm for complex tasks
```bash
swarms heavy-swarm --task "Your complex task here"
```
- **[10_autoswarm.sh](examples/cli/10_autoswarm.sh)** - Auto-generate swarm configurations
```bash
swarms autoswarm --task "Task description" --model "gpt-4"
```
### Utilities
- **[11_features.sh](examples/cli/11_features.sh)** - Display all available features
```bash
swarms features
```
- **[12_help.sh](examples/cli/12_help.sh)** - Display help documentation
```bash
swarms help
```
- **[13_auto_upgrade.sh](examples/cli/13_auto_upgrade.sh)** - Update Swarms package
```bash
swarms auto-upgrade
```
- **[14_book_call.sh](examples/cli/14_book_call.sh)** - Schedule strategy session
```bash
swarms book-call
```
### Run All Examples
- **[run_all_examples.sh](examples/cli/run_all_examples.sh)** - Run multiple examples in sequence
```bash
bash run_all_examples.sh
```
## Script Structure
Each script follows a simple pattern:
1. **Shebang** - `#!/bin/bash`
2. **Comment** - Brief description of what the script does
3. **Single Command** - One CLI command execution
Example:
```bash
#!/bin/bash
# Swarms CLI - Setup Check Example
# Verify your Swarms environment setup
swarms setup-check
```
## Usage Patterns
### Basic Command Execution
```bash
swarms <command> [options]
```
### With Verbose Output
```bash
swarms <command> --verbose
```
### Environment Variables
Set API keys before running scripts that require them:
```bash
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
export GOOGLE_API_KEY="your-key-here"
```
## Examples by Category
### Setup & Diagnostics
- Environment setup verification
- Onboarding workflow
- API key management
- Authentication verification
### Single Agent Operations
- Custom agent creation
- Agent configuration from YAML
- Agent loading from markdown
### Multi-Agent Operations
- LLM Council for collaborative problem-solving
- HeavySwarm for complex analysis
- Auto-generated swarm configurations
### Information & Help
- Feature discovery
- Help documentation
- Package management
## File Paths
All scripts are located in `examples/cli/`:
- `examples/cli/01_setup_check.sh`
- `examples/cli/02_onboarding.sh`
- `examples/cli/03_get_api_key.sh`
- `examples/cli/04_check_login.sh`
- `examples/cli/05_create_agent.sh`
- `examples/cli/06_run_agents_yaml.sh`
- `examples/cli/07_load_markdown.sh`
- `examples/cli/08_llm_council.sh`
- `examples/cli/09_heavy_swarm.sh`
- `examples/cli/10_autoswarm.sh`
- `examples/cli/11_features.sh`
- `examples/cli/12_help.sh`
- `examples/cli/13_auto_upgrade.sh`
- `examples/cli/14_book_call.sh`
- `examples/cli/run_all_examples.sh`
## Related Documentation
- [CLI Reference](../../docs/swarms/cli/cli_reference.md) - Complete CLI documentation
- [Main Examples README](../README.md) - Other Swarms examples
- [Swarms Documentation](../../docs/) - Full Swarms documentation

@ -0,0 +1,11 @@
#!/bin/bash
# Swarms CLI - Run All Examples
# Run all CLI examples in sequence
chmod +x *.sh
swarms setup-check
swarms features
swarms help

@ -15,7 +15,7 @@ The `DebateWithJudge` architecture implements a debate system with self-refineme
- **Agent A (Pro)** and **Agent B (Con)** present opposing arguments
- Both arguments are evaluated by a **Judge/Critic Agent**
- The Judge provides a winner or synthesis → refined answer
- The process repeats for N rounds to progressively improve the answer
- The process repeats for N loops to progressively improve the answer
**Architecture Flow:**
```
@ -28,10 +28,48 @@ Agent A (Pro) ↔ Agent B (Con)
Winner or synthesis → refined answer
```
**Example Usage:**
**Initialization Options:**
The `DebateWithJudge` class supports three ways to configure agents:
1. **Preset Agents** (simplest): Use built-in optimized agents
2. **Agent List**: Provide a list of 3 agents `[pro, con, judge]`
3. **Individual Parameters**: Provide each agent separately
**Quick Start with Preset Agents:**
```python
from swarms import DebateWithJudge
# Create debate system with built-in agents (simplest approach)
debate = DebateWithJudge(
preset_agents=True,
max_loops=3,
model_name="gpt-4o-mini"
)
# Run debate
result = debate.run("Should AI be regulated?")
```
**Using Agent List:**
```python
from swarms import Agent
from swarms.structs.debate_with_judge import DebateWithJudge
from swarms import Agent, DebateWithJudge
# Create your agents
agents = [pro_agent, con_agent, judge_agent]
# Create debate system with agent list
debate = DebateWithJudge(
agents=agents,
max_loops=3
)
result = debate.run("Should AI be regulated?")
```
**Using Individual Agent Parameters:**
```python
from swarms import Agent, DebateWithJudge
# Create Pro, Con, and Judge agents
pro_agent = Agent(agent_name="Pro-Agent", ...)
@ -43,12 +81,19 @@ debate = DebateWithJudge(
pro_agent=pro_agent,
con_agent=con_agent,
judge_agent=judge_agent,
max_rounds=3
max_loops=3
)
# Run debate
result = debate.run("Should AI be regulated?")
```
See [debate_with_judge_example.py](./debate_with_judge_example.py) for a complete example.
## Example Files
| File | Description |
|------|-------------|
| [debate_with_judge_example.py](./debate_with_judge_example.py) | Complete example showing all initialization methods |
| [policy_debate_example.py](./policy_debate_example.py) | Policy debate on AI regulation |
| [technical_architecture_debate_example.py](./technical_architecture_debate_example.py) | Technical architecture debate with batch processing |
| [business_strategy_debate_example.py](./business_strategy_debate_example.py) | Business strategy debate with conversation history |

@ -52,12 +52,12 @@ judge_agent = Agent(
max_loops=1,
)
# Create the debate system with extended rounds for complex strategy discussions
# Create the debate system with extended loops for complex strategy discussions
strategy_debate = DebateWithJudge(
pro_agent=pro_agent,
con_agent=con_agent,
judge_agent=judge_agent,
max_rounds=4, # More rounds for complex strategic discussions
max_loops=4, # More loops for complex strategic discussions
output_type="dict", # Use dict format for structured analysis
verbose=True,
)

@ -1,61 +1,16 @@
from swarms import Agent, DebateWithJudge
from swarms import DebateWithJudge
# Create the Pro agent (arguing in favor)
pro_agent = Agent(
agent_name="Pro-Agent",
system_prompt=(
"You are a skilled debater who argues in favor of positions. "
"You present well-reasoned arguments with evidence, examples, "
"and logical reasoning. You are persuasive and articulate."
),
model_name="gpt-4o-mini",
max_loops=1,
)
# Create the Con agent (arguing against)
con_agent = Agent(
agent_name="Con-Agent",
system_prompt=(
"You are a skilled debater who argues against positions. "
"You present strong counter-arguments with evidence, examples, "
"and logical reasoning. You identify weaknesses in opposing "
"arguments and provide compelling alternatives."
),
model_name="gpt-4o-mini",
max_loops=1,
)
# Create the Judge agent (evaluates and synthesizes)
judge_agent = Agent(
agent_name="Judge-Agent",
system_prompt=(
"You are an impartial judge who evaluates debates. "
"You carefully analyze arguments from both sides, identify "
"strengths and weaknesses, and provide balanced synthesis. "
"You may declare a winner or provide a refined answer that "
"incorporates the best elements from both arguments."
),
model_name="gpt-4o-mini",
max_loops=1,
)
# Create the DebateWithJudge system
debate_system = DebateWithJudge(
pro_agent=pro_agent,
con_agent=con_agent,
judge_agent=judge_agent,
max_rounds=3, # Run 3 rounds of debate and refinement
output_type="str-all-except-first", # Return as formatted string
verbose=True, # Enable verbose logging
preset_agents=True,
max_loops=3,
model_name="gpt-4o-mini",
output_type="str-all-except-first",
verbose=True,
)
# Define the debate topic
topic = (
"Should artificial intelligence be regulated by governments? "
"Discuss the balance between innovation and safety."
)
# Run the debate
result = debate_system.run(task=topic)
print(result)

@ -59,7 +59,7 @@ debate_system = DebateWithJudge(
pro_agent=pro_agent,
con_agent=con_agent,
judge_agent=judge_agent,
max_rounds=3,
max_loops=3,
output_type="str-all-except-first",
verbose=True,
)

@ -49,7 +49,7 @@ architecture_debate = DebateWithJudge(
pro_agent=pro_agent,
con_agent=con_agent,
judge_agent=judge_agent,
max_rounds=2, # Fewer rounds for more focused technical debates
max_loops=2, # Fewer loops for more focused technical debates
output_type="str-all-except-first",
verbose=True,
)

@ -1,51 +1,43 @@
#!/usr/bin/env python3
"""
Basic Graph Workflow Example
A minimal example showing how to use GraphWorkflow with backend selection.
"""
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent
agent_one = Agent(agent_name="research_agent", model="gpt-4o-mini")
agent_one = Agent(
agent_name="research_agent",
model_name="gpt-4o-mini",
name="Research Agent",
agent_description="Agent responsible for gathering and summarizing research information.",
)
agent_two = Agent(
agent_name="research_agent_two", model="gpt-4o-mini"
agent_name="research_agent_two",
model_name="gpt-4o-mini",
name="Analysis Agent",
agent_description="Agent that analyzes the research data provided and processes insights.",
)
agent_three = Agent(
agent_name="research_agent_three", model="gpt-4o-mini"
agent_name="research_agent_three",
model_name="gpt-4o-mini",
agent_description="Agent tasked with structuring analysis into a final report or output.",
)
def main():
"""
Run a basic graph workflow example without print statements.
"""
# Create agents
# Create workflow with backend selection
workflow = GraphWorkflow(
name="Basic Example",
verbose=True,
)
# Add agents to workflow
workflow.add_node(agent_one)
workflow.add_node(agent_two)
workflow.add_node(agent_three)
workflow.add_nodes([agent_one, agent_two, agent_three])
# Create simple chain using the actual agent names
workflow.add_edge("research_agent", "research_agent_two")
workflow.add_edge("research_agent_two", "research_agent_three")
workflow.visualize()
# Compile the workflow
workflow.compile()
# Run the workflow
task = "Complete a simple task"
results = workflow.run(task)
return results
if __name__ == "__main__":
main()
print(results)

@ -0,0 +1,46 @@
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent
research_agent = Agent(
agent_name="Research-Analyst",
agent_description="Specialized in comprehensive research and data gathering",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
analysis_agent = Agent(
agent_name="Data-Analyst",
agent_description="Expert in data analysis and pattern recognition",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
strategy_agent = Agent(
agent_name="Strategy-Consultant",
agent_description="Specialized in strategic planning and recommendations",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
workflow = GraphWorkflow(
name="Rustworkx-Basic-Workflow",
description="Basic workflow using rustworkx backend for faster graph operations",
backend="rustworkx",
verbose=False,
)
workflow.add_node(research_agent)
workflow.add_node(analysis_agent)
workflow.add_node(strategy_agent)
workflow.add_edge(research_agent, analysis_agent)
workflow.add_edge(analysis_agent, strategy_agent)
task = "Conduct a research analysis on water stocks and ETFs"
results = workflow.run(task=task)
for agent_name, output in results.items():
print(f"{agent_name}: {output}")

@ -0,0 +1,56 @@
import time
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent
agents = [
Agent(
agent_name=f"Agent-{i}",
agent_description=f"Agent number {i}",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
for i in range(5)
]
nx_workflow = GraphWorkflow(
name="NetworkX-Workflow",
backend="networkx",
verbose=False,
)
for agent in agents:
nx_workflow.add_node(agent)
for i in range(len(agents) - 1):
nx_workflow.add_edge(agents[i], agents[i + 1])
nx_start = time.time()
nx_workflow.compile()
nx_compile_time = time.time() - nx_start
rx_workflow = GraphWorkflow(
name="Rustworkx-Workflow",
backend="rustworkx",
verbose=False,
)
for agent in agents:
rx_workflow.add_node(agent)
for i in range(len(agents) - 1):
rx_workflow.add_edge(agents[i], agents[i + 1])
rx_start = time.time()
rx_workflow.compile()
rx_compile_time = time.time() - rx_start
speedup = (
nx_compile_time / rx_compile_time if rx_compile_time > 0 else 0
)
print(f"NetworkX compile time: {nx_compile_time:.4f}s")
print(f"Rustworkx compile time: {rx_compile_time:.4f}s")
print(f"Speedup: {speedup:.2f}x")
print(
f"Identical layers: {nx_workflow._sorted_layers == rx_workflow._sorted_layers}"
)

@ -0,0 +1,73 @@
from swarms import Agent, GraphWorkflow
coordinator = Agent(
agent_name="Coordinator",
agent_description="Coordinates and distributes tasks",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
tech_analyst = Agent(
agent_name="Tech-Analyst",
agent_description="Technical analysis specialist",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
fundamental_analyst = Agent(
agent_name="Fundamental-Analyst",
agent_description="Fundamental analysis specialist",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
sentiment_analyst = Agent(
agent_name="Sentiment-Analyst",
agent_description="Sentiment analysis specialist",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
synthesis_agent = Agent(
agent_name="Synthesis-Agent",
agent_description="Synthesizes multiple analyses into final report",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
workflow = GraphWorkflow(
name="Fan-Out-Fan-In-Workflow",
description="Demonstrates parallel processing patterns with rustworkx",
backend="rustworkx",
verbose=False,
)
workflow.add_node(coordinator)
workflow.add_node(tech_analyst)
workflow.add_node(fundamental_analyst)
workflow.add_node(sentiment_analyst)
workflow.add_node(synthesis_agent)
workflow.add_edges_from_source(
coordinator,
[tech_analyst, fundamental_analyst, sentiment_analyst],
)
workflow.add_edges_to_target(
[tech_analyst, fundamental_analyst, sentiment_analyst],
synthesis_agent,
)
task = "Analyze Tesla stock from technical, fundamental, and sentiment perspectives"
results = workflow.run(task=task)
for agent_name, output in results.items():
print(f"{agent_name}: {output}")
workflow.visualize(view=True)

@ -0,0 +1,101 @@
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent
data_collector_1 = Agent(
agent_name="Data-Collector-1",
agent_description="Collects market data",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
data_collector_2 = Agent(
agent_name="Data-Collector-2",
agent_description="Collects financial data",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
technical_analyst = Agent(
agent_name="Technical-Analyst",
agent_description="Performs technical analysis",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
fundamental_analyst = Agent(
agent_name="Fundamental-Analyst",
agent_description="Performs fundamental analysis",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
risk_analyst = Agent(
agent_name="Risk-Analyst",
agent_description="Performs risk analysis",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
strategy_consultant = Agent(
agent_name="Strategy-Consultant",
agent_description="Develops strategic recommendations",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
report_writer = Agent(
agent_name="Report-Writer",
agent_description="Writes comprehensive reports",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
workflow = GraphWorkflow(
name="Complex-Multi-Layer-Workflow",
description="Complex workflow with multiple layers and parallel processing",
backend="rustworkx",
verbose=False,
)
all_agents = [
data_collector_1,
data_collector_2,
technical_analyst,
fundamental_analyst,
risk_analyst,
strategy_consultant,
report_writer,
]
for agent in all_agents:
workflow.add_node(agent)
workflow.add_parallel_chain(
[data_collector_1, data_collector_2],
[technical_analyst, fundamental_analyst, risk_analyst],
)
workflow.add_edges_to_target(
[technical_analyst, fundamental_analyst, risk_analyst],
strategy_consultant,
)
workflow.add_edges_to_target(
[technical_analyst, fundamental_analyst, risk_analyst],
report_writer,
)
workflow.add_edge(strategy_consultant, report_writer)
task = "Conduct a comprehensive analysis of the renewable energy sector including market trends, financial health, and risk assessment"
results = workflow.run(task=task)
for agent_name, output in results.items():
print(f"{agent_name}: {output}")

@ -0,0 +1,104 @@
import time
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent
agents_small = [
Agent(
agent_name=f"Agent-{i}",
agent_description=f"Agent number {i}",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
for i in range(5)
]
agents_medium = [
Agent(
agent_name=f"Agent-{i}",
agent_description=f"Agent number {i}",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
for i in range(20)
]
nx_workflow_small = GraphWorkflow(
name="NetworkX-Small",
backend="networkx",
verbose=False,
auto_compile=False,
)
for agent in agents_small:
nx_workflow_small.add_node(agent)
for i in range(len(agents_small) - 1):
nx_workflow_small.add_edge(agents_small[i], agents_small[i + 1])
nx_start = time.time()
nx_workflow_small.compile()
nx_small_time = time.time() - nx_start
rx_workflow_small = GraphWorkflow(
name="Rustworkx-Small",
backend="rustworkx",
verbose=False,
auto_compile=False,
)
for agent in agents_small:
rx_workflow_small.add_node(agent)
for i in range(len(agents_small) - 1):
rx_workflow_small.add_edge(agents_small[i], agents_small[i + 1])
rx_start = time.time()
rx_workflow_small.compile()
rx_small_time = time.time() - rx_start
nx_workflow_medium = GraphWorkflow(
name="NetworkX-Medium",
backend="networkx",
verbose=False,
auto_compile=False,
)
for agent in agents_medium:
nx_workflow_medium.add_node(agent)
for i in range(len(agents_medium) - 1):
nx_workflow_medium.add_edge(
agents_medium[i], agents_medium[i + 1]
)
nx_start = time.time()
nx_workflow_medium.compile()
nx_medium_time = time.time() - nx_start
rx_workflow_medium = GraphWorkflow(
name="Rustworkx-Medium",
backend="rustworkx",
verbose=False,
auto_compile=False,
)
for agent in agents_medium:
rx_workflow_medium.add_node(agent)
for i in range(len(agents_medium) - 1):
rx_workflow_medium.add_edge(
agents_medium[i], agents_medium[i + 1]
)
rx_start = time.time()
rx_workflow_medium.compile()
rx_medium_time = time.time() - rx_start
print(
f"Small (5 agents) - NetworkX: {nx_small_time:.4f}s, Rustworkx: {rx_small_time:.4f}s, Speedup: {nx_small_time/rx_small_time if rx_small_time > 0 else 0:.2f}x"
)
print(
f"Medium (20 agents) - NetworkX: {nx_medium_time:.4f}s, Rustworkx: {rx_medium_time:.4f}s, Speedup: {nx_medium_time/rx_medium_time if rx_medium_time > 0 else 0:.2f}x"
)

@ -0,0 +1,55 @@
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent
test_agent = Agent(
agent_name="Test-Agent",
agent_description="Test agent for error handling",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
workflow_rx = GraphWorkflow(
name="Rustworkx-Workflow",
backend="rustworkx",
verbose=False,
)
workflow_rx.add_node(test_agent)
workflow_nx = GraphWorkflow(
name="NetworkX-Workflow",
backend="networkx",
verbose=False,
)
workflow_nx.add_node(test_agent)
workflow_default = GraphWorkflow(
name="Default-Workflow",
verbose=False,
)
workflow_default.add_node(test_agent)
workflow_invalid = GraphWorkflow(
name="Invalid-Workflow",
backend="invalid_backend",
verbose=False,
)
workflow_invalid.add_node(test_agent)
print(
f"Rustworkx backend: {type(workflow_rx.graph_backend).__name__}"
)
print(f"NetworkX backend: {type(workflow_nx.graph_backend).__name__}")
print(
f"Default backend: {type(workflow_default.graph_backend).__name__}"
)
print(
f"Invalid backend fallback: {type(workflow_invalid.graph_backend).__name__}"
)
try:
import rustworkx as rx
print("Rustworkx available: True")
except ImportError:
print("Rustworkx available: False")

@ -0,0 +1,61 @@
import time
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent
NUM_AGENTS = 30
agents = [
Agent(
agent_name=f"Agent-{i:02d}",
agent_description=f"Agent number {i} in large-scale workflow",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
for i in range(NUM_AGENTS)
]
workflow = GraphWorkflow(
name="Large-Scale-Workflow",
description=f"Large-scale workflow with {NUM_AGENTS} agents using rustworkx",
backend="rustworkx",
verbose=False,
)
start_time = time.time()
for agent in agents:
workflow.add_node(agent)
add_nodes_time = time.time() - start_time
start_time = time.time()
for i in range(9):
workflow.add_edge(agents[i], agents[i + 1])
workflow.add_edges_from_source(
agents[5],
agents[10:20],
)
workflow.add_edges_to_target(
agents[10:20],
agents[20],
)
for i in range(20, 29):
workflow.add_edge(agents[i], agents[i + 1])
add_edges_time = time.time() - start_time
start_time = time.time()
workflow.compile()
compile_time = time.time() - start_time
print(
f"Agents: {len(workflow.nodes)}, Edges: {len(workflow.edges)}, Layers: {len(workflow._sorted_layers)}"
)
print(
f"Node addition: {add_nodes_time:.4f}s, Edge addition: {add_edges_time:.4f}s, Compilation: {compile_time:.4f}s"
)
print(
f"Total setup: {add_nodes_time + add_edges_time + compile_time:.4f}s"
)

@ -0,0 +1,73 @@
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent
data_collector_1 = Agent(
agent_name="Data-Collector-1",
agent_description="Collects market data",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
data_collector_2 = Agent(
agent_name="Data-Collector-2",
agent_description="Collects financial data",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
data_collector_3 = Agent(
agent_name="Data-Collector-3",
agent_description="Collects news data",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
technical_analyst = Agent(
agent_name="Technical-Analyst",
agent_description="Performs technical analysis",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
fundamental_analyst = Agent(
agent_name="Fundamental-Analyst",
agent_description="Performs fundamental analysis",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
sentiment_analyst = Agent(
agent_name="Sentiment-Analyst",
agent_description="Performs sentiment analysis",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
workflow = GraphWorkflow(
name="Parallel-Chain-Workflow",
description="Demonstrates parallel chain pattern with rustworkx",
backend="rustworkx",
verbose=False,
)
sources = [data_collector_1, data_collector_2, data_collector_3]
targets = [technical_analyst, fundamental_analyst, sentiment_analyst]
for agent in sources + targets:
workflow.add_node(agent)
workflow.add_parallel_chain(sources, targets)
workflow.compile()
task = "Analyze the technology sector using multiple data sources and analysis methods"
results = workflow.run(task=task)
for agent_name, output in results.items():
print(f"{agent_name}: {output}")

@ -0,0 +1,79 @@
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent
agent_a = Agent(
agent_name="Agent-A",
agent_description="Agent A",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
agent_b = Agent(
agent_name="Agent-B",
agent_description="Agent B",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
agent_c = Agent(
agent_name="Agent-C",
agent_description="Agent C",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
agent_isolated = Agent(
agent_name="Agent-Isolated",
agent_description="Isolated agent with no connections",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
workflow = GraphWorkflow(
name="Validation-Workflow",
description="Workflow for validation testing",
backend="rustworkx",
verbose=False,
)
workflow.add_node(agent_a)
workflow.add_node(agent_b)
workflow.add_node(agent_c)
workflow.add_node(agent_isolated)
workflow.add_edge(agent_a, agent_b)
workflow.add_edge(agent_b, agent_c)
validation_result = workflow.validate(auto_fix=False)
print(f"Valid: {validation_result['is_valid']}")
print(f"Warnings: {len(validation_result['warnings'])}")
print(f"Errors: {len(validation_result['errors'])}")
validation_result_fixed = workflow.validate(auto_fix=True)
print(
f"After auto-fix - Valid: {validation_result_fixed['is_valid']}"
)
print(f"Fixed: {len(validation_result_fixed['fixed'])}")
print(f"Entry points: {workflow.entry_points}")
print(f"End points: {workflow.end_points}")
workflow_cycle = GraphWorkflow(
name="Cycle-Test-Workflow",
backend="rustworkx",
verbose=False,
)
workflow_cycle.add_node(agent_a)
workflow_cycle.add_node(agent_b)
workflow_cycle.add_node(agent_c)
workflow_cycle.add_edge(agent_a, agent_b)
workflow_cycle.add_edge(agent_b, agent_c)
workflow_cycle.add_edge(agent_c, agent_a)
cycle_validation = workflow_cycle.validate(auto_fix=False)
print(f"Cycles detected: {len(cycle_validation.get('cycles', []))}")

@ -0,0 +1,122 @@
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent
market_researcher = Agent(
agent_name="Market-Researcher",
agent_description="Conducts comprehensive market research and data collection",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
competitor_analyst = Agent(
agent_name="Competitor-Analyst",
agent_description="Analyzes competitor landscape and positioning",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
market_analyst = Agent(
agent_name="Market-Analyst",
agent_description="Analyzes market trends and opportunities",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
financial_analyst = Agent(
agent_name="Financial-Analyst",
agent_description="Analyzes financial metrics and projections",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
risk_analyst = Agent(
agent_name="Risk-Analyst",
agent_description="Assesses market risks and challenges",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
strategy_consultant = Agent(
agent_name="Strategy-Consultant",
agent_description="Develops strategic recommendations based on all analyses",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
report_writer = Agent(
agent_name="Report-Writer",
agent_description="Compiles comprehensive market research report",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
executive_summary_writer = Agent(
agent_name="Executive-Summary-Writer",
agent_description="Creates executive summary for leadership",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
workflow = GraphWorkflow(
name="Market-Research-Workflow",
description="Real-world market research workflow using rustworkx backend",
backend="rustworkx",
verbose=False,
)
all_agents = [
market_researcher,
competitor_analyst,
market_analyst,
financial_analyst,
risk_analyst,
strategy_consultant,
report_writer,
executive_summary_writer,
]
for agent in all_agents:
workflow.add_node(agent)
workflow.add_parallel_chain(
[market_researcher, competitor_analyst],
[market_analyst, financial_analyst, risk_analyst],
)
workflow.add_edges_to_target(
[market_analyst, financial_analyst, risk_analyst],
strategy_consultant,
)
workflow.add_edges_from_source(
strategy_consultant,
[report_writer, executive_summary_writer],
)
workflow.add_edges_to_target(
[market_analyst, financial_analyst, risk_analyst],
report_writer,
)
task = """
Conduct a comprehensive market research analysis on the electric vehicle (EV) industry:
1. Research current market size, growth trends, and key players
2. Analyze competitor landscape and market positioning
3. Assess financial opportunities and investment potential
4. Evaluate risks and challenges in the EV market
5. Develop strategic recommendations
6. Create detailed report and executive summary
"""
results = workflow.run(task=task)
for agent_name, output in results.items():
print(f"{agent_name}: {output}")

@ -0,0 +1,156 @@
# Rustworkx Backend Examples
This directory contains comprehensive examples demonstrating the use of the **rustworkx backend** in GraphWorkflow. Rustworkx provides faster graph operations compared to NetworkX, especially for large graphs and complex operations.
## Installation
Before running these examples, ensure rustworkx is installed:
```bash
pip install rustworkx
```
If rustworkx is not installed, GraphWorkflow automatically falls back to the NetworkX backend.
## Examples Overview
### 01_basic_usage.py
Basic example showing how to use the rustworkx backend with GraphWorkflow. Demonstrates simple linear workflow creation and execution; a minimal sketch follows the key concepts below.
**Key Concepts:**
- Initializing GraphWorkflow with rustworkx backend
- Adding agents and creating edges
- Running a workflow
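A minimal sketch of that linear pattern (the agent names and task are illustrative, but the `GraphWorkflow` and `Agent` calls mirror the examples added in this PR):
```python
from swarms.structs.agent import Agent
from swarms.structs.graph_workflow import GraphWorkflow

# Two lightweight agents forming a linear chain
researcher = Agent(
    agent_name="Researcher",
    agent_description="Gathers background information",
    model_name="gpt-4o-mini",
    max_loops=1,
    verbose=False,
)
writer = Agent(
    agent_name="Writer",
    agent_description="Summarizes the research findings",
    model_name="gpt-4o-mini",
    max_loops=1,
    verbose=False,
)

# Build the workflow on the rustworkx backend
workflow = GraphWorkflow(
    name="Basic-Rustworkx-Workflow",
    backend="rustworkx",
    verbose=False,
)
workflow.add_node(researcher)
workflow.add_node(writer)
workflow.add_edge(researcher, writer)

# Run the chain and print each agent's output
results = workflow.run(task="Summarize recent trends in battery storage")
for agent_name, output in results.items():
    print(f"{agent_name}: {output}")
```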
### 02_backend_comparison.py
Compares NetworkX and Rustworkx backends side-by-side, showing performance differences and functional equivalence.
**Key Concepts:**
- Backend comparison
- Performance metrics
- Functional equivalence verification
### 03_fan_out_fan_in_patterns.py
Demonstrates parallel processing patterns: fan-out (one-to-many) and fan-in (many-to-one) connections. A short sketch of both helpers follows the key concepts below.
**Key Concepts:**
- Fan-out pattern: `add_edges_from_source()`
- Fan-in pattern: `add_edges_to_target()`
- Parallel execution optimization
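A compact sketch of both helpers (agent names are hypothetical; the `add_edges_from_source()` and `add_edges_to_target()` calls are used exactly as in the examples and tests in this PR):
```python
from swarms.structs.agent import Agent
from swarms.structs.graph_workflow import GraphWorkflow


def make_agent(name: str) -> Agent:
    # Helper for brevity; all agents share the same lightweight config
    return Agent(
        agent_name=name,
        agent_description=f"{name} agent",
        model_name="gpt-4o-mini",
        max_loops=1,
        verbose=False,
    )


coordinator = make_agent("Coordinator")
workers = [make_agent(f"Worker-{i}") for i in range(3)]
aggregator = make_agent("Aggregator")

workflow = GraphWorkflow(
    name="FanOut-FanIn-Sketch", backend="rustworkx", verbose=False
)
for agent in [coordinator, *workers, aggregator]:
    workflow.add_node(agent)

# Fan-out: one coordinator feeds every worker
workflow.add_edges_from_source(coordinator, workers)
# Fan-in: every worker feeds a single aggregator
workflow.add_edges_to_target(workers, aggregator)

workflow.compile()
workflow.auto_set_entry_points()
workflow.auto_set_end_points()
print("Entry points:", workflow.entry_points)  # Expected: ["Coordinator"]
print("End points:", workflow.end_points)      # Expected: ["Aggregator"]
```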
### 04_complex_workflow.py
Shows a complex multi-layer workflow with multiple parallel branches and convergence points.
**Key Concepts:**
- Multi-layer workflows
- Parallel chains: `add_parallel_chain()`
- Complex graph structures
### 05_performance_benchmark.py
Benchmarks performance differences between NetworkX and Rustworkx for various graph sizes and structures.
**Key Concepts:**
- Performance benchmarking
- Scalability testing
- Different graph topologies (chain, tree)
### 06_error_handling.py
Demonstrates error handling and graceful fallback behavior when rustworkx is unavailable.
**Key Concepts:**
- Error handling
- Automatic fallback to NetworkX
- Backend availability checking
### 07_large_scale_workflow.py
Demonstrates rustworkx's efficiency with large-scale workflows containing many agents.
**Key Concepts:**
- Large-scale workflows
- Performance with many nodes/edges
- Complex interconnections
### 08_parallel_chain_example.py
Detailed example of the parallel chain pattern, which creates a full mesh connection; a small sketch follows the key concepts below.
**Key Concepts:**
- Parallel chain pattern
- Full mesh connections
- Maximum parallelization
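For reference, a small sketch of the full-mesh behavior: with 2 sources and 3 targets, `add_parallel_chain()` creates 2 x 3 = 6 edges (the test suite in this PR checks the 3 x 3 case). Agent names here are illustrative.
```python
from swarms.structs.agent import Agent
from swarms.structs.graph_workflow import GraphWorkflow


def make_agent(name: str) -> Agent:
    return Agent(
        agent_name=name,
        agent_description=f"{name} agent",
        model_name="gpt-4o-mini",
        max_loops=1,
        verbose=False,
    )


sources = [make_agent(f"Source-{i}") for i in range(2)]
targets = [make_agent(f"Target-{i}") for i in range(3)]

workflow = GraphWorkflow(
    name="Parallel-Chain-Sketch", backend="rustworkx", verbose=False
)
for agent in sources + targets:
    workflow.add_node(agent)

# Every source is connected to every target (full mesh)
workflow.add_parallel_chain(sources, targets)
workflow.compile()

print(f"Edges created: {len(workflow.edges)}")  # Expected: 6
```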
### 09_workflow_validation.py
Shows workflow validation features including cycle detection, isolated nodes, and auto-fixing.
**Key Concepts:**
- Workflow validation
- Cycle detection
- Auto-fixing capabilities
### 10_real_world_scenario.py
A realistic market research workflow demonstrating real-world agent coordination scenarios.
**Key Concepts:**
- Real-world use case
- Complex multi-phase workflow
- Practical application
## Quick Start
Run any example:
```bash
python 01_basic_usage.py
```
## Backend Selection
To use the rustworkx backend:
```python
workflow = GraphWorkflow(
backend="rustworkx", # Use rustworkx
# ... other parameters
)
```
To use the NetworkX backend (the default):
```python
workflow = GraphWorkflow(
backend="networkx", # Or omit for default
# ... other parameters
)
```
## Performance Benefits
Rustworkx provides performance benefits, especially for:
- **Large graphs** (100+ nodes)
- **Complex operations** (topological sorting, cycle detection)
- **Frequent graph modifications** (adding/removing nodes/edges)
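A rough way to see this yourself is to time `compile()` on the same linear chain under both backends. This is only a sketch: the chain length is arbitrary, and the absolute timings will vary by machine rather than being claims made by this PR.
```python
import time

from swarms.structs.agent import Agent
from swarms.structs.graph_workflow import GraphWorkflow


def build_chain(backend: str, n: int = 50) -> GraphWorkflow:
    # Build an n-node linear chain on the requested backend
    workflow = GraphWorkflow(
        name=f"Bench-{backend}", backend=backend, verbose=False
    )
    agents = [
        Agent(
            agent_name=f"Agent-{i}",
            agent_description=f"Agent {i}",
            model_name="gpt-4o-mini",
            max_loops=1,
            verbose=False,
        )
        for i in range(n)
    ]
    for agent in agents:
        workflow.add_node(agent)
    for a, b in zip(agents, agents[1:]):
        workflow.add_edge(a, b)
    return workflow


for backend in ("networkx", "rustworkx"):
    workflow = build_chain(backend)
    start = time.time()
    workflow.compile()
    elapsed = time.time() - start
    print(f"{backend}: compiled {len(workflow.nodes)} nodes in {elapsed:.4f}s")
```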
## Key Differences
While both backends are functionally equivalent, rustworkx:
- Uses integer indices internally (abstracted away)
- Provides faster graph operations
- Offers better memory efficiency for large graphs
- Maintains full compatibility with the GraphWorkflow API
## Notes
- Both backends produce identical results
- GraphWorkflow automatically falls back to NetworkX if rustworkx is not installed
- All GraphWorkflow features work with both backends
- Performance gains become more significant with larger graphs
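If you want to confirm which backend a workflow actually ended up on (for example, after the automatic fallback), the test suite inspects the backend class name; a sketch of the same check:
```python
from swarms.structs.graph_workflow import GraphWorkflow

workflow = GraphWorkflow(
    name="Backend-Check", backend="rustworkx", verbose=False
)

# Prints "RustworkxBackend" when rustworkx is installed;
# otherwise the NetworkX-based backend class is reported instead.
print(workflow.graph_backend.__class__.__name__)
```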
## Requirements
- `swarms` package
- `rustworkx` (optional, for rustworkx backend)
- `networkx` (always available, default backend)
## Contributing
Feel free to add more examples demonstrating rustworkx capabilities or specific use cases!

@ -0,0 +1,632 @@
import pytest
from swarms.structs.graph_workflow import (
GraphWorkflow,
)
from swarms.structs.agent import Agent
try:
import rustworkx as rx
RUSTWORKX_AVAILABLE = True
except ImportError:
RUSTWORKX_AVAILABLE = False
def create_test_agent(name: str, description: str = None) -> Agent:
"""Create a test agent"""
if description is None:
description = f"Test agent for {name} operations"
return Agent(
agent_name=name,
agent_description=description,
model_name="gpt-4o-mini",
verbose=False,
print_on=False,
max_loops=1,
)
@pytest.mark.skipif(
not RUSTWORKX_AVAILABLE, reason="rustworkx not available"
)
class TestRustworkxBackend:
"""Test suite for rustworkx backend"""
def test_rustworkx_backend_initialization(self):
"""Test that rustworkx backend is properly initialized"""
workflow = GraphWorkflow(name="Test", backend="rustworkx")
assert (
workflow.graph_backend.__class__.__name__
== "RustworkxBackend"
)
assert hasattr(workflow.graph_backend, "_node_id_to_index")
assert hasattr(workflow.graph_backend, "_index_to_node_id")
assert hasattr(workflow.graph_backend, "graph")
def test_rustworkx_node_addition(self):
"""Test adding nodes to rustworkx backend"""
workflow = GraphWorkflow(name="Test", backend="rustworkx")
agent = create_test_agent("TestAgent", "Test agent")
workflow.add_node(agent)
assert "TestAgent" in workflow.nodes
assert "TestAgent" in workflow.graph_backend._node_id_to_index
assert (
workflow.graph_backend._node_id_to_index["TestAgent"]
in workflow.graph_backend._index_to_node_id
)
def test_rustworkx_edge_addition(self):
"""Test adding edges to rustworkx backend"""
workflow = GraphWorkflow(name="Test", backend="rustworkx")
agent1 = create_test_agent("Agent1", "First agent")
agent2 = create_test_agent("Agent2", "Second agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_edge(agent1, agent2)
assert len(workflow.edges) == 1
assert workflow.edges[0].source == "Agent1"
assert workflow.edges[0].target == "Agent2"
def test_rustworkx_topological_generations_linear(self):
"""Test topological generations with linear chain"""
workflow = GraphWorkflow(
name="Linear-Test", backend="rustworkx"
)
agents = [
create_test_agent(f"Agent{i}", f"Agent {i}")
for i in range(5)
]
for agent in agents:
workflow.add_node(agent)
for i in range(len(agents) - 1):
workflow.add_edge(agents[i], agents[i + 1])
workflow.compile()
assert len(workflow._sorted_layers) == 5
assert workflow._sorted_layers[0] == ["Agent0"]
assert workflow._sorted_layers[1] == ["Agent1"]
assert workflow._sorted_layers[2] == ["Agent2"]
assert workflow._sorted_layers[3] == ["Agent3"]
assert workflow._sorted_layers[4] == ["Agent4"]
def test_rustworkx_topological_generations_fan_out(self):
"""Test topological generations with fan-out pattern"""
workflow = GraphWorkflow(
name="FanOut-Test", backend="rustworkx"
)
coordinator = create_test_agent("Coordinator", "Coordinates")
analyst1 = create_test_agent("Analyst1", "First analyst")
analyst2 = create_test_agent("Analyst2", "Second analyst")
analyst3 = create_test_agent("Analyst3", "Third analyst")
workflow.add_node(coordinator)
workflow.add_node(analyst1)
workflow.add_node(analyst2)
workflow.add_node(analyst3)
workflow.add_edges_from_source(
coordinator, [analyst1, analyst2, analyst3]
)
workflow.compile()
assert len(workflow._sorted_layers) == 2
assert len(workflow._sorted_layers[0]) == 1
assert "Coordinator" in workflow._sorted_layers[0]
assert len(workflow._sorted_layers[1]) == 3
assert "Analyst1" in workflow._sorted_layers[1]
assert "Analyst2" in workflow._sorted_layers[1]
assert "Analyst3" in workflow._sorted_layers[1]
def test_rustworkx_topological_generations_fan_in(self):
"""Test topological generations with fan-in pattern"""
workflow = GraphWorkflow(
name="FanIn-Test", backend="rustworkx"
)
analyst1 = create_test_agent("Analyst1", "First analyst")
analyst2 = create_test_agent("Analyst2", "Second analyst")
analyst3 = create_test_agent("Analyst3", "Third analyst")
synthesizer = create_test_agent("Synthesizer", "Synthesizes")
workflow.add_node(analyst1)
workflow.add_node(analyst2)
workflow.add_node(analyst3)
workflow.add_node(synthesizer)
workflow.add_edges_to_target(
[analyst1, analyst2, analyst3], synthesizer
)
workflow.compile()
assert len(workflow._sorted_layers) == 2
assert len(workflow._sorted_layers[0]) == 3
assert "Analyst1" in workflow._sorted_layers[0]
assert "Analyst2" in workflow._sorted_layers[0]
assert "Analyst3" in workflow._sorted_layers[0]
assert len(workflow._sorted_layers[1]) == 1
assert "Synthesizer" in workflow._sorted_layers[1]
def test_rustworkx_topological_generations_complex(self):
"""Test topological generations with complex topology"""
workflow = GraphWorkflow(
name="Complex-Test", backend="rustworkx"
)
agents = [
create_test_agent(f"Agent{i}", f"Agent {i}")
for i in range(6)
]
for agent in agents:
workflow.add_node(agent)
# Create: Agent0 -> Agent1, Agent2
# Agent1, Agent2 -> Agent3
# Agent3 -> Agent4, Agent5
workflow.add_edge(agents[0], agents[1])
workflow.add_edge(agents[0], agents[2])
workflow.add_edge(agents[1], agents[3])
workflow.add_edge(agents[2], agents[3])
workflow.add_edge(agents[3], agents[4])
workflow.add_edge(agents[3], agents[5])
workflow.compile()
assert len(workflow._sorted_layers) == 4
assert "Agent0" in workflow._sorted_layers[0]
assert (
"Agent1" in workflow._sorted_layers[1]
or "Agent2" in workflow._sorted_layers[1]
)
assert "Agent3" in workflow._sorted_layers[2]
assert (
"Agent4" in workflow._sorted_layers[3]
or "Agent5" in workflow._sorted_layers[3]
)
def test_rustworkx_predecessors(self):
"""Test predecessor retrieval"""
workflow = GraphWorkflow(
name="Predecessors-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "First agent")
agent2 = create_test_agent("Agent2", "Second agent")
agent3 = create_test_agent("Agent3", "Third agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_node(agent3)
workflow.add_edge(agent1, agent2)
workflow.add_edge(agent2, agent3)
predecessors = list(
workflow.graph_backend.predecessors("Agent2")
)
assert "Agent1" in predecessors
assert len(predecessors) == 1
predecessors = list(
workflow.graph_backend.predecessors("Agent3")
)
assert "Agent2" in predecessors
assert len(predecessors) == 1
predecessors = list(
workflow.graph_backend.predecessors("Agent1")
)
assert len(predecessors) == 0
def test_rustworkx_descendants(self):
"""Test descendant retrieval"""
workflow = GraphWorkflow(
name="Descendants-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "First agent")
agent2 = create_test_agent("Agent2", "Second agent")
agent3 = create_test_agent("Agent3", "Third agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_node(agent3)
workflow.add_edge(agent1, agent2)
workflow.add_edge(agent2, agent3)
descendants = workflow.graph_backend.descendants("Agent1")
assert "Agent2" in descendants
assert "Agent3" in descendants
assert len(descendants) == 2
descendants = workflow.graph_backend.descendants("Agent2")
assert "Agent3" in descendants
assert len(descendants) == 1
descendants = workflow.graph_backend.descendants("Agent3")
assert len(descendants) == 0
def test_rustworkx_in_degree(self):
"""Test in-degree calculation"""
workflow = GraphWorkflow(
name="InDegree-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "First agent")
agent2 = create_test_agent("Agent2", "Second agent")
agent3 = create_test_agent("Agent3", "Third agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_node(agent3)
workflow.add_edge(agent1, agent2)
workflow.add_edge(agent3, agent2)
assert workflow.graph_backend.in_degree("Agent1") == 0
assert workflow.graph_backend.in_degree("Agent2") == 2
assert workflow.graph_backend.in_degree("Agent3") == 0
def test_rustworkx_out_degree(self):
"""Test out-degree calculation"""
workflow = GraphWorkflow(
name="OutDegree-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "First agent")
agent2 = create_test_agent("Agent2", "Second agent")
agent3 = create_test_agent("Agent3", "Third agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_node(agent3)
workflow.add_edge(agent1, agent2)
workflow.add_edge(agent1, agent3)
assert workflow.graph_backend.out_degree("Agent1") == 2
assert workflow.graph_backend.out_degree("Agent2") == 0
assert workflow.graph_backend.out_degree("Agent3") == 0
def test_rustworkx_agent_objects_in_edges(self):
"""Test using Agent objects directly in edge methods"""
workflow = GraphWorkflow(
name="AgentObjects-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "First agent")
agent2 = create_test_agent("Agent2", "Second agent")
agent3 = create_test_agent("Agent3", "Third agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_node(agent3)
# Use Agent objects directly
workflow.add_edges_from_source(agent1, [agent2, agent3])
workflow.add_edges_to_target([agent2, agent3], agent1)
workflow.compile()
assert len(workflow.edges) == 4
assert len(workflow._sorted_layers) >= 1
def test_rustworkx_parallel_chain(self):
"""Test parallel chain pattern"""
workflow = GraphWorkflow(
name="ParallelChain-Test", backend="rustworkx"
)
sources = [
create_test_agent(f"Source{i}", f"Source {i}")
for i in range(3)
]
targets = [
create_test_agent(f"Target{i}", f"Target {i}")
for i in range(3)
]
for agent in sources + targets:
workflow.add_node(agent)
workflow.add_parallel_chain(sources, targets)
workflow.compile()
assert len(workflow.edges) == 9 # 3x3 = 9 edges
assert len(workflow._sorted_layers) == 2
def test_rustworkx_large_scale(self):
"""Test rustworkx with large workflow"""
workflow = GraphWorkflow(
name="LargeScale-Test", backend="rustworkx"
)
agents = [
create_test_agent(f"Agent{i}", f"Agent {i}")
for i in range(20)
]
for agent in agents:
workflow.add_node(agent)
# Create linear chain
for i in range(len(agents) - 1):
workflow.add_edge(agents[i], agents[i + 1])
workflow.compile()
assert len(workflow._sorted_layers) == 20
assert len(workflow.nodes) == 20
assert len(workflow.edges) == 19
def test_rustworkx_reverse(self):
"""Test graph reversal"""
workflow = GraphWorkflow(
name="Reverse-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "First agent")
agent2 = create_test_agent("Agent2", "Second agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_edge(agent1, agent2)
reversed_backend = workflow.graph_backend.reverse()
# In reversed graph, Agent2 should have Agent1 as predecessor
preds = list(reversed_backend.predecessors("Agent1"))
assert "Agent2" in preds
# Agent2 should have no predecessors in reversed graph
preds = list(reversed_backend.predecessors("Agent2"))
assert len(preds) == 0
def test_rustworkx_entry_end_points(self):
"""Test entry and end point detection"""
workflow = GraphWorkflow(
name="EntryEnd-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "Entry agent")
agent2 = create_test_agent("Agent2", "Middle agent")
agent3 = create_test_agent("Agent3", "End agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_node(agent3)
workflow.add_edge(agent1, agent2)
workflow.add_edge(agent2, agent3)
workflow.auto_set_entry_points()
workflow.auto_set_end_points()
assert "Agent1" in workflow.entry_points
assert "Agent3" in workflow.end_points
assert workflow.graph_backend.in_degree("Agent1") == 0
assert workflow.graph_backend.out_degree("Agent3") == 0
def test_rustworkx_isolated_nodes(self):
"""Test handling of isolated nodes"""
workflow = GraphWorkflow(
name="Isolated-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "Connected agent")
agent2 = create_test_agent("Agent2", "Isolated agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_edge(agent1, agent1) # Self-loop
workflow.compile()
assert len(workflow.nodes) == 2
assert "Agent2" in workflow.nodes
def test_rustworkx_workflow_execution(self):
"""Test full workflow execution with rustworkx"""
workflow = GraphWorkflow(
name="Execution-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "First agent")
agent2 = create_test_agent("Agent2", "Second agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_edge(agent1, agent2)
result = workflow.run("Test task")
assert result is not None
assert "Agent1" in result
assert "Agent2" in result
def test_rustworkx_compilation_caching(self):
"""Test that compilation is cached correctly"""
workflow = GraphWorkflow(
name="Cache-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "First agent")
agent2 = create_test_agent("Agent2", "Second agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_edge(agent1, agent2)
# First compilation
workflow.compile()
layers1 = workflow._sorted_layers.copy()
compiled1 = workflow._compiled
# Second compilation should use cache
workflow.compile()
layers2 = workflow._sorted_layers.copy()
compiled2 = workflow._compiled
assert compiled1 is True and compiled2 is True
assert layers1 == layers2
def test_rustworkx_node_metadata(self):
"""Test node metadata handling"""
workflow = GraphWorkflow(
name="Metadata-Test", backend="rustworkx"
)
agent = create_test_agent("Agent", "Test agent")
workflow.add_node(
agent, metadata={"priority": "high", "timeout": 60}
)
node_index = workflow.graph_backend._node_id_to_index["Agent"]
node_data = workflow.graph_backend.graph[node_index]
assert isinstance(node_data, dict)
assert node_data.get("node_id") == "Agent"
assert node_data.get("priority") == "high"
assert node_data.get("timeout") == 60
def test_rustworkx_edge_metadata(self):
"""Test edge metadata handling"""
workflow = GraphWorkflow(
name="EdgeMetadata-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "First agent")
agent2 = create_test_agent("Agent2", "Second agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
workflow.add_edge(agent1, agent2, weight=5, label="test")
assert len(workflow.edges) == 1
assert workflow.edges[0].metadata.get("weight") == 5
assert workflow.edges[0].metadata.get("label") == "test"
@pytest.mark.skipif(
not RUSTWORKX_AVAILABLE, reason="rustworkx not available"
)
class TestRustworkxPerformance:
"""Performance tests for rustworkx backend"""
def test_rustworkx_large_graph_compilation(self):
"""Test compilation performance with large graph"""
workflow = GraphWorkflow(
name="LargeGraph-Test", backend="rustworkx"
)
agents = [
create_test_agent(f"Agent{i}", f"Agent {i}")
for i in range(50)
]
for agent in agents:
workflow.add_node(agent)
# Create a complex topology
for i in range(len(agents) - 1):
workflow.add_edge(agents[i], agents[i + 1])
import time
start = time.time()
workflow.compile()
compile_time = time.time() - start
assert compile_time < 1.0 # Should compile quickly
assert len(workflow._sorted_layers) == 50
def test_rustworkx_many_predecessors(self):
"""Test performance with many predecessors"""
workflow = GraphWorkflow(
name="ManyPreds-Test", backend="rustworkx"
)
target = create_test_agent("Target", "Target agent")
sources = [
create_test_agent(f"Source{i}", f"Source {i}")
for i in range(100)
]
workflow.add_node(target)
for source in sources:
workflow.add_node(source)
workflow.add_edges_to_target(sources, target)
workflow.compile()
predecessors = list(
workflow.graph_backend.predecessors("Target")
)
assert len(predecessors) == 100
@pytest.mark.skipif(
not RUSTWORKX_AVAILABLE, reason="rustworkx not available"
)
class TestRustworkxEdgeCases:
"""Edge case tests for rustworkx backend"""
def test_rustworkx_empty_graph(self):
"""Test empty graph handling"""
workflow = GraphWorkflow(
name="Empty-Test", backend="rustworkx"
)
workflow.compile()
assert len(workflow._sorted_layers) == 0
assert len(workflow.nodes) == 0
def test_rustworkx_single_node(self):
"""Test single node graph"""
workflow = GraphWorkflow(
name="Single-Test", backend="rustworkx"
)
agent = create_test_agent("Agent", "Single agent")
workflow.add_node(agent)
workflow.compile()
assert len(workflow._sorted_layers) == 1
assert workflow._sorted_layers[0] == ["Agent"]
def test_rustworkx_self_loop(self):
"""Test self-loop handling"""
workflow = GraphWorkflow(
name="SelfLoop-Test", backend="rustworkx"
)
agent = create_test_agent("Agent", "Self-looping agent")
workflow.add_node(agent)
workflow.add_edge(agent, agent)
workflow.compile()
assert len(workflow.edges) == 1
assert workflow.graph_backend.in_degree("Agent") == 1
assert workflow.graph_backend.out_degree("Agent") == 1
def test_rustworkx_duplicate_edge(self):
"""Test duplicate edge handling"""
workflow = GraphWorkflow(
name="Duplicate-Test", backend="rustworkx"
)
agent1 = create_test_agent("Agent1", "First agent")
agent2 = create_test_agent("Agent2", "Second agent")
workflow.add_node(agent1)
workflow.add_node(agent2)
# Add same edge twice
workflow.add_edge(agent1, agent2)
workflow.add_edge(agent1, agent2)
# rustworkx should handle duplicate edges
assert (
len(workflow.edges) == 2
) # Both edges are stored in workflow
workflow.compile() # Should not crash
if __name__ == "__main__":
pytest.main([__file__, "-v"])

@ -14,6 +14,7 @@ This directory contains examples demonstrating hierarchical swarm patterns for m
- [hs_stock_team.py](hs_stock_team.py) - Stock trading team
- [hybrid_hiearchical_swarm.py](hybrid_hiearchical_swarm.py) - Hybrid approach
- [sector_analysis_hiearchical_swarm.py](sector_analysis_hiearchical_swarm.py) - Sector analysis
- [display_hierarchy_example.py](display_hierarchy_example.py) - Visualize swarm hierarchy structure
## Subdirectories

@ -0,0 +1,47 @@
from swarms import Agent, HierarchicalSwarm
# Create specialized agents
research_agent = Agent(
agent_name="Research-Analyst",
agent_description="Specialized in comprehensive research and data gathering",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
analysis_agent = Agent(
agent_name="Data-Analyst",
agent_description="Expert in data analysis and pattern recognition",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
strategy_agent = Agent(
agent_name="Strategy-Consultant",
agent_description="Specialized in strategic planning and recommendations",
model_name="gpt-4o-mini",
max_loops=1,
verbose=False,
)
# Create hierarchical swarm with interactive dashboard
swarm = HierarchicalSwarm(
name="Swarms Corporation Operations",
description="Enterprise-grade hierarchical swarm for complex task execution",
agents=[research_agent, analysis_agent, strategy_agent],
max_loops=1,
interactive=False,  # Set to True to enable the Arasaka dashboard
director_model_name="claude-haiku-4-5",
director_temperature=0.7,
director_top_p=None,
planning_enabled=True,
)
print(swarm.display_hierarchy())
# out = swarm.run(
# "Conduct a research analysis on water stocks and etfs"
# )
# print(out)

@ -0,0 +1,95 @@
# LLM Council Examples
This directory contains examples demonstrating the LLM Council pattern, inspired by Andrej Karpathy's llm-council implementation. The LLM Council uses multiple specialized AI agents that:
1. Each respond independently to queries
2. Review and rank each other's anonymized responses
3. Have a Chairman synthesize all responses into a final comprehensive answer
## Examples
### Marketing & Business
- **marketing_strategy_council.py** - Marketing strategy analysis and recommendations
- **business_strategy_council.py** - Comprehensive business strategy development
### Finance & Investment
- **finance_analysis_council.py** - Financial analysis and investment recommendations
- **etf_stock_analysis_council.py** - ETF and stock analysis with portfolio recommendations
### Medical & Healthcare
- **medical_treatment_council.py** - Medical treatment recommendations and care plans
- **medical_diagnosis_council.py** - Diagnostic analysis based on symptoms
### Technology & Research
- **technology_assessment_council.py** - Technology evaluation and implementation strategy
- **research_analysis_council.py** - Comprehensive research analysis on complex topics
### Legal
- **legal_analysis_council.py** - Legal implications and compliance analysis
## Usage
Each example follows the same pattern:
```python
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True)
# Run a query
result = council.run("Your query here")
# Access results
print(result["final_response"]) # Chairman's synthesized answer
print(result["original_responses"]) # Individual member responses
print(result["evaluations"]) # How members ranked each other
```
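One of the examples in this directory also passes `output_type="final"` when constructing the council; based on that minimal example, this appears to return only the Chairman's synthesized answer rather than the full result dictionary. The query below is illustrative.
```python
from swarms.structs.llm_council import LLMCouncil

# Return only the final synthesized response
# (as shown in the minimal example in this directory)
council = LLMCouncil(verbose=True, output_type="final")

result = council.run(
    "What are the key risks of concentrated ETF portfolios?"
)
print(result)
```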
## Running Examples
Run any example directly:
```bash
python examples/multi_agent/llm_council_examples/marketing_strategy_council.py
python examples/multi_agent/llm_council_examples/finance_analysis_council.py
python examples/multi_agent/llm_council_examples/medical_diagnosis_council.py
```
## Key Features
- **Multiple Perspectives**: Each council member (GPT-5.1, Gemini, Claude, Grok) provides unique insights
- **Peer Review**: Members evaluate and rank each other's responses anonymously
- **Synthesis**: Chairman combines the best elements from all responses
- **Transparency**: See both individual responses and evaluation rankings
## Council Members
The default council consists of:
- **GPT-5.1-Councilor**: Analytical and comprehensive
- **Gemini-3-Pro-Councilor**: Concise and well-processed
- **Claude-Sonnet-4.5-Councilor**: Thoughtful and balanced
- **Grok-4-Councilor**: Creative and innovative
## Customization
You can create custom council members:
```python
from swarms import Agent
from swarms.structs.llm_council import LLMCouncil, get_gpt_councilor_prompt
custom_agent = Agent(
agent_name="Custom-Councilor",
system_prompt=get_gpt_councilor_prompt(),
model_name="gpt-4.1",
max_loops=1,
)
council = LLMCouncil(
council_members=[custom_agent, ...],
chairman_model="gpt-5.1",
verbose=True
)
```

@ -0,0 +1,31 @@
"""
LLM Council Example: Business Strategy Development
This example demonstrates using the LLM Council to develop comprehensive
business strategies for new ventures.
"""
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True)
# Business strategy query
query = """
A tech startup wants to launch an AI-powered personal finance app targeting
millennials and Gen Z. Develop a comprehensive business strategy including:
1. Market opportunity and competitive landscape analysis
2. Product positioning and unique value proposition
3. Go-to-market strategy and customer acquisition plan
4. Revenue model and pricing strategy
5. Key partnerships and distribution channels
6. Resource requirements and funding needs
7. Risk assessment and mitigation strategies
8. Success metrics and KPIs for first 12 months
"""
# Run the council
result = council.run(query)
# Print final response
print(result["final_response"])

@ -0,0 +1,29 @@
"""
LLM Council Example: ETF Stock Analysis
This example demonstrates using the LLM Council to analyze ETF holdings
and provide stock investment recommendations.
"""
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True)
# ETF and stock analysis query
query = """
Analyze the top energy ETFs (including nuclear, solar, gas, and renewable energy)
and provide:
1. Top 5 best-performing energy stocks across all energy sectors
2. ETF recommendations for diversified energy exposure
3. Risk-return profiles for each recommendation
4. Current market conditions affecting energy investments
5. Allocation strategy for a $100,000 portfolio
6. Key metrics to track for each investment
"""
# Run the council
result = council.run(query)
# Print final response
print(result["final_response"])

@ -0,0 +1,29 @@
"""
LLM Council Example: Financial Analysis
This example demonstrates using the LLM Council to provide comprehensive
financial analysis and investment recommendations.
"""
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True)
# Financial analysis query
query = """
Provide a comprehensive financial analysis for investing in emerging markets
technology ETFs. Include:
1. Risk assessment and volatility analysis
2. Historical performance trends
3. Sector composition and diversification benefits
4. Comparison with developed market tech ETFs
5. Recommended allocation percentage for a moderate risk portfolio
6. Key factors to monitor going forward
"""
# Run the council
result = council.run(query)
# Print final response
print(result["final_response"])

@ -0,0 +1,31 @@
"""
LLM Council Example: Legal Analysis
This example demonstrates using the LLM Council to analyze legal scenarios
and provide comprehensive legal insights.
"""
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True)
# Legal analysis query
query = """
A startup is considering using AI-generated content for their marketing materials.
Analyze the legal implications including:
1. Intellectual property rights and ownership of AI-generated content
2. Copyright and trademark considerations
3. Liability for AI-generated content that may be inaccurate or misleading
4. Compliance with advertising regulations (FTC, FDA, etc.)
5. Data privacy implications if using customer data to train models
6. Contractual considerations with AI service providers
7. Risk mitigation strategies
8. Best practices for legal compliance
"""
# Run the council
result = council.run(query)
# Print final response
print(result["final_response"])

@ -0,0 +1,12 @@
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True, output_type="final")
# Example query
query = "What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?"
# Run the council
result = council.run(query)
print(result)

@ -0,0 +1,28 @@
"""
LLM Council Example: Marketing Strategy Analysis
This example demonstrates using the LLM Council to analyze and develop
comprehensive marketing strategies by leveraging multiple AI perspectives.
"""
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True)
# Marketing strategy query
query = """
Analyze the marketing strategy for a new sustainable energy startup launching
a solar panel subscription service. Provide recommendations on:
1. Target audience segmentation
2. Key messaging and value propositions
3. Marketing channels and budget allocation
4. Competitive positioning
5. Launch timeline and milestones
"""
# Run the council
result = council.run(query)
# Print final response
print(result["final_response"])

@ -0,0 +1,36 @@
"""
LLM Council Example: Medical Diagnosis Analysis
This example demonstrates using the LLM Council to analyze symptoms
and provide diagnostic insights.
"""
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True)
# Medical diagnosis query
query = """
A 35-year-old patient presents with:
- Persistent fatigue for 3 months
- Unexplained weight loss (15 lbs)
- Night sweats
- Intermittent low-grade fever
- Swollen lymph nodes in neck and armpits
- Recent blood work shows elevated ESR and CRP
Provide:
1. Differential diagnosis with most likely conditions ranked
2. Additional diagnostic tests needed to confirm
3. Red flag symptoms requiring immediate attention
4. Possible causes and risk factors
5. Recommended next steps for the patient
6. When to seek emergency care
"""
# Run the council
result = council.run(query)
# Print final response
print(result["final_response"])

@ -0,0 +1,30 @@
"""
LLM Council Example: Medical Treatment Analysis
This example demonstrates using the LLM Council to analyze medical treatments
and provide comprehensive treatment recommendations.
"""
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True)
# Medical treatment query
query = """
A 45-year-old patient with Type 2 diabetes, hypertension, and early-stage
kidney disease needs treatment recommendations. Provide:
1. Comprehensive treatment plan addressing all conditions
2. Medication options with pros/cons for each condition
3. Lifestyle modifications and their expected impact
4. Monitoring schedule and key metrics to track
5. Potential drug interactions and contraindications
6. Expected outcomes and timeline for improvement
7. When to consider specialist referrals
"""
# Run the council
result = council.run(query)
# Print final response
print(result["final_response"])

@ -0,0 +1,31 @@
"""
LLM Council Example: Research Analysis
This example demonstrates using the LLM Council to conduct comprehensive
research analysis on complex topics.
"""
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True)
# Research analysis query
query = """
Conduct a comprehensive analysis of the potential impact of climate change
on global food security over the next 20 years. Include:
1. Key climate factors affecting agriculture (temperature, precipitation, extreme weather)
2. Regional vulnerabilities and impacts on major food-producing regions
3. Crop yield projections and food availability scenarios
4. Economic implications and food price volatility
5. Adaptation strategies and technological solutions
6. Policy recommendations for governments and international organizations
7. Role of innovation in agriculture (precision farming, GMOs, vertical farming)
8. Social and geopolitical implications of food insecurity
"""
# Run the council
result = council.run(query)
# Print final response
print(result["final_response"])

@ -0,0 +1,31 @@
"""
LLM Council Example: Technology Assessment
This example demonstrates using the LLM Council to assess emerging technologies
and their business implications.
"""
from swarms.structs.llm_council import LLMCouncil
# Create the council
council = LLMCouncil(verbose=True)
# Technology assessment query
query = """
Evaluate the business potential and implementation strategy for integrating
quantum computing capabilities into a financial services company. Consider:
1. Current state of quantum computing technology
2. Specific use cases in financial services (risk modeling, portfolio optimization, fraud detection)
3. Competitive advantages and potential ROI
4. Implementation timeline and resource requirements
5. Technical challenges and limitations
6. Risk factors and mitigation strategies
7. Partnership opportunities with quantum computing providers
8. Expected timeline for practical business value
"""
# Run the council
result = council.run(query)
# Print final response
print(result["final_response"])

@ -26,7 +26,6 @@ router = SwarmRouter(
agents=agents,
swarm_type="SequentialWorkflow",
output_type="dict",
return_entire_history=False,
)
output = router.run("How are you doing?")

@ -8,7 +8,6 @@ from swarms.structs.swarming_architectures import (
exponential_swarm,
fibonacci_swarm,
grid_swarm,
linear_swarm,
mesh_swarm,
one_to_three,
prime_swarm,
@ -121,30 +120,6 @@ def run_healthcare_grid_swarm():
print(result)
def run_finance_linear_swarm():
"""Loan approval process using linear swarm"""
print_separator()
print("FINANCE - LOAN APPROVAL PROCESS (Linear Swarm)")
agents = create_finance_agents()[:3]
tasks = [
"Review loan application and credit history",
"Assess risk factors and compliance requirements",
"Generate final loan recommendation",
]
print("\nTasks:")
for i, task in enumerate(tasks, 1):
print(f"{i}. {task}")
result = linear_swarm(agents, tasks)
print("\nResults:")
for log in result["history"]:
print(f"\n{log['agent_name']}:")
print(f"Task: {log['task']}")
print(f"Response: {log['response']}")
def run_healthcare_star_swarm():
"""Complex medical case management using star swarm"""
print_separator()
@ -287,7 +262,6 @@ async def run_all_examples():
# Finance examples
run_finance_circular_swarm()
run_finance_linear_swarm()
run_finance_mesh_swarm()
run_mathematical_finance_swarms()
