Merge branch 'kyegomez:master' into linear_removal

Hugh Nguyen 1 day ago committed by GitHub
commit 53f39fb2ba

@ -0,0 +1,40 @@
# AOP Examples Overview
Deploy agents as network services using the Agent Orchestration Protocol (AOP). Turn your agents into distributed, scalable, and accessible services.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **AOP Fundamentals** | Understanding agent-as-a-service deployment |
| **Server Setup** | Running agents as MCP servers |
| **Client Integration** | Connecting to remote agents |
| **Production Deployment** | Scaling and monitoring agents |
---
## AOP Examples
| Example | Description | Link |
|---------|-------------|------|
| **Medical AOP Example** | Healthcare agent deployment with AOP | [View Example](./aop_medical.md) |
---
## Use Cases
| Use Case | Description |
|----------|-------------|
| **Microservices** | Agent per service |
| **API Gateway** | Central agent access point |
| **Multi-tenant** | Shared agent infrastructure |
| **Edge Deployment** | Agents at the edge |
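The "agent per service" idea above can be sketched without any networking at all. The snippet below is a conceptual illustration only, not the AOP API: `AgentRegistry`, `register`, and `serve` are hypothetical names standing in for a gateway that maps agent names to callables. A real AOP deployment would route `serve` calls over the network (MCP) instead of dispatching in-process.

```python
# Conceptual sketch only -- NOT the AOP API. AgentRegistry/register/serve
# are hypothetical names illustrating the "agent per service" pattern.
from typing import Callable, Dict


class AgentRegistry:
    """Minimal in-process gateway mapping agent names to callables."""

    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, agent: Callable[[str], str]) -> None:
        self._agents[name] = agent

    def serve(self, name: str, task: str) -> str:
        # A real AOP deployment would route this over the network;
        # here we just dispatch to the registered callable.
        if name not in self._agents:
            raise KeyError(f"No agent registered under {name!r}")
        return self._agents[name](task)


registry = AgentRegistry()
registry.register("echo-agent", lambda task: f"handled: {task}")
print(registry.serve("echo-agent", "ping"))  # -> handled: ping
```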
---
## Related Resources
- [AOP Reference Documentation](../swarms/structs/aop.md) - Complete AOP API
- [AOP Server Setup](../swarms/examples/aop_server_example.md) - Server configuration
- [AOP Cluster Example](../swarms/examples/aop_cluster_example.md) - Multi-node setup
- [Deployment Solutions](../deployment_solutions/overview.md) - Production deployment

@ -0,0 +1,69 @@
# Applications Overview
Real-world multi-agent applications built with Swarms. These examples demonstrate complete solutions for business, research, finance, and automation use cases.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **Business Applications** | Marketing, hiring, M&A advisory swarms |
| **Research Systems** | Advanced research and analysis workflows |
| **Financial Analysis** | ETF research and investment analysis |
| **Automation** | Browser agents and web automation |
| **Industry Solutions** | Real estate, job finding, and more |
---
## Application Examples
| Application | Description | Industry | Link |
|-------------|-------------|----------|------|
| **Swarms of Browser Agents** | Automated web browsing with multiple agents | Automation | [View Example](../swarms/examples/swarms_of_browser_agents.md) |
| **Hierarchical Marketing Team** | Multi-agent marketing strategy and execution | Marketing | [View Example](./marketing_team.md) |
| **Gold ETF Research with HeavySwarm** | Comprehensive ETF analysis using Heavy Swarm | Finance | [View Example](./gold_etf_research.md) |
| **Hiring Swarm** | Automated candidate screening and evaluation | HR/Recruiting | [View Example](./hiring_swarm.md) |
| **Advanced Research** | Multi-agent research and analysis system | Research | [View Example](./av.md) |
| **Real Estate Swarm** | Property analysis and market research | Real Estate | [View Example](./realestate_swarm.md) |
| **Job Finding Swarm** | Automated job search and matching | Career | [View Example](./job_finding.md) |
| **M&A Advisory Swarm** | Mergers & acquisitions analysis | Finance | [View Example](./ma_swarm.md) |
---
## Applications by Category
### Business & Marketing
| Application | Description | Link |
|-------------|-------------|------|
| **Hierarchical Marketing Team** | Complete marketing strategy system | [View Example](./marketing_team.md) |
| **Hiring Swarm** | End-to-end recruiting automation | [View Example](./hiring_swarm.md) |
| **M&A Advisory Swarm** | Due diligence and analysis | [View Example](./ma_swarm.md) |
### Financial Analysis
| Application | Description | Link |
|-------------|-------------|------|
| **Gold ETF Research** | Comprehensive ETF analysis | [View Example](./gold_etf_research.md) |
### Research & Automation
| Application | Description | Link |
|-------------|-------------|------|
| **Advanced Research** | Multi-source research compilation | [View Example](./av.md) |
| **Browser Agents** | Automated web interaction | [View Example](../swarms/examples/swarms_of_browser_agents.md) |
| **Job Finding Swarm** | Career opportunity discovery | [View Example](./job_finding.md) |
### Real Estate
| Application | Description | Link |
|-------------|-------------|------|
| **Real Estate Swarm** | Property market analysis | [View Example](./realestate_swarm.md) |
---
## Related Resources
- [HierarchicalSwarm Documentation](../swarms/structs/hierarchical_swarm.md)
- [HeavySwarm Documentation](../swarms/structs/heavy_swarm.md)
- [Building Custom Swarms](../swarms/structs/custom_swarm.md)
- [Deployment Solutions](../deployment_solutions/overview.md)

@ -0,0 +1,29 @@
# Apps Examples Overview
Complete application examples built with Swarms. These examples show how to build practical tools and utilities with AI agents.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **Web Scraping** | Building intelligent web scrapers |
| **Database Integration** | Smart database query agents |
| **Practical Tools** | End-to-end application development |
---
## App Examples
| App | Description | Link |
|-----|-------------|------|
| **Web Scraper Agents** | Intelligent web data extraction | [View Example](../developer_guides/web_scraper.md) |
| **Smart Database** | AI-powered database interactions | [View Example](./smart_database.md) |
---
## Related Resources
- [Tools & Integrations](./tools_integrations_overview.md) - External service connections
- [Multi-Agent Architectures](./multi_agent_architectures_overview.md) - Complex agent systems
- [Deployment Solutions](../deployment_solutions/overview.md) - Production deployment

@ -0,0 +1,80 @@
# Basic Examples Overview
Start your Swarms journey with single-agent examples. Learn how to create agents, use tools, process images, integrate with different LLM providers, and publish to the marketplace.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **Agent Basics** | Create and configure individual agents |
| **Tool Integration** | Equip agents with callable tools and functions |
| **Vision Capabilities** | Process images and multi-modal inputs |
| **LLM Providers** | Connect to OpenAI, Anthropic, Groq, and more |
| **Utilities** | Streaming, output types, and marketplace publishing |
---
## Individual Agent Examples
### Core Agent Usage
| Example | Description | Link |
|---------|-------------|------|
| **Basic Agent** | Fundamental agent creation and execution | [View Example](../swarms/examples/basic_agent.md) |
### Tool Usage
| Example | Description | Link |
|---------|-------------|------|
| **Agents with Vision and Tool Usage** | Combine vision and tools in one agent | [View Example](../swarms/examples/vision_tools.md) |
| **Agents with Callable Tools** | Equip agents with Python functions as tools | [View Example](../swarms/examples/agent_with_tools.md) |
| **Agent with Structured Outputs** | Get consistent JSON/structured responses | [View Example](../swarms/examples/agent_structured_outputs.md) |
| **Message Transforms** | Manage context with message transformations | [View Example](../swarms/structs/transforms.md) |
### Vision & Multi-Modal
| Example | Description | Link |
|---------|-------------|------|
| **Agents with Vision** | Process and analyze images | [View Example](../swarms/examples/vision_processing.md) |
| **Agent with Multiple Images** | Handle multiple images in one request | [View Example](../swarms/examples/multiple_images.md) |
### Utilities
| Example | Description | Link |
|---------|-------------|------|
| **Agent with Streaming** | Stream responses in real-time | [View Example](./agent_stream.md) |
| **Agent Output Types** | Different output formats (str, json, dict, yaml) | [View Example](../swarms/examples/agent_output_types.md) |
| **Gradio Chat Interface** | Build chat UIs for your agents | [View Example](../swarms/ui/main.md) |
| **Agent with Gemini Nano Banana** | Jarvis-style agent example | [View Example](../swarms/examples/jarvis_agent.md) |
| **Agent Marketplace Publishing** | Publish agents to the Swarms marketplace | [View Example](./marketplace_publishing_quickstart.md) |
---
## LLM Provider Examples
Connect your agents to various language model providers:
| Provider | Description | Link |
|----------|-------------|------|
| **Overview** | Guide to all supported providers | [View Guide](../swarms/examples/model_providers.md) |
| **OpenAI** | GPT-4, GPT-4o, GPT-4o-mini integration | [View Example](../swarms/examples/openai_example.md) |
| **Anthropic** | Claude models integration | [View Example](../swarms/examples/claude.md) |
| **Groq** | Ultra-fast inference with Groq | [View Example](../swarms/examples/groq.md) |
| **Cohere** | Cohere Command models | [View Example](../swarms/examples/cohere.md) |
| **DeepSeek** | DeepSeek models integration | [View Example](../swarms/examples/deepseek.md) |
| **Ollama** | Local models with Ollama | [View Example](../swarms/examples/ollama.md) |
| **OpenRouter** | Access multiple providers via OpenRouter | [View Example](../swarms/examples/openrouter.md) |
| **XAI** | Grok models from xAI | [View Example](../swarms/examples/xai.md) |
| **Azure OpenAI** | Enterprise Azure deployment | [View Example](../swarms/examples/azure.md) |
| **Llama4** | Meta's Llama 4 models | [View Example](../swarms/examples/llama4.md) |
| **Custom Base URL** | Connect to any OpenAI-compatible API | [View Example](../swarms/examples/custom_base_url_example.md) |
---
## Next Steps
After mastering basic agents, explore:
- [Multi-Agent Architectures](./multi_agent_architectures_overview.md) - Coordinate multiple agents
- [Tools Documentation](../swarms/tools/main.md) - Deep dive into tool creation
- [CLI Guides](./cli_guides_overview.md) - Run agents from command line

@ -0,0 +1,47 @@
# CLI Guides Overview
Master the Swarms command-line interface with these step-by-step guides. Execute agents, run multi-agent workflows, and integrate Swarms into your DevOps pipelines—all from your terminal.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **CLI Basics** | Install, configure, and run your first commands |
| **Agent Creation** | Create and run agents directly from command line |
| **YAML Configuration** | Define agents in config files for reproducible deployments |
| **Multi-Agent Commands** | Run LLM Council and Heavy Swarm from terminal |
| **DevOps Integration** | Integrate into CI/CD pipelines and scripts |
---
## CLI Guides
| Guide | Description | Link |
|-------|-------------|------|
| **CLI Quickstart** | Get started with Swarms CLI in 3 steps—install, configure, and run | [View Guide](../swarms/cli/cli_quickstart.md) |
| **Creating Agents from CLI** | Create, configure, and run AI agents directly from your terminal | [View Guide](../swarms/cli/cli_agent_guide.md) |
| **YAML Configuration** | Run multiple agents from YAML configuration files | [View Guide](../swarms/cli/cli_yaml_guide.md) |
| **LLM Council CLI** | Run collaborative multi-agent decision-making from command line | [View Guide](../swarms/cli/cli_llm_council_guide.md) |
| **Heavy Swarm CLI** | Execute comprehensive task analysis swarms from terminal | [View Guide](../swarms/cli/cli_heavy_swarm_guide.md) |
| **CLI Multi-Agent Commands** | Complete guide to multi-agent CLI commands | [View Guide](./cli_multi_agent_quickstart.md) |
| **CLI Examples** | Additional CLI usage examples and patterns | [View Guide](../swarms/cli/cli_examples.md) |
---
## Use Cases
| Use Case | Recommended Guide |
|----------|-------------------|
| First time using CLI | [CLI Quickstart](../swarms/cli/cli_quickstart.md) |
| Creating custom agents | [Creating Agents from CLI](../swarms/cli/cli_agent_guide.md) |
| Team/production deployments | [YAML Configuration](../swarms/cli/cli_yaml_guide.md) |
| Collaborative decision-making | [LLM Council CLI](../swarms/cli/cli_llm_council_guide.md) |
| Complex research tasks | [Heavy Swarm CLI](../swarms/cli/cli_heavy_swarm_guide.md) |
---
## Related Resources
- [CLI Reference Documentation](../swarms/cli/cli_reference.md) - Complete command reference
- [Agent Documentation](../swarms/structs/agent.md) - Agent class reference
- [Environment Configuration](../swarms/install/env.md) - Environment setup guide

@ -0,0 +1,215 @@
# CLI Multi-Agent Features: 3-Step Quickstart Guide
Run LLM Council and Heavy Swarm directly from the command line for seamless DevOps integration. Execute sophisticated multi-agent workflows without writing Python code.
## Overview
| Feature | Description |
|---------|-------------|
| **LLM Council CLI** | Run collaborative decision-making from terminal |
| **Heavy Swarm CLI** | Execute comprehensive research swarms |
| **DevOps Ready** | Integrate into CI/CD pipelines and scripts |
| **Configurable** | Full parameter control from command line |
---
## Step 1: Install and Verify
Ensure Swarms is installed and verify CLI access:
```bash
# Install swarms
pip install swarms
# Verify CLI is available
swarms --help
```
You should see the Swarms CLI banner and available commands.
---
## Step 2: Set Environment Variables
Configure your API keys:
```bash
# Set your OpenAI API key (or other provider)
export OPENAI_API_KEY="your-openai-api-key"
# Optional: Set workspace directory
export WORKSPACE_DIR="./agent_workspace"
```
Or add to your `.env` file:
```
OPENAI_API_KEY=your-openai-api-key
WORKSPACE_DIR=./agent_workspace
```
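A script wrapping the CLI might resolve these settings itself. The helper below is hypothetical (not part of the swarms CLI) and assumes the common convention that a real environment variable takes precedence over a `.env` entry, which in turn takes precedence over a default:

```python
# Hypothetical helper -- not part of the swarms CLI. Resolution order
# assumed here: os.environ, then .env file, then the supplied default.
import os
from typing import Optional


def load_env_file(path: str = ".env") -> dict:
    """Parse simple KEY=value lines from a .env file, if it exists."""
    values = {}
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    values[key.strip()] = value.strip()
    return values


def resolve(key: str, default: Optional[str] = None, path: str = ".env") -> Optional[str]:
    return os.environ.get(key) or load_env_file(path).get(key) or default


workspace = resolve("WORKSPACE_DIR", default="./agent_workspace")
```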
---
## Step 3: Run Multi-Agent Commands
### LLM Council
Run a collaborative council of AI agents:
```bash
# Basic usage
swarms llm-council --task "What is the best approach to implement microservices architecture?"
# With verbose output
swarms llm-council --task "Evaluate investment opportunities in AI startups" --verbose
```
### Heavy Swarm
Run comprehensive research and analysis:
```bash
# Basic usage
swarms heavy-swarm --task "Analyze the current state of quantum computing"
# With configuration options
swarms heavy-swarm \
--task "Research renewable energy market trends" \
--loops-per-agent 2 \
--question-agent-model-name gpt-4o-mini \
--worker-model-name gpt-4o-mini \
--verbose
```
---
## Complete CLI Reference
### LLM Council Command
```bash
swarms llm-council --task "<your query>" [options]
```
| Option | Description |
|--------|-------------|
| `--task` | **Required.** The query or question for the council |
| `--verbose` | Enable detailed output logging |
**Examples:**
```bash
# Strategic decision
swarms llm-council --task "Should our startup pivot from B2B to B2C?"
# Technical evaluation
swarms llm-council --task "Compare React vs Vue for enterprise applications"
# Business analysis
swarms llm-council --task "What are the risks of expanding to European markets?"
```
---
### Heavy Swarm Command
```bash
swarms heavy-swarm --task "<your task>" [options]
```
| Option | Default | Description |
|--------|---------|-------------|
| `--task` | - | **Required.** The research task |
| `--loops-per-agent` | 1 | Number of loops per agent |
| `--question-agent-model-name` | gpt-4o-mini | Model for question agent |
| `--worker-model-name` | gpt-4o-mini | Model for worker agents |
| `--random-loops-per-agent` | False | Randomize loops per agent |
| `--verbose` | False | Enable detailed output |
**Examples:**
```bash
# Comprehensive research
swarms heavy-swarm --task "Research the impact of AI on healthcare diagnostics" --verbose
# With custom models
swarms heavy-swarm \
--task "Analyze cryptocurrency regulation trends globally" \
--question-agent-model-name gpt-4 \
--worker-model-name gpt-4 \
--loops-per-agent 3
# Quick analysis
swarms heavy-swarm --task "Summarize recent advances in battery technology"
```
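When building your own tooling around these commands, the options table can be mirrored with `argparse`. This is a hypothetical sketch for scripting purposes, not the swarms CLI's internal parser; defaults follow the table above:

```python
# Hypothetical argparse mirror of the heavy-swarm flags -- useful when
# wrapping the CLI in your own tooling. Not the swarms CLI's own parser.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="heavy-swarm")
    parser.add_argument("--task", required=True, help="The research task")
    parser.add_argument("--loops-per-agent", type=int, default=1)
    parser.add_argument("--question-agent-model-name", default="gpt-4o-mini")
    parser.add_argument("--worker-model-name", default="gpt-4o-mini")
    parser.add_argument("--random-loops-per-agent", action="store_true")
    parser.add_argument("--verbose", action="store_true")
    return parser


args = build_parser().parse_args(
    ["--task", "Research renewable energy trends", "--loops-per-agent", "2"]
)
print(args.loops_per_agent)  # -> 2
```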
---
## Other Useful CLI Commands
### Setup Check
Verify your environment is properly configured:
```bash
swarms setup-check --verbose
```
### Run Single Agent
Execute a single agent task:
```bash
swarms agent \
--name "Research-Agent" \
--task "Summarize recent AI developments" \
--model "gpt-4o-mini" \
--max-loops 1
```
### Auto Swarm
Automatically generate and run a swarm configuration:
```bash
swarms autoswarm --task "Build a content analysis pipeline" --model gpt-4
```
### Show All Commands
Display all available CLI features:
```bash
swarms show-all
```
---
## Troubleshooting
### Common Issues
| Issue | Solution |
|-------|----------|
| "Command not found" | Ensure `pip install swarms` completed successfully |
| "API key not set" | Export `OPENAI_API_KEY` environment variable |
| "Task cannot be empty" | Always provide `--task` argument |
| Timeout errors | Check network connectivity and API rate limits |
### Debug Mode
Run with verbose output for debugging:
```bash
swarms llm-council --task "Your query" --verbose 2>&1 | tee debug.log
```
---
## Next Steps
- Explore [CLI Reference Documentation](../swarms/cli/cli_reference.md) for all commands
- See [CLI Examples](../swarms/cli/cli_examples.md) for more use cases
- Learn about [LLM Council](./llm_council_quickstart.md) Python API
- Try [Heavy Swarm Documentation](../swarms/structs/heavy_swarm.md) for advanced configuration

@ -0,0 +1,233 @@
# DebateWithJudge: 3-Step Quickstart Guide
The DebateWithJudge architecture enables structured debates between two agents (Pro and Con) with a Judge providing refined synthesis over multiple rounds. This creates progressively improved answers through iterative argumentation and evaluation.
## Overview
| Feature | Description |
|---------|-------------|
| **Pro Agent** | Argues in favor of a position with evidence and reasoning |
| **Con Agent** | Presents counter-arguments and identifies weaknesses |
| **Judge Agent** | Evaluates both sides and synthesizes the best elements |
| **Iterative Refinement** | Multiple rounds progressively improve the final answer |
```
Agent A (Pro) ↔ Agent B (Con)
│ │
▼ ▼
Judge / Critic Agent
Winner or synthesis → refined answer
```
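The loop in the diagram can be sketched with plain functions standing in for the agents. This is a stub, not the DebateWithJudge implementation: the round structure (Pro and Con argue each round, the Judge synthesizes, and the synthesis seeds the next round) is an assumption based on the diagram above.

```python
# Stub sketch of the debate loop. The "agents" are plain functions, not
# swarms Agents; the round structure is assumed from the diagram above.
from typing import Callable

AgentFn = Callable[[str], str]


def debate_with_judge(
    pro: AgentFn, con: AgentFn, judge: AgentFn, task: str, max_loops: int = 3
) -> str:
    context = task
    for round_num in range(1, max_loops + 1):
        pro_arg = pro(context)
        con_arg = con(context)
        # The Judge sees both sides and refines the running answer,
        # which becomes the context for the next round.
        context = judge(f"round {round_num}: PRO={pro_arg} | CON={con_arg}")
    return context


result = debate_with_judge(
    pro=lambda c: f"for({c})",
    con=lambda c: f"against({c})",
    judge=lambda c: f"synthesis[{c}]",
    task="regulate AI?",
    max_loops=2,
)
```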
---
## Step 1: Install and Import
Ensure you have Swarms installed and import the DebateWithJudge class:
```bash
pip install swarms
```
```python
from swarms import DebateWithJudge
```
---
## Step 2: Create the Debate System
Create a DebateWithJudge system using preset agents (the simplest approach):
```python
# Create debate system with preset optimized agents
debate = DebateWithJudge(
preset_agents=True, # Use built-in optimized agents
max_loops=3, # 3 rounds of debate
model_name="gpt-4o-mini",
verbose=True
)
```
---
## Step 3: Run the Debate
Execute the debate on a topic:
```python
# Define the debate topic
topic = "Should artificial intelligence be regulated by governments?"
# Run the debate
result = debate.run(task=topic)
# Print the refined answer
print(result)
# Or get just the final synthesis
final_answer = debate.get_final_answer()
print(final_answer)
```
---
## Complete Example
Here's a complete working example:
```python
from swarms import DebateWithJudge
# Step 1: Create the debate system with preset agents
debate_system = DebateWithJudge(
preset_agents=True,
max_loops=3,
model_name="gpt-4o-mini",
output_type="str-all-except-first",
verbose=True,
)
# Step 2: Define a complex topic
topic = (
"Should artificial intelligence be regulated by governments? "
"Discuss the balance between innovation and safety."
)
# Step 3: Run the debate and get refined answer
result = debate_system.run(task=topic)
print("=" * 60)
print("DEBATE RESULT:")
print("=" * 60)
print(result)
# Access conversation history for detailed analysis
history = debate_system.get_conversation_history()
print(f"\nTotal exchanges: {len(history)}")
```
---
## Custom Agents Example
Create specialized agents for domain-specific debates:
```python
from swarms import Agent, DebateWithJudge
# Create specialized Pro agent
pro_agent = Agent(
agent_name="Innovation-Advocate",
system_prompt=(
"You are a technology policy expert arguing for innovation and minimal regulation. "
"You present arguments focusing on economic growth, technological competitiveness, "
"and the risks of over-regulation stifling progress."
),
model_name="gpt-4o-mini",
max_loops=1,
)
# Create specialized Con agent
con_agent = Agent(
agent_name="Safety-Advocate",
system_prompt=(
"You are a technology policy expert arguing for strong AI safety regulations. "
"You present arguments focusing on public safety, ethical considerations, "
"and the need for government oversight of powerful technologies."
),
model_name="gpt-4o-mini",
max_loops=1,
)
# Create specialized Judge agent
judge_agent = Agent(
agent_name="Policy-Analyst",
system_prompt=(
"You are an impartial policy analyst evaluating technology regulation debates. "
"You synthesize the strongest arguments from both sides and provide "
"balanced, actionable policy recommendations."
),
model_name="gpt-4o-mini",
max_loops=1,
)
# Create debate system with custom agents
debate = DebateWithJudge(
agents=[pro_agent, con_agent, judge_agent], # Pass as list
max_loops=3,
verbose=True,
)
result = debate.run("Should AI-generated content require mandatory disclosure labels?")
```
---
## Batch Processing
Process multiple debate topics:
```python
from swarms import DebateWithJudge
debate = DebateWithJudge(preset_agents=True, max_loops=2)
# Multiple topics to debate
topics = [
"Should remote work become the standard for knowledge workers?",
"Is cryptocurrency a viable alternative to traditional banking?",
"Should social media platforms be held accountable for content moderation?",
]
# Process all topics
results = debate.batched_run(topics)
for topic, result in zip(topics, results):
print(f"\nTopic: {topic}")
print(f"Result: {result[:200]}...")
```
---
## Configuration Options
| Parameter | Default | Description |
|-----------|---------|-------------|
| `preset_agents` | `False` | Use built-in optimized agents |
| `max_loops` | `3` | Number of debate rounds |
| `model_name` | `"gpt-4o-mini"` | Model for preset agents |
| `output_type` | `"str-all-except-first"` | Output format |
| `verbose` | `True` | Enable detailed logging |
### Output Types
| Value | Description |
|-------|-------------|
| `"str-all-except-first"` | Formatted string, excluding initialization (default) |
| `"str"` | All messages as formatted string |
| `"dict"` | Messages as dictionary |
| `"list"` | Messages as list |
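As a rough mental model, the output types correspond to different views of the same message history. The toy formatter below is illustrative only; DebateWithJudge's actual formatting may differ:

```python
# Toy formatter, illustrative only -- the real DebateWithJudge output
# formatting may differ from these approximations.
from typing import Dict, List, Union

Messages = List[Dict[str, str]]


def format_output(messages: Messages, output_type: str) -> Union[str, list, dict]:
    if output_type == "str":
        return "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    if output_type == "str-all-except-first":
        # Skip the initialization message.
        return "\n".join(f"{m['role']}: {m['content']}" for m in messages[1:])
    if output_type == "dict":
        return {i: m for i, m in enumerate(messages)}
    if output_type == "list":
        return list(messages)
    raise ValueError(f"Unknown output_type: {output_type!r}")


msgs = [
    {"role": "system", "content": "init"},
    {"role": "pro", "content": "argument"},
    {"role": "judge", "content": "synthesis"},
]
print(format_output(msgs, "str-all-except-first"))
```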
---
## Use Cases
| Domain | Example Topic |
|--------|---------------|
| **Policy** | "Should universal basic income be implemented?" |
| **Technology** | "Microservices vs. monolithic architecture for startups?" |
| **Business** | "Should companies prioritize growth or profitability?" |
| **Ethics** | "Is it ethical to use AI in hiring decisions?" |
| **Science** | "Should gene editing be allowed for non-medical purposes?" |
---
## Next Steps
- Explore [DebateWithJudge Reference](../swarms/structs/debate_with_judge.md) for complete API details
- See [Debate Examples](https://github.com/kyegomez/swarms/tree/master/examples/multi_agent/debate_examples) for more use cases
- Learn about [Orchestration Methods](../swarms/structs/orchestration_methods.md) for other debate architectures

@ -0,0 +1,327 @@
# GraphWorkflow with Rustworkx: 3-Step Quickstart Guide
GraphWorkflow provides a powerful workflow orchestration system that creates directed graphs of agents for complex multi-agent collaboration. The new **Rustworkx integration** delivers 5-10x faster performance for large-scale workflows.
## Overview
| Feature | Description |
|---------|-------------|
| **Directed Graph Structure** | Nodes are agents, edges define data flow |
| **Dual Backend Support** | NetworkX (compatibility) or Rustworkx (performance) |
| **Parallel Execution** | Multiple agents run simultaneously within layers |
| **Automatic Compilation** | Optimizes workflow structure for efficient execution |
| **5-10x Performance** | Rustworkx backend for high-throughput workflows |
---
## Step 1: Install and Import
Install Swarms and Rustworkx for high-performance workflows:
```bash
pip install swarms rustworkx
```
```python
from swarms import Agent, GraphWorkflow
```
---
## Step 2: Create the Workflow with Rustworkx Backend
Create agents and build a workflow using the high-performance Rustworkx backend:
```python
# Create specialized agents
research_agent = Agent(
agent_name="ResearchAgent",
model_name="gpt-4o-mini",
system_prompt="You are a research specialist. Gather and analyze information.",
max_loops=1
)
analysis_agent = Agent(
agent_name="AnalysisAgent",
model_name="gpt-4o-mini",
system_prompt="You are an analyst. Process research findings and extract insights.",
max_loops=1
)
# Create workflow with rustworkx backend for better performance
workflow = GraphWorkflow(
name="Research-Analysis-Pipeline",
backend="rustworkx", # Use rustworkx for 5-10x faster performance
verbose=True
)
# Add agents as nodes
workflow.add_node(research_agent)
workflow.add_node(analysis_agent)
# Connect agents with edges
workflow.add_edge("ResearchAgent", "AnalysisAgent")
```
---
## Step 3: Execute the Workflow
Run the workflow and get results:
```python
# Execute the workflow
results = workflow.run("What are the latest trends in renewable energy technology?")
# Print results
print(results)
```
---
## Complete Example
Here's a complete parallel processing workflow:
```python
from swarms import Agent, GraphWorkflow
# Step 1: Create specialized agents
data_collector = Agent(
agent_name="DataCollector",
model_name="gpt-4o-mini",
system_prompt="You collect and organize data from various sources.",
max_loops=1
)
technical_analyst = Agent(
agent_name="TechnicalAnalyst",
model_name="gpt-4o-mini",
system_prompt="You perform technical analysis on data.",
max_loops=1
)
market_analyst = Agent(
agent_name="MarketAnalyst",
model_name="gpt-4o-mini",
system_prompt="You analyze market trends and conditions.",
max_loops=1
)
synthesis_agent = Agent(
agent_name="SynthesisAgent",
model_name="gpt-4o-mini",
system_prompt="You synthesize insights from multiple analysts into a cohesive report.",
max_loops=1
)
# Step 2: Build workflow with rustworkx backend
workflow = GraphWorkflow(
name="Market-Analysis-Pipeline",
backend="rustworkx", # High-performance backend
verbose=True
)
# Add all agents
for agent in [data_collector, technical_analyst, market_analyst, synthesis_agent]:
workflow.add_node(agent)
# Create fan-out pattern: data collector feeds both analysts
workflow.add_edges_from_source(
"DataCollector",
["TechnicalAnalyst", "MarketAnalyst"]
)
# Create fan-in pattern: both analysts feed synthesis agent
workflow.add_edges_to_target(
["TechnicalAnalyst", "MarketAnalyst"],
"SynthesisAgent"
)
# Step 3: Execute and get results
results = workflow.run("Analyze Bitcoin market trends for Q4 2024")
print("=" * 60)
print("WORKFLOW RESULTS:")
print("=" * 60)
print(results)
# Get compilation status
status = workflow.get_compilation_status()
print(f"\nLayers: {status['cached_layers_count']}")
print(f"Max workers: {status['max_workers']}")
```
---
## NetworkX vs Rustworkx Backend
| Graph Size | Recommended Backend | Performance |
|------------|-------------------|-------------|
| < 100 nodes | NetworkX | Minimal overhead |
| 100-1000 nodes | Either | Both perform well |
| 1000+ nodes | **Rustworkx** | 5-10x faster |
| 10k+ nodes | **Rustworkx** | Essential |
```python
# NetworkX backend (default, maximum compatibility)
workflow = GraphWorkflow(backend="networkx")
# Rustworkx backend (high performance)
workflow = GraphWorkflow(backend="rustworkx")
```
---
## Edge Patterns
### Fan-Out (One-to-Many)
```python
# One agent feeds multiple agents
workflow.add_edges_from_source(
"DataCollector",
["Analyst1", "Analyst2", "Analyst3"]
)
```
### Fan-In (Many-to-One)
```python
# Multiple agents feed one agent
workflow.add_edges_to_target(
["Analyst1", "Analyst2", "Analyst3"],
"SynthesisAgent"
)
```
### Parallel Chain (Many-to-Many)
```python
# Full mesh connection
workflow.add_parallel_chain(
["Source1", "Source2"],
["Target1", "Target2", "Target3"]
)
```
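Under the hood, edge patterns like these determine which agents can run in parallel: nodes with no unmet dependencies form a layer, executing that layer unlocks the next, and so on. The sketch below shows Kahn-style topological layering as a mental model; it is illustrative only, not GraphWorkflow's actual compiler.

```python
# Illustrative Kahn-style topological layering -- a mental model for how a
# directed agent graph splits into parallel layers, not GraphWorkflow's
# actual compilation logic.
from collections import defaultdict
from typing import Dict, List, Tuple


def execution_layers(edges: List[Tuple[str, str]]) -> List[List[str]]:
    indegree: Dict[str, int] = defaultdict(int)
    successors: Dict[str, List[str]] = defaultdict(list)
    nodes = set()
    for src, dst in edges:
        nodes.update((src, dst))
        successors[src].append(dst)
        indegree[dst] += 1
    # First layer: nodes with no incoming edges.
    layer = sorted(n for n in nodes if indegree[n] == 0)
    layers = []
    while layer:
        layers.append(layer)
        next_layer = set()
        for node in layer:
            for succ in successors[node]:
                indegree[succ] -= 1
                if indegree[succ] == 0:
                    next_layer.add(succ)
        layer = sorted(next_layer)
    return layers


# Fan-out then fan-in: DataCollector -> two analysts -> SynthesisAgent
print(execution_layers([
    ("DataCollector", "TechnicalAnalyst"),
    ("DataCollector", "MarketAnalyst"),
    ("TechnicalAnalyst", "SynthesisAgent"),
    ("MarketAnalyst", "SynthesisAgent"),
]))
# -> [['DataCollector'], ['MarketAnalyst', 'TechnicalAnalyst'], ['SynthesisAgent']]
```

Both analysts land in the same layer, which is exactly where a workflow engine can execute them concurrently.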
---
## Using from_spec for Quick Setup
Create workflows quickly with the `from_spec` class method:
```python
from swarms import Agent, GraphWorkflow
# Create agents
agent1 = Agent(agent_name="Researcher", model_name="gpt-4o-mini", max_loops=1)
agent2 = Agent(agent_name="Analyzer", model_name="gpt-4o-mini", max_loops=1)
agent3 = Agent(agent_name="Reporter", model_name="gpt-4o-mini", max_loops=1)
# Create workflow from specification
workflow = GraphWorkflow.from_spec(
agents=[agent1, agent2, agent3],
edges=[
("Researcher", "Analyzer"),
("Analyzer", "Reporter"),
],
task="Analyze climate change data",
backend="rustworkx" # Use high-performance backend
)
results = workflow.run()
```
---
## Visualization
Generate visual representations of your workflow:
```python
# Create visualization (requires graphviz)
output_file = workflow.visualize(
format="png",
view=True,
show_summary=True
)
print(f"Visualization saved to: {output_file}")
# Simple text visualization
text_viz = workflow.visualize_simple()
print(text_viz)
```
---
## Serialization
Save and load workflows:
```python
# Save workflow with conversation history
workflow.save_to_file(
"my_workflow.json",
include_conversation=True,
include_runtime_state=True
)
# Load workflow later
loaded_workflow = GraphWorkflow.load_from_file(
"my_workflow.json",
restore_runtime_state=True
)
# Continue execution
results = loaded_workflow.run("Follow-up analysis")
```
---
## Large-Scale Example with Rustworkx
```python
from swarms import Agent, GraphWorkflow
# Create workflow for large-scale processing
workflow = GraphWorkflow(
name="Large-Scale-Pipeline",
backend="rustworkx", # Essential for large graphs
verbose=True
)
# Create many processing agents
processors = []
for i in range(50):
agent = Agent(
agent_name=f"Processor{i}",
model_name="gpt-4o-mini",
max_loops=1
)
processors.append(agent)
workflow.add_node(agent)
# Create layered connections
for i in range(0, 40, 10):
sources = [f"Processor{j}" for j in range(i, i+10)]
targets = [f"Processor{j}" for j in range(i+10, min(i+20, 50))]
if targets:
workflow.add_parallel_chain(sources, targets)
# Compile and execute
workflow.compile()
status = workflow.get_compilation_status()
print(f"Compiled: {status['cached_layers_count']} layers")
results = workflow.run("Process dataset in parallel")
```
---
## Next Steps
- Explore [GraphWorkflow Reference](../swarms/structs/graph_workflow.md) for complete API details
- See [Multi-Agentic Patterns with GraphWorkflow](./graphworkflow_rustworkx_patterns.md) for advanced patterns
- Learn about [Visualization Options](../swarms/structs/graph_workflow.md#visualization-methods) for debugging workflows

@ -0,0 +1,170 @@
# LLM Council: 3-Step Quickstart Guide
The LLM Council enables collaborative decision-making with multiple AI agents through peer review and synthesis. Inspired by Andrej Karpathy's llm-council, it creates a council of specialized agents that respond independently, review each other's anonymized responses, and have a Chairman synthesize the best elements into a final answer.
## Overview
| Feature | Description |
|---------|-------------|
| **Multiple Perspectives** | Each council member provides unique insights from different viewpoints |
| **Peer Review** | Members evaluate and rank each other's responses anonymously |
| **Synthesis** | Chairman combines the best elements from all responses |
| **Transparency** | See both individual responses and evaluation rankings |
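The flow above can be sketched with plain functions standing in for LLM agents: members answer independently, responses are anonymized before peer ranking, and a chairman synthesizes. This is a stub, not the LLMCouncil implementation; in particular the ranking heuristic here (longer answer ranks higher) is a placeholder assumption.

```python
# Stub sketch of the council flow. Members are plain functions, and the
# "longest response wins" ranking is a placeholder assumption, not the
# real peer-review evaluation.
from typing import Callable, Dict, List

MemberFn = Callable[[str], str]


def run_council(
    members: Dict[str, MemberFn],
    chairman: Callable[[List[str]], str],
    query: str,
) -> dict:
    # 1. Each member responds independently.
    responses = {name: fn(query) for name, fn in members.items()}
    # 2. Anonymize before peer review.
    anonymized = {f"Response-{i}": text for i, text in enumerate(responses.values())}
    # 3. Placeholder ranking heuristic: longer answers rank higher.
    rankings = sorted(anonymized, key=lambda k: len(anonymized[k]), reverse=True)
    # 4. Chairman synthesizes the ranked responses into a final answer.
    final = chairman([anonymized[k] for k in rankings])
    return {
        "original_responses": responses,
        "evaluations": rankings,
        "final_response": final,
    }


result = run_council(
    members={
        "a": lambda q: f"short on {q}",
        "b": lambda q: f"a much longer take on {q}",
    },
    chairman=lambda ranked: " | ".join(ranked),
    query="cloud providers",
)
```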
---
## Step 1: Install and Import
First, ensure you have Swarms installed and import the LLMCouncil class:
```bash
pip install swarms
```
```python
from swarms.structs.llm_council import LLMCouncil
```
---
## Step 2: Create the Council
Create an LLM Council with default council members (GPT-5.1, Gemini 3 Pro, Claude Sonnet 4.5, and Grok-4):
```python
# Create the council with default members
council = LLMCouncil(
name="Decision Council",
verbose=True,
output_type="dict-all-except-first"
)
```
---
## Step 3: Run a Query
Execute a query and get the synthesized response:
```python
# Run a query
result = council.run("What are the key factors to consider when choosing a cloud provider for enterprise applications?")
# Access the final synthesized answer
print(result["final_response"])
# View individual member responses
print(result["original_responses"])
# See how members ranked each other
print(result["evaluations"])
```
---
## Complete Example
Here's a complete working example:
```python
from swarms.structs.llm_council import LLMCouncil
# Step 1: Create the council
council = LLMCouncil(
name="Strategy Council",
description="A council for strategic decision-making",
verbose=True,
output_type="dict-all-except-first"
)
# Step 2: Run a strategic query
result = council.run(
"Should a B2B SaaS startup prioritize product-led growth or sales-led growth? "
"Consider factors like market size, customer acquisition costs, and scalability."
)
# Step 3: Process results
print("=" * 50)
print("FINAL SYNTHESIZED ANSWER:")
print("=" * 50)
print(result["final_response"])
```
---
## Custom Council Members
For specialized domains, create custom council members:
```python
from swarms import Agent
from swarms.structs.llm_council import LLMCouncil, get_gpt_councilor_prompt
# Create specialized agents
finance_expert = Agent(
agent_name="Finance-Councilor",
system_prompt="You are a financial analyst specializing in market analysis and investment strategies...",
model_name="gpt-4.1",
max_loops=1,
)
tech_expert = Agent(
agent_name="Technology-Councilor",
system_prompt="You are a technology strategist specializing in digital transformation...",
model_name="gpt-4.1",
max_loops=1,
)
risk_expert = Agent(
agent_name="Risk-Councilor",
system_prompt="You are a risk management expert specializing in enterprise risk assessment...",
model_name="gpt-4.1",
max_loops=1,
)
# Create council with custom members
council = LLMCouncil(
council_members=[finance_expert, tech_expert, risk_expert],
chairman_model="gpt-4.1",
verbose=True
)
result = council.run("Evaluate the risk-reward profile of investing in AI infrastructure")
```
---
## CLI Usage
Run LLM Council directly from the command line:
```bash
swarms llm-council --task "What is the best approach to implement microservices architecture?"
```
With verbose output:
```bash
swarms llm-council --task "Analyze the pros and cons of remote work" --verbose
```
---
## Use Cases
| Domain | Example Query |
|--------|---------------|
| **Business Strategy** | "Should we expand internationally or focus on domestic growth?" |
| **Technology** | "Which database architecture best suits our high-throughput requirements?" |
| **Finance** | "Evaluate investment opportunities in the renewable energy sector" |
| **Healthcare** | "What treatment approaches should be considered for this patient profile?" |
| **Legal** | "What are the compliance implications of this data processing policy?" |
---
## Next Steps
- Explore [LLM Council Examples](./llm_council_examples.md) for domain-specific implementations
- Learn about [LLM Council Reference Documentation](../swarms/structs/llm_council.md) for complete API details
- Try the [CLI Reference](../swarms/cli/cli_reference.md) for DevOps integration

@ -0,0 +1,273 @@
# Agent Marketplace Publishing: 3-Step Quickstart Guide
Publish your agents directly to the Swarms Marketplace with minimal configuration. Share your specialized agents with the community and monetize your creations.
## Overview
| Feature | Description |
|---------|-------------|
| **Direct Publishing** | Publish agents with a single flag |
| **Minimal Configuration** | Just add use cases, tags, and capabilities |
| **Automatic Integration** | Seamlessly integrates with marketplace API |
| **Monetization Ready** | Set pricing for your agents |
---
## Step 1: Get Your API Key
Before publishing, you need a Swarms API key:
1. Visit [swarms.world/platform/api-keys](https://swarms.world/platform/api-keys)
2. Create an account or sign in
3. Generate an API key
4. Set the environment variable:
```bash
export SWARMS_API_KEY="your-api-key-here"
```
Or add to your `.env` file:
```
SWARMS_API_KEY=your-api-key-here
```
---
## Step 2: Configure Your Agent
Create an agent with publishing configuration:
```python
from swarms import Agent
# Create your specialized agent
my_agent = Agent(
agent_name="Market-Analysis-Agent",
agent_description="Expert market analyst specializing in cryptocurrency and stock analysis",
model_name="gpt-4o-mini",
system_prompt="""You are an expert market analyst specializing in:
- Cryptocurrency market analysis
- Stock market trends
- Risk assessment
- Portfolio recommendations
Provide data-driven insights with confidence levels.""",
max_loops=1,
# Publishing configuration
publish_to_marketplace=True,
# Required: Define use cases
use_cases=[
{
"title": "Cryptocurrency Analysis",
"description": "Analyze crypto market trends and provide investment insights"
},
{
"title": "Stock Screening",
"description": "Screen stocks based on technical and fundamental criteria"
},
{
"title": "Portfolio Review",
"description": "Review and optimize investment portfolios"
}
],
# Required: Tags and capabilities
tags=["finance", "crypto", "stocks", "analysis"],
capabilities=["market-analysis", "risk-assessment", "portfolio-optimization"]
)
```
---
## Step 3: Run to Publish
Simply run the agent to trigger publishing:
```python
# Running the agent automatically publishes it
result = my_agent.run("Analyze Bitcoin's current market position")
print(result)
print("\n✅ Agent published to marketplace!")
```
---
## Complete Example
Here's a complete working example:
```python
import os
from swarms import Agent
# Ensure API key is set
if not os.getenv("SWARMS_API_KEY"):
raise ValueError("Please set SWARMS_API_KEY environment variable")
# Step 1: Create a specialized medical analysis agent
medical_agent = Agent(
agent_name="Blood-Data-Analysis-Agent",
agent_description="Explains and contextualizes common blood test panels with structured insights",
model_name="gpt-4o-mini",
max_loops=1,
system_prompt="""You are a clinical laboratory data analyst assistant focused on hematology and basic metabolic panels.
Your goals:
1) Interpret common blood test panels (CBC, CMP/BMP, lipid panel, HbA1c, thyroid panels)
2) Provide structured findings: out-of-range markers, degree of deviation, clinical significance
3) Identify potential confounders (e.g., hemolysis, fasting status, medications)
4) Suggest safe, non-diagnostic next steps
Reliability and safety:
- This is not medical advice. Do not diagnose or treat.
- Use cautious language with confidence levels (low/medium/high)
- Highlight red-flag combinations that warrant urgent clinical evaluation""",
# Step 2: Publishing configuration
publish_to_marketplace=True,
tags=["lab", "hematology", "metabolic", "education"],
capabilities=[
"panel-interpretation",
"risk-flagging",
"guideline-citation"
],
use_cases=[
{
"title": "Blood Analysis",
"description": "Analyze blood samples and summarize notable findings."
},
{
"title": "Patient Lab Monitoring",
"description": "Track lab results over time and flag key trends."
},
{
"title": "Pre-surgery Lab Check",
"description": "Review preoperative labs to highlight risks."
}
],
)
# Step 3: Run the agent (this publishes it to the marketplace)
result = medical_agent.run(
task="Analyze this blood sample: Hematology and Basic Metabolic Panel"
)
print(result)
```
---
## Required Fields for Publishing
| Field | Type | Description |
|-------|------|-------------|
| `publish_to_marketplace` | `bool` | Set to `True` to enable publishing |
| `use_cases` | `List[Dict]` | List of use case dictionaries with `title` and `description` |
| `tags` | `List[str]` | Keywords for discovery |
| `capabilities` | `List[str]` | Agent capabilities for matching |
### Use Case Format
```python
use_cases = [
{
"title": "Use Case Title",
"description": "Detailed description of what the agent does for this use case"
},
# Add more use cases...
]
```
---
## Optional: Programmatic Publishing
You can also publish prompts/agents directly using the utility function:
```python
from swarms.utils.swarms_marketplace_utils import add_prompt_to_marketplace
response = add_prompt_to_marketplace(
name="My Custom Agent",
prompt="Your detailed system prompt here...",
description="What this agent does",
use_cases=[
{"title": "Use Case 1", "description": "Description 1"},
{"title": "Use Case 2", "description": "Description 2"}
],
tags="tag1, tag2, tag3",
category="research",
is_free=True, # Set to False for paid agents
price_usd=0.0 # Set price if not free
)
print(response)
```
---
## Marketplace Categories
| Category | Description |
|----------|-------------|
| `research` | Research and analysis agents |
| `content` | Content generation agents |
| `coding` | Programming and development agents |
| `finance` | Financial analysis agents |
| `healthcare` | Medical and health-related agents |
| `education` | Educational and tutoring agents |
| `legal` | Legal research and analysis agents |
---
## Best Practices
!!! tip "Publishing Best Practices"
- **Clear Descriptions**: Write detailed, accurate agent descriptions
- **Multiple Use Cases**: Provide 3-5 distinct use cases
- **Relevant Tags**: Use specific, searchable keywords
- **Test First**: Thoroughly test your agent before publishing
- **System Prompt Quality**: Ensure your system prompt is well-crafted
!!! warning "Important Notes"
- `use_cases` is **required** when `publish_to_marketplace=True`
- Both `tags` and `capabilities` should be provided for discoverability
- The agent must have a valid `SWARMS_API_KEY` set in the environment
---
## Monetization
To create a paid agent:
```python
from swarms.utils.swarms_marketplace_utils import add_prompt_to_marketplace
response = add_prompt_to_marketplace(
name="Premium Analysis Agent",
prompt="Your premium agent prompt...",
description="Advanced analysis capabilities",
use_cases=[...],
tags="premium, advanced",
category="finance",
is_free=False, # Paid agent
price_usd=9.99 # Price per use
)
```
---
## Next Steps
- Visit [Swarms Marketplace](https://swarms.world) to browse published agents
- Learn about [Marketplace Documentation](../swarms_platform/share_and_discover.md)
- Explore [Monetization Options](../swarms_platform/monetize.md)
- See [API Key Management](../swarms_platform/apikeys.md)

@ -0,0 +1,69 @@
# Multi-Agent Architectures Overview
Build sophisticated multi-agent systems with Swarms' advanced orchestration patterns. From hierarchical teams to collaborative councils, these examples demonstrate how to coordinate multiple AI agents for complex tasks.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **Hierarchical Swarms** | Director agents coordinating worker agents |
| **Collaborative Systems** | Agents working together through debate and consensus |
| **Workflow Patterns** | Sequential, concurrent, and graph-based execution |
| **Routing Systems** | Intelligent task routing to specialized agents |
| **Group Interactions** | Multi-agent conversations and discussions |
---
## Architecture Examples
### Hierarchical & Orchestration
| Example | Description | Link |
|---------|-------------|------|
| **HierarchicalSwarm** | Multi-level agent organization with director and workers | [View Example](../swarms/examples/hierarchical_swarm_example.md) |
| **Hybrid Hierarchical-Cluster Swarm** | Combined hierarchical and cluster patterns | [View Example](../swarms/examples/hhcs_examples.md) |
| **SwarmRouter** | Intelligent routing of tasks to appropriate swarms | [View Example](../swarms/examples/swarm_router.md) |
| **MultiAgentRouter** | Route tasks to specialized individual agents | [View Example](../swarms/examples/multi_agent_router_minimal.md) |
### Collaborative & Consensus
| Example | Description | Link |
|---------|-------------|------|
| **LLM Council Quickstart** | Collaborative decision-making with peer review and synthesis | [View Example](./llm_council_quickstart.md) |
| **LLM Council Examples** | Domain-specific council implementations | [View Examples](./llm_council_examples.md) |
| **DebateWithJudge Quickstart** | Two agents debate with judge providing synthesis | [View Example](./debate_quickstart.md) |
| **Mixture of Agents** | Heterogeneous agents for diverse task handling | [View Example](../swarms/examples/moa_example.md) |
### Workflow Patterns
| Example | Description | Link |
|---------|-------------|------|
| **GraphWorkflow with Rustworkx** | High-performance graph-based workflows (5-10x faster) | [View Example](./graphworkflow_quickstart.md) |
| **Multi-Agentic Patterns with GraphWorkflow** | Advanced graph workflow patterns | [View Example](../swarms/examples/graphworkflow_rustworkx_patterns.md) |
| **SequentialWorkflow** | Linear agent pipelines | [View Example](../swarms/examples/sequential_example.md) |
| **ConcurrentWorkflow** | Parallel agent execution | [View Example](../swarms/examples/concurrent_workflow.md) |
### Group Communication
| Example | Description | Link |
|---------|-------------|------|
| **Group Chat** | Multi-agent group conversations | [View Example](../swarms/examples/groupchat_example.md) |
| **Interactive GroupChat** | Real-time interactive agent discussions | [View Example](../swarms/examples/igc_example.md) |
### Specialized Patterns
| Example | Description | Link |
|---------|-------------|------|
| **Agents as Tools** | Use agents as callable tools for other agents | [View Example](../swarms/examples/agents_as_tools.md) |
| **Aggregate Responses** | Combine outputs from multiple agents | [View Example](../swarms/examples/aggregate.md) |
| **Unique Swarms** | Experimental and specialized swarm patterns | [View Example](../swarms/examples/unique_swarms.md) |
| **BatchedGridWorkflow (Simple)** | Grid-based batch processing | [View Example](../swarms/examples/batched_grid_simple_example.md) |
| **BatchedGridWorkflow (Advanced)** | Advanced grid-based batch processing | [View Example](../swarms/examples/batched_grid_advanced_example.md) |
---
## Related Resources
- [Swarm Architectures Concept Guide](../swarms/concept/swarm_architectures.md)
- [Choosing Multi-Agent Architecture](../swarms/concept/how_to_choose_swarms.md)
- [Custom Swarm Development](../swarms/structs/custom_swarm.md)

@ -0,0 +1,39 @@
# RAG Examples Overview
Enhance your agents with Retrieval-Augmented Generation (RAG). Connect to vector databases and knowledge bases to give agents access to your custom data.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **RAG Fundamentals** | Understanding retrieval-augmented generation |
| **Vector Databases** | Connecting to Qdrant, Pinecone, and more |
| **Document Processing** | Ingesting and indexing documents |
| **Semantic Search** | Finding relevant context for queries |
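
The semantic-search step these examples build on can be sketched without a vector database — a toy bag-of-words embedding and cosine similarity stand in for the model-based embeddings and Qdrant index used in the real examples (all names here are illustrative):

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words counts. Real RAG setups use a
    # model-based embedder and a vector DB such as Qdrant.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b[term] for term, count in a.items())
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    denom = norm(a) * norm(b)
    return dot / denom if denom else 0.0

documents = [
    "Qdrant stores vectors for fast semantic search",
    "Our refund policy allows returns within 30 days",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    # Rank every indexed document by similarity to the query.
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# The retrieved context is prepended to the agent's prompt.
context = retrieve("what is the returns policy")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: what is the returns policy?"
print(context)
```

The examples below replace each toy piece with production components: an embedding model for `embed`, a vector database for `index`, and approximate nearest-neighbor search for `retrieve`.
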
---
## RAG Examples
| Example | Description | Vector DB | Link |
|---------|-------------|-----------|------|
| **RAG with Qdrant** | Complete RAG implementation with Qdrant | Qdrant | [View Example](../swarms/RAG/qdrant_rag.md) |
---
## Use Cases
| Use Case | Description |
|----------|-------------|
| **Document Q&A** | Answer questions about your documents |
| **Knowledge Base** | Query internal company knowledge |
| **Research Assistant** | Search through research papers |
| **Code Documentation** | Query codebase documentation |
| **Customer Support** | Access product knowledge |
---
## Related Resources
- [Memory Documentation](../swarms/memory/diy_memory.md) - Building custom memory
- [Agent Long-term Memory](../swarms/structs/agent.md#long-term-memory) - Agent memory configuration

@ -0,0 +1,55 @@
# Tools & Integrations Overview
Extend your agents with powerful integrations. Connect to web search, browser automation, financial data, and Model Context Protocol (MCP) servers.
## What You'll Learn
| Topic | Description |
|-------|-------------|
| **Web Search** | Integrate real-time web search capabilities |
| **Browser Automation** | Control web browsers programmatically |
| **Financial Data** | Access stock and market information |
| **Web Scraping** | Extract data from websites |
| **MCP Integration** | Connect to Model Context Protocol servers |
---
## Integration Examples
### Web Search
| Integration | Description | Link |
|-------------|-------------|------|
| **Exa Search** | AI-powered web search for agents | [View Example](./exa_search.md) |
### Browser Automation
| Integration | Description | Link |
|-------------|-------------|------|
| **Browser Use** | Automated browser control with agents | [View Example](./browser_use.md) |
### Financial Data
| Integration | Description | Link |
|-------------|-------------|------|
| **Yahoo Finance** | Stock data, quotes, and market info | [View Example](../swarms/examples/yahoo_finance.md) |
### Web Scraping
| Integration | Description | Link |
|-------------|-------------|------|
| **Firecrawl** | AI-powered web scraping | [View Example](../developer_guides/firecrawl.md) |
### MCP (Model Context Protocol)
| Integration | Description | Link |
|-------------|-------------|------|
| **Multi-MCP Agent** | Connect agents to multiple MCP servers | [View Example](../swarms/examples/multi_mcp_agent.md) |
---
## Related Resources
- [Tools Documentation](../swarms/tools/main.md) - Building custom tools
- [MCP Integration Guide](../swarms/structs/agent_mcp.md) - Detailed MCP setup
- [swarms-tools Package](../swarms_tools/overview.md) - Pre-built tool collection

@ -356,9 +356,19 @@ nav:
- Paper Implementations: "examples/paper_implementations.md"
- Templates & Applications: "examples/templates.md"
- Community Resources: "examples/community_resources.md"
- CLI Guides:
- Overview: "examples/cli_guides_overview.md"
- CLI Quickstart: "swarms/cli/cli_quickstart.md"
- Creating Agents from CLI: "swarms/cli/cli_agent_guide.md"
- YAML Configuration: "swarms/cli/cli_yaml_guide.md"
- LLM Council CLI: "swarms/cli/cli_llm_council_guide.md"
- Heavy Swarm CLI: "swarms/cli/cli_heavy_swarm_guide.md"
- CLI Multi-Agent Commands: "examples/cli_multi_agent_quickstart.md"
- CLI Examples: "swarms/cli/cli_examples.md"
- Basic Examples:
- Overview: "examples/basic_examples_overview.md"
- Individual Agents:
- Basic Agent: "swarms/examples/basic_agent.md"
- Tool Usage:
@ -374,6 +384,7 @@ nav:
- Agent Output Types: "swarms/examples/agent_output_types.md"
- Gradio Chat Interface: "swarms/ui/main.md"
- Agent with Gemini Nano Banana: "swarms/examples/jarvis_agent.md"
- Agent Marketplace Publishing: "examples/marketplace_publishing_quickstart.md"
- LLM Providers:
- Language Models:
- Overview: "swarms/examples/model_providers.md"
@ -391,7 +402,9 @@ nav:
- Advanced Examples:
- Overview: "examples/multi_agent_architectures_overview.md"
- Multi-Agent Architectures:
- HierarchicalSwarm Examples: "swarms/examples/hierarchical_swarm_example.md"
- Hybrid Hierarchical-Cluster Swarm Example: "swarms/examples/hhcs_examples.md"
@ -407,10 +420,15 @@ nav:
- Agents as Tools: "swarms/examples/agents_as_tools.md"
- Aggregate Multi-Agent Responses: "swarms/examples/aggregate.md"
- Interactive GroupChat Example: "swarms/examples/igc_example.md"
- LLM Council Quickstart: "examples/llm_council_quickstart.md"
- DebateWithJudge Quickstart: "examples/debate_quickstart.md"
- GraphWorkflow with Rustworkx: "examples/graphworkflow_quickstart.md"
- BatchedGridWorkflow Examples:
- Simple BatchedGridWorkflow: "swarms/examples/batched_grid_simple_example.md"
- Advanced BatchedGridWorkflow: "swarms/examples/batched_grid_advanced_example.md"
- Applications:
- Overview: "examples/applications_overview.md"
- Swarms of Browser Agents: "swarms/examples/swarms_of_browser_agents.md"
- Hierarchical Marketing Team: "examples/marketing_team.md"
- Gold ETF Research with HeavySwarm: "examples/gold_etf_research.md"
@ -421,6 +439,7 @@ nav:
- Mergers & Acquisitions (M&A) Advisory Swarm: "examples/ma_swarm.md"
- Tools & Integrations:
- Overview: "examples/tools_integrations_overview.md"
- Web Search with Exa: "examples/exa_search.md"
- Browser Use: "examples/browser_use.md"
- Yahoo Finance: "swarms/examples/yahoo_finance.md"
@ -430,13 +449,16 @@ nav:
- Multi-MCP Agent Integration: "swarms/examples/multi_mcp_agent.md"
- RAG:
- Overview: "examples/rag_examples_overview.md"
- RAG with Qdrant: "swarms/RAG/qdrant_rag.md"
- Apps:
- Overview: "examples/apps_examples_overview.md"
- Web Scraper Agents: "developer_guides/web_scraper.md"
- Smart Database: "examples/smart_database.md"
- AOP:
- Overview: "examples/aop_examples_overview.md"
- Medical AOP Example: "examples/aop_medical.md"
- X402:

@ -0,0 +1,242 @@
# CLI Agent Guide: Create Agents from Command Line
Create, configure, and run AI agents directly from your terminal without writing Python code.
## Basic Agent Creation
### Step 1: Define Your Agent
Create an agent with required parameters:
```bash
swarms agent \
--name "Research-Agent" \
--description "An AI agent that researches topics and provides summaries" \
--system-prompt "You are an expert researcher. Provide comprehensive, well-structured summaries with key insights." \
--task "Research the current state of quantum computing and its applications"
```
### Step 2: Customize Model Settings
Add model configuration options:
```bash
swarms agent \
--name "Code-Reviewer" \
--description "Expert code review assistant" \
--system-prompt "You are a senior software engineer. Review code for best practices, bugs, and improvements." \
--task "Review this Python function for efficiency: def fib(n): return fib(n-1) + fib(n-2) if n > 1 else n" \
--model-name "gpt-4o-mini" \
--temperature 0.1 \
--max-loops 3
```
### Step 3: Enable Advanced Features
Add streaming, dashboard, and autosave:
```bash
swarms agent \
--name "Analysis-Agent" \
--description "Data analysis specialist" \
--system-prompt "You are a data analyst. Provide detailed statistical analysis and insights." \
--task "Analyze market trends for electric vehicles in 2024" \
--model-name "gpt-4" \
--streaming-on \
--verbose \
--autosave \
--saved-state-path "./agent_states/analysis_agent.json"
```
---
## Complete Parameter Reference
### Required Parameters
| Parameter | Description | Example |
|-----------|-------------|---------|
| `--name` | Agent name | `"Research-Agent"` |
| `--description` | Agent description | `"AI research assistant"` |
| `--system-prompt` | Agent's system instructions | `"You are an expert..."` |
| `--task` | Task for the agent | `"Analyze this data"` |
### Model Parameters
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--model-name` | `"gpt-4"` | LLM model to use |
| `--temperature` | `None` | Creativity (0.0-2.0) |
| `--max-loops` | `None` | Maximum execution loops |
| `--context-length` | `None` | Context window size |
### Behavior Parameters
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--auto-generate-prompt` | `False` | Auto-generate prompts |
| `--dynamic-temperature-enabled` | `False` | Dynamic temperature adjustment |
| `--dynamic-context-window` | `False` | Dynamic context window |
| `--streaming-on` | `False` | Enable streaming output |
| `--verbose` | `False` | Verbose mode |
### State Management
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--autosave` | `False` | Enable autosave |
| `--saved-state-path` | `None` | Path to save state |
| `--dashboard` | `False` | Enable dashboard |
| `--return-step-meta` | `False` | Return step metadata |
### Integration
| Parameter | Default | Description |
|-----------|---------|-------------|
| `--mcp-url` | `None` | MCP server URL |
| `--user-name` | `None` | Username for agent |
| `--output-type` | `None` | Output format (str, json) |
| `--retry-attempts` | `None` | Retry attempts on failure |
---
## Use Case Examples
### Financial Analyst Agent
```bash
swarms agent \
--name "Financial-Analyst" \
--description "Expert financial analysis and market insights" \
--system-prompt "You are a CFA-certified financial analyst. Provide detailed market analysis with data-driven insights. Include risk assessments and recommendations." \
--task "Analyze Apple (AAPL) stock performance and provide investment outlook for Q4 2024" \
--model-name "gpt-4" \
--temperature 0.2 \
--max-loops 5 \
--verbose
```
### Code Generation Agent
```bash
swarms agent \
--name "Code-Generator" \
--description "Expert Python developer and code generator" \
--system-prompt "You are an expert Python developer. Write clean, efficient, well-documented code following PEP 8 guidelines. Include type hints and docstrings." \
--task "Create a Python class for managing a task queue with priority scheduling" \
--model-name "gpt-4" \
--temperature 0.1 \
--streaming-on
```
### Creative Writing Agent
```bash
swarms agent \
--name "Creative-Writer" \
--description "Professional content writer and storyteller" \
--system-prompt "You are a professional writer with expertise in engaging content. Write compelling, creative content with strong narrative flow." \
--task "Write a short story about a scientist who discovers time travel" \
--model-name "gpt-4" \
--temperature 0.8 \
--max-loops 2
```
### Research Summarizer Agent
```bash
swarms agent \
--name "Research-Summarizer" \
--description "Academic research summarization specialist" \
--system-prompt "You are an academic researcher. Summarize research topics with key findings, methodologies, and implications. Cite sources when available." \
--task "Summarize recent advances in CRISPR gene editing technology" \
--model-name "gpt-4o-mini" \
--temperature 0.3 \
--verbose \
--autosave
```
---
## Scripting Examples
### Bash Script with Multiple Agents
```bash
#!/bin/bash
# run_agents.sh
# Research phase
swarms agent \
--name "Researcher" \
--description "Research specialist" \
--system-prompt "You are a researcher. Gather comprehensive information on topics." \
--task "Research the impact of AI on healthcare" \
--model-name "gpt-4o-mini" \
--output-type "json" > research_output.json
# Analysis phase
swarms agent \
--name "Analyst" \
--description "Data analyst" \
--system-prompt "You are an analyst. Analyze data and provide insights." \
--task "Analyze the research findings from: $(cat research_output.json)" \
--model-name "gpt-4o-mini" \
--output-type "json" > analysis_output.json
echo "Pipeline complete!"
```
### Loop Through Tasks
```bash
#!/bin/bash
# batch_analysis.sh
TOPICS=("renewable energy" "electric vehicles" "smart cities" "AI ethics")
for topic in "${TOPICS[@]}"; do
echo "Analyzing: $topic"
swarms agent \
--name "Topic-Analyst" \
--description "Topic analysis specialist" \
--system-prompt "You are an expert analyst. Provide concise analysis of topics." \
--task "Analyze current trends in: $topic" \
--model-name "gpt-4o-mini" \
>> "analysis_results.txt"
echo "---" >> "analysis_results.txt"
done
```
---
## Tips and Best Practices
!!! tip "System Prompt Tips"
- Be specific about the agent's role and expertise
- Include output format preferences
- Specify any constraints or guidelines
!!! tip "Temperature Settings"
- Use **0.1-0.3** for factual/analytical tasks
- Use **0.5-0.7** for balanced responses
- Use **0.8-1.0** for creative tasks
!!! tip "Performance Optimization"
- Use `gpt-4o-mini` for simpler tasks (faster, cheaper)
- Use `gpt-4` for complex reasoning tasks
- Set appropriate `--max-loops` to control execution time
!!! warning "Common Issues"
- Ensure API key is set: `export OPENAI_API_KEY="..."`
- Wrap multi-word arguments in quotes
- Use `--verbose` to debug issues
---
## Next Steps
- [CLI YAML Configuration](./cli_yaml_guide.md) - Run agents from YAML files
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - LLM Council and Heavy Swarm
- [CLI Reference](./cli_reference.md) - Complete command documentation

@ -0,0 +1,262 @@
# CLI Heavy Swarm Guide: Comprehensive Task Analysis
Run Heavy Swarm from the command line for complex task decomposition and comprehensive analysis with specialized agents.
## Overview
Heavy Swarm follows a structured workflow:
1. **Task Decomposition**: Breaks down tasks into specialized questions
2. **Parallel Execution**: Executes specialized agents in parallel
3. **Result Synthesis**: Integrates and synthesizes results
4. **Comprehensive Reporting**: Generates detailed final reports
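
A minimal sketch of that four-stage pipeline, with stub functions in place of the LLM-backed agents (the function names are illustrative, not Heavy Swarm's internals):

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(task):
    # 1) Task decomposition: the real Question Agent generates
    #    these with an LLM; this stub uses fixed templates.
    return [
        f"What is the current state of {task}?",
        f"What are the key risks around {task}?",
        f"What are the market implications of {task}?",
    ]

def worker(question):
    # 2) Parallel execution: each worker agent researches one question.
    return f"Finding for: {question}"

def synthesize(findings):
    # 3) + 4) Synthesis and reporting: merge findings into one report.
    return "REPORT\n" + "\n".join(f"- {f}" for f in findings)

task = "quantum computing"
questions = decompose(task)
with ThreadPoolExecutor() as pool:
    findings = list(pool.map(worker, questions))  # workers run concurrently
report = synthesize(findings)
print(report)
```

`--loops-per-agent` controls how many times each worker stage repeats before synthesis, which is why higher values produce more thorough but slower runs.
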
---
## Basic Usage
### Step 1: Run a Simple Analysis
```bash
swarms heavy-swarm --task "Analyze the current state of quantum computing"
```
### Step 2: Customize with Options
```bash
swarms heavy-swarm \
--task "Research renewable energy market trends" \
--loops-per-agent 2 \
--verbose
```
### Step 3: Use Custom Models
```bash
swarms heavy-swarm \
--task "Analyze cryptocurrency regulation globally" \
--question-agent-model-name gpt-4 \
--worker-model-name gpt-4 \
--loops-per-agent 3 \
--verbose
```
---
## Command Options
| Option | Default | Description |
|--------|---------|-------------|
| `--task` | **Required** | The task to analyze |
| `--loops-per-agent` | 1 | Execution loops per agent |
| `--question-agent-model-name` | gpt-4o-mini | Model for question generation |
| `--worker-model-name` | gpt-4o-mini | Model for worker agents |
| `--random-loops-per-agent` | False | Randomize loops (1-10) |
| `--verbose` | False | Enable detailed output |
---
## Specialized Agents
Heavy Swarm includes specialized agents for different aspects:
| Agent | Role | Focus |
|-------|------|-------|
| **Question Agent** | Decomposes tasks | Generates targeted questions |
| **Research Agent** | Gathers information | Fast, trustworthy research |
| **Analysis Agent** | Processes data | Statistical analysis, insights |
| **Writing Agent** | Creates reports | Clear, structured documentation |
---
## Use Case Examples
### Market Research
```bash
swarms heavy-swarm \
--task "Comprehensive market analysis of the electric vehicle industry in North America" \
--loops-per-agent 3 \
--question-agent-model-name gpt-4 \
--worker-model-name gpt-4 \
--verbose
```
### Technology Assessment
```bash
swarms heavy-swarm \
--task "Evaluate the technical feasibility and ROI of implementing AI-powered customer service automation" \
--loops-per-agent 2 \
--verbose
```
### Competitive Analysis
```bash
swarms heavy-swarm \
--task "Analyze competitive landscape for cloud computing services: AWS vs Azure vs Google Cloud" \
--loops-per-agent 2 \
--question-agent-model-name gpt-4 \
--verbose
```
### Investment Research
```bash
swarms heavy-swarm \
--task "Research investment opportunities in AI infrastructure companies for 2024-2025" \
--loops-per-agent 3 \
--worker-model-name gpt-4 \
--verbose
```
### Policy Analysis
```bash
swarms heavy-swarm \
--task "Analyze the impact of proposed AI regulations on tech startups in the United States" \
--loops-per-agent 2 \
--verbose
```
### Due Diligence
```bash
swarms heavy-swarm \
--task "Conduct technology due diligence for acquiring a fintech startup focusing on payment processing" \
--loops-per-agent 3 \
--question-agent-model-name gpt-4 \
--worker-model-name gpt-4 \
--verbose
```
---
## Workflow Visualization
```
┌─────────────────────────────────────────────────────────────────┐
│ User Task │
│ "Analyze the impact of AI on healthcare" │
└─────────────────────────────────────────────────────────────────┘
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│ Question Agent │
│ Decomposes task into specialized questions: │
│ - What are current AI applications in healthcare? │
│ - What are the regulatory challenges? │
│ - What is the market size and growth? │
│ - What are the key players and competitors? │
└─────────────────────────────────────────────────────────────────┘
                                 ▼
┌─────────────┬─────────────┬─────────────┬─────────────┐
│ Research │ Analysis │ Research │ Writing │
│ Agent 1 │ Agent │ Agent 2 │ Agent │
└─────────────┴─────────────┴─────────────┴─────────────┘
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│ Synthesis & Integration │
│ Combines all agent outputs │
└─────────────────────────────────────────────────────────────────┘
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│ Comprehensive Report │
│ - Executive Summary │
│ - Detailed Findings │
│ - Analysis & Insights │
│ - Recommendations │
└─────────────────────────────────────────────────────────────────┘
```
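The flow above can be sketched in plain Python. This is an illustrative skeleton with stand-in functions only, not the swarms implementation: a question step decomposes the task, worker stubs run in parallel, and a synthesis step merges their outputs.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the Question Agent: decompose a task into sub-questions.
def decompose(task: str) -> list[str]:
    return [
        f"What are the current applications relevant to: {task}?",
        f"What are the regulatory challenges around: {task}?",
        f"What is the market size and growth for: {task}?",
        f"Who are the key players and competitors in: {task}?",
    ]

# Stand-in for a worker agent (research/analysis/writing).
def run_worker(question: str) -> str:
    return f"Findings for '{question}'"

def heavy_swarm_sketch(task: str) -> str:
    questions = decompose(task)
    # Workers run concurrently, one per sub-question.
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(run_worker, questions))
    # Synthesis: combine all worker outputs into one report.
    return "\n".join(["# Comprehensive Report"] + findings)

report = heavy_swarm_sketch("AI in healthcare")
print(report)
```

The real swarm adds looping per agent and model selection on top of this decompose → fan-out → synthesize shape.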
---
## Configuration Recommendations
### Quick Analysis (Cost-Effective)
```bash
swarms heavy-swarm \
--task "Quick overview of [topic]" \
--loops-per-agent 1 \
--question-agent-model-name gpt-4o-mini \
--worker-model-name gpt-4o-mini
```
### Standard Research
```bash
swarms heavy-swarm \
--task "Detailed analysis of [topic]" \
--loops-per-agent 2 \
--verbose
```
### Deep Dive (Comprehensive)
```bash
swarms heavy-swarm \
--task "Comprehensive research on [topic]" \
--loops-per-agent 3 \
--question-agent-model-name gpt-4 \
--worker-model-name gpt-4 \
--verbose
```
### Exploratory (Variable Depth)
```bash
swarms heavy-swarm \
--task "Explore [topic] with varying depth" \
--random-loops-per-agent \
--verbose
```
---
## Best Practices
!!! tip "Task Formulation"
- Be specific about what you want analyzed
- Include scope and constraints
- Specify desired output format
!!! tip "Loop Configuration"
- Use `--loops-per-agent 1` for quick overviews
- Use `--loops-per-agent 2-3` for detailed analysis
- Higher loops = more comprehensive but slower
!!! tip "Model Selection"
- Use `gpt-4o-mini` for cost-effective analysis
- Use `gpt-4` for complex, nuanced topics
- Match model to task complexity
!!! warning "Performance Notes"
- Deep analysis (3+ loops) may take several minutes
- Higher loops increase API costs
- Use `--verbose` to monitor progress
---
## Comparison: LLM Council vs Heavy Swarm
| Feature | LLM Council | Heavy Swarm |
|---------|-------------|-------------|
| **Focus** | Collaborative decision-making | Comprehensive task analysis |
| **Workflow** | Parallel responses + peer review | Task decomposition + parallel research |
| **Best For** | Questions with multiple viewpoints | Complex research and analysis tasks |
| **Output** | Synthesized consensus | Detailed research report |
| **Speed** | Faster | Slower (more thorough) |
---
## Next Steps
- [CLI LLM Council Guide](./cli_llm_council_guide.md) - Collaborative decisions
- [CLI Reference](./cli_reference.md) - Complete command documentation
- [Heavy Swarm Python API](../structs/heavy_swarm.md) - Programmatic usage

@ -0,0 +1,162 @@
# CLI LLM Council Guide: Collaborative Multi-Agent Decisions
Run the LLM Council directly from the command line to make collaborative decisions with multiple AI agents through peer review and synthesis.
## Overview
The LLM Council creates a collaborative environment where:
1. **Multiple Perspectives**: Each council member (GPT-5.1, Gemini, Claude, Grok) independently responds
2. **Peer Review**: Members evaluate and rank each other's anonymized responses
3. **Synthesis**: A Chairman synthesizes the best elements into a final answer
---
## Basic Usage
### Step 1: Run a Simple Query
```bash
swarms llm-council --task "What are the best practices for code review?"
```
### Step 2: Enable Verbose Output
```bash
swarms llm-council --task "How should we approach microservices architecture?" --verbose
```
### Step 3: Process the Results
The council returns:
- Individual member responses
- Peer review rankings
- Synthesized final answer
---
## Use Case Examples
### Strategic Business Decisions
```bash
swarms llm-council --task "Should our SaaS startup prioritize product-led growth or sales-led growth? Consider market size, CAC, and scalability."
```
### Technology Evaluation
```bash
swarms llm-council --task "Compare Kubernetes vs Docker Swarm for a startup with 10 microservices. Consider cost, complexity, and scalability."
```
### Investment Analysis
```bash
swarms llm-council --task "Evaluate investment opportunities in AI infrastructure companies. Consider market size, competition, and growth potential."
```
### Policy Analysis
```bash
swarms llm-council --task "What are the implications of implementing AI regulation similar to the EU AI Act in the United States?"
```
### Research Questions
```bash
swarms llm-council --task "What are the most promising approaches to achieving AGI? Evaluate different research paradigms."
```
---
## Council Members
The default council includes:
| Member | Model | Strengths |
|--------|-------|-----------|
| **GPT-5.1 Councilor** | gpt-5.1 | Analytical, comprehensive |
| **Gemini 3 Pro Councilor** | gemini-3-pro | Concise, well-organized |
| **Claude Sonnet 4.5 Councilor** | claude-sonnet-4.5 | Thoughtful, balanced |
| **Grok-4 Councilor** | grok-4 | Creative, innovative |
| **Chairman** | gpt-5.1 | Synthesizes final answer |
---
## Workflow Visualization
```
┌─────────────────────────────────────────────────────────────────┐
│ User Query │
└─────────────────────────────────────────────────────────────────┘
┌─────────────┬─────────────┬─────────────┬─────────────┐
│ GPT-5.1 │ Gemini 3 │ Claude 4.5 │ Grok-4 │
│ Councilor │ Councilor │ Councilor │ Councilor │
└─────────────┴─────────────┴─────────────┴─────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Anonymized Peer Review │
│ Each member ranks all responses (anonymized) │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Chairman │
│ Synthesizes best elements from all responses │
└─────────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────────┐
│ Final Synthesized Answer │
└─────────────────────────────────────────────────────────────────┘
```
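The review-and-synthesis stages above can be sketched with a toy rank aggregation. The responses, the length-based ranking criterion, and the Borda-style scoring below are all illustrative assumptions, not the swarms implementation:

```python
import random

responses = {
    "member_a": "Short answer.",
    "member_b": "A somewhat longer answer.",
    "member_c": "The longest, most detailed answer of the three.",
}

# Anonymize: shuffle responses so reviewers cannot tell who wrote what.
items = list(responses.values())
random.shuffle(items)

# Stand-in reviewer: rank responses (longest first, as a toy criterion).
def rank(candidates: list[str]) -> list[str]:
    return sorted(candidates, key=len, reverse=True)

# Borda-style aggregation: a response earns more points the higher it ranks.
scores = {r: 0 for r in items}
for _reviewer in range(len(responses)):
    for position, resp in enumerate(rank(items)):
        scores[resp] += len(items) - position

best = max(scores, key=scores.get)
print("Top-ranked response:", best)
```

In the real council each member is an LLM producing its own ranking, and the top-ranked material feeds the Chairman's synthesis rather than being returned verbatim.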
---
## Best Practices
!!! tip "Query Formulation"
- Be specific and detailed in your queries
- Include context and constraints
- Ask for specific types of analysis
!!! tip "When to Use LLM Council"
- Complex decisions requiring multiple perspectives
- Research questions needing comprehensive analysis
- Strategic planning and evaluation
- Questions with trade-offs to consider
!!! tip "Performance Tips"
- Use `--verbose` for detailed progress tracking
- Expect responses to take 30-60 seconds
- Complex queries may take longer
!!! warning "Limitations"
- Requires multiple API calls (higher cost)
- Not suitable for simple factual queries
- Response time is longer than single-agent queries
---
## Command Reference
```bash
swarms llm-council --task "<query>" [--verbose]
```
| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `--task` | string | **Required** | Query for the council |
| `--verbose` | flag | False | Enable detailed output |
---
## Next Steps
- [CLI Heavy Swarm Guide](./cli_heavy_swarm_guide.md) - Complex task analysis
- [CLI Reference](./cli_reference.md) - Complete command documentation
- [LLM Council Python API](../examples/llm_council_quickstart.md) - Programmatic usage

@ -0,0 +1,115 @@
# CLI Quickstart: Getting Started in 3 Steps
Get up and running with the Swarms CLI in minutes. This guide covers installation, setup verification, and running your first commands.
## Step 1: Install Swarms
Install the Swarms package which includes the CLI:
```bash
pip install swarms
```
Verify installation:
```bash
swarms --help
```
You should see the Swarms CLI banner with available commands.
---
## Step 2: Configure Environment
Set up your API keys and workspace:
```bash
# Set your OpenAI API key (or other provider)
export OPENAI_API_KEY="your-openai-api-key"
# Optional: Set workspace directory
export WORKSPACE_DIR="./agent_workspace"
```
Or create a `.env` file in your project directory:
```
OPENAI_API_KEY=your-openai-api-key
WORKSPACE_DIR=./agent_workspace
```
Verify your setup:
```bash
swarms setup-check --verbose
```
Expected output:
```
🔍 Running Swarms Environment Setup Check
┌─────────────────────────────────────────────────────────────────────────────┐
│ Environment Check Results │
├─────────┬─────────────────────────┬─────────────────────────────────────────┤
│ Status │ Check │ Details │
├─────────┼─────────────────────────┼─────────────────────────────────────────┤
│ ✓ │ Python Version │ Python 3.11.5 │
│ ✓ │ Swarms Version │ Current version: 8.7.0 │
│ ✓ │ API Keys │ API keys found: OPENAI_API_KEY │
│ ✓ │ Dependencies │ All required dependencies available │
└─────────┴─────────────────────────┴─────────────────────────────────────────┘
```
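Conceptually, a check like this reduces to a few lines of Python. The sketch below is illustrative only: the minimum Python version shown is an assumption, and the real CLI runs additional checks.

```python
import os
import sys

def check_environment() -> dict:
    """Minimal sketch of a setup check: interpreter version and API key presence."""
    return {
        # Assumed minimum version, for illustration only.
        "python_version_ok": sys.version_info >= (3, 10),
        "api_key_present": bool(os.environ.get("OPENAI_API_KEY")),
    }

for name, ok in check_environment().items():
    print(("✓" if ok else "✗"), name)
```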
---
## Step 3: Run Your First Command
Try these commands to verify everything works:
### View All Features
```bash
swarms features
```
### Create a Simple Agent
```bash
swarms agent \
--name "Assistant" \
--description "A helpful AI assistant" \
--system-prompt "You are a helpful assistant that provides clear, concise answers." \
--task "What are the benefits of renewable energy?" \
--model-name "gpt-4o-mini"
```
### Run LLM Council
```bash
swarms llm-council --task "What are the best practices for code review?"
```
---
## Quick Reference
| Command | Description |
|---------|-------------|
| `swarms --help` | Show all available commands |
| `swarms features` | Display all CLI features |
| `swarms setup-check` | Verify environment setup |
| `swarms onboarding` | Interactive setup wizard |
| `swarms agent` | Create and run a custom agent |
| `swarms llm-council` | Run collaborative LLM council |
| `swarms heavy-swarm` | Run comprehensive analysis swarm |
---
## Next Steps
- [CLI Agent Guide](./cli_agent_guide.md) - Create custom agents from CLI
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - Run LLM Council and Heavy Swarm
- [CLI Reference](./cli_reference.md) - Complete command documentation

@ -0,0 +1,320 @@
# CLI YAML Configuration Guide: Run Agents from Config Files
Run multiple agents from YAML configuration files for reproducible, version-controlled agent deployments.
## Basic YAML Configuration
### Step 1: Create YAML Config File
Create a file named `agents.yaml`:
```yaml
agents:
- name: "Research-Agent"
description: "AI research specialist"
model_name: "gpt-4o-mini"
system_prompt: |
You are an expert researcher.
Provide comprehensive, well-structured research summaries.
Include key insights and data points.
temperature: 0.3
max_loops: 2
task: "Research current trends in renewable energy"
- name: "Analysis-Agent"
description: "Data analysis specialist"
model_name: "gpt-4o-mini"
system_prompt: |
You are a data analyst.
Provide detailed statistical analysis and insights.
Use data-driven reasoning.
temperature: 0.2
max_loops: 3
task: "Analyze market opportunities in the EV sector"
```
### Step 2: Run Agents from YAML
```bash
swarms run-agents --yaml-file agents.yaml
```
### Step 3: View Results
Results are displayed in the terminal with formatted output for each agent.
---
## Complete YAML Schema
### Agent Configuration Options
```yaml
agents:
- name: "Agent-Name" # Required: Agent identifier
description: "Agent description" # Required: What the agent does
model_name: "gpt-4o-mini" # Model to use
system_prompt: "Your instructions" # Agent's system prompt
temperature: 0.5 # Creativity (0.0-2.0)
max_loops: 3 # Maximum execution loops
task: "Task to execute" # Task for this agent
# Optional settings
context_length: 8192 # Context window size
streaming_on: true # Enable streaming
verbose: true # Verbose output
autosave: true # Auto-save state
saved_state_path: "./states/agent.json" # State file path
output_type: "json" # Output format
retry_attempts: 3 # Retries on failure
```
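A small pre-flight validator can catch schema mistakes before a run. The sketch below checks a config entry as a plain dict against the fields listed above; it is an illustration, not the swarms loader's actual validation.

```python
def validate_agent_config(agent: dict) -> list[str]:
    """Return a list of problems with one agent entry (empty list if valid)."""
    errors = []
    # Required identity fields.
    for key in ("name", "description"):
        if not agent.get(key):
            errors.append(f"missing required field: {key}")
    # Numeric ranges from the schema above.
    temp = agent.get("temperature", 0.5)
    if not (0.0 <= temp <= 2.0):
        errors.append(f"temperature out of range (0.0-2.0): {temp}")
    if agent.get("max_loops", 1) < 1:
        errors.append("max_loops must be at least 1")
    return errors

good = {"name": "Research-Agent", "description": "AI research specialist",
        "temperature": 0.3, "max_loops": 2}
bad = {"name": "", "temperature": 3.0}

print(validate_agent_config(good))  # []
print(validate_agent_config(bad))
```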
---
## Use Case Examples
### Multi-Agent Research Pipeline
```yaml
# research_pipeline.yaml
agents:
- name: "Data-Collector"
description: "Collects and organizes research data"
model_name: "gpt-4o-mini"
system_prompt: |
You are a research data collector.
Gather comprehensive information on the given topic.
Organize findings into structured categories.
temperature: 0.3
max_loops: 2
task: "Collect data on AI applications in healthcare"
- name: "Trend-Analyst"
description: "Analyzes trends and patterns"
model_name: "gpt-4o-mini"
system_prompt: |
You are a trend analyst.
Identify emerging patterns and trends from data.
Provide statistical insights and projections.
temperature: 0.2
max_loops: 2
task: "Analyze AI healthcare adoption trends from 2020-2024"
- name: "Report-Writer"
description: "Creates comprehensive reports"
model_name: "gpt-4"
system_prompt: |
You are a professional report writer.
Create comprehensive, well-structured reports.
Include executive summaries and key recommendations.
temperature: 0.4
max_loops: 1
task: "Write an executive summary on AI in healthcare"
```
Run:
```bash
swarms run-agents --yaml-file research_pipeline.yaml
```
### Financial Analysis Team
```yaml
# financial_team.yaml
agents:
- name: "Market-Analyst"
description: "Analyzes market conditions"
model_name: "gpt-4"
system_prompt: |
You are a CFA-certified market analyst.
Provide detailed market analysis with technical indicators.
Include risk assessments and market outlook.
temperature: 0.2
max_loops: 3
task: "Analyze current S&P 500 market conditions"
- name: "Risk-Assessor"
description: "Evaluates investment risks"
model_name: "gpt-4"
system_prompt: |
You are a risk management specialist.
Evaluate investment risks and provide mitigation strategies.
Use quantitative risk metrics.
temperature: 0.1
max_loops: 2
task: "Assess risks in current tech sector investments"
- name: "Portfolio-Advisor"
description: "Provides portfolio recommendations"
model_name: "gpt-4"
system_prompt: |
You are a portfolio advisor.
Provide asset allocation recommendations.
Consider risk tolerance and market conditions.
temperature: 0.3
max_loops: 2
task: "Recommend portfolio adjustments for Q4 2024"
```
### Content Creation Pipeline
```yaml
# content_pipeline.yaml
agents:
- name: "Topic-Researcher"
description: "Researches content topics"
model_name: "gpt-4o-mini"
system_prompt: |
You are a content researcher.
Research topics thoroughly and identify key angles.
Find unique perspectives and data points.
temperature: 0.4
max_loops: 2
task: "Research content angles for 'Future of Remote Work'"
- name: "Content-Writer"
description: "Writes engaging content"
model_name: "gpt-4"
system_prompt: |
You are a professional content writer.
Write engaging, SEO-friendly content.
Use clear structure with headers and bullet points.
temperature: 0.7
max_loops: 2
task: "Write a blog post about remote work trends"
- name: "Editor"
description: "Edits and polishes content"
model_name: "gpt-4o-mini"
system_prompt: |
You are a professional editor.
Review content for clarity, grammar, and style.
Suggest improvements and optimize for readability.
temperature: 0.2
max_loops: 1
task: "Edit and polish the blog post for publication"
```
---
## Advanced Configuration
### Environment Variables in YAML
You can reference environment variables:
```yaml
agents:
- name: "API-Agent"
description: "Agent with API access"
model_name: "${MODEL_NAME:-gpt-4o-mini}" # Default if not set
system_prompt: "You are an API integration specialist."
task: "Test API integration"
```
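Whether your installed version performs this substitution natively is worth verifying; the `${VAR:-default}` pattern itself is straightforward to reproduce as a preprocessing step, e.g.:

```python
import os
import re

# Matches ${VAR} and ${VAR:-default}.
_PATTERN = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def expand_env(text: str) -> str:
    """Expand environment-variable references using the current environment."""
    def repl(match: re.Match) -> str:
        name, default = match.group(1), match.group(2)
        return os.environ.get(name, default if default is not None else "")
    return _PATTERN.sub(repl, text)

os.environ.pop("MODEL_NAME", None)  # ensure unset for the demo
print(expand_env('model_name: "${MODEL_NAME:-gpt-4o-mini}"'))
# model_name: "gpt-4o-mini"
```

Running raw YAML text through such a function before parsing gives the same fallback behavior as shell parameter expansion.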
### Multiple Config Files
Organize agents by purpose:
```bash
# Run different configurations
swarms run-agents --yaml-file research_agents.yaml
swarms run-agents --yaml-file analysis_agents.yaml
swarms run-agents --yaml-file reporting_agents.yaml
```
### Pipeline Script
```bash
#!/bin/bash
# run_pipeline.sh
echo "Starting research pipeline..."
swarms run-agents --yaml-file configs/research.yaml
echo "Starting analysis pipeline..."
swarms run-agents --yaml-file configs/analysis.yaml
echo "Starting reporting pipeline..."
swarms run-agents --yaml-file configs/reporting.yaml
echo "Pipeline complete!"
```
---
## Markdown Configuration
### Alternative: Load from Markdown
Create agents using markdown with YAML frontmatter:
```markdown
---
name: Research Agent
description: AI research specialist
model_name: gpt-4o-mini
temperature: 0.3
max_loops: 2
---
You are an expert researcher specializing in technology trends.
Provide comprehensive research summaries with:
- Key findings and insights
- Data points and statistics
- Recommendations and implications
Always cite sources when available and maintain objectivity.
```
Load from markdown:
```bash
# Load single file
swarms load-markdown --markdown-path ./agents/research_agent.md
# Load directory (concurrent processing)
swarms load-markdown --markdown-path ./agents/ --concurrent
```
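As a sketch of how such frontmatter splits apart, assuming flat `key: value` pairs (a real loader would use a YAML parser, which also handles nesting):

```python
def split_frontmatter(markdown: str):
    """Split '---'-delimited frontmatter from the body; returns (meta, body)."""
    parts = markdown.split("---", 2)
    if len(parts) < 3:
        return {}, markdown  # no frontmatter present
    meta = {}
    for line in parts[1].strip().splitlines():
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    return meta, parts[2].strip()

doc = """---
name: Research Agent
model_name: gpt-4o-mini
temperature: 0.3
---
You are an expert researcher."""

meta, body = split_frontmatter(doc)
print(meta["name"], "|", body)
# Research Agent | You are an expert researcher.
```

Note that this flat parse keeps every value as a string (`temperature` stays `"0.3"`); a YAML parser would coerce numeric types for you.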
---
## Best Practices
!!! tip "Configuration Management"
- Version control your YAML files
- Use descriptive agent names
- Document purpose in descriptions
!!! tip "Template Organization"
```
configs/
├── research/
│ ├── tech_research.yaml
│ └── market_research.yaml
├── analysis/
│ ├── financial_analysis.yaml
│ └── data_analysis.yaml
└── production/
└── prod_agents.yaml
```
!!! tip "Testing Configurations"
- Test with `--verbose` flag first
- Use lower `max_loops` for testing
- Start with `gpt-4o-mini` for cost efficiency
!!! warning "Common Pitfalls"
- Ensure proper YAML indentation (2 spaces)
- Quote strings with special characters
- Use `|` for multi-line prompts
---
## Next Steps
- [CLI Agent Guide](./cli_agent_guide.md) - Create agents from command line
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - LLM Council and Heavy Swarm
- [CLI Reference](./cli_reference.md) - Complete command documentation

@ -6,11 +6,11 @@ agent = Agent(
agent_description="Advanced quantitative trading and algorithmic analysis agent",
model_name="gpt-4.1",
dynamic_temperature_enabled=True,
max_loops=5,
max_loops=1,
dynamic_context_window=True,
top_p=None,
streaming_on=True,
interactive=True,
interactive=False,
)
out = agent.run(

@ -1,4 +1,3 @@
from swarms import DebateWithJudge
debate_system = DebateWithJudge(

@ -50,33 +50,15 @@ blood_analysis_agent = Agent(
use_cases=[
{
"title": "Blood Analysis",
"description": (
"Analyze blood samples and provide a report on the results, "
"highlighting significant deviations, clinical context, red flags, "
"and referencing established guidelines for lab test interpretation."
),
"description": "Analyze blood samples and summarize notable findings.",
},
{
"title": "Longitudinal Patient Lab Monitoring",
"description": (
"Process serial blood test results for a patient over time to identify clinical trends in key parameters (e.g., "
"progression of anemia, impact of pharmacologic therapy, signs of organ dysfunction). Generate structured summaries "
"that succinctly track rises, drops, or persistently abnormal markers. Flag patterns that suggest evolving risk or "
"require physician escalation, such as a dropping platelet count, rising creatinine, or new-onset hyperglycemia. "
"Report should distinguish true trends from ordinary biological variability, referencing clinical guidelines for "
"critical-change thresholds and best-practice follow-up actions."
),
"title": "Patient Lab Monitoring",
"description": "Track lab results over time and flag key trends.",
},
{
"title": "Preoperative Laboratory Risk Stratification",
"description": (
"Interpret pre-surgical laboratory panels as part of risk assessment for patients scheduled for procedures. Identify "
"abnormal or borderline values that may increase the risk of perioperative complications (e.g., bleeding risk from "
"thrombocytopenia, signs of undiagnosed infection, electrolyte imbalances affecting anesthesia safety). Structure the "
"output to clearly separate routine findings from emergent concerns, and suggest evidence-based adjustments, further "
"workup, or consultation needs before proceeding with surgery, based on current clinical best practices and guideline "
"recommendations."
),
"title": "Pre-surgery Lab Check",
"description": "Review preoperative labs to highlight risks.",
},
],
)

@ -0,0 +1,55 @@
import re
from swarms.structs.maker import MAKER
# Define task-specific functions for a counting task
def format_counting_prompt(
task, state, step_idx, previous_result
):
"""Format prompt for counting task."""
if previous_result is None:
return f"{task}\nThis is step 1. What is the first number? Reply with just the number."
return f"{task}\nThe previous number was {previous_result}. What is the next number? Reply with just the number."
def parse_counting_response(response):
"""Parse the counting response to extract the number."""
numbers = re.findall(r"\d+", response)
if numbers:
return int(numbers[0])
return response.strip()
def validate_counting_response(response, max_tokens):
"""Validate counting response."""
if len(response) > max_tokens * 4:
return False
return bool(re.search(r"\d+", response))
# Create MAKER instance
maker = MAKER(
name="CountingExample",
description="MAKER example: counting numbers",
model_name="gpt-4o-mini",
system_prompt="You are a helpful assistant. When asked to count, respond with just the number, nothing else.",
format_prompt=format_counting_prompt,
parse_response=parse_counting_response,
validate_response=validate_counting_response,
k=2,
max_tokens=100,
temperature=0.1,
verbose=True,
)
# Run the solver with the task as the main input
results = maker.run(
task="Count from 1 to 10, one number at a time",
max_steps=5,
)
print(results)
# Show statistics
stats = maker.get_statistics()
print(stats)

@ -5,7 +5,7 @@ build-backend = "poetry.core.masonry.api"
[tool.poetry]
name = "swarms"
version = "8.6.5"
version = "8.7.0"
description = "Swarms - TGSC"
license = "MIT"
authors = ["Kye Gomez <kye@swarms.world>"]

@ -3309,7 +3309,7 @@ class Agent:
# Get the text content from the tool response
# execute_tool_call_simple returns a string directly, not an object with content attribute
text_content = f"MCP Tool Response: \n\n {json.dumps(tool_response, indent=2)}"
text_content = f"MCP Tool Response: \n\n {json.dumps(tool_response, indent=2, sort_keys=True)}"
if self.print_on is True:
formatter.print_panel(

@ -235,7 +235,11 @@ class DebateWithJudge:
return
# Option 2: Use individual agent parameters
if pro_agent is not None and con_agent is not None and judge_agent is not None:
if (
pro_agent is not None
and con_agent is not None
and judge_agent is not None
):
self.pro_agent = pro_agent
self.con_agent = con_agent
self.judge_agent = judge_agent
@ -321,9 +325,7 @@ class DebateWithJudge:
# Execute N loops of debate and refinement
for round_num in range(self.max_loops):
if self.verbose:
logger.info(
f"Loop {round_num + 1}/{self.max_loops}"
)
logger.info(f"Loop {round_num + 1}/{self.max_loops}")
# Step 1: Pro agent presents argument
pro_prompt = self._create_pro_prompt(

@ -179,7 +179,9 @@ class MAKER:
self.max_tokens = max_tokens
self.temperature = temperature
self.temperature_first = temperature_first
self.max_workers = max_workers if max_workers is not None else k
self.max_workers = (
max_workers if max_workers is not None else k
)
self.verbose = verbose
self.max_retries_per_step = max_retries_per_step
self.agents = agents
@ -245,10 +247,16 @@ class MAKER:
if self.temperature < 0 or self.temperature > 2:
raise ValueError("temperature must be between 0 and 2")
if self.max_retries_per_step < 1:
raise ValueError("max_retries_per_step must be at least 1")
raise ValueError(
"max_retries_per_step must be at least 1"
)
def _default_format_prompt(
self, task: str, state: Any, step_idx: int, previous_result: Any
self,
task: str,
state: Any,
step_idx: int,
previous_result: Any,
) -> str:
"""
Default prompt formatter.
@ -268,7 +276,9 @@ class MAKER:
prompt_parts.insert(1, f"Current state: {state}")
if previous_result is not None:
prompt_parts.insert(-1, f"Previous result: {previous_result}")
prompt_parts.insert(
-1, f"Previous result: {previous_result}"
)
prompt_parts.append("Provide the result for this step.")
@ -341,7 +351,11 @@ class MAKER:
Returns:
An Agent instance configured for single-step execution.
"""
temp = temperature if temperature is not None else self.temperature
temp = (
temperature
if temperature is not None
else self.temperature
)
agent = Agent(
agent_name=f"MAKER-MicroAgent-{uuid.uuid4().hex[:8]}",
@ -395,16 +409,21 @@ class MAKER:
elif isinstance(result, dict):
return tuple(
sorted(
(k, self._make_hashable(v)) for k, v in result.items()
(k, self._make_hashable(v))
for k, v in result.items()
)
)
elif isinstance(result, set):
return frozenset(self._make_hashable(item) for item in result)
return frozenset(
self._make_hashable(item) for item in result
)
else:
# Fall back to string representation
return str(result)
def _unhash_result(self, hashable: Any, original_type: type) -> Any:
def _unhash_result(
self, hashable: Any, original_type: type
) -> Any:
"""
Convert a hashable result back to its original type.
@ -418,11 +437,23 @@ class MAKER:
if original_type in (str, int, float, bool, type(None)):
return hashable
elif original_type is list:
return list(hashable) if isinstance(hashable, tuple) else hashable
return (
list(hashable)
if isinstance(hashable, tuple)
else hashable
)
elif original_type is dict:
return dict(hashable) if isinstance(hashable, tuple) else hashable
return (
dict(hashable)
if isinstance(hashable, tuple)
else hashable
)
elif original_type is set:
return set(hashable) if isinstance(hashable, frozenset) else hashable
return (
set(hashable)
if isinstance(hashable, frozenset)
else hashable
)
else:
return hashable
@ -456,7 +487,9 @@ class MAKER:
self.stats["total_samples"] += 1
agent = self._get_agent(temperature)
prompt = self.format_prompt(task, state, step_idx, previous_result)
prompt = self.format_prompt(
task, state, step_idx, previous_result
)
try:
response = agent.run(task=prompt)
@ -465,7 +498,9 @@ class MAKER:
if not self.validate_response(response, self.max_tokens):
self.stats["red_flagged"] += 1
if self.verbose:
logger.debug(f"Red-flagged response at step {step_idx + 1}")
logger.debug(
f"Red-flagged response at step {step_idx + 1}"
)
return None
# Parse the response
@ -522,11 +557,17 @@ class MAKER:
while samples_this_step < self.max_retries_per_step:
# Use temperature 0 for first vote, then configured temperature
temp = self.temperature_first if is_first_vote else self.temperature
temp = (
self.temperature_first
if is_first_vote
else self.temperature
)
is_first_vote = False
# Get a vote
result = self.get_vote(task, state, step_idx, previous_result, temp)
result = self.get_vote(
task, state, step_idx, previous_result, temp
)
samples_this_step += 1
if result is None:
@ -553,7 +594,9 @@ class MAKER:
if current_count >= max_other + self.k:
# We have a winner!
self.stats["votes_per_step"].append(votes_this_step)
self.stats["samples_per_step"].append(samples_this_step)
self.stats["samples_per_step"].append(
samples_this_step
)
if self.verbose:
logger.debug(
@ -605,13 +648,23 @@ class MAKER:
... )
"""
if not task:
raise ValueError("task is required - this is the objective to complete")
raise ValueError(
"task is required - this is the objective to complete"
)
if max_steps is None:
raise ValueError("max_steps is required - specify how many steps to execute")
raise ValueError(
"max_steps is required - specify how many steps to execute"
)
if self.verbose:
logger.info(f"Starting MAKER with {max_steps} steps, k={self.k}")
logger.info(f"Task: {task[:100]}..." if len(task) > 100 else f"Task: {task}")
logger.info(
f"Starting MAKER with {max_steps} steps, k={self.k}"
)
logger.info(
f"Task: {task[:100]}..."
if len(task) > 100
else f"Task: {task}"
)
# Initialize state
state = self.initial_state
@ -620,11 +673,18 @@ class MAKER:
previous_result = None
for step_idx in range(max_steps):
if self.verbose and (step_idx + 1) % max(1, max_steps // 10) == 0:
logger.info(f"Progress: {step_idx + 1}/{max_steps} steps completed")
if (
self.verbose
and (step_idx + 1) % max(1, max_steps // 10) == 0
):
logger.info(
f"Progress: {step_idx + 1}/{max_steps} steps completed"
)
# Do voting for this step
result, response = self.do_voting(task, state, step_idx, previous_result)
result, response = self.do_voting(
task, state, step_idx, previous_result
)
# Record the result
results.append(result)
@ -678,15 +738,23 @@ class MAKER:
... )
"""
if not task:
raise ValueError("task is required - this is the objective to complete")
raise ValueError(
"task is required - this is the objective to complete"
)
if stop_condition is None:
raise ValueError("stop_condition must be provided")
state = self.initial_state
if self.verbose:
logger.info(f"Starting MAKER (conditional), max_steps={max_steps}, k={self.k}")
logger.info(f"Task: {task[:100]}..." if len(task) > 100 else f"Task: {task}")
logger.info(
f"Starting MAKER (conditional), max_steps={max_steps}, k={self.k}"
)
logger.info(
f"Task: {task[:100]}..."
if len(task) > 100
else f"Task: {task}"
)
results = []
previous_result = None
@ -695,14 +763,20 @@ class MAKER:
# Check stop condition
if stop_condition(state, results, step_idx):
if self.verbose:
logger.info(f"Stop condition met at step {step_idx + 1}")
logger.info(
f"Stop condition met at step {step_idx + 1}"
)
break
if self.verbose and (step_idx + 1) % 10 == 0:
logger.info(f"Progress: {step_idx + 1} steps completed")
logger.info(
f"Progress: {step_idx + 1} steps completed"
)
# Do voting for this step
result, response = self.do_voting(task, state, step_idx, previous_result)
result, response = self.do_voting(
task, state, step_idx, previous_result
)
results.append(result)
state = self.update_state(state, result, step_idx)
@ -714,7 +788,9 @@ class MAKER:
return results
def run_parallel_voting(self, task: str, max_steps: int = None) -> List[Any]:
def run_parallel_voting(
self, task: str, max_steps: int = None
) -> List[Any]:
"""
Run MAKER with parallel vote sampling.
@ -730,22 +806,37 @@ class MAKER:
List of results from each step.
"""
if not task:
raise ValueError("task is required - this is the objective to complete")
raise ValueError(
"task is required - this is the objective to complete"
)
if max_steps is None:
raise ValueError("max_steps is required - specify how many steps to execute")
raise ValueError(
"max_steps is required - specify how many steps to execute"
)
state = self.initial_state
if self.verbose:
logger.info(f"Starting MAKER (parallel) with {max_steps} steps, k={self.k}")
logger.info(f"Task: {task[:100]}..." if len(task) > 100 else f"Task: {task}")
logger.info(
f"Starting MAKER (parallel) with {max_steps} steps, k={self.k}"
)
logger.info(
f"Task: {task[:100]}..."
if len(task) > 100
else f"Task: {task}"
)
results = []
previous_result = None
for step_idx in range(max_steps):
if self.verbose and (step_idx + 1) % max(1, max_steps // 10) == 0:
logger.info(f"Progress: {step_idx + 1}/{max_steps} steps completed")
if (
self.verbose
and (step_idx + 1) % max(1, max_steps // 10) == 0
):
logger.info(
f"Progress: {step_idx + 1}/{max_steps} steps completed"
)
result, response = self._do_voting_parallel(
task, state, step_idx, previous_result
@ -826,7 +917,9 @@ class MAKER:
if hashable_result not in votes:
votes[hashable_result] = 0
responses[hashable_result] = response
original_types[hashable_result] = original_type
original_types[hashable_result] = (
original_type
)
votes[hashable_result] += 1
# Check if we have a winner, continue sequentially if not
@ -840,8 +933,12 @@ class MAKER:
)
if leader_count >= max_other + self.k:
self.stats["votes_per_step"].append(votes_this_step)
self.stats["samples_per_step"].append(samples_this_step)
self.stats["votes_per_step"].append(
votes_this_step
)
self.stats["samples_per_step"].append(
samples_this_step
)
final_result = self._unhash_result(
leader, original_types[leader]
@ -850,7 +947,11 @@ class MAKER:
# No winner yet, get more votes sequentially
result = self.get_vote(
task, state, step_idx, previous_result, self.temperature
task,
state,
step_idx,
previous_result,
self.temperature,
)
samples_this_step += 1
@ -873,10 +974,14 @@ class MAKER:
logger.info("=" * 50)
logger.info("MAKER Execution Statistics")
logger.info("=" * 50)
logger.info(f"Steps completed: {self.stats['steps_completed']}")
logger.info(
f"Steps completed: {self.stats['steps_completed']}"
)
logger.info(f"Total samples: {self.stats['total_samples']}")
logger.info(f"Total valid votes: {self.stats['total_votes']}")
logger.info(f"Red-flagged responses: {self.stats['red_flagged']}")
logger.info(
f"Red-flagged responses: {self.stats['red_flagged']}"
)
if self.stats["votes_per_step"]:
avg_votes = sum(self.stats["votes_per_step"]) / len(
@ -890,14 +995,20 @@ class MAKER:
avg_samples = sum(self.stats["samples_per_step"]) / len(
self.stats["samples_per_step"]
)
logger.info(f"Average samples per step: {avg_samples:.2f}")
logger.info(
f"Average samples per step: {avg_samples:.2f}"
)
red_flag_rate = self.stats["red_flagged"] / max(1, self.stats["total_samples"])
red_flag_rate = self.stats["red_flagged"] / max(
1, self.stats["total_samples"]
)
logger.info(f"Red-flag rate: {red_flag_rate:.2%}")
logger.info("=" * 50)
def estimate_cost(
self, total_steps: int, target_success_probability: float = 0.95
self,
total_steps: int,
target_success_probability: float = 0.95,
) -> Dict[str, Any]:
"""
Estimate the expected cost of solving a task with given steps.
@@ -917,7 +1028,9 @@ class MAKER:
valid_rate = self.stats["total_votes"] / max(
1, self.stats["total_samples"]
)
p = valid_rate * 0.99 # Assume 99% of valid votes are correct
p = (
valid_rate * 0.99
) # Assume 99% of valid votes are correct
else:
p = 0.99 # Default assumption
@@ -928,7 +1041,9 @@ class MAKER:
if p > 0.5:
ratio = (1 - p) / p
try:
k_min = math.ceil(math.log(t ** (-1 / s) - 1) / math.log(ratio))
k_min = math.ceil(
math.log(t ** (-1 / s) - 1) / math.log(ratio)
)
except (ValueError, ZeroDivisionError):
k_min = 1
else:
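The `estimate_cost` hunk above derives the minimal margin `k_min` from a per-vote correctness estimate `p`, the number of steps, and a target end-to-end success probability. A standalone sketch of the same formula as it appears in the diff (`k_min_margin` is an illustrative name, not the library's API):

```python
import math

def k_min_margin(p: float, total_steps: int, target: float = 0.95) -> int:
    """Smallest vote margin k such that first-to-k-margin voting with
    per-sample correctness p reaches the target success probability
    across total_steps steps (same expression as the diff's k_min)."""
    ratio = (1 - p) / p  # odds of a wrong vote relative to a right one
    try:
        return max(1, math.ceil(
            math.log(target ** (-1 / total_steps) - 1) / math.log(ratio)
        ))
    except (ValueError, ZeroDivisionError):
        return 1

# The required margin grows only slowly with the horizon:
print(k_min_margin(0.95, 10))
print(k_min_margin(0.95, 1_000_000))
```

Because the margin enters the failure probability exponentially while the step count enters it only through a root, even million-step tasks need a single-digit `k`, which is the economic argument behind the approach.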
@@ -973,72 +1088,6 @@ class MAKER:
"votes_per_step": [],
"samples_per_step": [],
}
self.conversation = Conversation(name=f"maker_{self.name}_{self.id}")
if __name__ == "__main__":
import re
# Example: Using MAKER for a simple step-by-step task
print("MAKER: General-purpose example")
print("=" * 50)
# Define task-specific functions for a counting task
def format_counting_prompt(task, state, step_idx, previous_result):
"""Format prompt for counting task."""
if previous_result is None:
return f"{task}\nThis is step 1. What is the first number? Reply with just the number."
return f"{task}\nThe previous number was {previous_result}. What is the next number? Reply with just the number."
def parse_counting_response(response):
"""Parse the counting response to extract the number."""
numbers = re.findall(r"\d+", response)
if numbers:
return int(numbers[0])
return response.strip()
def validate_counting_response(response, max_tokens):
"""Validate counting response."""
if len(response) > max_tokens * 4:
return False
return bool(re.search(r"\d+", response))
# Create MAKER instance
maker = MAKER(
name="CountingExample",
description="MAKER example: counting numbers",
model_name="gpt-4o-mini",
system_prompt="You are a helpful assistant. When asked to count, respond with just the number, nothing else.",
format_prompt=format_counting_prompt,
parse_response=parse_counting_response,
validate_response=validate_counting_response,
k=2,
max_tokens=100,
temperature=0.1,
verbose=True,
)
print("\nRunning MAKER to count from 1 to 10...")
# Run the solver with the task as the main input
try:
results = maker.run(
task="Count from 1 to 10, one number at a time",
max_steps=10,
)
print(f"\nResults: {results}")
print("Expected: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]")
# Show statistics
stats = maker.get_statistics()
print("\nStatistics:")
print(f" Steps completed: {stats['steps_completed']}")
print(f" Total samples: {stats['total_samples']}")
print(f" Red-flagged: {stats['red_flagged']}")
if stats["votes_per_step"]:
print(
f" Avg votes per step: {sum(stats['votes_per_step'])/len(stats['votes_per_step']):.2f}"
)
self.conversation = Conversation(
name=f"maker_{self.name}_{self.id}"
)
except Exception as e:
print(f"Error: {e}")
print("(This example requires an API key to be configured)")

@@ -26,7 +26,7 @@ from swarms.structs.council_as_judge import CouncilAsAJudge
from swarms.structs.debate_with_judge import DebateWithJudge
from swarms.structs.groupchat import GroupChat
from swarms.structs.heavy_swarm import HeavySwarm
from swarms.structs.hierarchical_swarm import HierarchicalSwarm
from swarms.structs.hiearchical_swarm import HierarchicalSwarm
from swarms.structs.interactive_groupchat import InteractiveGroupChat
from swarms.structs.ma_utils import list_all_agents
from swarms.structs.majority_voting import MajorityVoting
@@ -306,25 +306,6 @@ class SwarmRouter:
"See https://docs.swarms.world/en/latest/swarms/structs/swarm_router/"
)
if (
self.swarm_type != "HeavySwarm"
and self.swarm_type != "DebateWithJudge"
and self.agents is None
):
raise SwarmRouterConfigError(
"SwarmRouter: No agents provided for the swarm. Check the docs to learn of required parameters. https://docs.swarms.world/en/latest/swarms/structs/agent/"
)
if self.swarm_type == "DebateWithJudge":
if self.agents is None or len(self.agents) != 3:
raise SwarmRouterConfigError(
"SwarmRouter: DebateWithJudge requires exactly 3 agents: "
"pro_agent (arguing in favor), con_agent (arguing against), "
"and judge_agent (evaluating and synthesizing). "
f"Provided {len(self.agents) if self.agents else 0} agent(s). "
"Check the docs: https://docs.swarms.world/en/latest/swarms/structs/swarm_router/"
)
if (
self.swarm_type == "AgentRearrange"
and self.rearrange_flow is None

@@ -1,16 +1,16 @@
import asyncio
import base64
import socket
import traceback
import uuid
from pathlib import Path
from typing import List, Optional
import socket
import litellm
from pydantic import BaseModel
import requests
from litellm import completion, supports_vision
from loguru import logger
from pydantic import BaseModel
class LiteLLMException(Exception):
@@ -402,70 +402,6 @@ class LiteLLM:
# Store other types of runtime_args for debugging
completion_params["runtime_args"] = runtime_args
# def output_for_tools(self, response: any):
# """
# Process tool calls from the LLM response and return formatted output.
# Args:
# response: The response object from the LLM API call
# Returns:
# dict or list: Formatted tool call data, or default response if no tool calls
# """
# try:
# # Convert response to dict if it's a Pydantic model
# if hasattr(response, "model_dump"):
# response_dict = response.model_dump()
# else:
# response_dict = response
# print(f"Response dict: {response_dict}")
# # Check if tool_calls exists and is not None
# if (
# response_dict.get("choices")
# and response_dict["choices"][0].get("message")
# and response_dict["choices"][0]["message"].get(
# "tool_calls"
# )
# and len(
# response_dict["choices"][0]["message"][
# "tool_calls"
# ]
# )
# > 0
# ):
# tool_call = response_dict["choices"][0]["message"][
# "tool_calls"
# ][0]
# if "function" in tool_call:
# return {
# "function": {
# "name": tool_call["function"].get(
# "name", ""
# ),
# "arguments": tool_call["function"].get(
# "arguments", "{}"
# ),
# }
# }
# else:
# # Handle case where tool_call structure is different
# return tool_call
# else:
# # Return a default response when no tool calls are present
# logger.warning(
# "No tool calls found in response, returning default response"
# )
# return {
# "function": {
# "name": "no_tool_call",
# "arguments": "{}",
# }
# }
# except Exception as e:
# logger.error(f"Error processing tool calls: {str(e)} Traceback: {traceback.format_exc()}")
def output_for_tools(self, response: any):
"""
Process and extract tool call information from the LLM response.

@@ -6,6 +6,7 @@ from swarms.structs.custom_agent import CustomAgent, AgentResponse
try:
import pytest_asyncio
ASYNC_AVAILABLE = True
except ImportError:
ASYNC_AVAILABLE = False
@@ -40,7 +41,10 @@ def test_custom_agent_initialization():
timeout=30.0,
verify_ssl=True,
)
assert custom_agent_instance.base_url == "https://api.example.com"
assert (
custom_agent_instance.base_url
== "https://api.example.com"
)
assert custom_agent_instance.endpoint == "v1/endpoint"
assert custom_agent_instance.timeout == 30.0
assert custom_agent_instance.verify_ssl is True
@@ -51,7 +55,9 @@ def test_custom_agent_initialization():
raise
def test_custom_agent_initialization_with_default_headers(sample_custom_agent):
def test_custom_agent_initialization_with_default_headers(
sample_custom_agent,
):
try:
custom_agent_no_headers = CustomAgent(
name="TestAgent",
@@ -59,7 +65,9 @@ def test_custom_agent_initialization_with_default_headers(sample_custom_agent):
base_url="https://api.test.com",
endpoint="test",
)
assert "Content-Type" in custom_agent_no_headers.default_headers
assert (
"Content-Type" in custom_agent_no_headers.default_headers
)
assert (
custom_agent_no_headers.default_headers["Content-Type"]
== "application/json"
@@ -78,7 +86,10 @@ def test_custom_agent_url_normalization():
base_url="https://api.test.com/",
endpoint="/v1/test",
)
assert custom_agent_with_slashes.base_url == "https://api.test.com"
assert (
custom_agent_with_slashes.base_url
== "https://api.test.com"
)
assert custom_agent_with_slashes.endpoint == "v1/test"
logger.debug("URL normalization works correctly")
except Exception as e:
@@ -90,14 +101,22 @@ def test_prepare_headers(sample_custom_agent):
try:
prepared_headers = sample_custom_agent._prepare_headers()
assert "Authorization" in prepared_headers
assert prepared_headers["Authorization"] == "Bearer test-token"
assert (
prepared_headers["Authorization"] == "Bearer test-token"
)
additional_headers = {"X-Custom-Header": "custom-value"}
prepared_headers_with_additional = (
sample_custom_agent._prepare_headers(additional_headers)
)
assert prepared_headers_with_additional["X-Custom-Header"] == "custom-value"
assert prepared_headers_with_additional["Authorization"] == "Bearer test-token"
assert (
prepared_headers_with_additional["X-Custom-Header"]
== "custom-value"
)
assert (
prepared_headers_with_additional["Authorization"]
== "Bearer test-token"
)
logger.debug("Header preparation works correctly")
except Exception as e:
logger.error(f"Failed to test prepare_headers: {e}")
@@ -107,7 +126,9 @@ def test_prepare_headers(sample_custom_agent):
def test_prepare_payload_dict(sample_custom_agent):
try:
payload_dict = {"key": "value", "number": 123}
prepared_payload = sample_custom_agent._prepare_payload(payload_dict)
prepared_payload = sample_custom_agent._prepare_payload(
payload_dict
)
assert isinstance(prepared_payload, str)
parsed = json.loads(prepared_payload)
assert parsed["key"] == "value"
@@ -121,22 +142,30 @@ def test_prepare_payload_dict(sample_custom_agent):
def test_prepare_payload_string(sample_custom_agent):
try:
payload_string = '{"test": "value"}'
prepared_payload = sample_custom_agent._prepare_payload(payload_string)
prepared_payload = sample_custom_agent._prepare_payload(
payload_string
)
assert prepared_payload == payload_string
logger.debug("String payload prepared correctly")
except Exception as e:
logger.error(f"Failed to test prepare_payload with string: {e}")
logger.error(
f"Failed to test prepare_payload with string: {e}"
)
raise
def test_prepare_payload_bytes(sample_custom_agent):
try:
payload_bytes = b'{"test": "value"}'
prepared_payload = sample_custom_agent._prepare_payload(payload_bytes)
prepared_payload = sample_custom_agent._prepare_payload(
payload_bytes
)
assert prepared_payload == payload_bytes
logger.debug("Bytes payload prepared correctly")
except Exception as e:
logger.error(f"Failed to test prepare_payload with bytes: {e}")
logger.error(
f"Failed to test prepare_payload with bytes: {e}"
)
raise
@@ -148,7 +177,9 @@ def test_parse_response_success(sample_custom_agent):
mock_response.headers = {"content-type": "application/json"}
mock_response.json.return_value = {"message": "success"}
parsed_response = sample_custom_agent._parse_response(mock_response)
parsed_response = sample_custom_agent._parse_response(
mock_response
)
assert isinstance(parsed_response, AgentResponse)
assert parsed_response.status_code == 200
assert parsed_response.success is True
@@ -167,7 +198,9 @@ def test_parse_response_error(sample_custom_agent):
mock_response.text = "Not Found"
mock_response.headers = {"content-type": "text/plain"}
parsed_response = sample_custom_agent._parse_response(mock_response)
parsed_response = sample_custom_agent._parse_response(
mock_response
)
assert isinstance(parsed_response, AgentResponse)
assert parsed_response.status_code == 404
assert parsed_response.success is False
@@ -189,11 +222,15 @@ def test_extract_content_openai_format(sample_custom_agent):
}
]
}
extracted_content = sample_custom_agent._extract_content(openai_response)
extracted_content = sample_custom_agent._extract_content(
openai_response
)
assert extracted_content == "This is the response content"
logger.debug("OpenAI format content extracted correctly")
except Exception as e:
logger.error(f"Failed to test extract_content OpenAI format: {e}")
logger.error(
f"Failed to test extract_content OpenAI format: {e}"
)
raise
@@ -202,25 +239,33 @@ def test_extract_content_anthropic_format(sample_custom_agent):
anthropic_response = {
"content": [
{"text": "First part "},
{"text": "second part"}
{"text": "second part"},
]
}
extracted_content = sample_custom_agent._extract_content(anthropic_response)
extracted_content = sample_custom_agent._extract_content(
anthropic_response
)
assert extracted_content == "First part second part"
logger.debug("Anthropic format content extracted correctly")
except Exception as e:
logger.error(f"Failed to test extract_content Anthropic format: {e}")
logger.error(
f"Failed to test extract_content Anthropic format: {e}"
)
raise
def test_extract_content_generic_format(sample_custom_agent):
try:
generic_response = {"text": "Generic response text"}
extracted_content = sample_custom_agent._extract_content(generic_response)
extracted_content = sample_custom_agent._extract_content(
generic_response
)
assert extracted_content == "Generic response text"
logger.debug("Generic format content extracted correctly")
except Exception as e:
logger.error(f"Failed to test extract_content generic format: {e}")
logger.error(
f"Failed to test extract_content generic format: {e}"
)
raise
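The three `_extract_content` tests above exercise OpenAI-style, Anthropic-style, and generic payload shapes. A rough sketch of an extractor that satisfies all three fixtures (this is an assumption about `_extract_content`'s behavior inferred from the tests, not the library's actual implementation):

```python
def extract_content(resp: dict) -> str:
    """Pull the text content out of the three response shapes the
    tests exercise."""
    # OpenAI-style: choices[0].message.content
    if "choices" in resp:
        return resp["choices"][0]["message"]["content"]
    # Anthropic-style: concatenate the text parts under "content"
    if isinstance(resp.get("content"), list):
        return "".join(part.get("text", "") for part in resp["content"])
    # Generic fallback: a top-level "text" field
    return resp.get("text", "")

assert extract_content(
    {"choices": [{"message": {"content": "This is the response content"}}]}
) == "This is the response content"
assert extract_content(
    {"content": [{"text": "First part "}, {"text": "second part"}]}
) == "First part second part"
assert extract_content({"text": "Generic response text"}) == "Generic response text"
```

Note how the Anthropic branch concatenates parts in order, which is exactly what the "First part second part" fixture checks.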
@@ -229,14 +274,18 @@ def test_run_success(mock_client_class, sample_custom_agent):
try:
mock_response = Mock()
mock_response.status_code = 200
mock_response.text = '{"choices": [{"message": {"content": "Success"}}]}'
mock_response.text = (
'{"choices": [{"message": {"content": "Success"}}]}'
)
mock_response.json.return_value = {
"choices": [{"message": {"content": "Success"}}]
}
mock_response.headers = {"content-type": "application/json"}
mock_client_instance = Mock()
mock_client_instance.__enter__ = Mock(return_value=mock_client_instance)
mock_client_instance.__enter__ = Mock(
return_value=mock_client_instance
)
mock_client_instance.__exit__ = Mock(return_value=None)
mock_client_instance.post.return_value = mock_response
mock_client_class.return_value = mock_client_instance
@@ -259,7 +308,9 @@ def test_run_error_response(mock_client_class, sample_custom_agent):
mock_response.text = "Internal Server Error"
mock_client_instance = Mock()
mock_client_instance.__enter__ = Mock(return_value=mock_client_instance)
mock_client_instance.__enter__ = Mock(
return_value=mock_client_instance
)
mock_client_instance.__exit__ = Mock(return_value=None)
mock_client_instance.post.return_value = mock_response
mock_client_class.return_value = mock_client_instance
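The tests above emulate `with httpx.Client() as client:` by wiring `__enter__`/`__exit__` onto a mock by hand. The same pattern is often simpler with `MagicMock`, which supports the context-manager protocol natively; a minimal sketch (the URL and payload are illustrative only):

```python
from unittest.mock import MagicMock

# Stand-in for an httpx.Response: attribute and method behavior only.
mock_response = MagicMock(status_code=200)
mock_response.json.return_value = {"ok": True}

# MagicMock pre-configures magic methods, so only the return values
# need wiring for `with client as c:` to hand back the client itself.
client = MagicMock()
client.__enter__.return_value = client
client.post.return_value = mock_response

with client as c:
    result = c.post("https://api.test.com/v1/endpoint", json={"message": "hi"})

assert result.status_code == 200
assert result.json() == {"ok": True}
```

For the async variants further down, `AsyncMock` plays the analogous role for `__aenter__`/`__aexit__` and awaited `post` calls.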
@@ -280,9 +331,13 @@ def test_run_request_error(mock_client_class, sample_custom_agent):
import httpx
mock_client_instance = Mock()
mock_client_instance.__enter__ = Mock(return_value=mock_client_instance)
mock_client_instance.__enter__ = Mock(
return_value=mock_client_instance
)
mock_client_instance.__exit__ = Mock(return_value=None)
mock_client_instance.post.side_effect = httpx.RequestError("Connection failed")
mock_client_instance.post.side_effect = httpx.RequestError(
"Connection failed"
)
mock_client_class.return_value = mock_client_instance
test_payload = {"message": "test"}
@@ -295,23 +350,33 @@ def test_run_request_error(mock_client_class, sample_custom_agent):
raise
@pytest.mark.skipif(not ASYNC_AVAILABLE, reason="pytest-asyncio not installed")
@pytest.mark.skipif(
not ASYNC_AVAILABLE, reason="pytest-asyncio not installed"
)
@pytest.mark.asyncio
@patch("swarms.structs.custom_agent.httpx.AsyncClient")
async def test_run_async_success(mock_async_client_class, sample_custom_agent):
async def test_run_async_success(
mock_async_client_class, sample_custom_agent
):
try:
mock_response = Mock()
mock_response.status_code = 200
mock_response.text = '{"content": [{"text": "Async Success"}]}'
mock_response.text = (
'{"content": [{"text": "Async Success"}]}'
)
mock_response.json.return_value = {
"content": [{"text": "Async Success"}]
}
mock_response.headers = {"content-type": "application/json"}
mock_client_instance = AsyncMock()
mock_client_instance.__aenter__ = AsyncMock(return_value=mock_client_instance)
mock_client_instance.__aenter__ = AsyncMock(
return_value=mock_client_instance
)
mock_client_instance.__aexit__ = AsyncMock(return_value=None)
mock_client_instance.post = AsyncMock(return_value=mock_response)
mock_client_instance.post = AsyncMock(
return_value=mock_response
)
mock_async_client_class.return_value = mock_client_instance
test_payload = {"message": "test"}
@@ -324,19 +389,27 @@ async def test_run_async_success(mock_async_client_class, sample_custom_agent):
raise
@pytest.mark.skipif(not ASYNC_AVAILABLE, reason="pytest-asyncio not installed")
@pytest.mark.skipif(
not ASYNC_AVAILABLE, reason="pytest-asyncio not installed"
)
@pytest.mark.asyncio
@patch("swarms.structs.custom_agent.httpx.AsyncClient")
async def test_run_async_error_response(mock_async_client_class, sample_custom_agent):
async def test_run_async_error_response(
mock_async_client_class, sample_custom_agent
):
try:
mock_response = Mock()
mock_response.status_code = 400
mock_response.text = "Bad Request"
mock_client_instance = AsyncMock()
mock_client_instance.__aenter__ = AsyncMock(return_value=mock_client_instance)
mock_client_instance.__aenter__ = AsyncMock(
return_value=mock_client_instance
)
mock_client_instance.__aexit__ = AsyncMock(return_value=None)
mock_client_instance.post = AsyncMock(return_value=mock_response)
mock_client_instance.post = AsyncMock(
return_value=mock_response
)
mock_async_client_class.return_value = mock_client_instance
test_payload = {"message": "test"}
