commit 8442b773ed
@@ -0,0 +1,15 @@
|
||||
from swarms import Agent
|
||||
|
||||
# Initialize the agent
|
||||
agent = Agent(
|
||||
agent_name="Quantitative-Trading-Agent",
|
||||
agent_description="Advanced quantitative trading and algorithmic analysis agent",
|
||||
model_name="gpt-4.1",
|
||||
max_loops="auto",
|
||||
)
|
||||
|
||||
out = agent.run(
|
||||
task="What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?",
|
||||
)
|
||||
|
||||
print(out)
|
||||
@@ -0,0 +1,40 @@
|
||||
# AOP Examples Overview
|
||||
|
||||
Deploy agents as network services using the Agent Orchestration Protocol (AOP). Turn your agents into distributed, scalable, and accessible services.
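To make the idea concrete, here is a minimal sketch of the server side. The `AOP` class, its import path, and the constructor arguments shown (`server_name`, `port`) are assumptions based on the AOP reference documentation linked below; check that reference for the exact API before deploying.

```python
from swarms import Agent
from swarms.structs.aop import AOP  # assumed import path; see the AOP reference docs

# A regular agent, defined exactly as in the single-agent examples
analyst = Agent(
    agent_name="Market-Analyst",
    model_name="gpt-4o-mini",
    max_loops=1,
)

# Wrap the agent in an AOP server so remote clients can call it as an MCP tool.
# server_name and port are illustrative assumptions, not verified defaults.
deployment = AOP(server_name="Market-Analysis-Server", port=8000)
deployment.add_agent(analyst)

# Start serving; the agent is now reachable over the network.
deployment.run()
```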
|
||||
|
||||
## What You'll Learn
|
||||
|
||||
| Topic | Description |
|
||||
|-------|-------------|
|
||||
| **AOP Fundamentals** | Understanding agent-as-a-service deployment |
|
||||
| **Server Setup** | Running agents as MCP servers |
|
||||
| **Client Integration** | Connecting to remote agents |
|
||||
| **Production Deployment** | Scaling and monitoring agents |
|
||||
|
||||
---
|
||||
|
||||
## AOP Examples
|
||||
|
||||
| Example | Description | Link |
|
||||
|---------|-------------|------|
|
||||
| **Medical AOP Example** | Healthcare agent deployment with AOP | [View Example](./aop_medical.md) |
|
||||
|
||||
---
|
||||
|
||||
## Use Cases
|
||||
|
||||
| Use Case | Description |
|
||||
|----------|-------------|
|
||||
| **Microservices** | Agent per service |
|
||||
| **API Gateway** | Central agent access point |
|
||||
| **Multi-tenant** | Shared agent infrastructure |
|
||||
| **Edge Deployment** | Agents at the edge |
|
||||
|
||||
---
|
||||
|
||||
## Related Resources
|
||||
|
||||
- [AOP Reference Documentation](../swarms/structs/aop.md) - Complete AOP API
|
||||
- [AOP Server Setup](../swarms/examples/aop_server_example.md) - Server configuration
|
||||
- [AOP Cluster Example](../swarms/examples/aop_cluster_example.md) - Multi-node setup
|
||||
- [Deployment Solutions](../deployment_solutions/overview.md) - Production deployment
|
||||
@@ -0,0 +1,69 @@
|
||||
# Applications Overview
|
||||
|
||||
Real-world multi-agent applications built with Swarms. These examples demonstrate complete solutions for business, research, finance, and automation use cases.
|
||||
|
||||
## What You'll Learn
|
||||
|
||||
| Topic | Description |
|
||||
|-------|-------------|
|
||||
| **Business Applications** | Marketing, hiring, M&A advisory swarms |
|
||||
| **Research Systems** | Advanced research and analysis workflows |
|
||||
| **Financial Analysis** | ETF research and investment analysis |
|
||||
| **Automation** | Browser agents and web automation |
|
||||
| **Industry Solutions** | Real estate, job finding, and more |
|
||||
|
||||
---
|
||||
|
||||
## Application Examples
|
||||
|
||||
| Application | Description | Industry | Link |
|
||||
|-------------|-------------|----------|------|
|
||||
| **Swarms of Browser Agents** | Automated web browsing with multiple agents | Automation | [View Example](../swarms/examples/swarms_of_browser_agents.md) |
|
||||
| **Hierarchical Marketing Team** | Multi-agent marketing strategy and execution | Marketing | [View Example](./marketing_team.md) |
|
||||
| **Gold ETF Research with HeavySwarm** | Comprehensive ETF analysis using Heavy Swarm | Finance | [View Example](./gold_etf_research.md) |
|
||||
| **Hiring Swarm** | Automated candidate screening and evaluation | HR/Recruiting | [View Example](./hiring_swarm.md) |
|
||||
| **Advanced Research** | Multi-agent research and analysis system | Research | [View Example](./av.md) |
|
||||
| **Real Estate Swarm** | Property analysis and market research | Real Estate | [View Example](./realestate_swarm.md) |
|
||||
| **Job Finding Swarm** | Automated job search and matching | Career | [View Example](./job_finding.md) |
|
||||
| **M&A Advisory Swarm** | Mergers & acquisitions analysis | Finance | [View Example](./ma_swarm.md) |
|
||||
|
||||
---
|
||||
|
||||
## Applications by Category
|
||||
|
||||
### Business & Marketing
|
||||
|
||||
| Application | Description | Link |
|
||||
|-------------|-------------|------|
|
||||
| **Hierarchical Marketing Team** | Complete marketing strategy system | [View Example](./marketing_team.md) |
|
||||
| **Hiring Swarm** | End-to-end recruiting automation | [View Example](./hiring_swarm.md) |
|
||||
| **M&A Advisory Swarm** | Due diligence and analysis | [View Example](./ma_swarm.md) |
|
||||
|
||||
### Financial Analysis
|
||||
|
||||
| Application | Description | Link |
|
||||
|-------------|-------------|------|
|
||||
| **Gold ETF Research** | Comprehensive ETF analysis | [View Example](./gold_etf_research.md) |
|
||||
|
||||
### Research & Automation
|
||||
|
||||
| Application | Description | Link |
|
||||
|-------------|-------------|------|
|
||||
| **Advanced Research** | Multi-source research compilation | [View Example](./av.md) |
|
||||
| **Browser Agents** | Automated web interaction | [View Example](../swarms/examples/swarms_of_browser_agents.md) |
|
||||
| **Job Finding Swarm** | Career opportunity discovery | [View Example](./job_finding.md) |
|
||||
|
||||
### Real Estate
|
||||
|
||||
| Application | Description | Link |
|
||||
|-------------|-------------|------|
|
||||
| **Real Estate Swarm** | Property market analysis | [View Example](./realestate_swarm.md) |
|
||||
|
||||
---
|
||||
|
||||
## Related Resources
|
||||
|
||||
- [HierarchicalSwarm Documentation](../swarms/structs/hierarchical_swarm.md)
|
||||
- [HeavySwarm Documentation](../swarms/structs/heavy_swarm.md)
|
||||
- [Building Custom Swarms](../swarms/structs/custom_swarm.md)
|
||||
- [Deployment Solutions](../deployment_solutions/overview.md)
|
||||
@@ -0,0 +1,29 @@
|
||||
# Apps Examples Overview
|
||||
|
||||
Complete application examples built with Swarms. These examples show how to build practical tools and utilities with AI agents.
|
||||
|
||||
## What You'll Learn
|
||||
|
||||
| Topic | Description |
|
||||
|-------|-------------|
|
||||
| **Web Scraping** | Building intelligent web scrapers |
|
||||
| **Database Integration** | Smart database query agents |
|
||||
| **Practical Tools** | End-to-end application development |
|
||||
|
||||
---
|
||||
|
||||
## App Examples
|
||||
|
||||
| App | Description | Link |
|
||||
|-----|-------------|------|
|
||||
| **Web Scraper Agents** | Intelligent web data extraction | [View Example](../developer_guides/web_scraper.md) |
|
||||
| **Smart Database** | AI-powered database interactions | [View Example](./smart_database.md) |
|
||||
|
||||
---
|
||||
|
||||
## Related Resources
|
||||
|
||||
- [Tools & Integrations](./tools_integrations_overview.md) - External service connections
|
||||
- [Multi-Agent Architectures](./multi_agent_architectures_overview.md) - Complex agent systems
|
||||
- [Deployment Solutions](../deployment_solutions/overview.md) - Production deployment
|
||||
|
||||
@@ -0,0 +1,80 @@
|
||||
# Basic Examples Overview
|
||||
|
||||
Start your Swarms journey with single-agent examples. Learn how to create agents, use tools, process images, integrate with different LLM providers, and publish to the marketplace.
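If you have never run a Swarms agent before, the core loop is only a few lines. The sketch below mirrors the pattern used throughout these examples; the agent name, model, and task are placeholders you can swap freely.

```python
from swarms import Agent

# A single agent: a name, a model, and how many reasoning loops to run
agent = Agent(
    agent_name="Starter-Agent",
    agent_description="Minimal single-agent example",
    model_name="gpt-4o-mini",
    max_loops=1,
)

# Run one task and print the result
out = agent.run(task="Summarize the idea behind multi-agent systems in three bullet points.")
print(out)
```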
|
||||
|
||||
## What You'll Learn
|
||||
|
||||
| Topic | Description |
|
||||
|-------|-------------|
|
||||
| **Agent Basics** | Create and configure individual agents |
|
||||
| **Tool Integration** | Equip agents with callable tools and functions |
|
||||
| **Vision Capabilities** | Process images and multi-modal inputs |
|
||||
| **LLM Providers** | Connect to OpenAI, Anthropic, Groq, and more |
|
||||
| **Utilities** | Streaming, output types, and marketplace publishing |
|
||||
|
||||
---
|
||||
|
||||
## Individual Agent Examples
|
||||
|
||||
### Core Agent Usage
|
||||
|
||||
| Example | Description | Link |
|
||||
|---------|-------------|------|
|
||||
| **Basic Agent** | Fundamental agent creation and execution | [View Example](../swarms/examples/basic_agent.md) |
|
||||
|
||||
### Tool Usage
|
||||
|
||||
| Example | Description | Link |
|
||||
|---------|-------------|------|
|
||||
| **Agents with Vision and Tool Usage** | Combine vision and tools in one agent | [View Example](../swarms/examples/vision_tools.md) |
|
||||
| **Agents with Callable Tools** | Equip agents with Python functions as tools (see the sketch after this table) | [View Example](../swarms/examples/agent_with_tools.md) |
|
||||
| **Agent with Structured Outputs** | Get consistent JSON/structured responses | [View Example](../swarms/examples/agent_structured_outputs.md) |
|
||||
| **Message Transforms** | Manage context with message transformations | [View Example](../swarms/structs/transforms.md) |
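As a quick illustration of the callable-tools pattern referenced above, a plain Python function with type hints and a docstring can be handed to an agent. This is a minimal sketch; `get_stock_price` and its return value are made up for illustration, and a real tool would call an actual data source.

```python
from swarms import Agent

def get_stock_price(ticker: str) -> str:
    """Return the latest price for a ticker symbol (stubbed for illustration)."""
    return f"{ticker}: 123.45 USD"

agent = Agent(
    agent_name="Tool-Using-Agent",
    model_name="gpt-4o-mini",
    max_loops=1,
    tools=[get_stock_price],  # the agent may decide to call this function
)

print(agent.run(task="What is the current price of NVDA? Use your tool."))
```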
|
||||
|
||||
### Vision & Multi-Modal
|
||||
|
||||
| Example | Description | Link |
|
||||
|---------|-------------|------|
|
||||
| **Agents with Vision** | Process and analyze images | [View Example](../swarms/examples/vision_processing.md) |
|
||||
| **Agent with Multiple Images** | Handle multiple images in one request | [View Example](../swarms/examples/multiple_images.md) |
|
||||
|
||||
### Utilities
|
||||
|
||||
| Example | Description | Link |
|
||||
|---------|-------------|------|
|
||||
| **Agent with Streaming** | Stream responses in real-time | [View Example](./agent_stream.md) |
|
||||
| **Agent Output Types** | Different output formats (str, json, dict, yaml) | [View Example](../swarms/examples/agent_output_types.md) |
|
||||
| **Gradio Chat Interface** | Build chat UIs for your agents | [View Example](../swarms/ui/main.md) |
|
||||
| **Agent with Gemini Nano Banana** | Jarvis-style agent example | [View Example](../swarms/examples/jarvis_agent.md) |
|
||||
| **Agent Marketplace Publishing** | Publish agents to the Swarms marketplace | [View Example](./marketplace_publishing_quickstart.md) |
|
||||
|
||||
---
|
||||
|
||||
## LLM Provider Examples
|
||||
|
||||
Connect your agents to various language model providers; a short provider-switching sketch follows the table:
|
||||
|
||||
| Provider | Description | Link |
|
||||
|----------|-------------|------|
|
||||
| **Overview** | Guide to all supported providers | [View Guide](../swarms/examples/model_providers.md) |
|
||||
| **OpenAI** | GPT-4, GPT-4o, GPT-4o-mini integration | [View Example](../swarms/examples/openai_example.md) |
|
||||
| **Anthropic** | Claude models integration | [View Example](../swarms/examples/claude.md) |
|
||||
| **Groq** | Ultra-fast inference with Groq | [View Example](../swarms/examples/groq.md) |
|
||||
| **Cohere** | Cohere Command models | [View Example](../swarms/examples/cohere.md) |
|
||||
| **DeepSeek** | DeepSeek models integration | [View Example](../swarms/examples/deepseek.md) |
|
||||
| **Ollama** | Local models with Ollama | [View Example](../swarms/examples/ollama.md) |
|
||||
| **OpenRouter** | Access multiple providers via OpenRouter | [View Example](../swarms/examples/openrouter.md) |
|
||||
| **XAI** | Grok models from xAI | [View Example](../swarms/examples/xai.md) |
|
||||
| **Azure OpenAI** | Enterprise Azure deployment | [View Example](../swarms/examples/azure.md) |
|
||||
| **Llama4** | Meta's Llama 4 models | [View Example](../swarms/examples/llama4.md) |
|
||||
| **Custom Base URL** | Connect to any OpenAI-compatible API | [View Example](../swarms/examples/custom_base_url_example.md) |
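Switching providers is usually just a change of `model_name`. The sketch below assumes the LiteLLM-style model identifiers used across these examples; the specific model strings are illustrative, so check each provider guide for the exact names and required API keys.

```python
from swarms import Agent

# The same agent definition works across providers; only model_name changes.
# Model identifiers below are illustrative, not an exhaustive or verified list.
for model in ["gpt-4o-mini", "claude-3-5-sonnet-20240620", "groq/llama-3.1-8b-instant"]:
    agent = Agent(
        agent_name="Provider-Demo-Agent",
        model_name=model,
        max_loops=1,
    )
    print(model, "->", agent.run(task="Say hello in one short sentence."))
```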
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
After mastering basic agents, explore:
|
||||
|
||||
- [Multi-Agent Architectures](./multi_agent_architectures_overview.md) - Coordinate multiple agents
|
||||
- [Tools Documentation](../swarms/tools/main.md) - Deep dive into tool creation
|
||||
- [CLI Guides](./cli_guides_overview.md) - Run agents from command line
|
||||
@@ -0,0 +1,47 @@
|
||||
# CLI Guides Overview
|
||||
|
||||
Master the Swarms command-line interface with these step-by-step guides. Execute agents, run multi-agent workflows, and integrate Swarms into your DevOps pipelines—all from your terminal.
|
||||
|
||||
## What You'll Learn
|
||||
|
||||
| Topic | Description |
|
||||
|-------|-------------|
|
||||
| **CLI Basics** | Install, configure, and run your first commands |
|
||||
| **Agent Creation** | Create and run agents directly from the command line |
|
||||
| **YAML Configuration** | Define agents in config files for reproducible deployments |
|
||||
| **Multi-Agent Commands** | Run LLM Council and Heavy Swarm from the terminal |
|
||||
| **DevOps Integration** | Integrate into CI/CD pipelines and scripts |
|
||||
|
||||
---
|
||||
|
||||
## CLI Guides
|
||||
|
||||
| Guide | Description | Link |
|
||||
|-------|-------------|------|
|
||||
| **CLI Quickstart** | Get started with Swarms CLI in 3 steps—install, configure, and run | [View Guide](../swarms/cli/cli_quickstart.md) |
|
||||
| **Creating Agents from CLI** | Create, configure, and run AI agents directly from your terminal | [View Guide](../swarms/cli/cli_agent_guide.md) |
|
||||
| **YAML Configuration** | Run multiple agents from YAML configuration files | [View Guide](../swarms/cli/cli_yaml_guide.md) |
|
||||
| **LLM Council CLI** | Run collaborative multi-agent decision-making from the command line | [View Guide](../swarms/cli/cli_llm_council_guide.md) |
|
||||
| **Heavy Swarm CLI** | Execute comprehensive task analysis swarms from the terminal | [View Guide](../swarms/cli/cli_heavy_swarm_guide.md) |
|
||||
| **CLI Multi-Agent Commands** | Complete guide to multi-agent CLI commands | [View Guide](./cli_multi_agent_quickstart.md) |
|
||||
| **CLI Examples** | Additional CLI usage examples and patterns | [View Guide](../swarms/cli/cli_examples.md) |
|
||||
|
||||
---
|
||||
|
||||
## Use Cases
|
||||
|
||||
| Use Case | Recommended Guide |
|
||||
|----------|-------------------|
|
||||
| First time using CLI | [CLI Quickstart](../swarms/cli/cli_quickstart.md) |
|
||||
| Creating custom agents | [Creating Agents from CLI](../swarms/cli/cli_agent_guide.md) |
|
||||
| Team/production deployments | [YAML Configuration](../swarms/cli/cli_yaml_guide.md) |
|
||||
| Collaborative decision-making | [LLM Council CLI](../swarms/cli/cli_llm_council_guide.md) |
|
||||
| Complex research tasks | [Heavy Swarm CLI](../swarms/cli/cli_heavy_swarm_guide.md) |
|
||||
|
||||
---
|
||||
|
||||
## Related Resources
|
||||
|
||||
- [CLI Reference Documentation](../swarms/cli/cli_reference.md) - Complete command reference
|
||||
- [Agent Documentation](../swarms/structs/agent.md) - Agent class reference
|
||||
- [Environment Configuration](../swarms/install/env.md) - Environment setup guide
|
||||
@@ -0,0 +1,215 @@
|
||||
# CLI Multi-Agent Features: 3-Step Quickstart Guide
|
||||
|
||||
Run LLM Council and Heavy Swarm directly from the command line for seamless DevOps integration. Execute sophisticated multi-agent workflows without writing Python code.
|
||||
|
||||
## Overview
|
||||
|
||||
| Feature | Description |
|
||||
|---------|-------------|
|
||||
| **LLM Council CLI** | Run collaborative decision-making from the terminal |
|
||||
| **Heavy Swarm CLI** | Execute comprehensive research swarms |
|
||||
| **DevOps Ready** | Integrate into CI/CD pipelines and scripts |
|
||||
| **Configurable** | Full parameter control from command line |
|
||||
|
||||
---
|
||||
|
||||
## Step 1: Install and Verify
|
||||
|
||||
Ensure Swarms is installed and verify CLI access:
|
||||
|
||||
```bash
|
||||
# Install swarms
|
||||
pip install swarms
|
||||
|
||||
# Verify CLI is available
|
||||
swarms --help
|
||||
```
|
||||
|
||||
You should see the Swarms CLI banner and available commands.
|
||||
|
||||
---
|
||||
|
||||
## Step 2: Set Environment Variables
|
||||
|
||||
Configure your API keys:
|
||||
|
||||
```bash
|
||||
# Set your OpenAI API key (or other provider)
|
||||
export OPENAI_API_KEY="your-openai-api-key"
|
||||
|
||||
# Optional: Set workspace directory
|
||||
export WORKSPACE_DIR="./agent_workspace"
|
||||
```
|
||||
|
||||
Or add to your `.env` file:
|
||||
|
||||
```
|
||||
OPENAI_API_KEY=your-openai-api-key
|
||||
WORKSPACE_DIR=./agent_workspace
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 3: Run Multi-Agent Commands
|
||||
|
||||
### LLM Council
|
||||
|
||||
Run a collaborative council of AI agents:
|
||||
|
||||
```bash
|
||||
# Basic usage
|
||||
swarms llm-council --task "What is the best approach to implement microservices architecture?"
|
||||
|
||||
# With verbose output
|
||||
swarms llm-council --task "Evaluate investment opportunities in AI startups" --verbose
|
||||
```
|
||||
|
||||
### Heavy Swarm
|
||||
|
||||
Run comprehensive research and analysis:
|
||||
|
||||
```bash
|
||||
# Basic usage
|
||||
swarms heavy-swarm --task "Analyze the current state of quantum computing"
|
||||
|
||||
# With configuration options
|
||||
swarms heavy-swarm \
|
||||
--task "Research renewable energy market trends" \
|
||||
--loops-per-agent 2 \
|
||||
--question-agent-model-name gpt-4o-mini \
|
||||
--worker-model-name gpt-4o-mini \
|
||||
--verbose
|
||||
```
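If you need to drive these commands from a script, for example inside a CI job that should fail when the swarm errors out, you can shell out to the CLI and capture its output. This is a minimal sketch using Python's standard `subprocess` module; the flags are exactly those shown above, while the output handling is illustrative.

```python
import subprocess

# Run a Heavy Swarm task exactly as on the command line and capture the output.
result = subprocess.run(
    [
        "swarms", "heavy-swarm",
        "--task", "Research renewable energy market trends",
        "--loops-per-agent", "1",
        "--verbose",
    ],
    capture_output=True,
    text=True,
    check=True,  # raise CalledProcessError (and fail the CI step) on a non-zero exit
)

print(result.stdout)
```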
|
||||
|
||||
---
|
||||
|
||||
## Complete CLI Reference
|
||||
|
||||
### LLM Council Command
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "<your query>" [options]
|
||||
```
|
||||
|
||||
| Option | Description |
|
||||
|--------|-------------|
|
||||
| `--task` | **Required.** The query or question for the council |
|
||||
| `--verbose` | Enable detailed output logging |
|
||||
|
||||
**Examples:**
|
||||
|
||||
```bash
|
||||
# Strategic decision
|
||||
swarms llm-council --task "Should our startup pivot from B2B to B2C?"
|
||||
|
||||
# Technical evaluation
|
||||
swarms llm-council --task "Compare React vs Vue for enterprise applications"
|
||||
|
||||
# Business analysis
|
||||
swarms llm-council --task "What are the risks of expanding to European markets?"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### Heavy Swarm Command
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm --task "<your task>" [options]
|
||||
```
|
||||
|
||||
| Option | Default | Description |
|
||||
|--------|---------|-------------|
|
||||
| `--task` | - | **Required.** The research task |
|
||||
| `--loops-per-agent` | 1 | Number of loops per agent |
|
||||
| `--question-agent-model-name` | gpt-4o-mini | Model for question agent |
|
||||
| `--worker-model-name` | gpt-4o-mini | Model for worker agents |
|
||||
| `--random-loops-per-agent` | False | Randomize loops per agent |
|
||||
| `--verbose` | False | Enable detailed output |
|
||||
|
||||
**Examples:**
|
||||
|
||||
```bash
|
||||
# Comprehensive research
|
||||
swarms heavy-swarm --task "Research the impact of AI on healthcare diagnostics" --verbose
|
||||
|
||||
# With custom models
|
||||
swarms heavy-swarm \
|
||||
--task "Analyze cryptocurrency regulation trends globally" \
|
||||
--question-agent-model-name gpt-4 \
|
||||
--worker-model-name gpt-4 \
|
||||
--loops-per-agent 3
|
||||
|
||||
# Quick analysis
|
||||
swarms heavy-swarm --task "Summarize recent advances in battery technology"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Other Useful CLI Commands
|
||||
|
||||
### Setup Check
|
||||
|
||||
Verify your environment is properly configured:
|
||||
|
||||
```bash
|
||||
swarms setup-check --verbose
|
||||
```
|
||||
|
||||
### Run Single Agent
|
||||
|
||||
Execute a single agent task:
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Research-Agent" \
|
||||
--task "Summarize recent AI developments" \
|
||||
--model "gpt-4o-mini" \
|
||||
--max-loops 1
|
||||
```
|
||||
|
||||
### Auto Swarm
|
||||
|
||||
Automatically generate and run a swarm configuration:
|
||||
|
||||
```bash
|
||||
swarms autoswarm --task "Build a content analysis pipeline" --model gpt-4
|
||||
```
|
||||
|
||||
### Show All Commands
|
||||
|
||||
Display all available CLI features:
|
||||
|
||||
```bash
|
||||
swarms show-all
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
| Issue | Solution |
|
||||
|-------|----------|
|
||||
| "Command not found" | Ensure `pip install swarms` completed successfully |
|
||||
| "API key not set" | Export `OPENAI_API_KEY` environment variable |
|
||||
| "Task cannot be empty" | Always provide `--task` argument |
|
||||
| Timeout errors | Check network connectivity and API rate limits |
|
||||
|
||||
### Debug Mode
|
||||
|
||||
Run with verbose output for debugging:
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "Your query" --verbose 2>&1 | tee debug.log
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
- Explore [CLI Reference Documentation](../swarms/cli/cli_reference.md) for all commands
|
||||
- See [CLI Examples](../swarms/cli/cli_examples.md) for more use cases
|
||||
- Learn about [LLM Council](./llm_council_quickstart.md) Python API
|
||||
- Try [Heavy Swarm Documentation](../swarms/structs/heavy_swarm.md) for advanced configuration
|
||||
|
||||
@@ -0,0 +1,233 @@
|
||||
# DebateWithJudge: 3-Step Quickstart Guide
|
||||
|
||||
The DebateWithJudge architecture enables structured debates between two agents (Pro and Con) with a Judge providing refined synthesis over multiple rounds. This creates progressively improved answers through iterative argumentation and evaluation.
|
||||
|
||||
## Overview
|
||||
|
||||
| Feature | Description |
|
||||
|---------|-------------|
|
||||
| **Pro Agent** | Argues in favor of a position with evidence and reasoning |
|
||||
| **Con Agent** | Presents counter-arguments and identifies weaknesses |
|
||||
| **Judge Agent** | Evaluates both sides and synthesizes the best elements |
|
||||
| **Iterative Refinement** | Multiple rounds progressively improve the final answer |
|
||||
|
||||
```
|
||||
Agent A (Pro)  ↔  Agent B (Con)
      │                 │
      ▼                 ▼
     Judge / Critic Agent
               │
               ▼
Winner or synthesis → refined answer
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 1: Install and Import
|
||||
|
||||
Ensure you have Swarms installed and import the DebateWithJudge class:
|
||||
|
||||
```bash
|
||||
pip install swarms
|
||||
```
|
||||
|
||||
```python
|
||||
from swarms import DebateWithJudge
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 2: Create the Debate System
|
||||
|
||||
Create a DebateWithJudge system using preset agents (the simplest approach):
|
||||
|
||||
```python
|
||||
# Create debate system with preset optimized agents
|
||||
debate = DebateWithJudge(
|
||||
preset_agents=True, # Use built-in optimized agents
|
||||
max_loops=3, # 3 rounds of debate
|
||||
model_name="gpt-4o-mini",
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 3: Run the Debate
|
||||
|
||||
Execute the debate on a topic:
|
||||
|
||||
```python
|
||||
# Define the debate topic
|
||||
topic = "Should artificial intelligence be regulated by governments?"
|
||||
|
||||
# Run the debate
|
||||
result = debate.run(task=topic)
|
||||
|
||||
# Print the refined answer
|
||||
print(result)
|
||||
|
||||
# Or get just the final synthesis
|
||||
final_answer = debate.get_final_answer()
|
||||
print(final_answer)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Complete Example
|
||||
|
||||
Here's a complete working example:
|
||||
|
||||
```python
|
||||
from swarms import DebateWithJudge
|
||||
|
||||
# Step 1: Create the debate system with preset agents
|
||||
debate_system = DebateWithJudge(
|
||||
preset_agents=True,
|
||||
max_loops=3,
|
||||
model_name="gpt-4o-mini",
|
||||
output_type="str-all-except-first",
|
||||
verbose=True,
|
||||
)
|
||||
|
||||
# Step 2: Define a complex topic
|
||||
topic = (
|
||||
"Should artificial intelligence be regulated by governments? "
|
||||
"Discuss the balance between innovation and safety."
|
||||
)
|
||||
|
||||
# Step 3: Run the debate and get refined answer
|
||||
result = debate_system.run(task=topic)
|
||||
|
||||
print("=" * 60)
|
||||
print("DEBATE RESULT:")
|
||||
print("=" * 60)
|
||||
print(result)
|
||||
|
||||
# Access conversation history for detailed analysis
|
||||
history = debate_system.get_conversation_history()
|
||||
print(f"\nTotal exchanges: {len(history)}")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Custom Agents Example
|
||||
|
||||
Create specialized agents for domain-specific debates:
|
||||
|
||||
```python
|
||||
from swarms import Agent, DebateWithJudge
|
||||
|
||||
# Create specialized Pro agent
|
||||
pro_agent = Agent(
|
||||
agent_name="Innovation-Advocate",
|
||||
system_prompt=(
|
||||
"You are a technology policy expert arguing for innovation and minimal regulation. "
|
||||
"You present arguments focusing on economic growth, technological competitiveness, "
|
||||
"and the risks of over-regulation stifling progress."
|
||||
),
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
# Create specialized Con agent
|
||||
con_agent = Agent(
|
||||
agent_name="Safety-Advocate",
|
||||
system_prompt=(
|
||||
"You are a technology policy expert arguing for strong AI safety regulations. "
|
||||
"You present arguments focusing on public safety, ethical considerations, "
|
||||
"and the need for government oversight of powerful technologies."
|
||||
),
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
# Create specialized Judge agent
|
||||
judge_agent = Agent(
|
||||
agent_name="Policy-Analyst",
|
||||
system_prompt=(
|
||||
"You are an impartial policy analyst evaluating technology regulation debates. "
|
||||
"You synthesize the strongest arguments from both sides and provide "
|
||||
"balanced, actionable policy recommendations."
|
||||
),
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
# Create debate system with custom agents
|
||||
debate = DebateWithJudge(
|
||||
agents=[pro_agent, con_agent, judge_agent], # Pass as list
|
||||
max_loops=3,
|
||||
verbose=True,
|
||||
)
|
||||
|
||||
result = debate.run("Should AI-generated content require mandatory disclosure labels?")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Batch Processing
|
||||
|
||||
Process multiple debate topics:
|
||||
|
||||
```python
|
||||
from swarms import DebateWithJudge
|
||||
|
||||
debate = DebateWithJudge(preset_agents=True, max_loops=2)
|
||||
|
||||
# Multiple topics to debate
|
||||
topics = [
|
||||
"Should remote work become the standard for knowledge workers?",
|
||||
"Is cryptocurrency a viable alternative to traditional banking?",
|
||||
"Should social media platforms be held accountable for content moderation?",
|
||||
]
|
||||
|
||||
# Process all topics
|
||||
results = debate.batched_run(topics)
|
||||
|
||||
for topic, result in zip(topics, results):
|
||||
print(f"\nTopic: {topic}")
|
||||
print(f"Result: {result[:200]}...")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Configuration Options
|
||||
|
||||
| Parameter | Default | Description |
|
||||
|-----------|---------|-------------|
|
||||
| `preset_agents` | `False` | Use built-in optimized agents |
|
||||
| `max_loops` | `3` | Number of debate rounds |
|
||||
| `model_name` | `"gpt-4o-mini"` | Model for preset agents |
|
||||
| `output_type` | `"str-all-except-first"` | Output format |
|
||||
| `verbose` | `True` | Enable detailed logging |
|
||||
|
||||
### Output Types
|
||||
|
||||
| Value | Description |
|
||||
|-------|-------------|
|
||||
| `"str-all-except-first"` | Formatted string, excluding initialization (default) |
|
||||
| `"str"` | All messages as formatted string |
|
||||
| `"dict"` | Messages as dictionary |
|
||||
| `"list"` | Messages as list |
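To pick one of these formats, pass `output_type` at construction time. The sketch below only uses the documented parameter; the exact shape of the dictionary output depends on the conversation history format, so inspect it before relying on specific keys.

```python
from swarms import DebateWithJudge

# Request the debate transcript as a dictionary instead of a formatted string
debate = DebateWithJudge(preset_agents=True, max_loops=2, output_type="dict")

result = debate.run(task="Is open source the right model for foundation AI?")

# Inspect the structure before relying on specific keys
print(type(result))
print(result)
```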
|
||||
|
||||
---
|
||||
|
||||
## Use Cases
|
||||
|
||||
| Domain | Example Topic |
|
||||
|--------|---------------|
|
||||
| **Policy** | "Should universal basic income be implemented?" |
|
||||
| **Technology** | "Microservices vs. monolithic architecture for startups?" |
|
||||
| **Business** | "Should companies prioritize growth or profitability?" |
|
||||
| **Ethics** | "Is it ethical to use AI in hiring decisions?" |
|
||||
| **Science** | "Should gene editing be allowed for non-medical purposes?" |
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
- Explore [DebateWithJudge Reference](../swarms/structs/debate_with_judge.md) for complete API details
|
||||
- See [Debate Examples](https://github.com/kyegomez/swarms/tree/master/examples/multi_agent/debate_examples) for more use cases
|
||||
- Learn about [Orchestration Methods](../swarms/structs/orchestration_methods.md) for other debate architectures
|
||||
|
||||
@@ -0,0 +1,327 @@
|
||||
# GraphWorkflow with Rustworkx: 3-Step Quickstart Guide
|
||||
|
||||
GraphWorkflow provides a powerful workflow orchestration system that creates directed graphs of agents for complex multi-agent collaboration. The new **Rustworkx integration** delivers 5-10x faster performance for large-scale workflows.
|
||||
|
||||
## Overview
|
||||
|
||||
| Feature | Description |
|
||||
|---------|-------------|
|
||||
| **Directed Graph Structure** | Nodes are agents, edges define data flow |
|
||||
| **Dual Backend Support** | NetworkX (compatibility) or Rustworkx (performance) |
|
||||
| **Parallel Execution** | Multiple agents run simultaneously within layers |
|
||||
| **Automatic Compilation** | Optimizes workflow structure for efficient execution |
|
||||
| **5-10x Performance** | Rustworkx backend for high-throughput workflows |
|
||||
|
||||
---
|
||||
|
||||
## Step 1: Install and Import
|
||||
|
||||
Install Swarms and Rustworkx for high-performance workflows:
|
||||
|
||||
```bash
|
||||
pip install swarms rustworkx
|
||||
```
|
||||
|
||||
```python
|
||||
from swarms import Agent, GraphWorkflow
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 2: Create the Workflow with Rustworkx Backend
|
||||
|
||||
Create agents and build a workflow using the high-performance Rustworkx backend:
|
||||
|
||||
```python
|
||||
# Create specialized agents
|
||||
research_agent = Agent(
|
||||
agent_name="ResearchAgent",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="You are a research specialist. Gather and analyze information.",
|
||||
max_loops=1
|
||||
)
|
||||
|
||||
analysis_agent = Agent(
|
||||
agent_name="AnalysisAgent",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="You are an analyst. Process research findings and extract insights.",
|
||||
max_loops=1
|
||||
)
|
||||
|
||||
# Create workflow with rustworkx backend for better performance
|
||||
workflow = GraphWorkflow(
|
||||
name="Research-Analysis-Pipeline",
|
||||
backend="rustworkx", # Use rustworkx for 5-10x faster performance
|
||||
verbose=True
|
||||
)
|
||||
|
||||
# Add agents as nodes
|
||||
workflow.add_node(research_agent)
|
||||
workflow.add_node(analysis_agent)
|
||||
|
||||
# Connect agents with edges
|
||||
workflow.add_edge("ResearchAgent", "AnalysisAgent")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 3: Execute the Workflow
|
||||
|
||||
Run the workflow and get results:
|
||||
|
||||
```python
|
||||
# Execute the workflow
|
||||
results = workflow.run("What are the latest trends in renewable energy technology?")
|
||||
|
||||
# Print results
|
||||
print(results)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Complete Example
|
||||
|
||||
Here's a complete parallel processing workflow:
|
||||
|
||||
```python
|
||||
from swarms import Agent, GraphWorkflow
|
||||
|
||||
# Step 1: Create specialized agents
|
||||
data_collector = Agent(
|
||||
agent_name="DataCollector",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="You collect and organize data from various sources.",
|
||||
max_loops=1
|
||||
)
|
||||
|
||||
technical_analyst = Agent(
|
||||
agent_name="TechnicalAnalyst",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="You perform technical analysis on data.",
|
||||
max_loops=1
|
||||
)
|
||||
|
||||
market_analyst = Agent(
|
||||
agent_name="MarketAnalyst",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="You analyze market trends and conditions.",
|
||||
max_loops=1
|
||||
)
|
||||
|
||||
synthesis_agent = Agent(
|
||||
agent_name="SynthesisAgent",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="You synthesize insights from multiple analysts into a cohesive report.",
|
||||
max_loops=1
|
||||
)
|
||||
|
||||
# Step 2: Build workflow with rustworkx backend
|
||||
workflow = GraphWorkflow(
|
||||
name="Market-Analysis-Pipeline",
|
||||
backend="rustworkx", # High-performance backend
|
||||
verbose=True
|
||||
)
|
||||
|
||||
# Add all agents
|
||||
for agent in [data_collector, technical_analyst, market_analyst, synthesis_agent]:
|
||||
workflow.add_node(agent)
|
||||
|
||||
# Create fan-out pattern: data collector feeds both analysts
|
||||
workflow.add_edges_from_source(
|
||||
"DataCollector",
|
||||
["TechnicalAnalyst", "MarketAnalyst"]
|
||||
)
|
||||
|
||||
# Create fan-in pattern: both analysts feed synthesis agent
|
||||
workflow.add_edges_to_target(
|
||||
["TechnicalAnalyst", "MarketAnalyst"],
|
||||
"SynthesisAgent"
|
||||
)
|
||||
|
||||
# Step 3: Execute and get results
|
||||
results = workflow.run("Analyze Bitcoin market trends for Q4 2024")
|
||||
|
||||
print("=" * 60)
|
||||
print("WORKFLOW RESULTS:")
|
||||
print("=" * 60)
|
||||
print(results)
|
||||
|
||||
# Get compilation status
|
||||
status = workflow.get_compilation_status()
|
||||
print(f"\nLayers: {status['cached_layers_count']}")
|
||||
print(f"Max workers: {status['max_workers']}")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## NetworkX vs Rustworkx Backend
|
||||
|
||||
| Graph Size | Recommended Backend | Performance |
|
||||
|------------|-------------------|-------------|
|
||||
| < 100 nodes | NetworkX | Minimal overhead |
|
||||
| 100-1000 nodes | Either | Both perform well |
|
||||
| 1000+ nodes | **Rustworkx** | 5-10x faster |
|
||||
| 10k+ nodes | **Rustworkx** | Essential |
|
||||
|
||||
```python
|
||||
# NetworkX backend (default, maximum compatibility)
|
||||
workflow = GraphWorkflow(backend="networkx")
|
||||
|
||||
# Rustworkx backend (high performance)
|
||||
workflow = GraphWorkflow(backend="rustworkx")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Edge Patterns
|
||||
|
||||
### Fan-Out (One-to-Many)
|
||||
|
||||
```python
|
||||
# One agent feeds multiple agents
|
||||
workflow.add_edges_from_source(
|
||||
"DataCollector",
|
||||
["Analyst1", "Analyst2", "Analyst3"]
|
||||
)
|
||||
```
|
||||
|
||||
### Fan-In (Many-to-One)
|
||||
|
||||
```python
|
||||
# Multiple agents feed one agent
|
||||
workflow.add_edges_to_target(
|
||||
["Analyst1", "Analyst2", "Analyst3"],
|
||||
"SynthesisAgent"
|
||||
)
|
||||
```
|
||||
|
||||
### Parallel Chain (Many-to-Many)
|
||||
|
||||
```python
|
||||
# Full mesh connection
|
||||
workflow.add_parallel_chain(
|
||||
["Source1", "Source2"],
|
||||
["Target1", "Target2", "Target3"]
|
||||
)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Using from_spec for Quick Setup
|
||||
|
||||
Create workflows quickly with the `from_spec` class method:
|
||||
|
||||
```python
|
||||
from swarms import Agent, GraphWorkflow
|
||||
|
||||
# Create agents
|
||||
agent1 = Agent(agent_name="Researcher", model_name="gpt-4o-mini", max_loops=1)
|
||||
agent2 = Agent(agent_name="Analyzer", model_name="gpt-4o-mini", max_loops=1)
|
||||
agent3 = Agent(agent_name="Reporter", model_name="gpt-4o-mini", max_loops=1)
|
||||
|
||||
# Create workflow from specification
|
||||
workflow = GraphWorkflow.from_spec(
|
||||
agents=[agent1, agent2, agent3],
|
||||
edges=[
|
||||
("Researcher", "Analyzer"),
|
||||
("Analyzer", "Reporter"),
|
||||
],
|
||||
task="Analyze climate change data",
|
||||
backend="rustworkx" # Use high-performance backend
|
||||
)
|
||||
|
||||
results = workflow.run()
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Visualization
|
||||
|
||||
Generate visual representations of your workflow:
|
||||
|
||||
```python
|
||||
# Create visualization (requires graphviz)
|
||||
output_file = workflow.visualize(
|
||||
format="png",
|
||||
view=True,
|
||||
show_summary=True
|
||||
)
|
||||
print(f"Visualization saved to: {output_file}")
|
||||
|
||||
# Simple text visualization
|
||||
text_viz = workflow.visualize_simple()
|
||||
print(text_viz)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Serialization
|
||||
|
||||
Save and load workflows:
|
||||
|
||||
```python
|
||||
# Save workflow with conversation history
|
||||
workflow.save_to_file(
|
||||
"my_workflow.json",
|
||||
include_conversation=True,
|
||||
include_runtime_state=True
|
||||
)
|
||||
|
||||
# Load workflow later
|
||||
loaded_workflow = GraphWorkflow.load_from_file(
|
||||
"my_workflow.json",
|
||||
restore_runtime_state=True
|
||||
)
|
||||
|
||||
# Continue execution
|
||||
results = loaded_workflow.run("Follow-up analysis")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Large-Scale Example with Rustworkx
|
||||
|
||||
```python
|
||||
from swarms import Agent, GraphWorkflow
|
||||
|
||||
# Create workflow for large-scale processing
|
||||
workflow = GraphWorkflow(
|
||||
name="Large-Scale-Pipeline",
|
||||
backend="rustworkx", # Essential for large graphs
|
||||
verbose=True
|
||||
)
|
||||
|
||||
# Create many processing agents
|
||||
processors = []
|
||||
for i in range(50):
|
||||
agent = Agent(
|
||||
agent_name=f"Processor{i}",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1
|
||||
)
|
||||
processors.append(agent)
|
||||
workflow.add_node(agent)
|
||||
|
||||
# Create layered connections
|
||||
for i in range(0, 40, 10):
|
||||
sources = [f"Processor{j}" for j in range(i, i+10)]
|
||||
targets = [f"Processor{j}" for j in range(i+10, min(i+20, 50))]
|
||||
if targets:
|
||||
workflow.add_parallel_chain(sources, targets)
|
||||
|
||||
# Compile and execute
|
||||
workflow.compile()
|
||||
status = workflow.get_compilation_status()
|
||||
print(f"Compiled: {status['cached_layers_count']} layers")
|
||||
|
||||
results = workflow.run("Process dataset in parallel")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
- Explore [GraphWorkflow Reference](../swarms/structs/graph_workflow.md) for complete API details
|
||||
- See [Multi-Agentic Patterns with GraphWorkflow](./graphworkflow_rustworkx_patterns.md) for advanced patterns
|
||||
- Learn about [Visualization Options](../swarms/structs/graph_workflow.md#visualization-methods) for debugging workflows
|
||||
|
||||
@@ -0,0 +1,112 @@
|
||||
# LLM Council Examples
|
||||
|
||||
This page provides examples demonstrating the LLM Council pattern, inspired by Andrej Karpathy's llm-council implementation. The LLM Council uses multiple specialized AI agents that:
|
||||
|
||||
1. Each respond independently to queries
|
||||
2. Review and rank each other's anonymized responses
|
||||
3. Have a Chairman synthesize all responses into a final comprehensive answer
|
||||
|
||||
## Example Files
|
||||
|
||||
All LLM Council examples are located in the [`examples/multi_agent/llm_council_examples/`](https://github.com/kyegomez/swarms/tree/master/examples/multi_agent/llm_council_examples) directory.
|
||||
|
||||
### Marketing & Business
|
||||
|
||||
- **[marketing_strategy_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/marketing_strategy_council.py)** - Marketing strategy analysis and recommendations
|
||||
- **[business_strategy_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/business_strategy_council.py)** - Comprehensive business strategy development
|
||||
|
||||
### Finance & Investment
|
||||
|
||||
- **[finance_analysis_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/finance_analysis_council.py)** - Financial analysis and investment recommendations
|
||||
- **[etf_stock_analysis_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/etf_stock_analysis_council.py)** - ETF and stock analysis with portfolio recommendations
|
||||
|
||||
### Medical & Healthcare
|
||||
|
||||
- **[medical_treatment_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/medical_treatment_council.py)** - Medical treatment recommendations and care plans
|
||||
- **[medical_diagnosis_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/medical_diagnosis_council.py)** - Diagnostic analysis based on symptoms
|
||||
|
||||
### Technology & Research
|
||||
|
||||
- **[technology_assessment_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/technology_assessment_council.py)** - Technology evaluation and implementation strategy
|
||||
- **[research_analysis_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/research_analysis_council.py)** - Comprehensive research analysis on complex topics
|
||||
|
||||
### Legal
|
||||
|
||||
- **[legal_analysis_council.py](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/llm_council_examples/legal_analysis_council.py)** - Legal implications and compliance analysis
|
||||
|
||||
## Basic Usage Pattern
|
||||
|
||||
All examples follow the same pattern:
|
||||
|
||||
```python
|
||||
from swarms.structs.llm_council import LLMCouncil
|
||||
|
||||
# Create the council
|
||||
council = LLMCouncil(verbose=True)
|
||||
|
||||
# Run a query
|
||||
result = council.run("Your query here")
|
||||
|
||||
# Access results
|
||||
print(result["final_response"]) # Chairman's synthesized answer
|
||||
print(result["original_responses"]) # Individual member responses
|
||||
print(result["evaluations"]) # How members ranked each other
|
||||
```
|
||||
|
||||
## Running Examples
|
||||
|
||||
Run any example directly:
|
||||
|
||||
```bash
|
||||
python examples/multi_agent/llm_council_examples/marketing_strategy_council.py
|
||||
python examples/multi_agent/llm_council_examples/finance_analysis_council.py
|
||||
python examples/multi_agent/llm_council_examples/medical_diagnosis_council.py
|
||||
```
|
||||
|
||||
## Key Features
|
||||
|
||||
| Feature | Description |
|
||||
|----------------------|---------------------------------------------------------------------------------------------------------|
|
||||
| **Multiple Perspectives** | Each council member (GPT-5.1, Gemini, Claude, Grok) provides unique insights |
|
||||
| **Peer Review** | Members evaluate and rank each other's responses anonymously |
|
||||
| **Synthesis** | Chairman combines the best elements from all responses |
|
||||
| **Transparency** | See both individual responses and evaluation rankings |
|
||||
|
||||
|
||||
## Council Members
|
||||
|
||||
The default council consists of:
|
||||
|
||||
| Council Member | Description |
|
||||
|-------------------------------|-------------------------------|
|
||||
| **GPT-5.1-Councilor** | Analytical and comprehensive |
|
||||
| **Gemini-3-Pro-Councilor** | Concise and well-structured |
|
||||
| **Claude-Sonnet-4.5-Councilor** | Thoughtful and balanced |
|
||||
| **Grok-4-Councilor** | Creative and innovative |
|
||||
|
||||
## Customization
|
||||
|
||||
You can create custom council members:
|
||||
|
||||
```python
|
||||
from swarms import Agent
|
||||
from swarms.structs.llm_council import LLMCouncil, get_gpt_councilor_prompt
|
||||
|
||||
custom_agent = Agent(
|
||||
agent_name="Custom-Councilor",
|
||||
system_prompt=get_gpt_councilor_prompt(),
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
council = LLMCouncil(
|
||||
council_members=[custom_agent, ...],
|
||||
chairman_model="gpt-5.1",
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
## Documentation
|
||||
|
||||
For complete API reference and detailed documentation, see the [LLM Council Reference Documentation](../swarms/structs/llm_council.md).
|
||||
|
||||
@@ -0,0 +1,170 @@
|
||||
# LLM Council: 3-Step Quickstart Guide
|
||||
|
||||
The LLM Council enables collaborative decision-making with multiple AI agents through peer review and synthesis. Inspired by Andrej Karpathy's llm-council, it creates a council of specialized agents that respond independently, review each other's anonymized responses, and have a Chairman synthesize the best elements into a final answer.
|
||||
|
||||
## Overview
|
||||
|
||||
| Feature | Description |
|
||||
|---------|-------------|
|
||||
| **Multiple Perspectives** | Each council member provides unique insights from different viewpoints |
|
||||
| **Peer Review** | Members evaluate and rank each other's responses anonymously |
|
||||
| **Synthesis** | Chairman combines the best elements from all responses |
|
||||
| **Transparency** | See both individual responses and evaluation rankings |
|
||||
|
||||
---
|
||||
|
||||
## Step 1: Install and Import
|
||||
|
||||
First, ensure you have Swarms installed and import the LLMCouncil class:
|
||||
|
||||
```bash
|
||||
pip install swarms
|
||||
```
|
||||
|
||||
```python
|
||||
from swarms.structs.llm_council import LLMCouncil
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 2: Create the Council
|
||||
|
||||
Create an LLM Council with default council members (GPT-5.1, Gemini 3 Pro, Claude Sonnet 4.5, and Grok-4):
|
||||
|
||||
```python
|
||||
# Create the council with default members
|
||||
council = LLMCouncil(
|
||||
name="Decision Council",
|
||||
verbose=True,
|
||||
output_type="dict-all-except-first"
|
||||
)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 3: Run a Query
|
||||
|
||||
Execute a query and get the synthesized response:
|
||||
|
||||
```python
|
||||
# Run a query
|
||||
result = council.run("What are the key factors to consider when choosing a cloud provider for enterprise applications?")
|
||||
|
||||
# Access the final synthesized answer
|
||||
print(result["final_response"])
|
||||
|
||||
# View individual member responses
|
||||
print(result["original_responses"])
|
||||
|
||||
# See how members ranked each other
|
||||
print(result["evaluations"])
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Complete Example
|
||||
|
||||
Here's a complete working example:
|
||||
|
||||
```python
|
||||
from swarms.structs.llm_council import LLMCouncil
|
||||
|
||||
# Step 1: Create the council
|
||||
council = LLMCouncil(
|
||||
name="Strategy Council",
|
||||
description="A council for strategic decision-making",
|
||||
verbose=True,
|
||||
output_type="dict-all-except-first"
|
||||
)
|
||||
|
||||
# Step 2: Run a strategic query
|
||||
result = council.run(
|
||||
"Should a B2B SaaS startup prioritize product-led growth or sales-led growth? "
|
||||
"Consider factors like market size, customer acquisition costs, and scalability."
|
||||
)
|
||||
|
||||
# Step 3: Process results
|
||||
print("=" * 50)
|
||||
print("FINAL SYNTHESIZED ANSWER:")
|
||||
print("=" * 50)
|
||||
print(result["final_response"])
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Custom Council Members
|
||||
|
||||
For specialized domains, create custom council members:
|
||||
|
||||
```python
|
||||
from swarms import Agent
|
||||
from swarms.structs.llm_council import LLMCouncil, get_gpt_councilor_prompt
|
||||
|
||||
# Create specialized agents
|
||||
finance_expert = Agent(
|
||||
agent_name="Finance-Councilor",
|
||||
system_prompt="You are a financial analyst specializing in market analysis and investment strategies...",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
tech_expert = Agent(
|
||||
agent_name="Technology-Councilor",
|
||||
system_prompt="You are a technology strategist specializing in digital transformation...",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
risk_expert = Agent(
|
||||
agent_name="Risk-Councilor",
|
||||
system_prompt="You are a risk management expert specializing in enterprise risk assessment...",
|
||||
model_name="gpt-4.1",
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
# Create council with custom members
|
||||
council = LLMCouncil(
|
||||
council_members=[finance_expert, tech_expert, risk_expert],
|
||||
chairman_model="gpt-4.1",
|
||||
verbose=True
|
||||
)
|
||||
|
||||
result = council.run("Evaluate the risk-reward profile of investing in AI infrastructure")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## CLI Usage
|
||||
|
||||
Run LLM Council directly from the command line:
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "What is the best approach to implement microservices architecture?"
|
||||
```
|
||||
|
||||
With verbose output:
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "Analyze the pros and cons of remote work" --verbose
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Use Cases
|
||||
|
||||
| Domain | Example Query |
|
||||
|--------|---------------|
|
||||
| **Business Strategy** | "Should we expand internationally or focus on domestic growth?" |
|
||||
| **Technology** | "Which database architecture best suits our high-throughput requirements?" |
|
||||
| **Finance** | "Evaluate investment opportunities in the renewable energy sector" |
|
||||
| **Healthcare** | "What treatment approaches should be considered for this patient profile?" |
|
||||
| **Legal** | "What are the compliance implications of this data processing policy?" |
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
- Explore [LLM Council Examples](./llm_council_examples.md) for domain-specific implementations
|
||||
- Learn about [LLM Council Reference Documentation](../swarms/structs/llm_council.md) for complete API details
|
||||
- Try the [CLI Reference](../swarms/cli/cli_reference.md) for DevOps integration
|
||||
|
||||
@@ -0,0 +1,252 @@
|
||||
# Agent Marketplace Publishing: 3-Step Quickstart Guide
|
||||
|
||||
Publish your agents directly to the Swarms Marketplace with minimal configuration. Share your specialized agents with the community and monetize your creations.
|
||||
|
||||
## Overview
|
||||
|
||||
| Feature | Description |
|
||||
|---------|-------------|
|
||||
| **Direct Publishing** | Publish agents with a single flag |
|
||||
| **Minimal Configuration** | Just add use cases, tags, and capabilities |
|
||||
| **Automatic Integration** | Seamlessly integrates with marketplace API |
|
||||
| **Monetization Ready** | Set pricing for your agents |
|
||||
|
||||
---
|
||||
|
||||
## Step 1: Get Your API Key
|
||||
|
||||
Before publishing, you need a Swarms API key:
|
||||
|
||||
1. Visit [swarms.world/platform/api-keys](https://swarms.world/platform/api-keys)
|
||||
2. Create an account or sign in
|
||||
3. Generate an API key
|
||||
4. Set the environment variable:
|
||||
|
||||
```bash
|
||||
export SWARMS_API_KEY="your-api-key-here"
|
||||
```
|
||||
|
||||
Or add to your `.env` file:
|
||||
|
||||
```
|
||||
SWARMS_API_KEY=your-api-key-here
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 2: Configure Your Agent
|
||||
|
||||
Create an agent with publishing configuration:
|
||||
|
||||
```python
|
||||
from swarms import Agent
|
||||
|
||||
# Create your specialized agent
|
||||
my_agent = Agent(
|
||||
agent_name="Market-Analysis-Agent",
|
||||
agent_description="Expert market analyst specializing in cryptocurrency and stock analysis",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="""You are an expert market analyst specializing in:
|
||||
- Cryptocurrency market analysis
|
||||
- Stock market trends
|
||||
- Risk assessment
|
||||
- Portfolio recommendations
|
||||
|
||||
Provide data-driven insights with confidence levels.""",
|
||||
max_loops=1,
|
||||
|
||||
# Publishing configuration
|
||||
publish_to_marketplace=True,
|
||||
|
||||
# Required: Define use cases
|
||||
use_cases=[
|
||||
{
|
||||
"title": "Cryptocurrency Analysis",
|
||||
"description": "Analyze crypto market trends and provide investment insights"
|
||||
},
|
||||
{
|
||||
"title": "Stock Screening",
|
||||
"description": "Screen stocks based on technical and fundamental criteria"
|
||||
},
|
||||
{
|
||||
"title": "Portfolio Review",
|
||||
"description": "Review and optimize investment portfolios"
|
||||
}
|
||||
],
|
||||
|
||||
)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 3: Run to Publish
|
||||
|
||||
Simply run the agent to trigger publishing:
|
||||
|
||||
```python
|
||||
# Running the agent automatically publishes it
|
||||
result = my_agent.run("Analyze Bitcoin's current market position")
|
||||
|
||||
print(result)
|
||||
print("\n✅ Agent published to marketplace!")
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Complete Example
|
||||
|
||||
Here's a complete working example:
|
||||
|
||||
```python
|
||||
import os
|
||||
from swarms import Agent
|
||||
|
||||
# Ensure API key is set
|
||||
if not os.getenv("SWARMS_API_KEY"):
|
||||
raise ValueError("Please set SWARMS_API_KEY environment variable")
|
||||
|
||||
# Step 1: Create a specialized medical analysis agent
|
||||
medical_agent = Agent(
|
||||
agent_name="Blood-Data-Analysis-Agent",
|
||||
agent_description="Explains and contextualizes common blood test panels with structured insights",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
|
||||
system_prompt="""You are a clinical laboratory data analyst assistant focused on hematology and basic metabolic panels.
|
||||
|
||||
Your goals:
|
||||
1) Interpret common blood test panels (CBC, CMP/BMP, lipid panel, HbA1c, thyroid panels)
|
||||
2) Provide structured findings: out-of-range markers, degree of deviation, clinical significance
|
||||
3) Identify potential confounders (e.g., hemolysis, fasting status, medications)
|
||||
4) Suggest safe, non-diagnostic next steps
|
||||
|
||||
Reliability and safety:
|
||||
- This is not medical advice. Do not diagnose or treat.
|
||||
- Use cautious language with confidence levels (low/medium/high)
|
||||
- Highlight red-flag combinations that warrant urgent clinical evaluation""",
|
||||
|
||||
# Step 2: Publishing configuration
|
||||
publish_to_marketplace=True,
|
||||
|
||||
tags=["lab", "hematology", "metabolic", "education"],
|
||||
capabilities=[
|
||||
"panel-interpretation",
|
||||
"risk-flagging",
|
||||
"guideline-citation"
|
||||
],
|
||||
|
||||
use_cases=[
|
||||
{
|
||||
"title": "Blood Analysis",
|
||||
"description": "Analyze blood samples and summarize notable findings."
|
||||
},
|
||||
{
|
||||
"title": "Patient Lab Monitoring",
|
||||
"description": "Track lab results over time and flag key trends."
|
||||
},
|
||||
{
|
||||
"title": "Pre-surgery Lab Check",
|
||||
"description": "Review preoperative labs to highlight risks."
|
||||
}
|
||||
],
|
||||
)
|
||||
|
||||
# Step 3: Run the agent (this publishes it to the marketplace)
|
||||
result = medical_agent.run(
|
||||
task="Analyze this blood sample: Hematology and Basic Metabolic Panel"
|
||||
)
|
||||
|
||||
print(result)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Required Fields for Publishing
|
||||
|
||||
| Field | Type | Description |
|
||||
|-------|------|-------------|
|
||||
| `publish_to_marketplace` | `bool` | Set to `True` to enable publishing |
|
||||
| `use_cases` | `List[Dict]` | List of use case dictionaries with `title` and `description` |
|
||||
|
||||
### Use Case Format
|
||||
|
||||
```python
|
||||
use_cases = [
|
||||
{
|
||||
"title": "Use Case Title",
|
||||
"description": "Detailed description of what the agent does for this use case"
|
||||
},
|
||||
# Add more use cases...
|
||||
]
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Optional: Programmatic Publishing
|
||||
|
||||
You can also publish prompts/agents directly using the utility function:
|
||||
|
||||
```python
|
||||
from swarms.utils.swarms_marketplace_utils import add_prompt_to_marketplace
|
||||
|
||||
response = add_prompt_to_marketplace(
|
||||
name="My Custom Agent",
|
||||
prompt="Your detailed system prompt here...",
|
||||
description="What this agent does",
|
||||
use_cases=[
|
||||
{"title": "Use Case 1", "description": "Description 1"},
|
||||
{"title": "Use Case 2", "description": "Description 2"}
|
||||
],
|
||||
tags="tag1, tag2, tag3",
|
||||
category="research",
|
||||
is_free=True, # Set to False for paid agents
|
||||
price_usd=0.0 # Set price if not free
|
||||
)
|
||||
|
||||
print(response)
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Marketplace Categories
|
||||
|
||||
| Category | Description |
|
||||
|----------|-------------|
|
||||
| `research` | Research and analysis agents |
|
||||
| `content` | Content generation agents |
|
||||
| `coding` | Programming and development agents |
|
||||
| `finance` | Financial analysis agents |
|
||||
| `healthcare` | Medical and health-related agents |
|
||||
| `education` | Educational and tutoring agents |
|
||||
| `legal` | Legal research and analysis agents |

---

## Best Practices

!!! tip "Publishing Best Practices"
    - **Clear Descriptions**: Write detailed, accurate agent descriptions
    - **Multiple Use Cases**: Provide 3-5 distinct use cases
    - **Relevant Tags**: Use specific, searchable keywords
    - **Test First**: Thoroughly test your agent before publishing
    - **System Prompt Quality**: Ensure your system prompt is well-crafted

!!! warning "Important Notes"
    - `use_cases` is **required** when `publish_to_marketplace=True`
    - Both `tags` and `capabilities` should be provided for discoverability
    - The agent must have a valid `SWARMS_API_KEY` set in the environment
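
The key is read from the process environment when publishing is triggered. As a minimal sketch (the value is a placeholder), you can export it in your shell or set it from Python at startup:

```python
import os

# Placeholder value; use your actual Swarms API key.
os.environ["SWARMS_API_KEY"] = "your-swarms-api-key"
```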

---

## Next Steps

| Next Step | Description |
|-----------|-------------|
| [Swarms Marketplace](https://swarms.world) | Browse published agents |
| [Marketplace Documentation](../swarms_platform/share_and_discover.md) | Learn how to publish and discover agents |
| [Monetization Options](../swarms_platform/monetize.md) | Explore ways to monetize your agent |
| [API Key Management](../swarms_platform/apikeys.md) | Manage your API keys for publishing and access |

@ -0,0 +1,69 @@
|
||||
# Multi-Agent Architectures Overview
|
||||
|
||||
Build sophisticated multi-agent systems with Swarms' advanced orchestration patterns. From hierarchical teams to collaborative councils, these examples demonstrate how to coordinate multiple AI agents for complex tasks.
|
||||
|
||||
## What You'll Learn
|
||||
|
||||
| Topic | Description |
|
||||
|-------|-------------|
|
||||
| **Hierarchical Swarms** | Director agents coordinating worker agents |
|
||||
| **Collaborative Systems** | Agents working together through debate and consensus |
|
||||
| **Workflow Patterns** | Sequential, concurrent, and graph-based execution |
|
||||
| **Routing Systems** | Intelligent task routing to specialized agents |
|
||||
| **Group Interactions** | Multi-agent conversations and discussions |
|
||||
|
||||
---
|
||||
|
||||
## Architecture Examples
|
||||
|
||||
### Hierarchical & Orchestration
|
||||
|
||||
| Example | Description | Link |
|
||||
|---------|-------------|------|
|
||||
| **HierarchicalSwarm** | Multi-level agent organization with director and workers | [View Example](../swarms/examples/hierarchical_swarm_example.md) |
|
||||
| **Hybrid Hierarchical-Cluster Swarm** | Combined hierarchical and cluster patterns | [View Example](../swarms/examples/hhcs_examples.md) |
|
||||
| **SwarmRouter** | Intelligent routing of tasks to appropriate swarms | [View Example](../swarms/examples/swarm_router.md) |
|
||||
| **MultiAgentRouter** | Route tasks to specialized individual agents | [View Example](../swarms/examples/multi_agent_router_minimal.md) |
|
||||
|
||||
### Collaborative & Consensus
|
||||
|
||||
| Example | Description | Link |
|
||||
|---------|-------------|------|
|
||||
| **LLM Council Quickstart** | Collaborative decision-making with peer review and synthesis | [View Example](./llm_council_quickstart.md) |
|
||||
| **LLM Council Examples** | Domain-specific council implementations | [View Examples](./llm_council_examples.md) |
|
||||
| **DebateWithJudge Quickstart** | Two agents debate with judge providing synthesis | [View Example](./debate_quickstart.md) |
|
||||
| **Mixture of Agents** | Heterogeneous agents for diverse task handling | [View Example](../swarms/examples/moa_example.md) |
|
||||
|
||||
### Workflow Patterns
|
||||
|
||||
| Example | Description | Link |
|
||||
|---------|-------------|------|
|
||||
| **GraphWorkflow with Rustworkx** | High-performance graph-based workflows (5-10x faster) | [View Example](./graphworkflow_quickstart.md) |
|
||||
| **Multi-Agentic Patterns with GraphWorkflow** | Advanced graph workflow patterns | [View Example](../swarms/examples/graphworkflow_rustworkx_patterns.md) |
|
||||
| **SequentialWorkflow** | Linear agent pipelines | [View Example](../swarms/examples/sequential_example.md) |
|
||||
| **ConcurrentWorkflow** | Parallel agent execution | [View Example](../swarms/examples/concurrent_workflow.md) |
|
||||
|
||||
### Group Communication
|
||||
|
||||
| Example | Description | Link |
|
||||
|---------|-------------|------|
|
||||
| **Group Chat** | Multi-agent group conversations | [View Example](../swarms/examples/groupchat_example.md) |
|
||||
| **Interactive GroupChat** | Real-time interactive agent discussions | [View Example](../swarms/examples/igc_example.md) |
|
||||
|
||||
### Specialized Patterns
|
||||
|
||||
| Example | Description | Link |
|
||||
|---------|-------------|------|
|
||||
| **Agents as Tools** | Use agents as callable tools for other agents | [View Example](../swarms/examples/agents_as_tools.md) |
|
||||
| **Aggregate Responses** | Combine outputs from multiple agents | [View Example](../swarms/examples/aggregate.md) |
|
||||
| **Unique Swarms** | Experimental and specialized swarm patterns | [View Example](../swarms/examples/unique_swarms.md) |
|
||||
| **BatchedGridWorkflow (Simple)** | Grid-based batch processing | [View Example](../swarms/examples/batched_grid_simple_example.md) |
|
||||
| **BatchedGridWorkflow (Advanced)** | Advanced grid-based batch processing | [View Example](../swarms/examples/batched_grid_advanced_example.md) |
|
||||
|
||||
---
|
||||
|
||||
## Related Resources
|
||||
|
||||
- [Swarm Architectures Concept Guide](../swarms/concept/swarm_architectures.md)
|
||||
- [Choosing Multi-Agent Architecture](../swarms/concept/how_to_choose_swarms.md)
|
||||
- [Custom Swarm Development](../swarms/structs/custom_swarm.md)
|
||||
@ -0,0 +1,39 @@
|
||||
# RAG Examples Overview
|
||||
|
||||
Enhance your agents with Retrieval-Augmented Generation (RAG). Connect to vector databases and knowledge bases to give agents access to your custom data.
|
||||
|
||||
## What You'll Learn
|
||||
|
||||
| Topic | Description |
|
||||
|-------|-------------|
|
||||
| **RAG Fundamentals** | Understanding retrieval-augmented generation |
|
||||
| **Vector Databases** | Connecting to Qdrant, Pinecone, and more |
|
||||
| **Document Processing** | Ingesting and indexing documents |
|
||||
| **Semantic Search** | Finding relevant context for queries |
|
||||
|
||||
---
|
||||
|
||||
## RAG Examples
|
||||
|
||||
| Example | Description | Vector DB | Link |
|
||||
|---------|-------------|-----------|------|
|
||||
| **RAG with Qdrant** | Complete RAG implementation with Qdrant | Qdrant | [View Example](../swarms/RAG/qdrant_rag.md) |
|
||||
|
||||
---
|
||||
|
||||
## Use Cases
|
||||
|
||||
| Use Case | Description |
|
||||
|----------|-------------|
|
||||
| **Document Q&A** | Answer questions about your documents |
|
||||
| **Knowledge Base** | Query internal company knowledge |
|
||||
| **Research Assistant** | Search through research papers |
|
||||
| **Code Documentation** | Query codebase documentation |
|
||||
| **Customer Support** | Access product knowledge |
|
||||
|
||||
---
|
||||
|
||||
## Related Resources
|
||||
|
||||
- [Memory Documentation](../swarms/memory/diy_memory.md) - Building custom memory
|
||||
- [Agent Long-term Memory](../swarms/structs/agent.md#long-term-memory) - Agent memory configuration
|
||||
@ -0,0 +1,55 @@
|
||||
# Tools & Integrations Overview
|
||||
|
||||
Extend your agents with powerful integrations. Connect to web search, browser automation, financial data, and Model Context Protocol (MCP) servers.
|
||||
|
||||
## What You'll Learn
|
||||
|
||||
| Topic | Description |
|
||||
|-------|-------------|
|
||||
| **Web Search** | Integrate real-time web search capabilities |
|
||||
| **Browser Automation** | Control web browsers programmatically |
|
||||
| **Financial Data** | Access stock and market information |
|
||||
| **Web Scraping** | Extract data from websites |
|
||||
| **MCP Integration** | Connect to Model Context Protocol servers |
|
||||
|
||||
---
|
||||
|
||||
## Integration Examples
|
||||
|
||||
### Web Search
|
||||
|
||||
| Integration | Description | Link |
|
||||
|-------------|-------------|------|
|
||||
| **Exa Search** | AI-powered web search for agents | [View Example](./exa_search.md) |
|
||||
|
||||
### Browser Automation
|
||||
|
||||
| Integration | Description | Link |
|
||||
|-------------|-------------|------|
|
||||
| **Browser Use** | Automated browser control with agents | [View Example](./browser_use.md) |
|
||||
|
||||
### Financial Data
|
||||
|
||||
| Integration | Description | Link |
|
||||
|-------------|-------------|------|
|
||||
| **Yahoo Finance** | Stock data, quotes, and market info | [View Example](../swarms/examples/yahoo_finance.md) |
|
||||
|
||||
### Web Scraping
|
||||
|
||||
| Integration | Description | Link |
|
||||
|-------------|-------------|------|
|
||||
| **Firecrawl** | AI-powered web scraping | [View Example](../developer_guides/firecrawl.md) |
|
||||
|
||||
### MCP (Model Context Protocol)
|
||||
|
||||
| Integration | Description | Link |
|
||||
|-------------|-------------|------|
|
||||
| **Multi-MCP Agent** | Connect agents to multiple MCP servers | [View Example](../swarms/examples/multi_mcp_agent.md) |
|
||||
|
||||
---
|
||||
|
||||
## Related Resources
|
||||
|
||||
- [Tools Documentation](../swarms/tools/main.md) - Building custom tools
|
||||
- [MCP Integration Guide](../swarms/structs/agent_mcp.md) - Detailed MCP setup
|
||||
- [swarms-tools Package](../swarms_tools/overview.md) - Pre-built tool collection
|
||||
@ -0,0 +1,242 @@
|
||||
# CLI Agent Guide: Create Agents from Command Line
|
||||
|
||||
Create, configure, and run AI agents directly from your terminal without writing Python code.
|
||||
|
||||
## Basic Agent Creation
|
||||
|
||||
### Step 1: Define Your Agent
|
||||
|
||||
Create an agent with required parameters:
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Research-Agent" \
|
||||
--description "An AI agent that researches topics and provides summaries" \
|
||||
--system-prompt "You are an expert researcher. Provide comprehensive, well-structured summaries with key insights." \
|
||||
--task "Research the current state of quantum computing and its applications"
|
||||
```
|
||||
|
||||
### Step 2: Customize Model Settings
|
||||
|
||||
Add model configuration options:
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Code-Reviewer" \
|
||||
--description "Expert code review assistant" \
|
||||
--system-prompt "You are a senior software engineer. Review code for best practices, bugs, and improvements." \
|
||||
--task "Review this Python function for efficiency: def fib(n): return fib(n-1) + fib(n-2) if n > 1 else n" \
|
||||
--model-name "gpt-4o-mini" \
|
||||
--temperature 0.1 \
|
||||
--max-loops 3
|
||||
```
|
||||
|
||||
### Step 3: Enable Advanced Features
|
||||
|
||||
Add streaming, dashboard, and autosave:
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Analysis-Agent" \
|
||||
--description "Data analysis specialist" \
|
||||
--system-prompt "You are a data analyst. Provide detailed statistical analysis and insights." \
|
||||
--task "Analyze market trends for electric vehicles in 2024" \
|
||||
--model-name "gpt-4" \
|
||||
--streaming-on \
|
||||
--verbose \
|
||||
--autosave \
|
||||
--saved-state-path "./agent_states/analysis_agent.json"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Complete Parameter Reference
|
||||
|
||||
### Required Parameters
|
||||
|
||||
| Parameter | Description | Example |
|
||||
|-----------|-------------|---------|
|
||||
| `--name` | Agent name | `"Research-Agent"` |
|
||||
| `--description` | Agent description | `"AI research assistant"` |
|
||||
| `--system-prompt` | Agent's system instructions | `"You are an expert..."` |
|
||||
| `--task` | Task for the agent | `"Analyze this data"` |
|
||||
|
||||
### Model Parameters
|
||||
|
||||
| Parameter | Default | Description |
|
||||
|-----------|---------|-------------|
|
||||
| `--model-name` | `"gpt-4"` | LLM model to use |
|
||||
| `--temperature` | `None` | Creativity (0.0-2.0) |
|
||||
| `--max-loops` | `None` | Maximum execution loops |
|
||||
| `--context-length` | `None` | Context window size |
|
||||
|
||||
### Behavior Parameters
|
||||
|
||||
| Parameter | Default | Description |
|
||||
|-----------|---------|-------------|
|
||||
| `--auto-generate-prompt` | `False` | Auto-generate prompts |
|
||||
| `--dynamic-temperature-enabled` | `False` | Dynamic temperature adjustment |
|
||||
| `--dynamic-context-window` | `False` | Dynamic context window |
|
||||
| `--streaming-on` | `False` | Enable streaming output |
|
||||
| `--verbose` | `False` | Verbose mode |
|
||||
|
||||
### State Management
|
||||
|
||||
| Parameter | Default | Description |
|
||||
|-----------|---------|-------------|
|
||||
| `--autosave` | `False` | Enable autosave |
|
||||
| `--saved-state-path` | `None` | Path to save state |
|
||||
| `--dashboard` | `False` | Enable dashboard |
|
||||
| `--return-step-meta` | `False` | Return step metadata |
|
||||
|
||||
### Integration
|
||||
|
||||
| Parameter | Default | Description |
|
||||
|-----------|---------|-------------|
|
||||
| `--mcp-url` | `None` | MCP server URL |
|
||||
| `--user-name` | `None` | Username for agent |
|
||||
| `--output-type` | `None` | Output format (str, json) |
|
||||
| `--retry-attempts` | `None` | Retry attempts on failure |
|
||||
|
||||
---
|
||||
|
||||
## Use Case Examples
|
||||
|
||||
### Financial Analyst Agent
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Financial-Analyst" \
|
||||
--description "Expert financial analysis and market insights" \
|
||||
--system-prompt "You are a CFA-certified financial analyst. Provide detailed market analysis with data-driven insights. Include risk assessments and recommendations." \
|
||||
--task "Analyze Apple (AAPL) stock performance and provide investment outlook for Q4 2024" \
|
||||
--model-name "gpt-4" \
|
||||
--temperature 0.2 \
|
||||
--max-loops 5 \
|
||||
--verbose
|
||||
```
|
||||
|
||||
### Code Generation Agent
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Code-Generator" \
|
||||
--description "Expert Python developer and code generator" \
|
||||
--system-prompt "You are an expert Python developer. Write clean, efficient, well-documented code following PEP 8 guidelines. Include type hints and docstrings." \
|
||||
--task "Create a Python class for managing a task queue with priority scheduling" \
|
||||
--model-name "gpt-4" \
|
||||
--temperature 0.1 \
|
||||
--streaming-on
|
||||
```
|
||||
|
||||
### Creative Writing Agent
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Creative-Writer" \
|
||||
--description "Professional content writer and storyteller" \
|
||||
--system-prompt "You are a professional writer with expertise in engaging content. Write compelling, creative content with strong narrative flow." \
|
||||
--task "Write a short story about a scientist who discovers time travel" \
|
||||
--model-name "gpt-4" \
|
||||
--temperature 0.8 \
|
||||
--max-loops 2
|
||||
```
|
||||
|
||||
### Research Summarizer Agent
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Research-Summarizer" \
|
||||
--description "Academic research summarization specialist" \
|
||||
--system-prompt "You are an academic researcher. Summarize research topics with key findings, methodologies, and implications. Cite sources when available." \
|
||||
--task "Summarize recent advances in CRISPR gene editing technology" \
|
||||
--model-name "gpt-4o-mini" \
|
||||
--temperature 0.3 \
|
||||
--verbose \
|
||||
--autosave
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Scripting Examples
|
||||
|
||||
### Bash Script with Multiple Agents
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# run_agents.sh
|
||||
|
||||
# Research phase
|
||||
swarms agent \
|
||||
--name "Researcher" \
|
||||
--description "Research specialist" \
|
||||
--system-prompt "You are a researcher. Gather comprehensive information on topics." \
|
||||
--task "Research the impact of AI on healthcare" \
|
||||
--model-name "gpt-4o-mini" \
|
||||
--output-type "json" > research_output.json
|
||||
|
||||
# Analysis phase
|
||||
swarms agent \
|
||||
--name "Analyst" \
|
||||
--description "Data analyst" \
|
||||
--system-prompt "You are an analyst. Analyze data and provide insights." \
|
||||
--task "Analyze the research findings from: $(cat research_output.json)" \
|
||||
--model-name "gpt-4o-mini" \
|
||||
--output-type "json" > analysis_output.json
|
||||
|
||||
echo "Pipeline complete!"
|
||||
```
|
||||
|
||||
### Loop Through Tasks
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# batch_analysis.sh
|
||||
|
||||
TOPICS=("renewable energy" "electric vehicles" "smart cities" "AI ethics")
|
||||
|
||||
for topic in "${TOPICS[@]}"; do
|
||||
echo "Analyzing: $topic"
|
||||
swarms agent \
|
||||
--name "Topic-Analyst" \
|
||||
--description "Topic analysis specialist" \
|
||||
--system-prompt "You are an expert analyst. Provide concise analysis of topics." \
|
||||
--task "Analyze current trends in: $topic" \
|
||||
--model-name "gpt-4o-mini" \
|
||||
>> "analysis_results.txt"
|
||||
echo "---" >> "analysis_results.txt"
|
||||
done
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Tips and Best Practices
|
||||
|
||||
!!! tip "System Prompt Tips"
|
||||
- Be specific about the agent's role and expertise
|
||||
- Include output format preferences
|
||||
- Specify any constraints or guidelines
|
||||
|
||||
!!! tip "Temperature Settings"
|
||||
- Use **0.1-0.3** for factual/analytical tasks
|
||||
- Use **0.5-0.7** for balanced responses
|
||||
- Use **0.8-1.0** for creative tasks
|
||||
|
||||
!!! tip "Performance Optimization"
|
||||
- Use `gpt-4o-mini` for simpler tasks (faster, cheaper)
|
||||
- Use `gpt-4` for complex reasoning tasks
|
||||
- Set appropriate `--max-loops` to control execution time
|
||||
|
||||
!!! warning "Common Issues"
|
||||
- Ensure API key is set: `export OPENAI_API_KEY="..."`
|
||||
- Wrap multi-word arguments in quotes
|
||||
- Use `--verbose` to debug issues
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [CLI YAML Configuration](./cli_yaml_guide.md) - Run agents from YAML files
|
||||
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - LLM Council and Heavy Swarm
|
||||
- [CLI Reference](./cli_reference.md) - Complete command documentation
|
||||
|
||||
@ -0,0 +1,262 @@
|
||||
# CLI Heavy Swarm Guide: Comprehensive Task Analysis
|
||||
|
||||
Run Heavy Swarm from command line for complex task decomposition and comprehensive analysis with specialized agents.
|
||||
|
||||
## Overview
|
||||
|
||||
Heavy Swarm follows a structured workflow:
|
||||
|
||||
1. **Task Decomposition**: Breaks down tasks into specialized questions
|
||||
2. **Parallel Execution**: Executes specialized agents in parallel
|
||||
3. **Result Synthesis**: Integrates and synthesizes results
|
||||
4. **Comprehensive Reporting**: Generates detailed final reports
|
||||
|
||||
---
|
||||
|
||||
## Basic Usage
|
||||
|
||||
### Step 1: Run a Simple Analysis
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm --task "Analyze the current state of quantum computing"
|
||||
```
|
||||
|
||||
### Step 2: Customize with Options
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Research renewable energy market trends" \
|
||||
--loops-per-agent 2 \
|
||||
--verbose
|
||||
```
|
||||
|
||||
### Step 3: Use Custom Models
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Analyze cryptocurrency regulation globally" \
|
||||
--question-agent-model-name gpt-4 \
|
||||
--worker-model-name gpt-4 \
|
||||
--loops-per-agent 3 \
|
||||
--verbose
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Command Options
|
||||
|
||||
| Option | Default | Description |
|
||||
|--------|---------|-------------|
|
||||
| `--task` | **Required** | The task to analyze |
|
||||
| `--loops-per-agent` | 1 | Execution loops per agent |
|
||||
| `--question-agent-model-name` | gpt-4o-mini | Model for question generation |
|
||||
| `--worker-model-name` | gpt-4o-mini | Model for worker agents |
|
||||
| `--random-loops-per-agent` | False | Randomize loops (1-10) |
|
||||
| `--verbose` | False | Enable detailed output |
|
||||
|
||||
---
|
||||
|
||||
## Specialized Agents
|
||||
|
||||
Heavy Swarm includes specialized agents for different aspects:
|
||||
|
||||
| Agent | Role | Focus |
|
||||
|-------|------|-------|
|
||||
| **Question Agent** | Decomposes tasks | Generates targeted questions |
|
||||
| **Research Agent** | Gathers information | Fast, trustworthy research |
|
||||
| **Analysis Agent** | Processes data | Statistical analysis, insights |
|
||||
| **Writing Agent** | Creates reports | Clear, structured documentation |
|
||||
|
||||
---
|
||||
|
||||
## Use Case Examples
|
||||
|
||||
### Market Research
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Comprehensive market analysis of the electric vehicle industry in North America" \
|
||||
--loops-per-agent 3 \
|
||||
--question-agent-model-name gpt-4 \
|
||||
--worker-model-name gpt-4 \
|
||||
--verbose
|
||||
```
|
||||
|
||||
### Technology Assessment
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Evaluate the technical feasibility and ROI of implementing AI-powered customer service automation" \
|
||||
--loops-per-agent 2 \
|
||||
--verbose
|
||||
```
|
||||
|
||||
### Competitive Analysis
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Analyze competitive landscape for cloud computing services: AWS vs Azure vs Google Cloud" \
|
||||
--loops-per-agent 2 \
|
||||
--question-agent-model-name gpt-4 \
|
||||
--verbose
|
||||
```
|
||||
|
||||
### Investment Research
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Research investment opportunities in AI infrastructure companies for 2024-2025" \
|
||||
--loops-per-agent 3 \
|
||||
--worker-model-name gpt-4 \
|
||||
--verbose
|
||||
```
|
||||
|
||||
### Policy Analysis
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Analyze the impact of proposed AI regulations on tech startups in the United States" \
|
||||
--loops-per-agent 2 \
|
||||
--verbose
|
||||
```
|
||||
|
||||
### Due Diligence
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Conduct technology due diligence for acquiring a fintech startup focusing on payment processing" \
|
||||
--loops-per-agent 3 \
|
||||
--question-agent-model-name gpt-4 \
|
||||
--worker-model-name gpt-4 \
|
||||
--verbose
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Workflow Visualization
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ User Task │
|
||||
│ "Analyze the impact of AI on healthcare" │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Question Agent │
|
||||
│ Decomposes task into specialized questions: │
|
||||
│ - What are current AI applications in healthcare? │
|
||||
│ - What are the regulatory challenges? │
|
||||
│ - What is the market size and growth? │
|
||||
│ - What are the key players and competitors? │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────┬─────────────┬─────────────┬─────────────┐
|
||||
│ Research │ Analysis │ Research │ Writing │
|
||||
│ Agent 1 │ Agent │ Agent 2 │ Agent │
|
||||
└─────────────┴─────────────┴─────────────┴─────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Synthesis & Integration │
|
||||
│ Combines all agent outputs │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Comprehensive Report │
|
||||
│ - Executive Summary │
|
||||
│ - Detailed Findings │
|
||||
│ - Analysis & Insights │
|
||||
│ - Recommendations │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Configuration Recommendations
|
||||
|
||||
### Quick Analysis (Cost-Effective)
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Quick overview of [topic]" \
|
||||
--loops-per-agent 1 \
|
||||
--question-agent-model-name gpt-4o-mini \
|
||||
--worker-model-name gpt-4o-mini
|
||||
```
|
||||
|
||||
### Standard Research
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Detailed analysis of [topic]" \
|
||||
--loops-per-agent 2 \
|
||||
--verbose
|
||||
```
|
||||
|
||||
### Deep Dive (Comprehensive)
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Comprehensive research on [topic]" \
|
||||
--loops-per-agent 3 \
|
||||
--question-agent-model-name gpt-4 \
|
||||
--worker-model-name gpt-4 \
|
||||
--verbose
|
||||
```
|
||||
|
||||
### Exploratory (Variable Depth)
|
||||
|
||||
```bash
|
||||
swarms heavy-swarm \
|
||||
--task "Explore [topic] with varying depth" \
|
||||
--random-loops-per-agent \
|
||||
--verbose
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Best Practices
|
||||
|
||||
!!! tip "Task Formulation"
|
||||
- Be specific about what you want analyzed
|
||||
- Include scope and constraints
|
||||
- Specify desired output format
|
||||
|
||||
!!! tip "Loop Configuration"
|
||||
- Use `--loops-per-agent 1` for quick overviews
|
||||
- Use `--loops-per-agent 2-3` for detailed analysis
|
||||
- Higher loops = more comprehensive but slower
|
||||
|
||||
!!! tip "Model Selection"
|
||||
- Use `gpt-4o-mini` for cost-effective analysis
|
||||
- Use `gpt-4` for complex, nuanced topics
|
||||
- Match model to task complexity
|
||||
|
||||
!!! warning "Performance Notes"
|
||||
- Deep analysis (3+ loops) may take several minutes
|
||||
- Higher loops increase API costs
|
||||
- Use `--verbose` to monitor progress
|
||||
|
||||
---
|
||||
|
||||
## Comparison: LLM Council vs Heavy Swarm
|
||||
|
||||
| Feature | LLM Council | Heavy Swarm |
|
||||
|---------|-------------|-------------|
|
||||
| **Focus** | Collaborative decision-making | Comprehensive task analysis |
|
||||
| **Workflow** | Parallel responses + peer review | Task decomposition + parallel research |
|
||||
| **Best For** | Questions with multiple viewpoints | Complex research and analysis tasks |
|
||||
| **Output** | Synthesized consensus | Detailed research report |
|
||||
| **Speed** | Faster | More thorough but slower |
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [CLI LLM Council Guide](./cli_llm_council_guide.md) - Collaborative decisions
|
||||
- [CLI Reference](./cli_reference.md) - Complete command documentation
|
||||
- [Heavy Swarm Python API](../structs/heavy_swarm.md) - Programmatic usage
|
||||
|
||||
@ -0,0 +1,162 @@
|
||||
# CLI LLM Council Guide: Collaborative Multi-Agent Decisions
|
||||
|
||||
Run the LLM Council directly from command line for collaborative decision-making with multiple AI agents through peer review and synthesis.
|
||||
|
||||
## Overview
|
||||
|
||||
The LLM Council creates a collaborative environment where:
|
||||
|
||||
1. **Multiple Perspectives**: Each council member (GPT-5.1, Gemini, Claude, Grok) independently responds
|
||||
2. **Peer Review**: Members evaluate and rank each other's anonymized responses
|
||||
3. **Synthesis**: A Chairman synthesizes the best elements into a final answer
|
||||
|
||||
---
|
||||
|
||||
## Basic Usage
|
||||
|
||||
### Step 1: Run a Simple Query
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "What are the best practices for code review?"
|
||||
```
|
||||
|
||||
### Step 2: Enable Verbose Output
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "How should we approach microservices architecture?" --verbose
|
||||
```
|
||||
|
||||
### Step 3: Process the Results
|
||||
|
||||
The council returns:
|
||||
- Individual member responses
|
||||
- Peer review rankings
|
||||
- Synthesized final answer
|
||||
|
||||
---
|
||||
|
||||
## Use Case Examples
|
||||
|
||||
### Strategic Business Decisions
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "Should our SaaS startup prioritize product-led growth or sales-led growth? Consider market size, CAC, and scalability."
|
||||
```
|
||||
|
||||
### Technology Evaluation
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "Compare Kubernetes vs Docker Swarm for a startup with 10 microservices. Consider cost, complexity, and scalability."
|
||||
```
|
||||
|
||||
### Investment Analysis
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "Evaluate investment opportunities in AI infrastructure companies. Consider market size, competition, and growth potential."
|
||||
```
|
||||
|
||||
### Policy Analysis
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "What are the implications of implementing AI regulation similar to the EU AI Act in the United States?"
|
||||
```
|
||||
|
||||
### Research Questions
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "What are the most promising approaches to achieving AGI? Evaluate different research paradigms."
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Council Members
|
||||
|
||||
The default council includes:
|
||||
|
||||
| Member | Model | Strengths |
|
||||
|--------|-------|-----------|
|
||||
| **GPT-5.1 Councilor** | gpt-5.1 | Analytical, comprehensive |
|
||||
| **Gemini 3 Pro Councilor** | gemini-3-pro | Concise, well-processed |
|
||||
| **Claude Sonnet 4.5 Councilor** | claude-sonnet-4.5 | Thoughtful, balanced |
|
||||
| **Grok-4 Councilor** | grok-4 | Creative, innovative |
|
||||
| **Chairman** | gpt-5.1 | Synthesizes final answer |
|
||||
|
||||
---
|
||||
|
||||
## Workflow Visualization
|
||||
|
||||
```
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ User Query │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────┬─────────────┬─────────────┬─────────────┐
|
||||
│ GPT-5.1 │ Gemini 3 │ Claude 4.5 │ Grok-4 │
|
||||
│ Councilor │ Councilor │ Councilor │ Councilor │
|
||||
└─────────────┴─────────────┴─────────────┴─────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Anonymized Peer Review │
|
||||
│ Each member ranks all responses (anonymized) │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Chairman │
|
||||
│ Synthesizes best elements from all responses │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
│
|
||||
▼
|
||||
┌─────────────────────────────────────────────────────────────────┐
|
||||
│ Final Synthesized Answer │
|
||||
└─────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Best Practices
|
||||
|
||||
!!! tip "Query Formulation"
|
||||
- Be specific and detailed in your queries
|
||||
- Include context and constraints
|
||||
- Ask for specific types of analysis
|
||||
|
||||
!!! tip "When to Use LLM Council"
|
||||
- Complex decisions requiring multiple perspectives
|
||||
- Research questions needing comprehensive analysis
|
||||
- Strategic planning and evaluation
|
||||
- Questions with trade-offs to consider
|
||||
|
||||
!!! tip "Performance Tips"
|
||||
- Use `--verbose` for detailed progress tracking
|
||||
- Expect responses to take 30-60 seconds
|
||||
- Complex queries may take longer
|
||||
|
||||
!!! warning "Limitations"
|
||||
- Requires multiple API calls (higher cost)
|
||||
- Not suitable for simple factual queries
|
||||
- Response time is longer than single-agent queries
|
||||
|
||||
---
|
||||
|
||||
## Command Reference
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "<query>" [--verbose]
|
||||
```
|
||||
|
||||
| Option | Type | Default | Description |
|
||||
|--------|------|---------|-------------|
|
||||
| `--task` | string | **Required** | Query for the council |
|
||||
| `--verbose` | flag | False | Enable detailed output |
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [CLI Heavy Swarm Guide](./cli_heavy_swarm_guide.md) - Complex task analysis
|
||||
- [CLI Reference](./cli_reference.md) - Complete command documentation
|
||||
- [LLM Council Python API](../examples/llm_council_quickstart.md) - Programmatic usage
|
||||
|
||||
@ -0,0 +1,115 @@
|
||||
# CLI Quickstart: Getting Started in 3 Steps
|
||||
|
||||
Get up and running with the Swarms CLI in minutes. This guide covers installation, setup verification, and running your first commands.
|
||||
|
||||
## Step 1: Install Swarms
|
||||
|
||||
Install the Swarms package which includes the CLI:
|
||||
|
||||
```bash
|
||||
pip install swarms
|
||||
```
|
||||
|
||||
Verify installation:
|
||||
|
||||
```bash
|
||||
swarms --help
|
||||
```
|
||||
|
||||
You should see the Swarms CLI banner with available commands.
|
||||
|
||||
---
|
||||
|
||||
## Step 2: Configure Environment
|
||||
|
||||
Set up your API keys and workspace:
|
||||
|
||||
```bash
|
||||
# Set your OpenAI API key (or other provider)
|
||||
export OPENAI_API_KEY="your-openai-api-key"
|
||||
|
||||
# Optional: Set workspace directory
|
||||
export WORKSPACE_DIR="./agent_workspace"
|
||||
```
|
||||
|
||||
Or create a `.env` file in your project directory:
|
||||
|
||||
```
|
||||
OPENAI_API_KEY=your-openai-api-key
|
||||
WORKSPACE_DIR=./agent_workspace
|
||||
```
|
||||
|
||||
Verify your setup:
|
||||
|
||||
```bash
|
||||
swarms setup-check --verbose
|
||||
```
|
||||
|
||||
Expected output:
|
||||
|
||||
```
|
||||
🔍 Running Swarms Environment Setup Check
|
||||
|
||||
┌─────────────────────────────────────────────────────────────────────────────┐
|
||||
│ Environment Check Results │
|
||||
├─────────┬─────────────────────────┬─────────────────────────────────────────┤
|
||||
│ Status │ Check │ Details │
|
||||
├─────────┼─────────────────────────┼─────────────────────────────────────────┤
|
||||
│ ✓ │ Python Version │ Python 3.11.5 │
|
||||
│ ✓ │ Swarms Version │ Current version: 8.7.0 │
|
||||
│ ✓ │ API Keys │ API keys found: OPENAI_API_KEY │
|
||||
│ ✓ │ Dependencies │ All required dependencies available │
|
||||
└─────────┴─────────────────────────┴─────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 3: Run Your First Command
|
||||
|
||||
Try these commands to verify everything works:
|
||||
|
||||
### View All Features
|
||||
|
||||
```bash
|
||||
swarms features
|
||||
```
|
||||
|
||||
### Create a Simple Agent
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Assistant" \
|
||||
--description "A helpful AI assistant" \
|
||||
--system-prompt "You are a helpful assistant that provides clear, concise answers." \
|
||||
--task "What are the benefits of renewable energy?" \
|
||||
--model-name "gpt-4o-mini"
|
||||
```
|
||||
|
||||
### Run LLM Council
|
||||
|
||||
```bash
|
||||
swarms llm-council --task "What are the best practices for code review?"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Quick Reference
|
||||
|
||||
| Command | Description |
|
||||
|---------|-------------|
|
||||
| `swarms --help` | Show all available commands |
|
||||
| `swarms features` | Display all CLI features |
|
||||
| `swarms setup-check` | Verify environment setup |
|
||||
| `swarms onboarding` | Interactive setup wizard |
|
||||
| `swarms agent` | Create and run a custom agent |
|
||||
| `swarms llm-council` | Run collaborative LLM council |
|
||||
| `swarms heavy-swarm` | Run comprehensive analysis swarm |
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [CLI Agent Guide](./cli_agent_guide.md) - Create custom agents from CLI
|
||||
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - Run LLM Council and Heavy Swarm
|
||||
- [CLI Reference](./cli_reference.md) - Complete command documentation
|
||||
|
||||
@ -0,0 +1,320 @@
|
||||
# CLI YAML Configuration Guide: Run Agents from Config Files
|
||||
|
||||
Run multiple agents from YAML configuration files for reproducible, version-controlled agent deployments.
|
||||
|
||||
## Basic YAML Configuration
|
||||
|
||||
### Step 1: Create YAML Config File
|
||||
|
||||
Create a file named `agents.yaml`:
|
||||
|
||||
```yaml
|
||||
agents:
|
||||
- name: "Research-Agent"
|
||||
description: "AI research specialist"
|
||||
model_name: "gpt-4o-mini"
|
||||
system_prompt: |
|
||||
You are an expert researcher.
|
||||
Provide comprehensive, well-structured research summaries.
|
||||
Include key insights and data points.
|
||||
temperature: 0.3
|
||||
max_loops: 2
|
||||
task: "Research current trends in renewable energy"
|
||||
|
||||
- name: "Analysis-Agent"
|
||||
description: "Data analysis specialist"
|
||||
model_name: "gpt-4o-mini"
|
||||
system_prompt: |
|
||||
You are a data analyst.
|
||||
Provide detailed statistical analysis and insights.
|
||||
Use data-driven reasoning.
|
||||
temperature: 0.2
|
||||
max_loops: 3
|
||||
task: "Analyze market opportunities in the EV sector"
|
||||
```
|
||||
|
||||
### Step 2: Run Agents from YAML
|
||||
|
||||
```bash
|
||||
swarms run-agents --yaml-file agents.yaml
|
||||
```
|
||||
|
||||
### Step 3: View Results
|
||||
|
||||
Results are displayed in the terminal with formatted output for each agent.
|
||||
|
||||
---
|
||||
|
||||
## Complete YAML Schema
|
||||
|
||||
### Agent Configuration Options
|
||||
|
||||
```yaml
|
||||
agents:
|
||||
- name: "Agent-Name" # Required: Agent identifier
|
||||
description: "Agent description" # Required: What the agent does
|
||||
model_name: "gpt-4o-mini" # Model to use
|
||||
system_prompt: "Your instructions" # Agent's system prompt
|
||||
temperature: 0.5 # Creativity (0.0-2.0)
|
||||
max_loops: 3 # Maximum execution loops
|
||||
task: "Task to execute" # Task for this agent
|
||||
|
||||
# Optional settings
|
||||
context_length: 8192 # Context window size
|
||||
streaming_on: true # Enable streaming
|
||||
verbose: true # Verbose output
|
||||
autosave: true # Auto-save state
|
||||
saved_state_path: "./states/agent.json" # State file path
|
||||
output_type: "json" # Output format
|
||||
retry_attempts: 3 # Retries on failure
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Use Case Examples
|
||||
|
||||
### Multi-Agent Research Pipeline
|
||||
|
||||
```yaml
|
||||
# research_pipeline.yaml
|
||||
agents:
|
||||
- name: "Data-Collector"
|
||||
description: "Collects and organizes research data"
|
||||
model_name: "gpt-4o-mini"
|
||||
system_prompt: |
|
||||
You are a research data collector.
|
||||
Gather comprehensive information on the given topic.
|
||||
Organize findings into structured categories.
|
||||
temperature: 0.3
|
||||
max_loops: 2
|
||||
task: "Collect data on AI applications in healthcare"
|
||||
|
||||
- name: "Trend-Analyst"
|
||||
description: "Analyzes trends and patterns"
|
||||
model_name: "gpt-4o-mini"
|
||||
system_prompt: |
|
||||
You are a trend analyst.
|
||||
Identify emerging patterns and trends from data.
|
||||
Provide statistical insights and projections.
|
||||
temperature: 0.2
|
||||
max_loops: 2
|
||||
task: "Analyze AI healthcare adoption trends from 2020-2024"
|
||||
|
||||
- name: "Report-Writer"
|
||||
description: "Creates comprehensive reports"
|
||||
model_name: "gpt-4"
|
||||
system_prompt: |
|
||||
You are a professional report writer.
|
||||
Create comprehensive, well-structured reports.
|
||||
Include executive summaries and key recommendations.
|
||||
temperature: 0.4
|
||||
max_loops: 1
|
||||
task: "Write an executive summary on AI in healthcare"
|
||||
```
|
||||
|
||||
Run:
|
||||
|
||||
```bash
|
||||
swarms run-agents --yaml-file research_pipeline.yaml
|
||||
```
|
||||
|
||||
### Financial Analysis Team
|
||||
|
||||
```yaml
|
||||
# financial_team.yaml
|
||||
agents:
|
||||
- name: "Market-Analyst"
|
||||
description: "Analyzes market conditions"
|
||||
model_name: "gpt-4"
|
||||
system_prompt: |
|
||||
You are a CFA-certified market analyst.
|
||||
Provide detailed market analysis with technical indicators.
|
||||
Include risk assessments and market outlook.
|
||||
temperature: 0.2
|
||||
max_loops: 3
|
||||
task: "Analyze current S&P 500 market conditions"
|
||||
|
||||
- name: "Risk-Assessor"
|
||||
description: "Evaluates investment risks"
|
||||
model_name: "gpt-4"
|
||||
system_prompt: |
|
||||
You are a risk management specialist.
|
||||
Evaluate investment risks and provide mitigation strategies.
|
||||
Use quantitative risk metrics.
|
||||
temperature: 0.1
|
||||
max_loops: 2
|
||||
task: "Assess risks in current tech sector investments"
|
||||
|
||||
- name: "Portfolio-Advisor"
|
||||
description: "Provides portfolio recommendations"
|
||||
model_name: "gpt-4"
|
||||
system_prompt: |
|
||||
You are a portfolio advisor.
|
||||
Provide asset allocation recommendations.
|
||||
Consider risk tolerance and market conditions.
|
||||
temperature: 0.3
|
||||
max_loops: 2
|
||||
task: "Recommend portfolio adjustments for Q4 2024"
|
||||
```
|
||||
|
||||
### Content Creation Pipeline
|
||||
|
||||
```yaml
|
||||
# content_pipeline.yaml
|
||||
agents:
|
||||
- name: "Topic-Researcher"
|
||||
description: "Researches content topics"
|
||||
model_name: "gpt-4o-mini"
|
||||
system_prompt: |
|
||||
You are a content researcher.
|
||||
Research topics thoroughly and identify key angles.
|
||||
Find unique perspectives and data points.
|
||||
temperature: 0.4
|
||||
max_loops: 2
|
||||
task: "Research content angles for 'Future of Remote Work'"
|
||||
|
||||
- name: "Content-Writer"
|
||||
description: "Writes engaging content"
|
||||
model_name: "gpt-4"
|
||||
system_prompt: |
|
||||
You are a professional content writer.
|
||||
Write engaging, SEO-friendly content.
|
||||
Use clear structure with headers and bullet points.
|
||||
temperature: 0.7
|
||||
max_loops: 2
|
||||
task: "Write a blog post about remote work trends"
|
||||
|
||||
- name: "Editor"
|
||||
description: "Edits and polishes content"
|
||||
model_name: "gpt-4o-mini"
|
||||
system_prompt: |
|
||||
You are a professional editor.
|
||||
Review content for clarity, grammar, and style.
|
||||
Suggest improvements and optimize for readability.
|
||||
temperature: 0.2
|
||||
max_loops: 1
|
||||
task: "Edit and polish the blog post for publication"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Advanced Configuration
|
||||
|
||||
### Environment Variables in YAML
|
||||
|
||||
You can reference environment variables:
|
||||
|
||||
```yaml
|
||||
agents:
|
||||
- name: "API-Agent"
|
||||
description: "Agent with API access"
|
||||
model_name: "${MODEL_NAME:-gpt-4o-mini}" # Default if not set
|
||||
system_prompt: "You are an API integration specialist."
|
||||
task: "Test API integration"
|
||||
```
|
||||
|
||||
### Multiple Config Files
|
||||
|
||||
Organize agents by purpose:
|
||||
|
||||
```bash
|
||||
# Run different configurations
|
||||
swarms run-agents --yaml-file research_agents.yaml
|
||||
swarms run-agents --yaml-file analysis_agents.yaml
|
||||
swarms run-agents --yaml-file reporting_agents.yaml
|
||||
```
|
||||
|
||||
### Pipeline Script
|
||||
|
||||
```bash
|
||||
#!/bin/bash
|
||||
# run_pipeline.sh
|
||||
|
||||
echo "Starting research pipeline..."
|
||||
swarms run-agents --yaml-file configs/research.yaml
|
||||
|
||||
echo "Starting analysis pipeline..."
|
||||
swarms run-agents --yaml-file configs/analysis.yaml
|
||||
|
||||
echo "Starting reporting pipeline..."
|
||||
swarms run-agents --yaml-file configs/reporting.yaml
|
||||
|
||||
echo "Pipeline complete!"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Markdown Configuration
|
||||
|
||||
### Alternative: Load from Markdown
|
||||
|
||||
Create agents using markdown with YAML frontmatter:
|
||||
|
||||
```markdown
|
||||
---
|
||||
name: Research Agent
|
||||
description: AI research specialist
|
||||
model_name: gpt-4o-mini
|
||||
temperature: 0.3
|
||||
max_loops: 2
|
||||
---
|
||||
|
||||
You are an expert researcher specializing in technology trends.
|
||||
Provide comprehensive research summaries with:
|
||||
- Key findings and insights
|
||||
- Data points and statistics
|
||||
- Recommendations and implications
|
||||
|
||||
Always cite sources when available and maintain objectivity.
|
||||
```
|
||||
|
||||
Load from markdown:
|
||||
|
||||
```bash
|
||||
# Load single file
|
||||
swarms load-markdown --markdown-path ./agents/research_agent.md
|
||||
|
||||
# Load directory (concurrent processing)
|
||||
swarms load-markdown --markdown-path ./agents/ --concurrent
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Best Practices
|
||||
|
||||
!!! tip "Configuration Management"
|
||||
- Version control your YAML files
|
||||
- Use descriptive agent names
|
||||
- Document purpose in descriptions
|
||||
|
||||
!!! tip "Template Organization"
|
||||
```
|
||||
configs/
|
||||
├── research/
|
||||
│ ├── tech_research.yaml
|
||||
│ └── market_research.yaml
|
||||
├── analysis/
|
||||
│ ├── financial_analysis.yaml
|
||||
│ └── data_analysis.yaml
|
||||
└── production/
|
||||
└── prod_agents.yaml
|
||||
```
|
||||
|
||||
!!! tip "Testing Configurations"
|
||||
- Test with `--verbose` flag first
|
||||
- Use lower `max_loops` for testing
|
||||
- Start with `gpt-4o-mini` for cost efficiency
|
||||
|
||||
!!! warning "Common Pitfalls"
|
||||
- Ensure proper YAML indentation (2 spaces)
|
||||
- Quote strings with special characters
|
||||
- Use `|` for multi-line prompts
|
||||
|
||||
---
|
||||
|
||||
## Next Steps
|
||||
|
||||
- [CLI Agent Guide](./cli_agent_guide.md) - Create agents from command line
|
||||
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - LLM Council and Heavy Swarm
|
||||
- [CLI Reference](./cli_reference.md) - Complete command documentation
|
||||
|
||||
File diff suppressed because it is too large
@ -0,0 +1,534 @@
|
||||
# LLM Council Class Documentation
|
||||
|
||||
```mermaid
|
||||
flowchart TD
|
||||
A[User Query] --> B[Council Members]
|
||||
|
||||
subgraph "Council Members"
|
||||
C1[GPT-5.1-Councilor]
|
||||
C2[Gemini-3-Pro-Councilor]
|
||||
C3[Claude-Sonnet-4.5-Councilor]
|
||||
C4[Grok-4-Councilor]
|
||||
end
|
||||
|
||||
B --> C1
|
||||
B --> C2
|
||||
B --> C3
|
||||
B --> C4
|
||||
|
||||
C1 --> D[Responses]
|
||||
C2 --> D
|
||||
C3 --> D
|
||||
C4 --> D
|
||||
|
||||
D --> E[Anonymize & Evaluate]
|
||||
E --> F[Chairman Synthesis]
|
||||
F --> G[Final Response]
|
||||
|
||||
```
|
||||
|
||||
The `LLMCouncil` class orchestrates multiple specialized LLM agents to collaboratively answer queries through a structured peer review and synthesis process. Inspired by Andrej Karpathy's llm-council implementation, this architecture demonstrates how different models evaluate and rank each other's work, often selecting responses from other models as superior to their own.
|
||||
|
||||
The class automatically tracks all agent messages in a `Conversation` object and formats output using `history_output_formatter`, providing flexible output formats including dictionaries, lists, strings, JSON, YAML, and more.
|
||||
|
||||
## Workflow Overview

The LLM Council follows a four-step process:

1. **Parallel Response Generation**: All council members independently respond to the user query
2. **Anonymization**: Responses are anonymized with random IDs (A, B, C, D, etc.) to ensure objective evaluation
3. **Peer Review**: Each member evaluates and ranks all responses (including potentially their own)
4. **Synthesis**: The Chairman agent synthesizes all responses and evaluations into a final comprehensive answer
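
A minimal end-to-end sketch of this workflow, using the default council and the `run` method documented below (the query is illustrative; `output_type="final"` returns only the Chairman's synthesized answer):

```python
from swarms.structs.llm_council import LLMCouncil

# Default council members and Chairman; return only the final synthesized answer.
council = LLMCouncil(output_type="final", verbose=False)

answer = council.run("What are the trade-offs between nuclear and solar energy investments?")
print(answer)
```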
|
||||
|
||||
## Class Definition
|
||||
|
||||
### LLMCouncil
|
||||
|
||||
```python
|
||||
class LLMCouncil:
|
||||
```
|
||||
|
||||
### Attributes
|
||||
|
||||
| Attribute | Type | Description | Default |
|
||||
|-----------|------|-------------|---------|
|
||||
| `council_members` | `List[Agent]` | List of Agent instances representing council members | `None` (creates default council) |
|
||||
| `chairman` | `Agent` | The Chairman agent responsible for synthesizing responses | Created during initialization |
|
||||
| `conversation` | `Conversation` | Conversation object tracking all messages throughout the workflow | Created during initialization |
|
||||
| `output_type` | `HistoryOutputType` | Format for the output (e.g., "dict", "list", "string", "json", "yaml") | `"dict"` |
|
||||
| `verbose` | `bool` | Whether to print progress and intermediate results | `True` |
|
||||
|
||||
## Methods
|
||||
|
||||
### `__init__`
|
||||
|
||||
Initializes the LLM Council with council members and a Chairman agent.
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `id` | `str` | `swarm_id()` | Unique identifier for the council instance. |
|
||||
| `name` | `str` | `"LLM Council"` | Name of the council instance. |
|
||||
| `description` | `str` | `"A collaborative council..."` | Description of the council's purpose. |
|
||||
| `council_members` | `Optional[List[Agent]]` | `None` | List of Agent instances representing council members. If `None`, creates default council with GPT-5.1, Gemini 3 Pro, Claude Sonnet 4.5, and Grok-4. |
|
||||
| `chairman_model` | `str` | `"gpt-5.1"` | Model name for the Chairman agent that synthesizes responses. |
|
||||
| `verbose` | `bool` | `True` | Whether to print progress and intermediate results. |
|
||||
| `output_type` | `HistoryOutputType` | `"dict"` | Format for the output. Options: "list", "dict", "string", "final", "json", "yaml", "xml", "dict-all-except-first", "str-all-except-first", "dict-final", "list-final". |
|
||||
|
||||
#### Returns
|
||||
|
||||
| Type | Description |
|
||||
|------|-------------|
|
||||
| `LLMCouncil` | Initialized LLM Council instance. |
|
||||
|
||||
#### Description
|
||||
|
||||
Creates an LLM Council instance with specialized council members. If no members are provided, it creates a default council consisting of:
|
||||
|
||||
| Council Member | Description |
|
||||
|---------------------------------|------------------------------------------|
|
||||
| **GPT-5.1-Councilor** | Analytical and comprehensive responses |
|
||||
| **Gemini-3-Pro-Councilor** | Concise and well-processed responses |
|
||||
| **Claude-Sonnet-4.5-Councilor** | Thoughtful and balanced responses |
|
||||
| **Grok-4-Councilor** | Creative and innovative responses |
|
||||
|
||||
The Chairman agent is automatically created with a specialized prompt for synthesizing responses. A `Conversation` object is also initialized to track all messages throughout the workflow, including user queries, council member responses, evaluations, and the final synthesis.
|
||||
|
||||
#### Example Usage
|
||||
|
||||
```python
from swarms import Agent
from swarms.structs.llm_council import LLMCouncil

# Create council with default members
council = LLMCouncil(verbose=True)

# Create council with custom members and output format
custom_members = [
    Agent(agent_name="Expert-1", model_name="gpt-4", max_loops=1),
    Agent(agent_name="Expert-2", model_name="claude-3-opus", max_loops=1),
]
council = LLMCouncil(
    council_members=custom_members,
    chairman_model="gpt-4",
    verbose=True,
    output_type="json"  # Output as JSON string
)
```
|
||||
|
||||
---
|
||||
|
||||
### `run`
|
||||
|
||||
Executes the full LLM Council workflow: parallel responses, anonymization, peer review, and synthesis. All messages are tracked in the conversation object and formatted according to the `output_type` setting.
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Parameter | Type | Default | Description |
|
||||
|-----------|------|---------|-------------|
|
||||
| `query` | `str` | Required | The user's query to process through the council. |
|
||||
|
||||
#### Returns
|
||||
|
||||
| Type | Description |
|
||||
|------|-------------|
|
||||
| `Union[List, Dict, str]` | Formatted output based on `output_type`. The output contains the conversation history with all messages tracked throughout the workflow. |
|
||||
|
||||
#### Output Format
|
||||
|
||||
The return value depends on the `output_type` parameter set during initialization:
|
||||
|
||||
| `output_type` value | Description |
|
||||
|---------------------------------|---------------------------------------------------------------------|
|
||||
| **`"dict"`** (default) | Returns conversation as a dictionary/list of message dictionaries |
|
||||
| **`"list"`** | Returns conversation as a list of formatted strings (`"role: content"`) |
|
||||
| **`"string"`** or **`"str"`** | Returns conversation as a formatted string |
|
||||
| **`"final"`** or **`"last"`** | Returns only the content of the final message (Chairman's response) |
|
||||
| **`"json"`** | Returns conversation as a JSON string |
|
||||
| **`"yaml"`** | Returns conversation as a YAML string |
|
||||
| **`"xml"`** | Returns conversation as an XML string |
|
||||
| **`"dict-all-except-first"`** | Returns all messages except the first as a dictionary |
|
||||
| **`"str-all-except-first"`** | Returns all messages except the first as a string |
|
||||
| **`"dict-final"`** | Returns the final message as a dictionary |
|
||||
| **`"list-final"`** | Returns the final message as a list |
|
||||
|
||||
#### Conversation Tracking
|
||||
|
||||
All messages are automatically tracked in the conversation object with the following roles:
|
||||
|
||||
- **`"User"`**: The original user query
|
||||
- **`"{member_name}"`**: Each council member's response (e.g., "GPT-5.1-Councilor")
|
||||
- **`"{member_name}-Evaluation"`**: Each council member's evaluation (e.g., "GPT-5.1-Councilor-Evaluation")
|
||||
- **`"Chairman"`**: The final synthesized response
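
As a small illustrative sketch, assuming the default `"dict"` output (a list of `{"role", "content"}` message dictionaries) and the default member names, messages can be separated by this role convention:

```python
from swarms.structs.llm_council import LLMCouncil

council = LLMCouncil(verbose=False)  # default output_type="dict"
messages = council.run("Compare index funds and actively managed funds.")

# Split messages using the role naming convention described above.
member_responses = [m for m in messages if m["role"].endswith("-Councilor")]
evaluations = [m for m in messages if m["role"].endswith("-Evaluation")]
final_answer = next(m["content"] for m in messages if m["role"] == "Chairman")
```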
|
||||
|
||||
#### Description
|
||||
|
||||
Executes the complete LLM Council workflow:
|
||||
|
||||
1. **User Query Tracking**: Adds the user query to the conversation as "User" role
|
||||
2. **Dispatch Phase**: Sends the query to all council members in parallel using `run_agents_concurrently`
|
||||
3. **Collection Phase**: Collects all responses, maps them to member names, and adds each to the conversation with the member's name as the role
|
||||
4. **Anonymization Phase**: Creates anonymous IDs (A, B, C, D, etc.) and shuffles them to ensure anonymity
|
||||
5. **Evaluation Phase**: Each member evaluates and ranks all anonymized responses using `batched_grid_agent_execution`, then adds evaluations to the conversation with "{member_name}-Evaluation" as the role
|
||||
6. **Synthesis Phase**: The Chairman agent synthesizes all responses and evaluations into a final comprehensive answer, which is added to the conversation as "Chairman" role
|
||||
7. **Output Formatting**: Returns the conversation formatted according to the `output_type` setting using `history_output_formatter`
|
||||
|
||||
The method provides verbose output by default, showing progress at each stage. All messages are tracked in the `conversation` attribute for later access or export.
|
||||
|
||||
#### Example Usage
|
||||
|
||||
```python
|
||||
from swarms.structs.llm_council import LLMCouncil
|
||||
|
||||
# Create council with default output format (dict)
|
||||
council = LLMCouncil(verbose=True)
|
||||
|
||||
query = "What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?"
|
||||
|
||||
# Run the council - returns formatted conversation based on output_type
|
||||
result = council.run(query)
|
||||
|
||||
# With default "dict" output_type, result is a list of message dictionaries
|
||||
# Access conversation messages
|
||||
for message in result:
|
||||
print(f"{message['role']}: {message['content'][:200]}...")
|
||||
|
||||
# Access the conversation object directly for more control
|
||||
conversation = council.conversation
|
||||
print("\nFinal message:", conversation.get_final_message_content())
|
||||
|
||||
# Get conversation as string
|
||||
print("\nFull conversation:")
|
||||
print(conversation.get_str())
|
||||
|
||||
# Example with different output types
|
||||
council_json = LLMCouncil(output_type="json", verbose=False)
|
||||
result_json = council_json.run(query) # Returns JSON string
|
||||
|
||||
council_final = LLMCouncil(output_type="final", verbose=False)
|
||||
result_final = council_final.run(query) # Returns only final response string
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### `_create_default_council`
|
||||
|
||||
Creates default council members with specialized prompts and models.
|
||||
|
||||
#### Parameters
|
||||
|
||||
None (internal method).
|
||||
|
||||
#### Returns
|
||||
|
||||
| Type | Description |
|
||||
|------|-------------|
|
||||
| `List[Agent]` | List of Agent instances configured as council members. |
|
||||
|
||||
#### Description
|
||||
|
||||
Internal method that creates the default council configuration with four specialized agents:
|
||||
|
||||
- **GPT-5.1-Councilor** (`model_name="gpt-5.1"`): Analytical and comprehensive, temperature=0.7
|
||||
- **Gemini-3-Pro-Councilor** (`model_name="gemini-2.5-flash"`): Concise and structured, temperature=0.7
|
||||
- **Claude-Sonnet-4.5-Councilor** (`model_name="anthropic/claude-sonnet-4-5"`): Thoughtful and balanced, temperature=0.0
|
||||
- **Grok-4-Councilor** (`model_name="x-ai/grok-4"`): Creative and innovative, temperature=0.8
|
||||
|
||||
Each agent is configured with the settings below; a construction sketch follows the list:
|
||||
|
||||
- Specialized system prompts matching their role
|
||||
- `max_loops=1` for single-response generation
|
||||
- `verbose=False` to reduce noise during parallel execution
|
||||
- Appropriate temperature settings for their style
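Put together, each default member is roughly equivalent to an `Agent` constructed like the sketch below (based on the settings listed above; the authoritative defaults live in `_create_default_council`):

```python
from swarms import Agent
from swarms.structs.llm_council import get_gpt_councilor_prompt

# Roughly how the GPT-5.1 councilor is configured by default
gpt_councilor = Agent(
    agent_name="GPT-5.1-Councilor",
    system_prompt=get_gpt_councilor_prompt(),
    model_name="gpt-5.1",
    max_loops=1,     # single-response generation
    verbose=False,   # keep parallel execution quiet
    temperature=0.7, # analytical, comprehensive style
)
```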
|
||||
|
||||
---
|
||||
|
||||
## Helper Functions
|
||||
|
||||
### `get_gpt_councilor_prompt()`
|
||||
|
||||
Returns the system prompt for GPT-5.1 councilor agent.
|
||||
|
||||
#### Returns
|
||||
|
||||
| Type | Description |
|
||||
|------|-------------|
|
||||
| `str` | System prompt string emphasizing analytical thinking and comprehensive coverage. |
|
||||
|
||||
---
|
||||
|
||||
### `get_gemini_councilor_prompt()`
|
||||
|
||||
Returns the system prompt for Gemini 3 Pro councilor agent.
|
||||
|
||||
#### Returns
|
||||
|
||||
| Type | Description |
|
||||
|------|-------------|
|
||||
| `str` | System prompt string emphasizing concise, well-processed, and structured responses. |
|
||||
|
||||
---
|
||||
|
||||
### `get_claude_councilor_prompt()`
|
||||
|
||||
Returns the system prompt for Claude Sonnet 4.5 councilor agent.
|
||||
|
||||
#### Returns
|
||||
|
||||
| Type | Description |
|
||||
|------|-------------|
|
||||
| `str` | System prompt string emphasizing thoughtful, balanced, and nuanced responses. |
|
||||
|
||||
---
|
||||
|
||||
### `get_grok_councilor_prompt()`
|
||||
|
||||
Returns the system prompt for Grok-4 councilor agent.
|
||||
|
||||
#### Returns
|
||||
|
||||
| Type | Description |
|
||||
|------|-------------|
|
||||
| `str` | System prompt string emphasizing creative, innovative, and unique perspectives. |
|
||||
|
||||
---
|
||||
|
||||
### `get_chairman_prompt()`
|
||||
|
||||
Returns the system prompt for the Chairman agent.
|
||||
|
||||
#### Returns
|
||||
|
||||
| Type | Description |
|
||||
|------|-------------|
|
||||
| `str` | System prompt string for synthesizing responses and evaluations into a final answer. |
|
||||
|
||||
---
|
||||
|
||||
### `get_evaluation_prompt(query, responses, evaluator_name)`
|
||||
|
||||
Creates evaluation prompt for council members to review and rank responses.
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Parameter | Type | Description |
|
||||
|-----------|------|-------------|
|
||||
| `query` | `str` | The original user query. |
|
||||
| `responses` | `Dict[str, str]` | Dictionary mapping anonymous IDs to response texts. |
|
||||
| `evaluator_name` | `str` | Name of the agent doing the evaluation. |
|
||||
|
||||
#### Returns
|
||||
|
||||
| Type | Description |
|
||||
|------|-------------|
|
||||
| `str` | Formatted evaluation prompt string with instructions for ranking responses. |
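A minimal usage sketch, with placeholder query text and anonymized responses:

```python
from swarms.structs.llm_council import get_evaluation_prompt

prompt = get_evaluation_prompt(
    query="What are the best energy ETFs?",
    responses={"A": "first anonymized answer...", "B": "second anonymized answer..."},
    evaluator_name="GPT-5.1-Councilor",
)
# During the evaluation phase, each council member receives a prompt like this
```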
|
||||
|
||||
---
|
||||
|
||||
### `get_synthesis_prompt(query, original_responses, evaluations, id_to_member)`
|
||||
|
||||
Creates synthesis prompt for the Chairman.
|
||||
|
||||
#### Parameters
|
||||
|
||||
| Parameter | Type | Description |
|
||||
|-----------|------|-------------|
|
||||
| `query` | `str` | Original user query. |
|
||||
| `original_responses` | `Dict[str, str]` | Dictionary mapping member names to their responses. |
|
||||
| `evaluations` | `Dict[str, str]` | Dictionary mapping evaluator names to their evaluation texts. |
|
||||
| `id_to_member` | `Dict[str, str]` | Mapping from anonymous IDs to member names. |
|
||||
|
||||
#### Returns
|
||||
|
||||
| Type | Description |
|
||||
|------|-------------|
|
||||
| `str` | Formatted synthesis prompt for the Chairman agent. |
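Usage mirrors `get_evaluation_prompt`; a brief sketch with placeholder values:

```python
from swarms.structs.llm_council import get_synthesis_prompt

prompt = get_synthesis_prompt(
    query="What are the best energy ETFs?",
    original_responses={"GPT-5.1-Councilor": "draft answer..."},
    evaluations={"GPT-5.1-Councilor": "Ranking: B > A ..."},
    id_to_member={"A": "GPT-5.1-Councilor", "B": "Gemini-3-Pro-Councilor"},
)
# The Chairman agent runs this prompt to produce the final synthesized answer
```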
|
||||
|
||||
---
|
||||
|
||||
## Use Cases
|
||||
|
||||
The LLM Council is ideal for scenarios requiring:
|
||||
|
||||
- **Multi-perspective Analysis**: When you need diverse viewpoints on complex topics
|
||||
- **Quality Assurance**: When peer review and ranking can improve response quality
|
||||
- **Transparent Decision Making**: When you want to see how different models evaluate each other
|
||||
- **Synthesis of Expertise**: When combining multiple specialized perspectives is valuable
|
||||
|
||||
### Common Applications
|
||||
|
||||
| Use Case | Description |
|
||||
|-----------------------|--------------------------------------------------------------------------------------------------|
|
||||
| **Medical Diagnosis** | Multiple medical AI agents provide diagnoses, evaluate each other, and synthesize recommendations |
|
||||
| **Financial Analysis**| Different financial experts analyze investments and rank each other's assessments |
|
||||
| **Legal Analysis** | Multiple legal perspectives evaluate compliance and risk |
|
||||
| **Business Strategy** | Diverse strategic viewpoints are synthesized into comprehensive plans |
|
||||
| **Research Analysis** | Multiple research perspectives are combined for thorough analysis |
|
||||
|
||||
|
||||
## Examples
|
||||
|
||||
For comprehensive examples demonstrating various use cases, see the [LLM Council Examples](../../../examples/multi_agent/llm_council_examples/) directory:
|
||||
|
||||
- **Medical**: `medical_diagnosis_council.py`, `medical_treatment_council.py`
|
||||
- **Finance**: `finance_analysis_council.py`, `etf_stock_analysis_council.py`
|
||||
- **Business**: `business_strategy_council.py`, `marketing_strategy_council.py`
|
||||
- **Technology**: `technology_assessment_council.py`, `research_analysis_council.py`
|
||||
- **Legal**: `legal_analysis_council.py`
|
||||
|
||||
### Quick Start Example
|
||||
|
||||
```python
|
||||
from swarms.structs.llm_council import LLMCouncil
|
||||
|
||||
# Create the council with default output format
|
||||
council = LLMCouncil(verbose=True)
|
||||
|
||||
# Example query
|
||||
query = "What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?"
|
||||
|
||||
# Run the council - returns formatted conversation
|
||||
result = council.run(query)
|
||||
|
||||
# With default "dict" output_type, result is a list of message dictionaries
|
||||
# Print all messages
|
||||
for message in result:
|
||||
role = message['role']
|
||||
content = message['content']
|
||||
print(f"\n{role}:")
|
||||
print(content[:500] + "..." if len(content) > 500 else content)
|
||||
|
||||
# Access conversation object directly for more options
|
||||
conversation = council.conversation
|
||||
|
||||
# Get only the final response
|
||||
print("\n" + "="*80)
|
||||
print("FINAL RESPONSE")
|
||||
print("="*80)
|
||||
print(conversation.get_final_message_content())
|
||||
|
||||
# Get conversation as formatted string
|
||||
print("\n" + "="*80)
|
||||
print("FULL CONVERSATION")
|
||||
print("="*80)
|
||||
print(conversation.get_str())
|
||||
|
||||
# Export conversation to JSON
|
||||
conversation.export()
|
||||
```
|
||||
|
||||
## Customization
|
||||
|
||||
### Creating Custom Council Members
|
||||
|
||||
You can create custom council members with specialized roles:
|
||||
|
||||
```python
|
||||
from swarms import Agent
|
||||
from swarms.structs.llm_council import LLMCouncil, get_gpt_councilor_prompt
|
||||
|
||||
# Create custom councilor
|
||||
custom_agent = Agent(
|
||||
agent_name="Domain-Expert-Councilor",
|
||||
agent_description="Specialized domain expert for specific analysis",
|
||||
system_prompt=get_gpt_councilor_prompt(), # Or create custom prompt
|
||||
model_name="gpt-4",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
temperature=0.7,
|
||||
)
|
||||
|
||||
# Create council with custom members
|
||||
council = LLMCouncil(
|
||||
council_members=[custom_agent, ...], # Add your custom agents
|
||||
chairman_model="gpt-4",
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
### Custom Chairman Model
|
||||
|
||||
You can specify a different model for the Chairman:
|
||||
|
||||
```python
|
||||
council = LLMCouncil(
|
||||
chairman_model="claude-3-opus", # Use Claude as Chairman
|
||||
verbose=True
|
||||
)
|
||||
```
|
||||
|
||||
### Custom Output Format
|
||||
|
||||
You can control the output format using the `output_type` parameter:
|
||||
|
||||
```python
|
||||
# Get output as JSON string
|
||||
council = LLMCouncil(output_type="json")
|
||||
result = council.run(query) # Returns JSON string
|
||||
|
||||
# Get only the final response
|
||||
council = LLMCouncil(output_type="final")
|
||||
result = council.run(query) # Returns only final response string
|
||||
|
||||
# Get as YAML
|
||||
council = LLMCouncil(output_type="yaml")
|
||||
result = council.run(query) # Returns YAML string
|
||||
|
||||
# Get as formatted string
|
||||
council = LLMCouncil(output_type="string")
|
||||
result = council.run(query) # Returns formatted conversation string
|
||||
```
|
||||
|
||||
### Accessing Conversation History
|
||||
|
||||
The conversation object is accessible for advanced usage:
|
||||
|
||||
```python
|
||||
council = LLMCouncil()
|
||||
council.run(query)
|
||||
|
||||
# Access conversation directly
|
||||
conversation = council.conversation
|
||||
|
||||
# Get conversation history
|
||||
history = conversation.conversation_history
|
||||
|
||||
# Export to file
|
||||
conversation.export() # Saves to default location
|
||||
|
||||
# Get specific format
|
||||
json_output = conversation.to_json()
|
||||
messages_as_dicts = conversation.return_messages_as_dictionary()
|
||||
```
|
||||
|
||||
## Architecture Benefits
|
||||
|
||||
1. **Diversity**: Multiple models provide varied perspectives and approaches
|
||||
2. **Quality Control**: Peer review ensures responses are evaluated objectively
|
||||
3. **Synthesis**: Chairman combines the best elements from all responses
|
||||
4. **Transparency**: Full visibility into individual responses and evaluation rankings
|
||||
5. **Scalability**: Easy to add or remove council members
|
||||
6. **Flexibility**: Supports custom agents and models
|
||||
7. **Conversation Tracking**: All messages are automatically tracked in a Conversation object for history and export
|
||||
8. **Flexible Output**: Multiple output formats supported via `history_output_formatter` (dict, list, string, JSON, YAML, XML, etc.)
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
| Feature | Description |
|
||||
|---------------------------|----------------------------------------------------------------------------------------------------------------|
|
||||
| **Parallel Execution** | Both response generation and evaluation phases run in parallel for efficiency |
|
||||
| **Anonymization** | Responses are anonymized to prevent bias in evaluation |
|
||||
| **Model Selection** | Different models can be used for different roles based on their strengths |
|
||||
| **Verbose Mode** | Can be disabled for production use to reduce output |
|
||||
| **Conversation Management** | Conversation object efficiently tracks all messages in memory and supports export to JSON/YAML files |
|
||||
| **Output Formatting** | Choose lightweight output formats (e.g., "final") for production to reduce memory usage |
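Following the table above, a production-leaning configuration might disable verbose output and request only the final answer. This is a sketch, not a prescribed setup:

```python
from swarms.structs.llm_council import LLMCouncil

council = LLMCouncil(
    output_type="final",  # return only the Chairman's synthesized answer
    verbose=False,        # suppress per-stage progress output
)
final_answer = council.run("Summarize the key risks in the EV supply chain")
```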
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [Multi-Agent Architectures Overview](overview.md)
|
||||
- [Council of Judges](council_of_judges.md) - Similar peer review pattern
|
||||
- [Agent Class Reference](agent.md) - Understanding individual agents
|
||||
- [Conversation Class Reference](conversation.md) - Understanding conversation tracking and management
|
||||
- [Multi-Agent Execution Utilities](various_execution_methods.md) - Underlying execution methods
|
||||
- [History Output Formatter](../../../swarms/utils/history_output_formatter.py) - Output formatting utilities
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Setup Check Example
|
||||
# Verify your Swarms environment setup
|
||||
|
||||
swarms setup-check
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Onboarding Example
|
||||
# Start the interactive onboarding process
|
||||
|
||||
swarms onboarding
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Get API Key Example
|
||||
# Open API key portal in browser
|
||||
|
||||
swarms get-api-key
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Check Login Example
|
||||
# Verify authentication status
|
||||
|
||||
swarms check-login
|
||||
|
||||
@ -0,0 +1,12 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Create Agent Example
|
||||
# Create and run a custom agent
|
||||
|
||||
swarms agent \
|
||||
--name "Research Agent" \
|
||||
--description "AI research specialist" \
|
||||
--system-prompt "You are an expert research agent." \
|
||||
--task "Analyze current trends in renewable energy" \
|
||||
--model-name "gpt-4o-mini"
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Run Agents from YAML Example
|
||||
# Execute agents from YAML configuration file
|
||||
|
||||
swarms run-agents --yaml-file agents.yaml
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Load Markdown Agents Example
|
||||
# Load agents from markdown files
|
||||
|
||||
swarms load-markdown --markdown-path ./agents/
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - LLM Council Example
|
||||
# Run LLM Council for collaborative problem-solving
|
||||
|
||||
swarms llm-council --task "What are the best energy ETFs to invest in right now?"
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - HeavySwarm Example
|
||||
# Run HeavySwarm for complex task analysis
|
||||
|
||||
swarms heavy-swarm --task "Analyze current market trends for renewable energy investments"
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Autoswarm Example
|
||||
# Auto-generate swarm configuration
|
||||
|
||||
swarms autoswarm --task "Analyze quarterly sales data" --model "gpt-4"
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Features Example
|
||||
# Display all available CLI features
|
||||
|
||||
swarms features
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Help Example
|
||||
# Display comprehensive help documentation
|
||||
|
||||
swarms help
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Auto Upgrade Example
|
||||
# Update Swarms to the latest version
|
||||
|
||||
swarms auto-upgrade
|
||||
|
||||
@ -0,0 +1,7 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Book Call Example
|
||||
# Schedule a strategy session
|
||||
|
||||
swarms book-call
|
||||
|
||||
@ -0,0 +1,197 @@
|
||||
# Swarms CLI Examples
|
||||
|
||||
This directory contains shell script examples demonstrating all available Swarms CLI commands and features. Each script is simple, focused, and demonstrates a single CLI command.
|
||||
|
||||
## Quick Start
|
||||
|
||||
All scripts are executable. Run them directly:
|
||||
|
||||
```bash
|
||||
chmod +x *.sh
|
||||
./01_setup_check.sh
|
||||
```
|
||||
|
||||
Or execute with bash:
|
||||
|
||||
```bash
|
||||
bash 01_setup_check.sh
|
||||
```
|
||||
|
||||
## Available Examples
|
||||
|
||||
### Setup & Configuration
|
||||
|
||||
- **[01_setup_check.sh](examples/cli/01_setup_check.sh)** - Environment setup verification
|
||||
```bash
|
||||
swarms setup-check
|
||||
```
|
||||
|
||||
- **[02_onboarding.sh](examples/cli/02_onboarding.sh)** - Interactive onboarding process
|
||||
```bash
|
||||
swarms onboarding
|
||||
```
|
||||
|
||||
- **[03_get_api_key.sh](examples/cli/03_get_api_key.sh)** - Retrieve API keys
|
||||
```bash
|
||||
swarms get-api-key
|
||||
```
|
||||
|
||||
- **[04_check_login.sh](examples/cli/04_check_login.sh)** - Verify authentication
|
||||
```bash
|
||||
swarms check-login
|
||||
```
|
||||
|
||||
### Agent Management
|
||||
|
||||
- **[05_create_agent.sh](examples/cli/05_create_agent.sh)** - Create and run custom agents
|
||||
```bash
|
||||
swarms agent --name "Agent" --description "Description" --system-prompt "Prompt" --task "Task"
|
||||
```
|
||||
|
||||
- **[06_run_agents_yaml.sh](examples/cli/06_run_agents_yaml.sh)** - Execute agents from YAML
|
||||
```bash
|
||||
swarms run-agents --yaml-file agents.yaml
|
||||
```
|
||||
|
||||
- **[07_load_markdown.sh](examples/cli/07_load_markdown.sh)** - Load agents from markdown files
|
||||
```bash
|
||||
swarms load-markdown --markdown-path ./agents/
|
||||
```
|
||||
|
||||
### Multi-Agent Architectures
|
||||
|
||||
- **[08_llm_council.sh](examples/cli/08_llm_council.sh)** - Run LLM Council collaboration
|
||||
```bash
|
||||
swarms llm-council --task "Your question here"
|
||||
```
|
||||
|
||||
- **[09_heavy_swarm.sh](examples/cli/09_heavy_swarm.sh)** - Run HeavySwarm for complex tasks
|
||||
```bash
|
||||
swarms heavy-swarm --task "Your complex task here"
|
||||
```
|
||||
|
||||
- **[10_autoswarm.sh](examples/cli/10_autoswarm.sh)** - Auto-generate swarm configurations
|
||||
```bash
|
||||
swarms autoswarm --task "Task description" --model "gpt-4"
|
||||
```
|
||||
|
||||
### Utilities
|
||||
|
||||
- **[11_features.sh](examples/cli/11_features.sh)** - Display all available features
|
||||
```bash
|
||||
swarms features
|
||||
```
|
||||
|
||||
- **[12_help.sh](examples/cli/12_help.sh)** - Display help documentation
|
||||
```bash
|
||||
swarms help
|
||||
```
|
||||
|
||||
- **[13_auto_upgrade.sh](examples/cli/13_auto_upgrade.sh)** - Update Swarms package
|
||||
```bash
|
||||
swarms auto-upgrade
|
||||
```
|
||||
|
||||
- **[14_book_call.sh](examples/cli/14_book_call.sh)** - Schedule strategy session
|
||||
```bash
|
||||
swarms book-call
|
||||
```
|
||||
|
||||
### Run All Examples
|
||||
|
||||
- **[run_all_examples.sh](examples/cli/run_all_examples.sh)** - Run multiple examples in sequence
|
||||
```bash
|
||||
bash run_all_examples.sh
|
||||
```
|
||||
|
||||
## Script Structure
|
||||
|
||||
Each script follows a simple pattern:
|
||||
|
||||
1. **Shebang** - `#!/bin/bash`
|
||||
2. **Comment** - Brief description of what the script does
|
||||
3. **Single Command** - One CLI command execution
|
||||
|
||||
Example:
|
||||
```bash
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Setup Check Example
|
||||
# Verify your Swarms environment setup
|
||||
|
||||
swarms setup-check
|
||||
```
|
||||
|
||||
## Usage Patterns
|
||||
|
||||
### Basic Command Execution
|
||||
|
||||
```bash
|
||||
swarms <command> [options]
|
||||
```
|
||||
|
||||
### With Verbose Output
|
||||
|
||||
```bash
|
||||
swarms <command> --verbose
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
||||
Set API keys before running scripts that require them:
|
||||
|
||||
```bash
|
||||
export OPENAI_API_KEY="your-key-here"
|
||||
export ANTHROPIC_API_KEY="your-key-here"
|
||||
export GOOGLE_API_KEY="your-key-here"
|
||||
```
|
||||
|
||||
## Examples by Category
|
||||
|
||||
### Setup & Diagnostics
|
||||
- Environment setup verification
|
||||
- Onboarding workflow
|
||||
- API key management
|
||||
- Authentication verification
|
||||
|
||||
### Single Agent Operations
|
||||
- Custom agent creation
|
||||
- Agent configuration from YAML
|
||||
- Agent loading from markdown
|
||||
|
||||
### Multi-Agent Operations
|
||||
- LLM Council for collaborative problem-solving
|
||||
- HeavySwarm for complex analysis
|
||||
- Auto-generated swarm configurations
|
||||
|
||||
### Information & Help
|
||||
- Feature discovery
|
||||
- Help documentation
|
||||
- Package management
|
||||
|
||||
## File Paths
|
||||
|
||||
All scripts are located in `examples/cli/`:
|
||||
|
||||
- `examples/cli/01_setup_check.sh`
|
||||
- `examples/cli/02_onboarding.sh`
|
||||
- `examples/cli/03_get_api_key.sh`
|
||||
- `examples/cli/04_check_login.sh`
|
||||
- `examples/cli/05_create_agent.sh`
|
||||
- `examples/cli/06_run_agents_yaml.sh`
|
||||
- `examples/cli/07_load_markdown.sh`
|
||||
- `examples/cli/08_llm_council.sh`
|
||||
- `examples/cli/09_heavy_swarm.sh`
|
||||
- `examples/cli/10_autoswarm.sh`
|
||||
- `examples/cli/11_features.sh`
|
||||
- `examples/cli/12_help.sh`
|
||||
- `examples/cli/13_auto_upgrade.sh`
|
||||
- `examples/cli/14_book_call.sh`
|
||||
- `examples/cli/run_all_examples.sh`
|
||||
|
||||
## Related Documentation
|
||||
|
||||
- [CLI Reference](../../docs/swarms/cli/cli_reference.md) - Complete CLI documentation
|
||||
- [Main Examples README](../README.md) - Other Swarms examples
|
||||
- [Swarms Documentation](../../docs/) - Full Swarms documentation
|
||||
|
||||
@ -0,0 +1,11 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Swarms CLI - Run All Examples
|
||||
# Run a selection of CLI examples in sequence
|
||||
|
||||
chmod +x *.sh
|
||||
|
||||
swarms setup-check
|
||||
swarms features
|
||||
swarms help
|
||||
|
||||
@ -1,51 +1,43 @@
#!/usr/bin/env python3
"""
Basic Graph Workflow Example

A minimal example showing how to use GraphWorkflow with backend selection.
"""

from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent

agent_one = Agent(
    agent_name="research_agent",
    model_name="gpt-4o-mini",
    name="Research Agent",
    agent_description="Agent responsible for gathering and summarizing research information.",
)
agent_two = Agent(
    agent_name="research_agent_two",
    model_name="gpt-4o-mini",
    name="Analysis Agent",
    agent_description="Agent that analyzes the research data provided and processes insights.",
)
agent_three = Agent(
    agent_name="research_agent_three",
    model_name="gpt-4o-mini",
    agent_description="Agent tasked with structuring analysis into a final report or output.",
)

# Create workflow with backend selection
workflow = GraphWorkflow(
    name="Basic Example",
    verbose=True,
)

# Add agents to workflow
workflow.add_nodes([agent_one, agent_two, agent_three])

# Create simple chain using the actual agent names
workflow.add_edge("research_agent", "research_agent_two")
workflow.add_edge("research_agent_two", "research_agent_three")

# Compile the workflow
workflow.compile()

# Run the workflow
task = "Complete a simple task"
results = workflow.run(task)

print(results)
@ -0,0 +1,46 @@
|
||||
from swarms.structs.graph_workflow import GraphWorkflow
|
||||
from swarms.structs.agent import Agent
|
||||
|
||||
research_agent = Agent(
|
||||
agent_name="Research-Analyst",
|
||||
agent_description="Specialized in comprehensive research and data gathering",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
analysis_agent = Agent(
|
||||
agent_name="Data-Analyst",
|
||||
agent_description="Expert in data analysis and pattern recognition",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
strategy_agent = Agent(
|
||||
agent_name="Strategy-Consultant",
|
||||
agent_description="Specialized in strategic planning and recommendations",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
workflow = GraphWorkflow(
|
||||
name="Rustworkx-Basic-Workflow",
|
||||
description="Basic workflow using rustworkx backend for faster graph operations",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
workflow.add_node(research_agent)
|
||||
workflow.add_node(analysis_agent)
|
||||
workflow.add_node(strategy_agent)
|
||||
|
||||
workflow.add_edge(research_agent, analysis_agent)
|
||||
workflow.add_edge(analysis_agent, strategy_agent)
|
||||
|
||||
task = "Conduct a research analysis on water stocks and ETFs"
|
||||
results = workflow.run(task=task)
|
||||
|
||||
for agent_name, output in results.items():
|
||||
print(f"{agent_name}: {output}")
|
||||
@ -0,0 +1,56 @@
|
||||
import time
|
||||
from swarms.structs.graph_workflow import GraphWorkflow
|
||||
from swarms.structs.agent import Agent
|
||||
|
||||
agents = [
|
||||
Agent(
|
||||
agent_name=f"Agent-{i}",
|
||||
agent_description=f"Agent number {i}",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
for i in range(5)
|
||||
]
|
||||
|
||||
nx_workflow = GraphWorkflow(
|
||||
name="NetworkX-Workflow",
|
||||
backend="networkx",
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
for agent in agents:
|
||||
nx_workflow.add_node(agent)
|
||||
|
||||
for i in range(len(agents) - 1):
|
||||
nx_workflow.add_edge(agents[i], agents[i + 1])
|
||||
|
||||
nx_start = time.time()
|
||||
nx_workflow.compile()
|
||||
nx_compile_time = time.time() - nx_start
|
||||
|
||||
rx_workflow = GraphWorkflow(
|
||||
name="Rustworkx-Workflow",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
for agent in agents:
|
||||
rx_workflow.add_node(agent)
|
||||
|
||||
for i in range(len(agents) - 1):
|
||||
rx_workflow.add_edge(agents[i], agents[i + 1])
|
||||
|
||||
rx_start = time.time()
|
||||
rx_workflow.compile()
|
||||
rx_compile_time = time.time() - rx_start
|
||||
|
||||
speedup = (
|
||||
nx_compile_time / rx_compile_time if rx_compile_time > 0 else 0
|
||||
)
|
||||
print(f"NetworkX compile time: {nx_compile_time:.4f}s")
|
||||
print(f"Rustworkx compile time: {rx_compile_time:.4f}s")
|
||||
print(f"Speedup: {speedup:.2f}x")
|
||||
print(
|
||||
f"Identical layers: {nx_workflow._sorted_layers == rx_workflow._sorted_layers}"
|
||||
)
|
||||
@ -0,0 +1,73 @@
|
||||
from swarms import Agent, GraphWorkflow
|
||||
|
||||
coordinator = Agent(
|
||||
agent_name="Coordinator",
|
||||
agent_description="Coordinates and distributes tasks",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
tech_analyst = Agent(
|
||||
agent_name="Tech-Analyst",
|
||||
agent_description="Technical analysis specialist",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
fundamental_analyst = Agent(
|
||||
agent_name="Fundamental-Analyst",
|
||||
agent_description="Fundamental analysis specialist",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
sentiment_analyst = Agent(
|
||||
agent_name="Sentiment-Analyst",
|
||||
agent_description="Sentiment analysis specialist",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
synthesis_agent = Agent(
|
||||
agent_name="Synthesis-Agent",
|
||||
agent_description="Synthesizes multiple analyses into final report",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
workflow = GraphWorkflow(
|
||||
name="Fan-Out-Fan-In-Workflow",
|
||||
description="Demonstrates parallel processing patterns with rustworkx",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
workflow.add_node(coordinator)
|
||||
workflow.add_node(tech_analyst)
|
||||
workflow.add_node(fundamental_analyst)
|
||||
workflow.add_node(sentiment_analyst)
|
||||
workflow.add_node(synthesis_agent)
|
||||
|
||||
workflow.add_edges_from_source(
|
||||
coordinator,
|
||||
[tech_analyst, fundamental_analyst, sentiment_analyst],
|
||||
)
|
||||
|
||||
workflow.add_edges_to_target(
|
||||
[tech_analyst, fundamental_analyst, sentiment_analyst],
|
||||
synthesis_agent,
|
||||
)
|
||||
|
||||
task = "Analyze Tesla stock from technical, fundamental, and sentiment perspectives"
|
||||
results = workflow.run(task=task)
|
||||
|
||||
for agent_name, output in results.items():
|
||||
print(f"{agent_name}: {output}")
|
||||
|
||||
|
||||
workflow.visualize(view=True)
|
||||
@ -0,0 +1,101 @@
|
||||
from swarms.structs.graph_workflow import GraphWorkflow
|
||||
from swarms.structs.agent import Agent
|
||||
|
||||
data_collector_1 = Agent(
|
||||
agent_name="Data-Collector-1",
|
||||
agent_description="Collects market data",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
data_collector_2 = Agent(
|
||||
agent_name="Data-Collector-2",
|
||||
agent_description="Collects financial data",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
technical_analyst = Agent(
|
||||
agent_name="Technical-Analyst",
|
||||
agent_description="Performs technical analysis",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
fundamental_analyst = Agent(
|
||||
agent_name="Fundamental-Analyst",
|
||||
agent_description="Performs fundamental analysis",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
risk_analyst = Agent(
|
||||
agent_name="Risk-Analyst",
|
||||
agent_description="Performs risk analysis",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
strategy_consultant = Agent(
|
||||
agent_name="Strategy-Consultant",
|
||||
agent_description="Develops strategic recommendations",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
report_writer = Agent(
|
||||
agent_name="Report-Writer",
|
||||
agent_description="Writes comprehensive reports",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
workflow = GraphWorkflow(
|
||||
name="Complex-Multi-Layer-Workflow",
|
||||
description="Complex workflow with multiple layers and parallel processing",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
all_agents = [
|
||||
data_collector_1,
|
||||
data_collector_2,
|
||||
technical_analyst,
|
||||
fundamental_analyst,
|
||||
risk_analyst,
|
||||
strategy_consultant,
|
||||
report_writer,
|
||||
]
|
||||
|
||||
for agent in all_agents:
|
||||
workflow.add_node(agent)
|
||||
|
||||
workflow.add_parallel_chain(
|
||||
[data_collector_1, data_collector_2],
|
||||
[technical_analyst, fundamental_analyst, risk_analyst],
|
||||
)
|
||||
|
||||
workflow.add_edges_to_target(
|
||||
[technical_analyst, fundamental_analyst, risk_analyst],
|
||||
strategy_consultant,
|
||||
)
|
||||
|
||||
workflow.add_edges_to_target(
|
||||
[technical_analyst, fundamental_analyst, risk_analyst],
|
||||
report_writer,
|
||||
)
|
||||
|
||||
workflow.add_edge(strategy_consultant, report_writer)
|
||||
|
||||
task = "Conduct a comprehensive analysis of the renewable energy sector including market trends, financial health, and risk assessment"
|
||||
results = workflow.run(task=task)
|
||||
|
||||
for agent_name, output in results.items():
|
||||
print(f"{agent_name}: {output}")
|
||||
@ -0,0 +1,104 @@
|
||||
import time
|
||||
from swarms.structs.graph_workflow import GraphWorkflow
|
||||
from swarms.structs.agent import Agent
|
||||
|
||||
agents_small = [
|
||||
Agent(
|
||||
agent_name=f"Agent-{i}",
|
||||
agent_description=f"Agent number {i}",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
for i in range(5)
|
||||
]
|
||||
|
||||
agents_medium = [
|
||||
Agent(
|
||||
agent_name=f"Agent-{i}",
|
||||
agent_description=f"Agent number {i}",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
for i in range(20)
|
||||
]
|
||||
|
||||
nx_workflow_small = GraphWorkflow(
|
||||
name="NetworkX-Small",
|
||||
backend="networkx",
|
||||
verbose=False,
|
||||
auto_compile=False,
|
||||
)
|
||||
|
||||
for agent in agents_small:
|
||||
nx_workflow_small.add_node(agent)
|
||||
|
||||
for i in range(len(agents_small) - 1):
|
||||
nx_workflow_small.add_edge(agents_small[i], agents_small[i + 1])
|
||||
|
||||
nx_start = time.time()
|
||||
nx_workflow_small.compile()
|
||||
nx_small_time = time.time() - nx_start
|
||||
|
||||
rx_workflow_small = GraphWorkflow(
|
||||
name="Rustworkx-Small",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
auto_compile=False,
|
||||
)
|
||||
|
||||
for agent in agents_small:
|
||||
rx_workflow_small.add_node(agent)
|
||||
|
||||
for i in range(len(agents_small) - 1):
|
||||
rx_workflow_small.add_edge(agents_small[i], agents_small[i + 1])
|
||||
|
||||
rx_start = time.time()
|
||||
rx_workflow_small.compile()
|
||||
rx_small_time = time.time() - rx_start
|
||||
|
||||
nx_workflow_medium = GraphWorkflow(
|
||||
name="NetworkX-Medium",
|
||||
backend="networkx",
|
||||
verbose=False,
|
||||
auto_compile=False,
|
||||
)
|
||||
|
||||
for agent in agents_medium:
|
||||
nx_workflow_medium.add_node(agent)
|
||||
|
||||
for i in range(len(agents_medium) - 1):
|
||||
nx_workflow_medium.add_edge(
|
||||
agents_medium[i], agents_medium[i + 1]
|
||||
)
|
||||
|
||||
nx_start = time.time()
|
||||
nx_workflow_medium.compile()
|
||||
nx_medium_time = time.time() - nx_start
|
||||
|
||||
rx_workflow_medium = GraphWorkflow(
|
||||
name="Rustworkx-Medium",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
auto_compile=False,
|
||||
)
|
||||
|
||||
for agent in agents_medium:
|
||||
rx_workflow_medium.add_node(agent)
|
||||
|
||||
for i in range(len(agents_medium) - 1):
|
||||
rx_workflow_medium.add_edge(
|
||||
agents_medium[i], agents_medium[i + 1]
|
||||
)
|
||||
|
||||
rx_start = time.time()
|
||||
rx_workflow_medium.compile()
|
||||
rx_medium_time = time.time() - rx_start
|
||||
|
||||
print(
|
||||
f"Small (5 agents) - NetworkX: {nx_small_time:.4f}s, Rustworkx: {rx_small_time:.4f}s, Speedup: {nx_small_time/rx_small_time if rx_small_time > 0 else 0:.2f}x"
|
||||
)
|
||||
print(
|
||||
f"Medium (20 agents) - NetworkX: {nx_medium_time:.4f}s, Rustworkx: {rx_medium_time:.4f}s, Speedup: {nx_medium_time/rx_medium_time if rx_medium_time > 0 else 0:.2f}x"
|
||||
)
|
||||
@ -0,0 +1,55 @@
|
||||
from swarms.structs.graph_workflow import GraphWorkflow
|
||||
from swarms.structs.agent import Agent
|
||||
|
||||
test_agent = Agent(
|
||||
agent_name="Test-Agent",
|
||||
agent_description="Test agent for error handling",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
workflow_rx = GraphWorkflow(
|
||||
name="Rustworkx-Workflow",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
)
|
||||
workflow_rx.add_node(test_agent)
|
||||
|
||||
workflow_nx = GraphWorkflow(
|
||||
name="NetworkX-Workflow",
|
||||
backend="networkx",
|
||||
verbose=False,
|
||||
)
|
||||
workflow_nx.add_node(test_agent)
|
||||
|
||||
workflow_default = GraphWorkflow(
|
||||
name="Default-Workflow",
|
||||
verbose=False,
|
||||
)
|
||||
workflow_default.add_node(test_agent)
|
||||
|
||||
workflow_invalid = GraphWorkflow(
|
||||
name="Invalid-Workflow",
|
||||
backend="invalid_backend",
|
||||
verbose=False,
|
||||
)
|
||||
workflow_invalid.add_node(test_agent)
|
||||
|
||||
print(
|
||||
f"Rustworkx backend: {type(workflow_rx.graph_backend).__name__}"
|
||||
)
|
||||
print(f"NetworkX backend: {type(workflow_nx.graph_backend).__name__}")
|
||||
print(
|
||||
f"Default backend: {type(workflow_default.graph_backend).__name__}"
|
||||
)
|
||||
print(
|
||||
f"Invalid backend fallback: {type(workflow_invalid.graph_backend).__name__}"
|
||||
)
|
||||
|
||||
try:
|
||||
import rustworkx as rx
|
||||
|
||||
print("Rustworkx available: True")
|
||||
except ImportError:
|
||||
print("Rustworkx available: False")
|
||||
@ -0,0 +1,61 @@
|
||||
import time
|
||||
from swarms.structs.graph_workflow import GraphWorkflow
|
||||
from swarms.structs.agent import Agent
|
||||
|
||||
NUM_AGENTS = 30
|
||||
|
||||
agents = [
|
||||
Agent(
|
||||
agent_name=f"Agent-{i:02d}",
|
||||
agent_description=f"Agent number {i} in large-scale workflow",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
for i in range(NUM_AGENTS)
|
||||
]
|
||||
|
||||
workflow = GraphWorkflow(
|
||||
name="Large-Scale-Workflow",
|
||||
description=f"Large-scale workflow with {NUM_AGENTS} agents using rustworkx",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
start_time = time.time()
|
||||
for agent in agents:
|
||||
workflow.add_node(agent)
|
||||
add_nodes_time = time.time() - start_time
|
||||
|
||||
start_time = time.time()
|
||||
for i in range(9):
|
||||
workflow.add_edge(agents[i], agents[i + 1])
|
||||
|
||||
workflow.add_edges_from_source(
|
||||
agents[5],
|
||||
agents[10:20],
|
||||
)
|
||||
|
||||
workflow.add_edges_to_target(
|
||||
agents[10:20],
|
||||
agents[20],
|
||||
)
|
||||
|
||||
for i in range(20, 29):
|
||||
workflow.add_edge(agents[i], agents[i + 1])
|
||||
|
||||
add_edges_time = time.time() - start_time
|
||||
|
||||
start_time = time.time()
|
||||
workflow.compile()
|
||||
compile_time = time.time() - start_time
|
||||
|
||||
print(
|
||||
f"Agents: {len(workflow.nodes)}, Edges: {len(workflow.edges)}, Layers: {len(workflow._sorted_layers)}"
|
||||
)
|
||||
print(
|
||||
f"Node addition: {add_nodes_time:.4f}s, Edge addition: {add_edges_time:.4f}s, Compilation: {compile_time:.4f}s"
|
||||
)
|
||||
print(
|
||||
f"Total setup: {add_nodes_time + add_edges_time + compile_time:.4f}s"
|
||||
)
|
||||
@ -0,0 +1,73 @@
|
||||
from swarms.structs.graph_workflow import GraphWorkflow
|
||||
from swarms.structs.agent import Agent
|
||||
|
||||
data_collector_1 = Agent(
|
||||
agent_name="Data-Collector-1",
|
||||
agent_description="Collects market data",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
data_collector_2 = Agent(
|
||||
agent_name="Data-Collector-2",
|
||||
agent_description="Collects financial data",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
data_collector_3 = Agent(
|
||||
agent_name="Data-Collector-3",
|
||||
agent_description="Collects news data",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
technical_analyst = Agent(
|
||||
agent_name="Technical-Analyst",
|
||||
agent_description="Performs technical analysis",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
fundamental_analyst = Agent(
|
||||
agent_name="Fundamental-Analyst",
|
||||
agent_description="Performs fundamental analysis",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
sentiment_analyst = Agent(
|
||||
agent_name="Sentiment-Analyst",
|
||||
agent_description="Performs sentiment analysis",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
workflow = GraphWorkflow(
|
||||
name="Parallel-Chain-Workflow",
|
||||
description="Demonstrates parallel chain pattern with rustworkx",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
sources = [data_collector_1, data_collector_2, data_collector_3]
|
||||
targets = [technical_analyst, fundamental_analyst, sentiment_analyst]
|
||||
|
||||
for agent in sources + targets:
|
||||
workflow.add_node(agent)
|
||||
|
||||
workflow.add_parallel_chain(sources, targets)
|
||||
|
||||
workflow.compile()
|
||||
|
||||
task = "Analyze the technology sector using multiple data sources and analysis methods"
|
||||
results = workflow.run(task=task)
|
||||
|
||||
for agent_name, output in results.items():
|
||||
print(f"{agent_name}: {output}")
|
||||
@ -0,0 +1,79 @@
|
||||
from swarms.structs.graph_workflow import GraphWorkflow
|
||||
from swarms.structs.agent import Agent
|
||||
|
||||
agent_a = Agent(
|
||||
agent_name="Agent-A",
|
||||
agent_description="Agent A",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
agent_b = Agent(
|
||||
agent_name="Agent-B",
|
||||
agent_description="Agent B",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
agent_c = Agent(
|
||||
agent_name="Agent-C",
|
||||
agent_description="Agent C",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
agent_isolated = Agent(
|
||||
agent_name="Agent-Isolated",
|
||||
agent_description="Isolated agent with no connections",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
workflow = GraphWorkflow(
|
||||
name="Validation-Workflow",
|
||||
description="Workflow for validation testing",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
workflow.add_node(agent_a)
|
||||
workflow.add_node(agent_b)
|
||||
workflow.add_node(agent_c)
|
||||
workflow.add_node(agent_isolated)
|
||||
|
||||
workflow.add_edge(agent_a, agent_b)
|
||||
workflow.add_edge(agent_b, agent_c)
|
||||
|
||||
validation_result = workflow.validate(auto_fix=False)
|
||||
print(f"Valid: {validation_result['is_valid']}")
|
||||
print(f"Warnings: {len(validation_result['warnings'])}")
|
||||
print(f"Errors: {len(validation_result['errors'])}")
|
||||
|
||||
validation_result_fixed = workflow.validate(auto_fix=True)
|
||||
print(
|
||||
f"After auto-fix - Valid: {validation_result_fixed['is_valid']}"
|
||||
)
|
||||
print(f"Fixed: {len(validation_result_fixed['fixed'])}")
|
||||
print(f"Entry points: {workflow.entry_points}")
|
||||
print(f"End points: {workflow.end_points}")
|
||||
|
||||
workflow_cycle = GraphWorkflow(
|
||||
name="Cycle-Test-Workflow",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
workflow_cycle.add_node(agent_a)
|
||||
workflow_cycle.add_node(agent_b)
|
||||
workflow_cycle.add_node(agent_c)
|
||||
|
||||
workflow_cycle.add_edge(agent_a, agent_b)
|
||||
workflow_cycle.add_edge(agent_b, agent_c)
|
||||
workflow_cycle.add_edge(agent_c, agent_a)
|
||||
|
||||
cycle_validation = workflow_cycle.validate(auto_fix=False)
|
||||
print(f"Cycles detected: {len(cycle_validation.get('cycles', []))}")
|
||||
@ -0,0 +1,122 @@
|
||||
from swarms.structs.graph_workflow import GraphWorkflow
|
||||
from swarms.structs.agent import Agent
|
||||
|
||||
market_researcher = Agent(
|
||||
agent_name="Market-Researcher",
|
||||
agent_description="Conducts comprehensive market research and data collection",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
competitor_analyst = Agent(
|
||||
agent_name="Competitor-Analyst",
|
||||
agent_description="Analyzes competitor landscape and positioning",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
market_analyst = Agent(
|
||||
agent_name="Market-Analyst",
|
||||
agent_description="Analyzes market trends and opportunities",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
financial_analyst = Agent(
|
||||
agent_name="Financial-Analyst",
|
||||
agent_description="Analyzes financial metrics and projections",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
risk_analyst = Agent(
|
||||
agent_name="Risk-Analyst",
|
||||
agent_description="Assesses market risks and challenges",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
strategy_consultant = Agent(
|
||||
agent_name="Strategy-Consultant",
|
||||
agent_description="Develops strategic recommendations based on all analyses",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
report_writer = Agent(
|
||||
agent_name="Report-Writer",
|
||||
agent_description="Compiles comprehensive market research report",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
executive_summary_writer = Agent(
|
||||
agent_name="Executive-Summary-Writer",
|
||||
agent_description="Creates executive summary for leadership",
|
||||
model_name="gpt-4o-mini",
|
||||
max_loops=1,
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
workflow = GraphWorkflow(
|
||||
name="Market-Research-Workflow",
|
||||
description="Real-world market research workflow using rustworkx backend",
|
||||
backend="rustworkx",
|
||||
verbose=False,
|
||||
)
|
||||
|
||||
all_agents = [
|
||||
market_researcher,
|
||||
competitor_analyst,
|
||||
market_analyst,
|
||||
financial_analyst,
|
||||
risk_analyst,
|
||||
strategy_consultant,
|
||||
report_writer,
|
||||
executive_summary_writer,
|
||||
]
|
||||
|
||||
for agent in all_agents:
|
||||
workflow.add_node(agent)
|
||||
|
||||
workflow.add_parallel_chain(
|
||||
[market_researcher, competitor_analyst],
|
||||
[market_analyst, financial_analyst, risk_analyst],
|
||||
)
|
||||
|
||||
workflow.add_edges_to_target(
|
||||
[market_analyst, financial_analyst, risk_analyst],
|
||||
strategy_consultant,
|
||||
)
|
||||
|
||||
workflow.add_edges_from_source(
|
||||
strategy_consultant,
|
||||
[report_writer, executive_summary_writer],
|
||||
)
|
||||
|
||||
workflow.add_edges_to_target(
|
||||
[market_analyst, financial_analyst, risk_analyst],
|
||||
report_writer,
|
||||
)
|
||||
|
||||
task = """
|
||||
Conduct a comprehensive market research analysis on the electric vehicle (EV) industry:
|
||||
1. Research current market size, growth trends, and key players
|
||||
2. Analyze competitor landscape and market positioning
|
||||
3. Assess financial opportunities and investment potential
|
||||
4. Evaluate risks and challenges in the EV market
|
||||
5. Develop strategic recommendations
|
||||
6. Create detailed report and executive summary
|
||||
"""
|
||||
|
||||
results = workflow.run(task=task)
|
||||
|
||||
for agent_name, output in results.items():
|
||||
print(f"{agent_name}: {output}")
|
||||
|
@ -0,0 +1,156 @@
|
||||
# Rustworkx Backend Examples
|
||||
|
||||
This directory contains comprehensive examples demonstrating the use of the **rustworkx backend** in GraphWorkflow. Rustworkx provides faster graph operations compared to NetworkX, especially for large graphs and complex operations.
|
||||
|
||||
## Installation
|
||||
|
||||
Before running these examples, ensure rustworkx is installed:
|
||||
|
||||
```bash
|
||||
pip install rustworkx
|
||||
```
|
||||
|
||||
If rustworkx is not installed, GraphWorkflow will automatically fall back to the NetworkX backend.
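To confirm which backend was actually selected, you can inspect the workflow's backend object, as `06_error_handling.py` does. This sketch assumes the `graph_backend` attribute shown in that example:

```python
from swarms.structs.graph_workflow import GraphWorkflow

workflow = GraphWorkflow(name="Backend-Check", backend="rustworkx", verbose=False)

# Prints "RustworkxBackend" when rustworkx is installed,
# otherwise the NetworkX-based fallback backend
print(type(workflow.graph_backend).__name__)
```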
|
||||
|
||||
## Examples Overview
|
||||
|
||||
### 01_basic_usage.py
|
||||
Basic example showing how to use the rustworkx backend with GraphWorkflow. Demonstrates simple linear workflow creation and execution.
|
||||
|
||||
**Key Concepts:**
|
||||
- Initializing GraphWorkflow with rustworkx backend
|
||||
- Adding agents and creating edges
|
||||
- Running a workflow
|
||||
|
||||
### 02_backend_comparison.py
|
||||
Compares NetworkX and Rustworkx backends side-by-side, showing performance differences and functional equivalence.
|
||||
|
||||
**Key Concepts:**
|
||||
- Backend comparison
|
||||
- Performance metrics
|
||||
- Functional equivalence verification
|
||||
|
||||
### 03_fan_out_fan_in_patterns.py
|
||||
Demonstrates parallel processing patterns: fan-out (one-to-many) and fan-in (many-to-one) connections.
|
||||
|
||||
**Key Concepts:**
|
||||
- Fan-out pattern: `add_edges_from_source()`
|
||||
- Fan-in pattern: `add_edges_to_target()`
|
||||
- Parallel execution optimization
|
||||
|
||||
### 04_complex_workflow.py
|
||||
Shows a complex multi-layer workflow with multiple parallel branches and convergence points.
|
||||
|
||||
**Key Concepts:**
|
||||
- Multi-layer workflows
|
||||
- Parallel chains: `add_parallel_chain()`
|
||||
- Complex graph structures
|
||||
|
||||
### 05_performance_benchmark.py
|
||||
Benchmarks performance differences between NetworkX and Rustworkx for various graph sizes and structures.
|
||||
|
||||
**Key Concepts:**
|
||||
- Performance benchmarking
|
||||
- Scalability testing
|
||||
- Different graph topologies (chain, tree)
|
||||
|
||||
### 06_error_handling.py
|
||||
Demonstrates error handling and graceful fallback behavior when rustworkx is unavailable.
|
||||
|
||||
**Key Concepts:**
|
||||
- Error handling
|
||||
- Automatic fallback to NetworkX
|
||||
- Backend availability checking
|
||||
|
||||
### 07_large_scale_workflow.py
|
||||
Demonstrates rustworkx's efficiency with large-scale workflows containing many agents.
|
||||
|
||||
**Key Concepts:**
|
||||
- Large-scale workflows
|
||||
- Performance with many nodes/edges
|
||||
- Complex interconnections
|
||||
|
||||
### 08_parallel_chain_example.py
|
||||
Detailed example of the parallel chain pattern creating a full mesh connection.
|
||||
|
||||
**Key Concepts:**
|
||||
- Parallel chain pattern
|
||||
- Full mesh connections
|
||||
- Maximum parallelization
|
||||
|
||||
### 09_workflow_validation.py
|
||||
Shows workflow validation features including cycle detection, isolated nodes, and auto-fixing.
|
||||
|
||||
**Key Concepts:**
|
||||
- Workflow validation
|
||||
- Cycle detection
|
||||
- Auto-fixing capabilities
|
||||
|
||||
### 10_real_world_scenario.py
|
||||
A realistic market research workflow demonstrating real-world agent coordination scenarios.
|
||||
|
||||
**Key Concepts:**
|
||||
- Real-world use case
|
||||
- Complex multi-phase workflow
|
||||
- Practical application
|
||||
|
||||
## Quick Start
|
||||
|
||||
Run any example:
|
||||
|
||||
```bash
|
||||
python 01_basic_usage.py
|
||||
```
|
||||
|
||||
## Backend Selection
|
||||
|
||||
To use rustworkx backend:
|
||||
|
||||
```python
|
||||
workflow = GraphWorkflow(
|
||||
backend="rustworkx", # Use rustworkx
|
||||
# ... other parameters
|
||||
)
|
||||
```
|
||||
|
||||
To use NetworkX backend (default):
|
||||
|
||||
```python
|
||||
workflow = GraphWorkflow(
|
||||
backend="networkx", # Or omit for default
|
||||
# ... other parameters
|
||||
)
|
||||
```
|
||||
|
||||
## Performance Benefits
|
||||
|
||||
Rustworkx provides performance benefits especially for:
|
||||
- **Large graphs** (100+ nodes)
|
||||
- **Complex operations** (topological sorting, cycle detection)
|
||||
- **Frequent graph modifications** (adding/removing nodes/edges)
|
||||
|
||||
## Key Differences
|
||||
|
||||
While both backends are functionally equivalent, rustworkx:
|
||||
- Uses integer indices internally (abstracted away)
|
||||
- Provides faster graph operations
|
||||
- Offers better memory efficiency for large graphs
|
||||
- Maintains full compatibility with the GraphWorkflow API
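The integer indices are an internal detail; the public API keeps addressing nodes by agent name. The sketch below peeks at the internal mapping purely for illustration; it relies on private attributes (also used by the test suite) that may change:

```python
from swarms.structs.graph_workflow import GraphWorkflow
from swarms.structs.agent import Agent

agent = Agent(agent_name="Demo-Agent", model_name="gpt-4o-mini", max_loops=1)

workflow = GraphWorkflow(name="Index-Demo", backend="rustworkx", verbose=False)
workflow.add_node(agent)

# Public API uses the agent name as the node ID ...
print("Demo-Agent" in workflow.nodes)

# ... while rustworkx tracks an integer index behind the scenes (private attribute)
print(workflow.graph_backend._node_id_to_index["Demo-Agent"])
```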
|
||||
|
||||
## Notes
|
||||
|
||||
- Both backends produce identical results
|
||||
- Rustworkx automatically falls back to NetworkX if not installed
|
||||
- All GraphWorkflow features work with both backends
|
||||
- Performance gains become more significant with larger graphs
|
||||
|
||||
## Requirements
|
||||
|
||||
- `swarms` package
|
||||
- `rustworkx` (optional, for rustworkx backend)
|
||||
- `networkx` (always available, default backend)
|
||||
|
||||
## Contributing
|
||||
|
||||
Feel free to add more examples demonstrating rustworkx capabilities or specific use cases!
|
||||
|
||||
@ -0,0 +1,632 @@
|
||||
import pytest
|
||||
from swarms.structs.graph_workflow import (
|
||||
GraphWorkflow,
|
||||
)
|
||||
from swarms.structs.agent import Agent
|
||||
|
||||
try:
|
||||
import rustworkx as rx
|
||||
|
||||
RUSTWORKX_AVAILABLE = True
|
||||
except ImportError:
|
||||
RUSTWORKX_AVAILABLE = False
|
||||
|
||||
|
||||
def create_test_agent(name: str, description: str = None) -> Agent:
|
||||
"""Create a test agent"""
|
||||
if description is None:
|
||||
description = f"Test agent for {name} operations"
|
||||
|
||||
return Agent(
|
||||
agent_name=name,
|
||||
agent_description=description,
|
||||
model_name="gpt-4o-mini",
|
||||
verbose=False,
|
||||
print_on=False,
|
||||
max_loops=1,
|
||||
)
|
||||
|
||||
|
||||
@pytest.mark.skipif(
|
||||
not RUSTWORKX_AVAILABLE, reason="rustworkx not available"
|
||||
)
|
||||
class TestRustworkxBackend:
|
||||
"""Test suite for rustworkx backend"""
|
||||
|
||||
def test_rustworkx_backend_initialization(self):
|
||||
"""Test that rustworkx backend is properly initialized"""
|
||||
workflow = GraphWorkflow(name="Test", backend="rustworkx")
|
||||
assert (
|
||||
workflow.graph_backend.__class__.__name__
|
||||
== "RustworkxBackend"
|
||||
)
|
||||
        assert hasattr(workflow.graph_backend, "_node_id_to_index")
        assert hasattr(workflow.graph_backend, "_index_to_node_id")
        assert hasattr(workflow.graph_backend, "graph")

    def test_rustworkx_node_addition(self):
        """Test adding nodes to rustworkx backend"""
        workflow = GraphWorkflow(name="Test", backend="rustworkx")
        agent = create_test_agent("TestAgent", "Test agent")

        workflow.add_node(agent)

        assert "TestAgent" in workflow.nodes
        assert "TestAgent" in workflow.graph_backend._node_id_to_index
        assert (
            workflow.graph_backend._node_id_to_index["TestAgent"]
            in workflow.graph_backend._index_to_node_id
        )

    def test_rustworkx_edge_addition(self):
        """Test adding edges to rustworkx backend"""
        workflow = GraphWorkflow(name="Test", backend="rustworkx")
        agent1 = create_test_agent("Agent1", "First agent")
        agent2 = create_test_agent("Agent2", "Second agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_edge(agent1, agent2)

        assert len(workflow.edges) == 1
        assert workflow.edges[0].source == "Agent1"
        assert workflow.edges[0].target == "Agent2"

    def test_rustworkx_topological_generations_linear(self):
        """Test topological generations with linear chain"""
        workflow = GraphWorkflow(
            name="Linear-Test", backend="rustworkx"
        )
        agents = [
            create_test_agent(f"Agent{i}", f"Agent {i}")
            for i in range(5)
        ]

        for agent in agents:
            workflow.add_node(agent)

        for i in range(len(agents) - 1):
            workflow.add_edge(agents[i], agents[i + 1])

        workflow.compile()

        assert len(workflow._sorted_layers) == 5
        assert workflow._sorted_layers[0] == ["Agent0"]
        assert workflow._sorted_layers[1] == ["Agent1"]
        assert workflow._sorted_layers[2] == ["Agent2"]
        assert workflow._sorted_layers[3] == ["Agent3"]
        assert workflow._sorted_layers[4] == ["Agent4"]

    def test_rustworkx_topological_generations_fan_out(self):
        """Test topological generations with fan-out pattern"""
        workflow = GraphWorkflow(
            name="FanOut-Test", backend="rustworkx"
        )
        coordinator = create_test_agent("Coordinator", "Coordinates")
        analyst1 = create_test_agent("Analyst1", "First analyst")
        analyst2 = create_test_agent("Analyst2", "Second analyst")
        analyst3 = create_test_agent("Analyst3", "Third analyst")

        workflow.add_node(coordinator)
        workflow.add_node(analyst1)
        workflow.add_node(analyst2)
        workflow.add_node(analyst3)

        workflow.add_edges_from_source(
            coordinator, [analyst1, analyst2, analyst3]
        )

        workflow.compile()

        assert len(workflow._sorted_layers) == 2
        assert len(workflow._sorted_layers[0]) == 1
        assert "Coordinator" in workflow._sorted_layers[0]
        assert len(workflow._sorted_layers[1]) == 3
        assert "Analyst1" in workflow._sorted_layers[1]
        assert "Analyst2" in workflow._sorted_layers[1]
        assert "Analyst3" in workflow._sorted_layers[1]

    def test_rustworkx_topological_generations_fan_in(self):
        """Test topological generations with fan-in pattern"""
        workflow = GraphWorkflow(
            name="FanIn-Test", backend="rustworkx"
        )
        analyst1 = create_test_agent("Analyst1", "First analyst")
        analyst2 = create_test_agent("Analyst2", "Second analyst")
        analyst3 = create_test_agent("Analyst3", "Third analyst")
        synthesizer = create_test_agent("Synthesizer", "Synthesizes")

        workflow.add_node(analyst1)
        workflow.add_node(analyst2)
        workflow.add_node(analyst3)
        workflow.add_node(synthesizer)

        workflow.add_edges_to_target(
            [analyst1, analyst2, analyst3], synthesizer
        )

        workflow.compile()

        assert len(workflow._sorted_layers) == 2
        assert len(workflow._sorted_layers[0]) == 3
        assert "Analyst1" in workflow._sorted_layers[0]
        assert "Analyst2" in workflow._sorted_layers[0]
        assert "Analyst3" in workflow._sorted_layers[0]
        assert len(workflow._sorted_layers[1]) == 1
        assert "Synthesizer" in workflow._sorted_layers[1]

    def test_rustworkx_topological_generations_complex(self):
        """Test topological generations with complex topology"""
        workflow = GraphWorkflow(
            name="Complex-Test", backend="rustworkx"
        )
        agents = [
            create_test_agent(f"Agent{i}", f"Agent {i}")
            for i in range(6)
        ]

        for agent in agents:
            workflow.add_node(agent)

        # Create: Agent0 -> Agent1, Agent2
        # Agent1, Agent2 -> Agent3
        # Agent3 -> Agent4, Agent5
        workflow.add_edge(agents[0], agents[1])
        workflow.add_edge(agents[0], agents[2])
        workflow.add_edge(agents[1], agents[3])
        workflow.add_edge(agents[2], agents[3])
        workflow.add_edge(agents[3], agents[4])
        workflow.add_edge(agents[3], agents[5])

        workflow.compile()

        assert len(workflow._sorted_layers) == 4
        assert "Agent0" in workflow._sorted_layers[0]
        assert (
            "Agent1" in workflow._sorted_layers[1]
            or "Agent2" in workflow._sorted_layers[1]
        )
        assert "Agent3" in workflow._sorted_layers[2]
        assert (
            "Agent4" in workflow._sorted_layers[3]
            or "Agent5" in workflow._sorted_layers[3]
        )

    def test_rustworkx_predecessors(self):
        """Test predecessor retrieval"""
        workflow = GraphWorkflow(
            name="Predecessors-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "First agent")
        agent2 = create_test_agent("Agent2", "Second agent")
        agent3 = create_test_agent("Agent3", "Third agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_node(agent3)

        workflow.add_edge(agent1, agent2)
        workflow.add_edge(agent2, agent3)

        predecessors = list(
            workflow.graph_backend.predecessors("Agent2")
        )
        assert "Agent1" in predecessors
        assert len(predecessors) == 1

        predecessors = list(
            workflow.graph_backend.predecessors("Agent3")
        )
        assert "Agent2" in predecessors
        assert len(predecessors) == 1

        predecessors = list(
            workflow.graph_backend.predecessors("Agent1")
        )
        assert len(predecessors) == 0

    def test_rustworkx_descendants(self):
        """Test descendant retrieval"""
        workflow = GraphWorkflow(
            name="Descendants-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "First agent")
        agent2 = create_test_agent("Agent2", "Second agent")
        agent3 = create_test_agent("Agent3", "Third agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_node(agent3)

        workflow.add_edge(agent1, agent2)
        workflow.add_edge(agent2, agent3)

        descendants = workflow.graph_backend.descendants("Agent1")
        assert "Agent2" in descendants
        assert "Agent3" in descendants
        assert len(descendants) == 2

        descendants = workflow.graph_backend.descendants("Agent2")
        assert "Agent3" in descendants
        assert len(descendants) == 1

        descendants = workflow.graph_backend.descendants("Agent3")
        assert len(descendants) == 0

    def test_rustworkx_in_degree(self):
        """Test in-degree calculation"""
        workflow = GraphWorkflow(
            name="InDegree-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "First agent")
        agent2 = create_test_agent("Agent2", "Second agent")
        agent3 = create_test_agent("Agent3", "Third agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_node(agent3)

        workflow.add_edge(agent1, agent2)
        workflow.add_edge(agent3, agent2)

        assert workflow.graph_backend.in_degree("Agent1") == 0
        assert workflow.graph_backend.in_degree("Agent2") == 2
        assert workflow.graph_backend.in_degree("Agent3") == 0

    def test_rustworkx_out_degree(self):
        """Test out-degree calculation"""
        workflow = GraphWorkflow(
            name="OutDegree-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "First agent")
        agent2 = create_test_agent("Agent2", "Second agent")
        agent3 = create_test_agent("Agent3", "Third agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_node(agent3)

        workflow.add_edge(agent1, agent2)
        workflow.add_edge(agent1, agent3)

        assert workflow.graph_backend.out_degree("Agent1") == 2
        assert workflow.graph_backend.out_degree("Agent2") == 0
        assert workflow.graph_backend.out_degree("Agent3") == 0

    def test_rustworkx_agent_objects_in_edges(self):
        """Test using Agent objects directly in edge methods"""
        workflow = GraphWorkflow(
            name="AgentObjects-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "First agent")
        agent2 = create_test_agent("Agent2", "Second agent")
        agent3 = create_test_agent("Agent3", "Third agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_node(agent3)

        # Use Agent objects directly
        workflow.add_edges_from_source(agent1, [agent2, agent3])
        workflow.add_edges_to_target([agent2, agent3], agent1)

        workflow.compile()

        assert len(workflow.edges) == 4
        assert len(workflow._sorted_layers) >= 1

    def test_rustworkx_parallel_chain(self):
        """Test parallel chain pattern"""
        workflow = GraphWorkflow(
            name="ParallelChain-Test", backend="rustworkx"
        )
        sources = [
            create_test_agent(f"Source{i}", f"Source {i}")
            for i in range(3)
        ]
        targets = [
            create_test_agent(f"Target{i}", f"Target {i}")
            for i in range(3)
        ]

        for agent in sources + targets:
            workflow.add_node(agent)

        workflow.add_parallel_chain(sources, targets)

        workflow.compile()

        assert len(workflow.edges) == 9  # 3x3 = 9 edges
        assert len(workflow._sorted_layers) == 2

    def test_rustworkx_large_scale(self):
        """Test rustworkx with large workflow"""
        workflow = GraphWorkflow(
            name="LargeScale-Test", backend="rustworkx"
        )
        agents = [
            create_test_agent(f"Agent{i}", f"Agent {i}")
            for i in range(20)
        ]

        for agent in agents:
            workflow.add_node(agent)

        # Create linear chain
        for i in range(len(agents) - 1):
            workflow.add_edge(agents[i], agents[i + 1])

        workflow.compile()

        assert len(workflow._sorted_layers) == 20
        assert len(workflow.nodes) == 20
        assert len(workflow.edges) == 19

    def test_rustworkx_reverse(self):
        """Test graph reversal"""
        workflow = GraphWorkflow(
            name="Reverse-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "First agent")
        agent2 = create_test_agent("Agent2", "Second agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_edge(agent1, agent2)

        reversed_backend = workflow.graph_backend.reverse()

        # In the reversed graph, Agent1 should have Agent2 as a predecessor
        preds = list(reversed_backend.predecessors("Agent1"))
        assert "Agent2" in preds

        # Agent2 should have no predecessors in the reversed graph
        preds = list(reversed_backend.predecessors("Agent2"))
        assert len(preds) == 0

    def test_rustworkx_entry_end_points(self):
        """Test entry and end point detection"""
        workflow = GraphWorkflow(
            name="EntryEnd-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "Entry agent")
        agent2 = create_test_agent("Agent2", "Middle agent")
        agent3 = create_test_agent("Agent3", "End agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_node(agent3)

        workflow.add_edge(agent1, agent2)
        workflow.add_edge(agent2, agent3)

        workflow.auto_set_entry_points()
        workflow.auto_set_end_points()

        assert "Agent1" in workflow.entry_points
        assert "Agent3" in workflow.end_points
        assert workflow.graph_backend.in_degree("Agent1") == 0
        assert workflow.graph_backend.out_degree("Agent3") == 0

    def test_rustworkx_isolated_nodes(self):
        """Test handling of isolated nodes"""
        workflow = GraphWorkflow(
            name="Isolated-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "Connected agent")
        agent2 = create_test_agent("Agent2", "Isolated agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_edge(agent1, agent1)  # Self-loop

        workflow.compile()

        assert len(workflow.nodes) == 2
        assert "Agent2" in workflow.nodes

    def test_rustworkx_workflow_execution(self):
        """Test full workflow execution with rustworkx"""
        workflow = GraphWorkflow(
            name="Execution-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "First agent")
        agent2 = create_test_agent("Agent2", "Second agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_edge(agent1, agent2)

        result = workflow.run("Test task")

        assert result is not None
        assert "Agent1" in result
        assert "Agent2" in result

    def test_rustworkx_compilation_caching(self):
        """Test that compilation is cached correctly"""
        workflow = GraphWorkflow(
            name="Cache-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "First agent")
        agent2 = create_test_agent("Agent2", "Second agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_edge(agent1, agent2)

        # First compilation
        workflow.compile()
        layers1 = workflow._sorted_layers.copy()
        compiled1 = workflow._compiled

        # Second compilation should use the cache
        workflow.compile()
        layers2 = workflow._sorted_layers.copy()
        compiled2 = workflow._compiled

        assert compiled1 is True and compiled2 is True
        assert layers1 == layers2

    def test_rustworkx_node_metadata(self):
        """Test node metadata handling"""
        workflow = GraphWorkflow(
            name="Metadata-Test", backend="rustworkx"
        )
        agent = create_test_agent("Agent", "Test agent")

        workflow.add_node(
            agent, metadata={"priority": "high", "timeout": 60}
        )

        node_index = workflow.graph_backend._node_id_to_index["Agent"]
        node_data = workflow.graph_backend.graph[node_index]

        assert isinstance(node_data, dict)
        assert node_data.get("node_id") == "Agent"
        assert node_data.get("priority") == "high"
        assert node_data.get("timeout") == 60

    def test_rustworkx_edge_metadata(self):
        """Test edge metadata handling"""
        workflow = GraphWorkflow(
            name="EdgeMetadata-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "First agent")
        agent2 = create_test_agent("Agent2", "Second agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)
        workflow.add_edge(agent1, agent2, weight=5, label="test")

        assert len(workflow.edges) == 1
        assert workflow.edges[0].metadata.get("weight") == 5
        assert workflow.edges[0].metadata.get("label") == "test"


@pytest.mark.skipif(
    not RUSTWORKX_AVAILABLE, reason="rustworkx not available"
)
class TestRustworkxPerformance:
    """Performance tests for rustworkx backend"""

    def test_rustworkx_large_graph_compilation(self):
        """Test compilation performance with large graph"""
        workflow = GraphWorkflow(
            name="LargeGraph-Test", backend="rustworkx"
        )
        agents = [
            create_test_agent(f"Agent{i}", f"Agent {i}")
            for i in range(50)
        ]

        for agent in agents:
            workflow.add_node(agent)

        # Create a complex topology
        for i in range(len(agents) - 1):
            workflow.add_edge(agents[i], agents[i + 1])

        import time

        start = time.time()
        workflow.compile()
        compile_time = time.time() - start

        assert compile_time < 1.0  # Should compile quickly
        assert len(workflow._sorted_layers) == 50

    def test_rustworkx_many_predecessors(self):
        """Test performance with many predecessors"""
        workflow = GraphWorkflow(
            name="ManyPreds-Test", backend="rustworkx"
        )
        target = create_test_agent("Target", "Target agent")
        sources = [
            create_test_agent(f"Source{i}", f"Source {i}")
            for i in range(100)
        ]

        workflow.add_node(target)
        for source in sources:
            workflow.add_node(source)

        workflow.add_edges_to_target(sources, target)

        workflow.compile()

        predecessors = list(
            workflow.graph_backend.predecessors("Target")
        )
        assert len(predecessors) == 100


@pytest.mark.skipif(
    not RUSTWORKX_AVAILABLE, reason="rustworkx not available"
)
class TestRustworkxEdgeCases:
    """Edge case tests for rustworkx backend"""

    def test_rustworkx_empty_graph(self):
        """Test empty graph handling"""
        workflow = GraphWorkflow(
            name="Empty-Test", backend="rustworkx"
        )
        workflow.compile()

        assert len(workflow._sorted_layers) == 0
        assert len(workflow.nodes) == 0

    def test_rustworkx_single_node(self):
        """Test single node graph"""
        workflow = GraphWorkflow(
            name="Single-Test", backend="rustworkx"
        )
        agent = create_test_agent("Agent", "Single agent")

        workflow.add_node(agent)
        workflow.compile()

        assert len(workflow._sorted_layers) == 1
        assert workflow._sorted_layers[0] == ["Agent"]

    def test_rustworkx_self_loop(self):
        """Test self-loop handling"""
        workflow = GraphWorkflow(
            name="SelfLoop-Test", backend="rustworkx"
        )
        agent = create_test_agent("Agent", "Self-looping agent")

        workflow.add_node(agent)
        workflow.add_edge(agent, agent)

        workflow.compile()

        assert len(workflow.edges) == 1
        assert workflow.graph_backend.in_degree("Agent") == 1
        assert workflow.graph_backend.out_degree("Agent") == 1

    def test_rustworkx_duplicate_edge(self):
        """Test duplicate edge handling"""
        workflow = GraphWorkflow(
            name="Duplicate-Test", backend="rustworkx"
        )
        agent1 = create_test_agent("Agent1", "First agent")
        agent2 = create_test_agent("Agent2", "Second agent")

        workflow.add_node(agent1)
        workflow.add_node(agent2)

        # Add the same edge twice
        workflow.add_edge(agent1, agent2)
        workflow.add_edge(agent1, agent2)

        # rustworkx should handle duplicate edges
        assert (
            len(workflow.edges) == 2
        )  # Both edges are stored in the workflow
        workflow.compile()  # Should not crash


if __name__ == "__main__":
    pytest.main([__file__, "-v"])
@ -0,0 +1,47 @@
from swarms import Agent, HierarchicalSwarm

# Create specialized agents
research_agent = Agent(
    agent_name="Research-Analyst",
    agent_description="Specialized in comprehensive research and data gathering",
    model_name="gpt-4o-mini",
    max_loops=1,
    verbose=False,
)

analysis_agent = Agent(
    agent_name="Data-Analyst",
    agent_description="Expert in data analysis and pattern recognition",
    model_name="gpt-4o-mini",
    max_loops=1,
    verbose=False,
)

strategy_agent = Agent(
    agent_name="Strategy-Consultant",
    agent_description="Specialized in strategic planning and recommendations",
    model_name="gpt-4o-mini",
    max_loops=1,
    verbose=False,
)

# Create a hierarchical swarm (the interactive Arasaka dashboard is disabled here)
swarm = HierarchicalSwarm(
    name="Swarms Corporation Operations",
    description="Enterprise-grade hierarchical swarm for complex task execution",
    agents=[research_agent, analysis_agent, strategy_agent],
    max_loops=1,
    interactive=False,  # Set to True to enable the Arasaka dashboard
    director_model_name="claude-haiku-4-5",
    director_temperature=0.7,
    director_top_p=None,
    planning_enabled=True,
)


print(swarm.display_hierarchy())

# out = swarm.run(
#     "Conduct a research analysis on water stocks and ETFs"
# )
# print(out)
@ -0,0 +1,95 @@
# LLM Council Examples

This directory contains examples demonstrating the LLM Council pattern, inspired by Andrej Karpathy's llm-council implementation. The LLM Council uses multiple specialized AI agents that:

1. Each respond independently to queries
2. Review and rank each other's anonymized responses
3. Have a Chairman synthesize all responses into a final comprehensive answer (a minimal sketch of this flow appears just below)

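The `LLMCouncil` class packages this whole flow behind a single `run()` call. For readers who want to see the moving parts, here is a minimal sketch of the same three steps wired up by hand with plain `Agent` objects; the member names, prompts, and model choices below are illustrative assumptions, not the library's internals:

```python
from swarms import Agent

query = "What are the top risks facing a new AI-powered personal finance app?"

# 1. Each council member responds independently (hypothetical members).
members = [
    Agent(agent_name=f"Councilor-{i}", model_name="gpt-4.1", max_loops=1)
    for i in range(3)
]
responses = [member.run(task=query) for member in members]

# 2. Members review and rank each other's anonymized responses.
anonymized = "\n\n".join(
    f"Response {i + 1}:\n{text}" for i, text in enumerate(responses)
)
rankings = [
    member.run(
        task=(
            f"Query: {query}\n\n{anonymized}\n\n"
            "Rank these anonymized responses from best to worst and briefly justify the order."
        )
    )
    for member in members
]

# 3. A Chairman synthesizes everything into a final answer.
chairman = Agent(agent_name="Chairman", model_name="gpt-4.1", max_loops=1)
rankings_text = "\n\n".join(rankings)
final_response = chairman.run(
    task=(
        f"Query: {query}\n\nMember responses:\n{anonymized}\n\n"
        f"Peer rankings:\n{rankings_text}\n\n"
        "Combine the strongest points into one final, comprehensive answer."
    )
)
print(final_response)
```

In practice you would use `LLMCouncil` directly, as shown in the Usage section below, and let it handle anonymization, ranking prompts, and synthesis for you.
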
## Examples

### Marketing & Business
- **marketing_strategy_council.py** - Marketing strategy analysis and recommendations
- **business_strategy_council.py** - Comprehensive business strategy development

### Finance & Investment
- **finance_analysis_council.py** - Financial analysis and investment recommendations
- **etf_stock_analysis_council.py** - ETF and stock analysis with portfolio recommendations

### Medical & Healthcare
- **medical_treatment_council.py** - Medical treatment recommendations and care plans
- **medical_diagnosis_council.py** - Diagnostic analysis based on symptoms

### Technology & Research
- **technology_assessment_council.py** - Technology evaluation and implementation strategy
- **research_analysis_council.py** - Comprehensive research analysis on complex topics

### Legal
- **legal_analysis_council.py** - Legal implications and compliance analysis

## Usage

Each example follows the same pattern:

```python
from swarms.structs.llm_council import LLMCouncil

# Create the council
council = LLMCouncil(verbose=True)

# Run a query
result = council.run("Your query here")

# Access results
print(result["final_response"])  # Chairman's synthesized answer
print(result["original_responses"])  # Individual member responses
print(result["evaluations"])  # How members ranked each other
```

## Running Examples

Run any example directly:

```bash
python examples/multi_agent/llm_council_examples/marketing_strategy_council.py
python examples/multi_agent/llm_council_examples/finance_analysis_council.py
python examples/multi_agent/llm_council_examples/medical_diagnosis_council.py
```

## Key Features

- **Multiple Perspectives**: Each council member (GPT-5.1, Gemini, Claude, Grok) provides unique insights
- **Peer Review**: Members evaluate and rank each other's responses anonymously
- **Synthesis**: The Chairman combines the best elements from all responses
- **Transparency**: See both individual responses and evaluation rankings

## Council Members

The default council consists of:
- **GPT-5.1-Councilor**: Analytical and comprehensive
- **Gemini-3-Pro-Councilor**: Concise and well-structured
- **Claude-Sonnet-4.5-Councilor**: Thoughtful and balanced
- **Grok-4-Councilor**: Creative and innovative

## Customization

You can create custom council members:

```python
from swarms import Agent
from swarms.structs.llm_council import LLMCouncil, get_gpt_councilor_prompt

custom_agent = Agent(
    agent_name="Custom-Councilor",
    system_prompt=get_gpt_councilor_prompt(),
    model_name="gpt-4.1",
    max_loops=1,
)

council = LLMCouncil(
    council_members=[custom_agent, ...],
    chairman_model="gpt-5.1",
    verbose=True
)
```
@ -0,0 +1,31 @@
"""
LLM Council Example: Business Strategy Development

This example demonstrates using the LLM Council to develop comprehensive
business strategies for new ventures.
"""

from swarms.structs.llm_council import LLMCouncil

# Create the council
council = LLMCouncil(verbose=True)

# Business strategy query
query = """
A tech startup wants to launch an AI-powered personal finance app targeting
millennials and Gen Z. Develop a comprehensive business strategy including:
1. Market opportunity and competitive landscape analysis
2. Product positioning and unique value proposition
3. Go-to-market strategy and customer acquisition plan
4. Revenue model and pricing strategy
5. Key partnerships and distribution channels
6. Resource requirements and funding needs
7. Risk assessment and mitigation strategies
8. Success metrics and KPIs for first 12 months
"""

# Run the council
result = council.run(query)

# Print final response
print(result["final_response"])
@ -0,0 +1,29 @@
"""
LLM Council Example: ETF Stock Analysis

This example demonstrates using the LLM Council to analyze ETF holdings
and provide stock investment recommendations.
"""

from swarms.structs.llm_council import LLMCouncil

# Create the council
council = LLMCouncil(verbose=True)

# ETF and stock analysis query
query = """
Analyze the top energy ETFs (including nuclear, solar, gas, and renewable energy)
and provide:
1. Top 5 best-performing energy stocks across all energy sectors
2. ETF recommendations for diversified energy exposure
3. Risk-return profiles for each recommendation
4. Current market conditions affecting energy investments
5. Allocation strategy for a $100,000 portfolio
6. Key metrics to track for each investment
"""

# Run the council
result = council.run(query)

# Print final response
print(result["final_response"])
@ -0,0 +1,29 @@
"""
LLM Council Example: Financial Analysis

This example demonstrates using the LLM Council to provide comprehensive
financial analysis and investment recommendations.
"""

from swarms.structs.llm_council import LLMCouncil

# Create the council
council = LLMCouncil(verbose=True)

# Financial analysis query
query = """
Provide a comprehensive financial analysis for investing in emerging markets
technology ETFs. Include:
1. Risk assessment and volatility analysis
2. Historical performance trends
3. Sector composition and diversification benefits
4. Comparison with developed market tech ETFs
5. Recommended allocation percentage for a moderate risk portfolio
6. Key factors to monitor going forward
"""

# Run the council
result = council.run(query)

# Print final response
print(result["final_response"])
@ -0,0 +1,31 @@
"""
LLM Council Example: Legal Analysis

This example demonstrates using the LLM Council to analyze legal scenarios
and provide comprehensive legal insights.
"""

from swarms.structs.llm_council import LLMCouncil

# Create the council
council = LLMCouncil(verbose=True)

# Legal analysis query
query = """
A startup is considering using AI-generated content for their marketing materials.
Analyze the legal implications including:
1. Intellectual property rights and ownership of AI-generated content
2. Copyright and trademark considerations
3. Liability for AI-generated content that may be inaccurate or misleading
4. Compliance with advertising regulations (FTC, FDA, etc.)
5. Data privacy implications if using customer data to train models
6. Contractual considerations with AI service providers
7. Risk mitigation strategies
8. Best practices for legal compliance
"""

# Run the council
result = council.run(query)

# Print final response
print(result["final_response"])
@ -0,0 +1,12 @@
from swarms.structs.llm_council import LLMCouncil

# Create the council
council = LLMCouncil(verbose=True, output_type="final")

# Example query
query = "What are the top five best energy stocks across nuclear, solar, gas, and other energy sources?"

# Run the council
result = council.run(query)

print(result)
@ -0,0 +1,28 @@
"""
LLM Council Example: Marketing Strategy Analysis

This example demonstrates using the LLM Council to analyze and develop
comprehensive marketing strategies by leveraging multiple AI perspectives.
"""

from swarms.structs.llm_council import LLMCouncil

# Create the council
council = LLMCouncil(verbose=True)

# Marketing strategy query
query = """
Analyze the marketing strategy for a new sustainable energy startup launching
a solar panel subscription service. Provide recommendations on:
1. Target audience segmentation
2. Key messaging and value propositions
3. Marketing channels and budget allocation
4. Competitive positioning
5. Launch timeline and milestones
"""

# Run the council
result = council.run(query)

# Print final response
print(result["final_response"])
@ -0,0 +1,36 @@
"""
LLM Council Example: Medical Diagnosis Analysis

This example demonstrates using the LLM Council to analyze symptoms
and provide diagnostic insights.
"""

from swarms.structs.llm_council import LLMCouncil

# Create the council
council = LLMCouncil(verbose=True)

# Medical diagnosis query
query = """
A 35-year-old patient presents with:
- Persistent fatigue for 3 months
- Unexplained weight loss (15 lbs)
- Night sweats
- Intermittent low-grade fever
- Swollen lymph nodes in neck and armpits
- Recent blood work shows elevated ESR and CRP

Provide:
1. Differential diagnosis with most likely conditions ranked
2. Additional diagnostic tests needed to confirm
3. Red flag symptoms requiring immediate attention
4. Possible causes and risk factors
5. Recommended next steps for the patient
6. When to seek emergency care
"""

# Run the council
result = council.run(query)

# Print final response
print(result["final_response"])
@ -0,0 +1,30 @@
"""
LLM Council Example: Medical Treatment Analysis

This example demonstrates using the LLM Council to analyze medical treatments
and provide comprehensive treatment recommendations.
"""

from swarms.structs.llm_council import LLMCouncil

# Create the council
council = LLMCouncil(verbose=True)

# Medical treatment query
query = """
A 45-year-old patient with Type 2 diabetes, hypertension, and early-stage
kidney disease needs treatment recommendations. Provide:
1. Comprehensive treatment plan addressing all conditions
2. Medication options with pros/cons for each condition
3. Lifestyle modifications and their expected impact
4. Monitoring schedule and key metrics to track
5. Potential drug interactions and contraindications
6. Expected outcomes and timeline for improvement
7. When to consider specialist referrals
"""

# Run the council
result = council.run(query)

# Print final response
print(result["final_response"])
@ -0,0 +1,31 @@
"""
LLM Council Example: Research Analysis

This example demonstrates using the LLM Council to conduct comprehensive
research analysis on complex topics.
"""

from swarms.structs.llm_council import LLMCouncil

# Create the council
council = LLMCouncil(verbose=True)

# Research analysis query
query = """
Conduct a comprehensive analysis of the potential impact of climate change
on global food security over the next 20 years. Include:
1. Key climate factors affecting agriculture (temperature, precipitation, extreme weather)
2. Regional vulnerabilities and impacts on major food-producing regions
3. Crop yield projections and food availability scenarios
4. Economic implications and food price volatility
5. Adaptation strategies and technological solutions
6. Policy recommendations for governments and international organizations
7. Role of innovation in agriculture (precision farming, GMOs, vertical farming)
8. Social and geopolitical implications of food insecurity
"""

# Run the council
result = council.run(query)

# Print final response
print(result["final_response"])
@ -0,0 +1,31 @@
"""
LLM Council Example: Technology Assessment

This example demonstrates using the LLM Council to assess emerging technologies
and their business implications.
"""

from swarms.structs.llm_council import LLMCouncil

# Create the council
council = LLMCouncil(verbose=True)

# Technology assessment query
query = """
Evaluate the business potential and implementation strategy for integrating
quantum computing capabilities into a financial services company. Consider:
1. Current state of quantum computing technology
2. Specific use cases in financial services (risk modeling, portfolio optimization, fraud detection)
3. Competitive advantages and potential ROI
4. Implementation timeline and resource requirements
5. Technical challenges and limitations
6. Risk factors and mitigation strategies
7. Partnership opportunities with quantum computing providers
8. Expected timeline for practical business value
"""

# Run the council
result = council.run(query)

# Print final response
print(result["final_response"])