New examples for the latest features such as GraphWorkflow with rustworkx, LLM Council, and DebateWithJudge, plus overviews for the example sections
# AOP Examples Overview

Deploy agents as network services using the Agent Orchestration Protocol (AOP). Turn your agents into distributed, scalable, and accessible services.

## What You'll Learn

| Topic | Description |
|-------|-------------|
| **AOP Fundamentals** | Understanding agent-as-a-service deployment |
| **Server Setup** | Running agents as MCP servers |
| **Client Integration** | Connecting to remote agents |
| **Production Deployment** | Scaling and monitoring agents |

---

## AOP Examples

| Example | Description | Link |
|---------|-------------|------|
| **Medical AOP Example** | Healthcare agent deployment with AOP | [View Example](./aop_medical.md) |

---

## Use Cases

| Use Case | Description |
|----------|-------------|
| **Microservices** | Agent per service |
| **API Gateway** | Central agent access point |
| **Multi-tenant** | Shared agent infrastructure |
| **Edge Deployment** | Agents at the edge |

---

## Related Resources

- [AOP Reference Documentation](../swarms/structs/aop.md) - Complete AOP API
- [AOP Server Setup](../swarms/examples/aop_server_example.md) - Server configuration
- [AOP Cluster Example](../swarms/examples/aop_cluster_example.md) - Multi-node setup
- [Deployment Solutions](../deployment_solutions/overview.md) - Production deployment

# Applications Overview

Real-world multi-agent applications built with Swarms. These examples demonstrate complete solutions for business, research, finance, and automation use cases.

## What You'll Learn

| Topic | Description |
|-------|-------------|
| **Business Applications** | Marketing, hiring, M&A advisory swarms |
| **Research Systems** | Advanced research and analysis workflows |
| **Financial Analysis** | ETF research and investment analysis |
| **Automation** | Browser agents and web automation |
| **Industry Solutions** | Real estate, job finding, and more |

---

## Application Examples

| Application | Description | Industry | Link |
|-------------|-------------|----------|------|
| **Swarms of Browser Agents** | Automated web browsing with multiple agents | Automation | [View Example](../swarms/examples/swarms_of_browser_agents.md) |
| **Hierarchical Marketing Team** | Multi-agent marketing strategy and execution | Marketing | [View Example](./marketing_team.md) |
| **Gold ETF Research with HeavySwarm** | Comprehensive ETF analysis using Heavy Swarm | Finance | [View Example](./gold_etf_research.md) |
| **Hiring Swarm** | Automated candidate screening and evaluation | HR/Recruiting | [View Example](./hiring_swarm.md) |
| **Advanced Research** | Multi-agent research and analysis system | Research | [View Example](./av.md) |
| **Real Estate Swarm** | Property analysis and market research | Real Estate | [View Example](./realestate_swarm.md) |
| **Job Finding Swarm** | Automated job search and matching | Career | [View Example](./job_finding.md) |
| **M&A Advisory Swarm** | Mergers & acquisitions analysis | Finance | [View Example](./ma_swarm.md) |

---

## Applications by Category

### Business & Marketing

| Application | Description | Link |
|-------------|-------------|------|
| **Hierarchical Marketing Team** | Complete marketing strategy system | [View Example](./marketing_team.md) |
| **Hiring Swarm** | End-to-end recruiting automation | [View Example](./hiring_swarm.md) |
| **M&A Advisory Swarm** | Due diligence and analysis | [View Example](./ma_swarm.md) |

### Financial Analysis

| Application | Description | Link |
|-------------|-------------|------|
| **Gold ETF Research** | Comprehensive ETF analysis | [View Example](./gold_etf_research.md) |

### Research & Automation

| Application | Description | Link |
|-------------|-------------|------|
| **Advanced Research** | Multi-source research compilation | [View Example](./av.md) |
| **Browser Agents** | Automated web interaction | [View Example](../swarms/examples/swarms_of_browser_agents.md) |
| **Job Finding Swarm** | Career opportunity discovery | [View Example](./job_finding.md) |

### Real Estate

| Application | Description | Link |
|-------------|-------------|------|
| **Real Estate Swarm** | Property market analysis | [View Example](./realestate_swarm.md) |

---

## Related Resources

- [HierarchicalSwarm Documentation](../swarms/structs/hierarchical_swarm.md)
- [HeavySwarm Documentation](../swarms/structs/heavy_swarm.md)
- [Building Custom Swarms](../swarms/structs/custom_swarm.md)
- [Deployment Solutions](../deployment_solutions/overview.md)

# Apps Examples Overview

Complete application examples built with Swarms. These examples show how to build practical tools and utilities with AI agents.

## What You'll Learn

| Topic | Description |
|-------|-------------|
| **Web Scraping** | Building intelligent web scrapers |
| **Database Integration** | Smart database query agents |
| **Practical Tools** | End-to-end application development |

---

## App Examples

| App | Description | Link |
|-----|-------------|------|
| **Web Scraper Agents** | Intelligent web data extraction | [View Example](../developer_guides/web_scraper.md) |
| **Smart Database** | AI-powered database interactions | [View Example](./smart_database.md) |

---

## Related Resources

- [Tools & Integrations](./tools_integrations_overview.md) - External service connections
- [Multi-Agent Architectures](./multi_agent_architectures_overview.md) - Complex agent systems
- [Deployment Solutions](../deployment_solutions/overview.md) - Production deployment

# Basic Examples Overview

Start your Swarms journey with single-agent examples. Learn how to create agents, use tools, process images, integrate with different LLM providers, and publish to the marketplace.

## What You'll Learn

| Topic | Description |
|-------|-------------|
| **Agent Basics** | Create and configure individual agents |
| **Tool Integration** | Equip agents with callable tools and functions |
| **Vision Capabilities** | Process images and multi-modal inputs |
| **LLM Providers** | Connect to OpenAI, Anthropic, Groq, and more |
| **Utilities** | Streaming, output types, and marketplace publishing |

---

## Individual Agent Examples

### Core Agent Usage

| Example | Description | Link |
|---------|-------------|------|
| **Basic Agent** | Fundamental agent creation and execution | [View Example](../swarms/examples/basic_agent.md) |

### Tool Usage

| Example | Description | Link |
|---------|-------------|------|
| **Agents with Vision and Tool Usage** | Combine vision and tools in one agent | [View Example](../swarms/examples/vision_tools.md) |
| **Agents with Callable Tools** | Equip agents with Python functions as tools | [View Example](../swarms/examples/agent_with_tools.md) |
| **Agent with Structured Outputs** | Get consistent JSON/structured responses | [View Example](../swarms/examples/agent_structured_outputs.md) |
| **Message Transforms** | Manage context with message transformations | [View Example](../swarms/structs/transforms.md) |

### Vision & Multi-Modal

| Example | Description | Link |
|---------|-------------|------|
| **Agents with Vision** | Process and analyze images | [View Example](../swarms/examples/vision_processing.md) |
| **Agent with Multiple Images** | Handle multiple images in one request | [View Example](../swarms/examples/multiple_images.md) |

### Utilities

| Example | Description | Link |
|---------|-------------|------|
| **Agent with Streaming** | Stream responses in real-time | [View Example](./agent_stream.md) |
| **Agent Output Types** | Different output formats (str, json, dict, yaml) | [View Example](../swarms/examples/agent_output_types.md) |
| **Gradio Chat Interface** | Build chat UIs for your agents | [View Example](../swarms/ui/main.md) |
| **Agent with Gemini Nano Banana** | Jarvis-style agent example | [View Example](../swarms/examples/jarvis_agent.md) |
| **Agent Marketplace Publishing** | Publish agents to the Swarms marketplace | [View Example](./marketplace_publishing_quickstart.md) |

---

## LLM Provider Examples

Connect your agents to various language model providers:

| Provider | Description | Link |
|----------|-------------|------|
| **Overview** | Guide to all supported providers | [View Guide](../swarms/examples/model_providers.md) |
| **OpenAI** | GPT-4, GPT-4o, GPT-4o-mini integration | [View Example](../swarms/examples/openai_example.md) |
| **Anthropic** | Claude models integration | [View Example](../swarms/examples/claude.md) |
| **Groq** | Ultra-fast inference with Groq | [View Example](../swarms/examples/groq.md) |
| **Cohere** | Cohere Command models | [View Example](../swarms/examples/cohere.md) |
| **DeepSeek** | DeepSeek models integration | [View Example](../swarms/examples/deepseek.md) |
| **Ollama** | Local models with Ollama | [View Example](../swarms/examples/ollama.md) |
| **OpenRouter** | Access multiple providers via OpenRouter | [View Example](../swarms/examples/openrouter.md) |
| **XAI** | Grok models from xAI | [View Example](../swarms/examples/xai.md) |
| **Azure OpenAI** | Enterprise Azure deployment | [View Example](../swarms/examples/azure.md) |
| **Llama4** | Meta's Llama 4 models | [View Example](../swarms/examples/llama4.md) |
| **Custom Base URL** | Connect to any OpenAI-compatible API | [View Example](../swarms/examples/custom_base_url_example.md) |

---

## Next Steps

After mastering basic agents, explore:

- [Multi-Agent Architectures](./multi_agent_architectures_overview.md) - Coordinate multiple agents
- [Tools Documentation](../swarms/tools/main.md) - Deep dive into tool creation
- [CLI Guides](./cli_guides_overview.md) - Run agents from command line

# CLI Guides Overview

Master the Swarms command-line interface with these step-by-step guides. Execute agents, run multi-agent workflows, and integrate Swarms into your DevOps pipelines, all from your terminal.

## What You'll Learn

| Topic | Description |
|-------|-------------|
| **CLI Basics** | Install, configure, and run your first commands |
| **Agent Creation** | Create and run agents directly from command line |
| **YAML Configuration** | Define agents in config files for reproducible deployments |
| **Multi-Agent Commands** | Run LLM Council and Heavy Swarm from terminal |
| **DevOps Integration** | Integrate into CI/CD pipelines and scripts |

---

## CLI Guides

| Guide | Description | Link |
|-------|-------------|------|
| **CLI Quickstart** | Get started with Swarms CLI in 3 steps: install, configure, and run | [View Guide](../swarms/cli/cli_quickstart.md) |
| **Creating Agents from CLI** | Create, configure, and run AI agents directly from your terminal | [View Guide](../swarms/cli/cli_agent_guide.md) |
| **YAML Configuration** | Run multiple agents from YAML configuration files | [View Guide](../swarms/cli/cli_yaml_guide.md) |
| **LLM Council CLI** | Run collaborative multi-agent decision-making from command line | [View Guide](../swarms/cli/cli_llm_council_guide.md) |
| **Heavy Swarm CLI** | Execute comprehensive task analysis swarms from terminal | [View Guide](../swarms/cli/cli_heavy_swarm_guide.md) |
| **CLI Multi-Agent Commands** | Complete guide to multi-agent CLI commands | [View Guide](./cli_multi_agent_quickstart.md) |
| **CLI Examples** | Additional CLI usage examples and patterns | [View Guide](../swarms/cli/cli_examples.md) |

---

## Use Cases

| Use Case | Recommended Guide |
|----------|-------------------|
| First time using CLI | [CLI Quickstart](../swarms/cli/cli_quickstart.md) |
| Creating custom agents | [Creating Agents from CLI](../swarms/cli/cli_agent_guide.md) |
| Team/production deployments | [YAML Configuration](../swarms/cli/cli_yaml_guide.md) |
| Collaborative decision-making | [LLM Council CLI](../swarms/cli/cli_llm_council_guide.md) |
| Complex research tasks | [Heavy Swarm CLI](../swarms/cli/cli_heavy_swarm_guide.md) |

---

## Related Resources

- [CLI Reference Documentation](../swarms/cli/cli_reference.md) - Complete command reference
- [Agent Documentation](../swarms/structs/agent.md) - Agent class reference
- [Environment Configuration](../swarms/install/env.md) - Environment setup guide

# CLI Multi-Agent Features: 3-Step Quickstart Guide

Run LLM Council and Heavy Swarm directly from the command line for seamless DevOps integration. Execute sophisticated multi-agent workflows without writing Python code.

## Overview

| Feature | Description |
|---------|-------------|
| **LLM Council CLI** | Run collaborative decision-making from terminal |
| **Heavy Swarm CLI** | Execute comprehensive research swarms |
| **DevOps Ready** | Integrate into CI/CD pipelines and scripts |
| **Configurable** | Full parameter control from command line |

---

## Step 1: Install and Verify

Ensure Swarms is installed and verify CLI access:

```bash
# Install swarms
pip install swarms

# Verify CLI is available
swarms --help
```

You should see the Swarms CLI banner and available commands.

---

## Step 2: Set Environment Variables

Configure your API keys:

```bash
# Set your OpenAI API key (or other provider)
export OPENAI_API_KEY="your-openai-api-key"

# Optional: Set workspace directory
export WORKSPACE_DIR="./agent_workspace"
```

Or add to your `.env` file:

```
OPENAI_API_KEY=your-openai-api-key
WORKSPACE_DIR=./agent_workspace
```
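If you want to load that `.env` file without an extra dependency, a minimal loader can be sketched in plain Python. The `load_env` helper below is a hypothetical utility, not part of Swarms; real projects typically use `python-dotenv` instead:

```python
import os


def load_env(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines from a .env file and export them."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip().strip('"')
    os.environ.update(values)
    return values
```

This sketch only covers the two-line file shown above; `python-dotenv` (`from dotenv import load_dotenv`) additionally handles quoting and variable interpolation.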

---

## Step 3: Run Multi-Agent Commands

### LLM Council

Run a collaborative council of AI agents:

```bash
# Basic usage
swarms llm-council --task "What is the best approach to implement microservices architecture?"

# With verbose output
swarms llm-council --task "Evaluate investment opportunities in AI startups" --verbose
```

### Heavy Swarm

Run comprehensive research and analysis:

```bash
# Basic usage
swarms heavy-swarm --task "Analyze the current state of quantum computing"

# With configuration options
swarms heavy-swarm \
  --task "Research renewable energy market trends" \
  --loops-per-agent 2 \
  --question-agent-model-name gpt-4o-mini \
  --worker-model-name gpt-4o-mini \
  --verbose
```

---

## Complete CLI Reference

### LLM Council Command

```bash
swarms llm-council --task "<your query>" [options]
```

| Option | Description |
|--------|-------------|
| `--task` | **Required.** The query or question for the council |
| `--verbose` | Enable detailed output logging |

**Examples:**

```bash
# Strategic decision
swarms llm-council --task "Should our startup pivot from B2B to B2C?"

# Technical evaluation
swarms llm-council --task "Compare React vs Vue for enterprise applications"

# Business analysis
swarms llm-council --task "What are the risks of expanding to European markets?"
```

---

### Heavy Swarm Command

```bash
swarms heavy-swarm --task "<your task>" [options]
```

| Option | Default | Description |
|--------|---------|-------------|
| `--task` | - | **Required.** The research task |
| `--loops-per-agent` | 1 | Number of loops per agent |
| `--question-agent-model-name` | gpt-4o-mini | Model for question agent |
| `--worker-model-name` | gpt-4o-mini | Model for worker agents |
| `--random-loops-per-agent` | False | Randomize loops per agent |
| `--verbose` | False | Enable detailed output |

**Examples:**

```bash
# Comprehensive research
swarms heavy-swarm --task "Research the impact of AI on healthcare diagnostics" --verbose

# With custom models
swarms heavy-swarm \
  --task "Analyze cryptocurrency regulation trends globally" \
  --question-agent-model-name gpt-4 \
  --worker-model-name gpt-4 \
  --loops-per-agent 3

# Quick analysis
swarms heavy-swarm --task "Summarize recent advances in battery technology"
```

---

## Integration Examples

### Bash Script Integration

```bash
#!/bin/bash
# research_script.sh

TOPICS=(
  "AI in manufacturing"
  "Autonomous vehicles market"
  "Edge computing trends"
)

for topic in "${TOPICS[@]}"; do
  echo "Researching: $topic"
  swarms heavy-swarm --task "Analyze $topic" --verbose >> research_output.txt
  echo "---" >> research_output.txt
done
```
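The same batch loop can be driven from Python with `subprocess`. The helper below only assembles the argument list using the flag names from the reference table above; treat the wrapper itself as an illustrative sketch rather than an official API:

```python
import subprocess
from typing import List


def heavy_swarm_cmd(
    task: str,
    loops_per_agent: int = 1,
    model: str = "gpt-4o-mini",
    verbose: bool = False,
) -> List[str]:
    """Build the argv list for a `swarms heavy-swarm` invocation."""
    cmd = [
        "swarms", "heavy-swarm",
        "--task", task,
        "--loops-per-agent", str(loops_per_agent),
        "--question-agent-model-name", model,
        "--worker-model-name", model,
    ]
    if verbose:
        cmd.append("--verbose")
    return cmd


def run_research(topics: List[str]) -> None:
    """Run one heavy-swarm invocation per topic, appending output to a log."""
    with open("research_output.txt", "a") as log:
        for topic in topics:
            result = subprocess.run(
                heavy_swarm_cmd(f"Analyze {topic}", verbose=True),
                capture_output=True, text=True,
            )
            log.write(result.stdout + "\n---\n")
```

Because the arguments are passed as a list (no shell), topics containing quotes or spaces need no escaping.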

### CI/CD Pipeline (GitHub Actions)

```yaml
name: AI Research Pipeline

on:
  schedule:
    - cron: '0 9 * * 1' # Every Monday at 9 AM

jobs:
  research:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install swarms

      - name: Run LLM Council
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          swarms llm-council \
            --task "Weekly market analysis for tech sector" \
            --verbose > weekly_analysis.txt

      - name: Upload results
        uses: actions/upload-artifact@v4
        with:
          name: analysis-results
          path: weekly_analysis.txt
```

### Docker Integration

```dockerfile
FROM python:3.10-slim

RUN pip install swarms

ENV OPENAI_API_KEY=""
ENV WORKSPACE_DIR="/workspace"

WORKDIR /workspace

ENTRYPOINT ["swarms"]
CMD ["--help"]
```

```bash
# Build and run
docker build -t swarms-cli .
docker run -e OPENAI_API_KEY="your-key" swarms-cli \
  llm-council --task "Analyze market trends"
```
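For repeated runs, the same image can be driven from Docker Compose. The fragment below is a hypothetical `docker-compose.yml` built on the Dockerfile above (the service name is illustrative); because the image's `ENTRYPOINT` is `swarms`, the `command` only supplies subcommand arguments:

```yaml
services:
  council:
    build: .
    environment:
      # Forwarded from the host shell or a .env file
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    command: ["llm-council", "--task", "Analyze market trends"]
```

Run it with `docker compose run council`, overriding `command` per invocation as needed.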

---

## Other Useful CLI Commands

### Setup Check

Verify your environment is properly configured:

```bash
swarms setup-check --verbose
```

### Run Single Agent

Execute a single agent task:

```bash
swarms agent \
  --name "Research-Agent" \
  --task "Summarize recent AI developments" \
  --model "gpt-4o-mini" \
  --max-loops 1
```

### Auto Swarm

Automatically generate and run a swarm configuration:

```bash
swarms autoswarm --task "Build a content analysis pipeline" --model gpt-4
```

### Show All Commands

Display all available CLI features:

```bash
swarms show-all
```

---

## Output Handling

### Capture Output to File

```bash
swarms llm-council --task "Evaluate cloud providers" > analysis.txt 2>&1
```

### JSON Output Processing

Pretty-print any JSON lines in the output and pass other lines through unchanged:

```bash
swarms llm-council --task "Compare databases" | python -c "
import sys, json
for line in sys.stdin:
    line = line.strip()
    try:
        print(json.dumps(json.loads(line), indent=2))
    except ValueError:
        print(line)
"
```

### Pipe to Other Tools

```bash
swarms heavy-swarm --task "Research topic" | tee research.log | grep "RESULT"
```

---

## Troubleshooting

### Common Issues

| Issue | Solution |
|-------|----------|
| "Command not found" | Ensure `pip install swarms` completed successfully |
| "API key not set" | Export `OPENAI_API_KEY` environment variable |
| "Task cannot be empty" | Always provide `--task` argument |
| Timeout errors | Check network connectivity and API rate limits |

### Debug Mode

Run with verbose output for debugging:

```bash
swarms llm-council --task "Your query" --verbose 2>&1 | tee debug.log
```

---

## Next Steps

- Explore [CLI Reference Documentation](../swarms/cli/cli_reference.md) for all commands
- See [CLI Examples](../swarms/cli/cli_examples.md) for more use cases
- Learn about [LLM Council](./llm_council_quickstart.md) Python API
- Try [Heavy Swarm Documentation](../swarms/structs/heavy_swarm.md) for advanced configuration

# DebateWithJudge: 3-Step Quickstart Guide

The DebateWithJudge architecture enables structured debates between two agents (Pro and Con) with a Judge providing refined synthesis over multiple rounds. This creates progressively improved answers through iterative argumentation and evaluation.

## Overview

| Feature | Description |
|---------|-------------|
| **Pro Agent** | Argues in favor of a position with evidence and reasoning |
| **Con Agent** | Presents counter-arguments and identifies weaknesses |
| **Judge Agent** | Evaluates both sides and synthesizes the best elements |
| **Iterative Refinement** | Multiple rounds progressively improve the final answer |

```
Agent A (Pro) ↔ Agent B (Con)
      │              │
      ▼              ▼
    Judge / Critic Agent
             │
             ▼
Winner or synthesis → refined answer
```
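The control flow in this diagram can be sketched independently of the library. The loop below uses plain stand-in functions for the three roles; it illustrates the round structure only and is not the `DebateWithJudge` implementation:

```python
from typing import Callable

# A stand-in "agent": prompt in, text out
AgentFn = Callable[[str], str]


def debate_rounds(
    pro: AgentFn, con: AgentFn, judge: AgentFn,
    topic: str, max_loops: int = 3,
) -> str:
    """Run max_loops rounds of pro/con argument, refining via the judge."""
    answer = topic
    for _ in range(max_loops):
        pro_case = pro(answer)
        con_case = con(answer)
        # The judge sees both sides and emits a refined synthesis,
        # which seeds the next round of argument.
        answer = judge(f"PRO: {pro_case}\nCON: {con_case}")
    return answer
```

Each round argues over the previous round's synthesis rather than the raw topic, which is what makes the final answer progressively refined.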

---

## Step 1: Install and Import

Ensure you have Swarms installed and import the DebateWithJudge class:

```bash
pip install swarms
```

```python
from swarms import DebateWithJudge
```

---

## Step 2: Create the Debate System

Create a DebateWithJudge system using preset agents (the simplest approach):

```python
# Create debate system with preset optimized agents
debate = DebateWithJudge(
    preset_agents=True,  # Use built-in optimized agents
    max_loops=3,         # 3 rounds of debate
    model_name="gpt-4o-mini",
    verbose=True
)
```

---

## Step 3: Run the Debate

Execute the debate on a topic:

```python
# Define the debate topic
topic = "Should artificial intelligence be regulated by governments?"

# Run the debate
result = debate.run(task=topic)

# Print the refined answer
print(result)

# Or get just the final synthesis
final_answer = debate.get_final_answer()
print(final_answer)
```

---

## Complete Example

Here's a complete working example:

```python
from swarms import DebateWithJudge

# Step 1: Create the debate system with preset agents
debate_system = DebateWithJudge(
    preset_agents=True,
    max_loops=3,
    model_name="gpt-4o-mini",
    output_type="str-all-except-first",
    verbose=True,
)

# Step 2: Define a complex topic
topic = (
    "Should artificial intelligence be regulated by governments? "
    "Discuss the balance between innovation and safety."
)

# Step 3: Run the debate and get refined answer
result = debate_system.run(task=topic)

print("=" * 60)
print("DEBATE RESULT:")
print("=" * 60)
print(result)

# Access conversation history for detailed analysis
history = debate_system.get_conversation_history()
print(f"\nTotal exchanges: {len(history)}")
```

---

## Custom Agents Example

Create specialized agents for domain-specific debates:

```python
from swarms import Agent, DebateWithJudge

# Create specialized Pro agent
pro_agent = Agent(
    agent_name="Innovation-Advocate",
    system_prompt=(
        "You are a technology policy expert arguing for innovation and minimal regulation. "
        "You present arguments focusing on economic growth, technological competitiveness, "
        "and the risks of over-regulation stifling progress."
    ),
    model_name="gpt-4o-mini",
    max_loops=1,
)

# Create specialized Con agent
con_agent = Agent(
    agent_name="Safety-Advocate",
    system_prompt=(
        "You are a technology policy expert arguing for strong AI safety regulations. "
        "You present arguments focusing on public safety, ethical considerations, "
        "and the need for government oversight of powerful technologies."
    ),
    model_name="gpt-4o-mini",
    max_loops=1,
)

# Create specialized Judge agent
judge_agent = Agent(
    agent_name="Policy-Analyst",
    system_prompt=(
        "You are an impartial policy analyst evaluating technology regulation debates. "
        "You synthesize the strongest arguments from both sides and provide "
        "balanced, actionable policy recommendations."
    ),
    model_name="gpt-4o-mini",
    max_loops=1,
)

# Create debate system with custom agents
debate = DebateWithJudge(
    agents=[pro_agent, con_agent, judge_agent],  # Pass as list
    max_loops=3,
    verbose=True,
)

result = debate.run("Should AI-generated content require mandatory disclosure labels?")
```

---

## Batch Processing

Process multiple debate topics:

```python
from swarms import DebateWithJudge

debate = DebateWithJudge(preset_agents=True, max_loops=2)

# Multiple topics to debate
topics = [
    "Should remote work become the standard for knowledge workers?",
    "Is cryptocurrency a viable alternative to traditional banking?",
    "Should social media platforms be held accountable for content moderation?",
]

# Process all topics
results = debate.batched_run(topics)

for topic, result in zip(topics, results):
    print(f"\nTopic: {topic}")
    print(f"Result: {result[:200]}...")
```

---

## Configuration Options

| Parameter | Default | Description |
|-----------|---------|-------------|
| `preset_agents` | `False` | Use built-in optimized agents |
| `max_loops` | `3` | Number of debate rounds |
| `model_name` | `"gpt-4o-mini"` | Model for preset agents |
| `output_type` | `"str-all-except-first"` | Output format |
| `verbose` | `True` | Enable detailed logging |

### Output Types

| Value | Description |
|-------|-------------|
| `"str-all-except-first"` | Formatted string, excluding initialization (default) |
| `"str"` | All messages as formatted string |
| `"dict"` | Messages as dictionary |
| `"list"` | Messages as list |

---

## Use Cases

| Domain | Example Topic |
|--------|---------------|
| **Policy** | "Should universal basic income be implemented?" |
| **Technology** | "Microservices vs. monolithic architecture for startups?" |
| **Business** | "Should companies prioritize growth or profitability?" |
| **Ethics** | "Is it ethical to use AI in hiring decisions?" |
| **Science** | "Should gene editing be allowed for non-medical purposes?" |

---

## Next Steps

- Explore [DebateWithJudge Reference](../swarms/structs/debate_with_judge.md) for complete API details
- See [Debate Examples](https://github.com/kyegomez/swarms/tree/master/examples/multi_agent/debate_examples) for more use cases
- Learn about [Orchestration Methods](../swarms/structs/orchestration_methods.md) for other debate architectures

# GraphWorkflow with Rustworkx: 3-Step Quickstart Guide

GraphWorkflow provides a powerful workflow orchestration system that creates directed graphs of agents for complex multi-agent collaboration. The new **Rustworkx integration** delivers 5-10x faster performance for large-scale workflows.

## Overview

| Feature | Description |
|---------|-------------|
| **Directed Graph Structure** | Nodes are agents, edges define data flow |
| **Dual Backend Support** | NetworkX (compatibility) or Rustworkx (performance) |
| **Parallel Execution** | Multiple agents run simultaneously within layers |
| **Automatic Compilation** | Optimizes workflow structure for efficient execution |
| **5-10x Performance** | Rustworkx backend for high-throughput workflows |

---
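The "layers" behind parallel execution are just the level sets of the directed graph. The self-contained sketch below (plain Python with hypothetical node names; the real compilation happens inside `GraphWorkflow`) groups nodes via Kahn's algorithm so that everything in one layer can run concurrently once the previous layer finishes:

```python
from typing import Dict, List


def execution_layers(edges: Dict[str, List[str]]) -> List[List[str]]:
    """Group DAG nodes into layers: each layer's nodes have all of their
    dependencies satisfied by earlier layers (Kahn's algorithm)."""
    nodes = set(edges) | {v for targets in edges.values() for v in targets}
    indegree = {n: 0 for n in nodes}
    for targets in edges.values():
        for t in targets:
            indegree[t] += 1
    # Start with the nodes that depend on nothing
    layer = sorted(n for n in nodes if indegree[n] == 0)
    layers = []
    while layer:
        layers.append(layer)
        next_layer = []
        for n in layer:
            for t in edges.get(n, []):
                indegree[t] -= 1
                if indegree[t] == 0:
                    next_layer.append(t)
        layer = sorted(next_layer)
    return layers


# Fan-out/fan-in: both analysts can run in parallel in the middle layer
graph = {
    "DataCollector": ["TechnicalAnalyst", "MarketAnalyst"],
    "TechnicalAnalyst": ["Synthesizer"],
    "MarketAnalyst": ["Synthesizer"],
}
print(execution_layers(graph))
# [['DataCollector'], ['MarketAnalyst', 'TechnicalAnalyst'], ['Synthesizer']]
```

Backends like rustworkx speed up exactly this kind of traversal on large graphs, which is where the quoted 5-10x gains come from.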
|
||||
|
||||
## Step 1: Install and Import
|
||||
|
||||
Install Swarms and Rustworkx for high-performance workflows:
|
||||
|
||||
```bash
|
||||
pip install swarms rustworkx
|
||||
```
|
||||
|
||||
```python
|
||||
from swarms import Agent, GraphWorkflow
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Step 2: Create the Workflow with Rustworkx Backend
|
||||
|
||||
Create agents and build a workflow using the high-performance Rustworkx backend:
|
||||
|
||||
```python
|
||||
# Create specialized agents
|
||||
research_agent = Agent(
|
||||
agent_name="ResearchAgent",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="You are a research specialist. Gather and analyze information.",
|
||||
max_loops=1
|
||||
)
|
||||
|
||||
analysis_agent = Agent(
|
||||
agent_name="AnalysisAgent",
|
||||
model_name="gpt-4o-mini",
|
||||
system_prompt="You are an analyst. Process research findings and extract insights.",
|
||||
max_loops=1
|
||||
)
|
||||
|
||||
# Create workflow with rustworkx backend for better performance
|
||||
workflow = GraphWorkflow(
|
||||
name="Research-Analysis-Pipeline",
|
||||
backend="rustworkx", # Use rustworkx for 5-10x faster performance
|
||||
verbose=True
|
||||
)
|
||||
|
||||
# Add agents as nodes
|
||||
workflow.add_node(research_agent)
|
||||
workflow.add_node(analysis_agent)
|
||||
|
||||
# Connect agents with edges
|
||||
workflow.add_edge("ResearchAgent", "AnalysisAgent")
|
||||
```
|
||||
|
||||
---

## Step 3: Execute the Workflow

Run the workflow and get results:

```python
# Execute the workflow
results = workflow.run("What are the latest trends in renewable energy technology?")

# Print results
print(results)
```

---

## Complete Example

Here's a complete parallel processing workflow:

```python
from swarms import Agent, GraphWorkflow

# Step 1: Create specialized agents
data_collector = Agent(
    agent_name="DataCollector",
    model_name="gpt-4o-mini",
    system_prompt="You collect and organize data from various sources.",
    max_loops=1
)

technical_analyst = Agent(
    agent_name="TechnicalAnalyst",
    model_name="gpt-4o-mini",
    system_prompt="You perform technical analysis on data.",
    max_loops=1
)

market_analyst = Agent(
    agent_name="MarketAnalyst",
    model_name="gpt-4o-mini",
    system_prompt="You analyze market trends and conditions.",
    max_loops=1
)

synthesis_agent = Agent(
    agent_name="SynthesisAgent",
    model_name="gpt-4o-mini",
    system_prompt="You synthesize insights from multiple analysts into a cohesive report.",
    max_loops=1
)

# Step 2: Build workflow with rustworkx backend
workflow = GraphWorkflow(
    name="Market-Analysis-Pipeline",
    backend="rustworkx",  # High-performance backend
    verbose=True
)

# Add all agents
for agent in [data_collector, technical_analyst, market_analyst, synthesis_agent]:
    workflow.add_node(agent)

# Create fan-out pattern: data collector feeds both analysts
workflow.add_edges_from_source(
    "DataCollector",
    ["TechnicalAnalyst", "MarketAnalyst"]
)

# Create fan-in pattern: both analysts feed synthesis agent
workflow.add_edges_to_target(
    ["TechnicalAnalyst", "MarketAnalyst"],
    "SynthesisAgent"
)

# Step 3: Execute and get results
results = workflow.run("Analyze Bitcoin market trends for Q4 2024")

print("=" * 60)
print("WORKFLOW RESULTS:")
print("=" * 60)
print(results)

# Get compilation status
status = workflow.get_compilation_status()
print(f"\nLayers: {status['cached_layers_count']}")
print(f"Max workers: {status['max_workers']}")
```
---

## NetworkX vs Rustworkx Backend

| Graph Size | Recommended Backend | Performance |
|------------|---------------------|-------------|
| < 100 nodes | NetworkX | Minimal overhead |
| 100-1000 nodes | Either | Both perform well |
| 1000+ nodes | **Rustworkx** | 5-10x faster |
| 10k+ nodes | **Rustworkx** | Essential |

```python
# NetworkX backend (default, maximum compatibility)
workflow = GraphWorkflow(backend="networkx")

# Rustworkx backend (high performance)
workflow = GraphWorkflow(backend="rustworkx")
```
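For code that may run on machines without rustworkx installed, a small selection helper keeps the workflow portable. `choose_backend` is a hypothetical convenience function mirroring the size guidance in the table above, not part of the GraphWorkflow API:

```python
def choose_backend(num_nodes: int, rustworkx_available: bool) -> str:
    """Pick a graph backend following the size guidance above."""
    if not rustworkx_available:
        return "networkx"
    # Small graphs gain little from rustworkx; large graphs benefit greatly.
    return "rustworkx" if num_nodes >= 100 else "networkx"

print(choose_backend(50, rustworkx_available=True))     # networkx
print(choose_backend(5000, rustworkx_available=True))   # rustworkx
print(choose_backend(5000, rustworkx_available=False))  # networkx
```

In practice you would detect availability with a `try: import rustworkx / except ImportError` guard and pass the result as `backend=` to `GraphWorkflow`.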
---

## Edge Patterns

### Fan-Out (One-to-Many)

```python
# One agent feeds multiple agents
workflow.add_edges_from_source(
    "DataCollector",
    ["Analyst1", "Analyst2", "Analyst3"]
)
```

### Fan-In (Many-to-One)

```python
# Multiple agents feed one agent
workflow.add_edges_to_target(
    ["Analyst1", "Analyst2", "Analyst3"],
    "SynthesisAgent"
)
```

### Parallel Chain (Many-to-Many)

```python
# Full mesh connection
workflow.add_parallel_chain(
    ["Source1", "Source2"],
    ["Target1", "Target2", "Target3"]
)
```
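A parallel chain is simply the cross product of sources and targets, so it creates `len(sources) * len(targets)` edges. A dependency-free sketch of the edge set it implies:

```python
from itertools import product

sources = ["Source1", "Source2"]
targets = ["Target1", "Target2", "Target3"]

# Full mesh: every source connects to every target
edges = list(product(sources, targets))
print(len(edges))   # 6
print(edges[0])     # ('Source1', 'Target1')
```

Keep this quadratic growth in mind when wiring large layers together; 100 sources and 100 targets produce 10,000 edges.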
---

## Using from_spec for Quick Setup

Create workflows quickly with the `from_spec` class method:

```python
from swarms import Agent, GraphWorkflow

# Create agents
agent1 = Agent(agent_name="Researcher", model_name="gpt-4o-mini", max_loops=1)
agent2 = Agent(agent_name="Analyzer", model_name="gpt-4o-mini", max_loops=1)
agent3 = Agent(agent_name="Reporter", model_name="gpt-4o-mini", max_loops=1)

# Create workflow from specification
workflow = GraphWorkflow.from_spec(
    agents=[agent1, agent2, agent3],
    edges=[
        ("Researcher", "Analyzer"),
        ("Analyzer", "Reporter"),
    ],
    task="Analyze climate change data",
    backend="rustworkx"  # Use high-performance backend
)

results = workflow.run()
```
---

## Visualization

Generate visual representations of your workflow:

```python
# Create visualization (requires graphviz)
output_file = workflow.visualize(
    format="png",
    view=True,
    show_summary=True
)
print(f"Visualization saved to: {output_file}")

# Simple text visualization
text_viz = workflow.visualize_simple()
print(text_viz)
```

---

## Serialization

Save and load workflows:

```python
# Save workflow with conversation history
workflow.save_to_file(
    "my_workflow.json",
    include_conversation=True,
    include_runtime_state=True
)

# Load workflow later
loaded_workflow = GraphWorkflow.load_from_file(
    "my_workflow.json",
    restore_runtime_state=True
)

# Continue execution
results = loaded_workflow.run("Follow-up analysis")
```
---

## Large-Scale Example with Rustworkx

```python
from swarms import Agent, GraphWorkflow

# Create workflow for large-scale processing
workflow = GraphWorkflow(
    name="Large-Scale-Pipeline",
    backend="rustworkx",  # Essential for large graphs
    verbose=True
)

# Create many processing agents
processors = []
for i in range(50):
    agent = Agent(
        agent_name=f"Processor{i}",
        model_name="gpt-4o-mini",
        max_loops=1
    )
    processors.append(agent)
    workflow.add_node(agent)

# Create layered connections
for i in range(0, 40, 10):
    sources = [f"Processor{j}" for j in range(i, i + 10)]
    targets = [f"Processor{j}" for j in range(i + 10, min(i + 20, 50))]
    if targets:
        workflow.add_parallel_chain(sources, targets)

# Compile and execute
workflow.compile()
status = workflow.get_compilation_status()
print(f"Compiled: {status['cached_layers_count']} layers")

results = workflow.run("Process dataset in parallel")
```

---

## Next Steps

- Explore [GraphWorkflow Reference](../swarms/structs/graph_workflow.md) for complete API details
- See [Multi-Agentic Patterns with GraphWorkflow](./graphworkflow_rustworkx_patterns.md) for advanced patterns
- Learn about [Visualization Options](../swarms/structs/graph_workflow.md#visualization-methods) for debugging workflows

@ -0,0 +1,170 @@
# LLM Council: 3-Step Quickstart Guide

The LLM Council enables collaborative decision-making with multiple AI agents through peer review and synthesis. Inspired by Andrej Karpathy's llm-council, it assembles specialized agents that respond to a query independently and review each other's anonymized responses, after which a Chairman synthesizes the best elements into a final answer.

## Overview

| Feature | Description |
|---------|-------------|
| **Multiple Perspectives** | Each council member provides unique insights from different viewpoints |
| **Peer Review** | Members evaluate and rank each other's responses anonymously |
| **Synthesis** | Chairman combines the best elements from all responses |
| **Transparency** | See both individual responses and evaluation rankings |
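The anonymous peer-review step can be illustrated with a simple rank aggregation. The Borda-count sketch below is purely illustrative — it is not the LLMCouncil's actual scoring method, and the labels A/B/C stand in for anonymized member responses:

```python
# Each councilor ranks the anonymized responses, best first.
rankings = [
    ["B", "A", "C"],  # councilor 1
    ["B", "C", "A"],  # councilor 2
    ["A", "B", "C"],  # councilor 3
]

def borda_scores(rankings):
    """Borda count: top rank earns len-1 points, last rank earns 0."""
    scores = {}
    for ranking in rankings:
        for position, label in enumerate(ranking):
            scores[label] = scores.get(label, 0) + (len(ranking) - 1 - position)
    return scores

scores = borda_scores(rankings)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
# [('B', 5), ('A', 3), ('C', 1)]
```

Because members never see who wrote which response, rankings like these reflect response quality rather than model reputation.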
---

## Step 1: Install and Import

First, ensure you have Swarms installed and import the LLMCouncil class:

```bash
pip install swarms
```

```python
from swarms.structs.llm_council import LLMCouncil
```

---

## Step 2: Create the Council

Create an LLM Council with default council members (GPT-5.1, Gemini 3 Pro, Claude Sonnet 4.5, and Grok-4):

```python
# Create the council with default members
council = LLMCouncil(
    name="Decision Council",
    verbose=True,
    output_type="dict-all-except-first"
)
```

---

## Step 3: Run a Query

Execute a query and get the synthesized response:

```python
# Run a query
result = council.run("What are the key factors to consider when choosing a cloud provider for enterprise applications?")

# Access the final synthesized answer
print(result["final_response"])

# View individual member responses
print(result["original_responses"])

# See how members ranked each other
print(result["evaluations"])
```
---

## Complete Example

Here's a complete working example:

```python
from swarms.structs.llm_council import LLMCouncil

# Step 1: Create the council
council = LLMCouncil(
    name="Strategy Council",
    description="A council for strategic decision-making",
    verbose=True,
    output_type="dict-all-except-first"
)

# Step 2: Run a strategic query
result = council.run(
    "Should a B2B SaaS startup prioritize product-led growth or sales-led growth? "
    "Consider factors like market size, customer acquisition costs, and scalability."
)

# Step 3: Process results
print("=" * 50)
print("FINAL SYNTHESIZED ANSWER:")
print("=" * 50)
print(result["final_response"])
```

---

## Custom Council Members

For specialized domains, create custom council members:

```python
from swarms import Agent
from swarms.structs.llm_council import LLMCouncil, get_gpt_councilor_prompt

# Create specialized agents
finance_expert = Agent(
    agent_name="Finance-Councilor",
    system_prompt="You are a financial analyst specializing in market analysis and investment strategies...",
    model_name="gpt-4.1",
    max_loops=1,
)

tech_expert = Agent(
    agent_name="Technology-Councilor",
    system_prompt="You are a technology strategist specializing in digital transformation...",
    model_name="gpt-4.1",
    max_loops=1,
)

risk_expert = Agent(
    agent_name="Risk-Councilor",
    system_prompt="You are a risk management expert specializing in enterprise risk assessment...",
    model_name="gpt-4.1",
    max_loops=1,
)

# Create council with custom members
council = LLMCouncil(
    council_members=[finance_expert, tech_expert, risk_expert],
    chairman_model="gpt-4.1",
    verbose=True
)

result = council.run("Evaluate the risk-reward profile of investing in AI infrastructure")
```
---

## CLI Usage

Run LLM Council directly from the command line:

```bash
swarms llm-council --task "What is the best approach to implement microservices architecture?"
```

With verbose output:

```bash
swarms llm-council --task "Analyze the pros and cons of remote work" --verbose
```

---

## Use Cases

| Domain | Example Query |
|--------|---------------|
| **Business Strategy** | "Should we expand internationally or focus on domestic growth?" |
| **Technology** | "Which database architecture best suits our high-throughput requirements?" |
| **Finance** | "Evaluate investment opportunities in the renewable energy sector" |
| **Healthcare** | "What treatment approaches should be considered for this patient profile?" |
| **Legal** | "What are the compliance implications of this data processing policy?" |

---

## Next Steps

- Explore [LLM Council Examples](./llm_council_examples.md) for domain-specific implementations
- Learn about [LLM Council Reference Documentation](../swarms/structs/llm_council.md) for complete API details
- Try the [CLI Reference](../swarms/cli/cli_reference.md) for DevOps integration

@ -0,0 +1,273 @@
# Agent Marketplace Publishing: 3-Step Quickstart Guide

Publish your agents directly to the Swarms Marketplace with minimal configuration. Share your specialized agents with the community and monetize your creations.

## Overview

| Feature | Description |
|---------|-------------|
| **Direct Publishing** | Publish agents with a single flag |
| **Minimal Configuration** | Just add use cases, tags, and capabilities |
| **Automatic Integration** | Seamlessly integrates with marketplace API |
| **Monetization Ready** | Set pricing for your agents |

---

## Step 1: Get Your API Key

Before publishing, you need a Swarms API key:

1. Visit [swarms.world/platform/api-keys](https://swarms.world/platform/api-keys)
2. Create an account or sign in
3. Generate an API key
4. Set the environment variable:

```bash
export SWARMS_API_KEY="your-api-key-here"
```

Or add to your `.env` file:

```
SWARMS_API_KEY=your-api-key-here
```
---

## Step 2: Configure Your Agent

Create an agent with publishing configuration:

```python
from swarms import Agent

# Create your specialized agent
my_agent = Agent(
    agent_name="Market-Analysis-Agent",
    agent_description="Expert market analyst specializing in cryptocurrency and stock analysis",
    model_name="gpt-4o-mini",
    system_prompt="""You are an expert market analyst specializing in:
- Cryptocurrency market analysis
- Stock market trends
- Risk assessment
- Portfolio recommendations

Provide data-driven insights with confidence levels.""",
    max_loops=1,

    # Publishing configuration
    publish_to_marketplace=True,

    # Required: Define use cases
    use_cases=[
        {
            "title": "Cryptocurrency Analysis",
            "description": "Analyze crypto market trends and provide investment insights"
        },
        {
            "title": "Stock Screening",
            "description": "Screen stocks based on technical and fundamental criteria"
        },
        {
            "title": "Portfolio Review",
            "description": "Review and optimize investment portfolios"
        }
    ],

    # Required: Tags and capabilities
    tags=["finance", "crypto", "stocks", "analysis"],
    capabilities=["market-analysis", "risk-assessment", "portfolio-optimization"]
)
```

---

## Step 3: Run to Publish

Simply run the agent to trigger publishing:

```python
# Running the agent automatically publishes it
result = my_agent.run("Analyze Bitcoin's current market position")

print(result)
print("\n✅ Agent published to marketplace!")
```
---

## Complete Example

Here's a complete working example:

```python
import os
from swarms import Agent

# Ensure API key is set
if not os.getenv("SWARMS_API_KEY"):
    raise ValueError("Please set SWARMS_API_KEY environment variable")

# Step 1: Create a specialized medical analysis agent
medical_agent = Agent(
    agent_name="Blood-Data-Analysis-Agent",
    agent_description="Explains and contextualizes common blood test panels with structured insights",
    model_name="gpt-4o-mini",
    max_loops=1,

    system_prompt="""You are a clinical laboratory data analyst assistant focused on hematology and basic metabolic panels.

Your goals:
1) Interpret common blood test panels (CBC, CMP/BMP, lipid panel, HbA1c, thyroid panels)
2) Provide structured findings: out-of-range markers, degree of deviation, clinical significance
3) Identify potential confounders (e.g., hemolysis, fasting status, medications)
4) Suggest safe, non-diagnostic next steps

Reliability and safety:
- This is not medical advice. Do not diagnose or treat.
- Use cautious language with confidence levels (low/medium/high)
- Highlight red-flag combinations that warrant urgent clinical evaluation""",

    # Step 2: Publishing configuration
    publish_to_marketplace=True,

    tags=["lab", "hematology", "metabolic", "education"],
    capabilities=[
        "panel-interpretation",
        "risk-flagging",
        "guideline-citation"
    ],

    use_cases=[
        {
            "title": "Blood Analysis",
            "description": "Analyze blood samples and summarize notable findings."
        },
        {
            "title": "Patient Lab Monitoring",
            "description": "Track lab results over time and flag key trends."
        },
        {
            "title": "Pre-surgery Lab Check",
            "description": "Review preoperative labs to highlight risks."
        }
    ],
)

# Step 3: Run the agent (this publishes it to the marketplace)
result = medical_agent.run(
    task="Analyze this blood sample: Hematology and Basic Metabolic Panel"
)

print(result)
```
---

## Required Fields for Publishing

| Field | Type | Description |
|-------|------|-------------|
| `publish_to_marketplace` | `bool` | Set to `True` to enable publishing |
| `use_cases` | `List[Dict]` | List of use case dictionaries with `title` and `description` |
| `tags` | `List[str]` | Keywords for discovery |
| `capabilities` | `List[str]` | Agent capabilities for matching |

### Use Case Format

```python
use_cases = [
    {
        "title": "Use Case Title",
        "description": "Detailed description of what the agent does for this use case"
    },
    # Add more use cases...
]
```
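Before enabling publishing, it can help to sanity-check the required fields client-side. `validate_publish_config` below is a hypothetical helper written for illustration, not part of the swarms API:

```python
def validate_publish_config(use_cases, tags, capabilities):
    """Return a list of problems with the marketplace publishing fields."""
    errors = []
    if not use_cases:
        errors.append("use_cases must be a non-empty list")
    for i, uc in enumerate(use_cases or []):
        # Each use case needs both a title and a description.
        if not isinstance(uc, dict) or not uc.get("title") or not uc.get("description"):
            errors.append(f"use_cases[{i}] needs 'title' and 'description'")
    if not tags:
        errors.append("tags must be a non-empty list of keywords")
    if not capabilities:
        errors.append("capabilities must be a non-empty list")
    return errors

print(validate_publish_config(
    use_cases=[{"title": "Blood Analysis", "description": "Summarize findings."}],
    tags=["lab"],
    capabilities=["panel-interpretation"],
))
# []
```

An empty list means the configuration satisfies the required-field table above; any strings returned describe what to fix before setting `publish_to_marketplace=True`.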
---

## Optional: Programmatic Publishing

You can also publish prompts/agents directly using the utility function:

```python
from swarms.utils.swarms_marketplace_utils import add_prompt_to_marketplace

response = add_prompt_to_marketplace(
    name="My Custom Agent",
    prompt="Your detailed system prompt here...",
    description="What this agent does",
    use_cases=[
        {"title": "Use Case 1", "description": "Description 1"},
        {"title": "Use Case 2", "description": "Description 2"}
    ],
    tags="tag1, tag2, tag3",
    category="research",
    is_free=True,  # Set to False for paid agents
    price_usd=0.0  # Set price if not free
)

print(response)
```

---

## Marketplace Categories

| Category | Description |
|----------|-------------|
| `research` | Research and analysis agents |
| `content` | Content generation agents |
| `coding` | Programming and development agents |
| `finance` | Financial analysis agents |
| `healthcare` | Medical and health-related agents |
| `education` | Educational and tutoring agents |
| `legal` | Legal research and analysis agents |

---

## Best Practices

!!! tip "Publishing Best Practices"

    - **Clear Descriptions**: Write detailed, accurate agent descriptions
    - **Multiple Use Cases**: Provide 3-5 distinct use cases
    - **Relevant Tags**: Use specific, searchable keywords
    - **Test First**: Thoroughly test your agent before publishing
    - **System Prompt Quality**: Ensure your system prompt is well-crafted

!!! warning "Important Notes"

    - `use_cases` is **required** when `publish_to_marketplace=True`
    - Both `tags` and `capabilities` should be provided for discoverability
    - The agent must have a valid `SWARMS_API_KEY` set in the environment

---

## Monetization

To create a paid agent:

```python
from swarms.utils.swarms_marketplace_utils import add_prompt_to_marketplace

response = add_prompt_to_marketplace(
    name="Premium Analysis Agent",
    prompt="Your premium agent prompt...",
    description="Advanced analysis capabilities",
    use_cases=[...],
    tags="premium, advanced",
    category="finance",
    is_free=False,  # Paid agent
    price_usd=9.99  # Price per use
)
```

---

## Next Steps

- Visit [Swarms Marketplace](https://swarms.world) to browse published agents
- Learn about [Marketplace Documentation](../swarms_platform/share_and_discover.md)
- Explore [Monetization Options](../swarms_platform/monetize.md)
- See [API Key Management](../swarms_platform/apikeys.md)

@ -0,0 +1,69 @@
# Multi-Agent Architectures Overview

Build sophisticated multi-agent systems with Swarms' advanced orchestration patterns. From hierarchical teams to collaborative councils, these examples demonstrate how to coordinate multiple AI agents for complex tasks.

## What You'll Learn

| Topic | Description |
|-------|-------------|
| **Hierarchical Swarms** | Director agents coordinating worker agents |
| **Collaborative Systems** | Agents working together through debate and consensus |
| **Workflow Patterns** | Sequential, concurrent, and graph-based execution |
| **Routing Systems** | Intelligent task routing to specialized agents |
| **Group Interactions** | Multi-agent conversations and discussions |

---

## Architecture Examples

### Hierarchical & Orchestration

| Example | Description | Link |
|---------|-------------|------|
| **HierarchicalSwarm** | Multi-level agent organization with director and workers | [View Example](../swarms/examples/hierarchical_swarm_example.md) |
| **Hybrid Hierarchical-Cluster Swarm** | Combined hierarchical and cluster patterns | [View Example](../swarms/examples/hhcs_examples.md) |
| **SwarmRouter** | Intelligent routing of tasks to appropriate swarms | [View Example](../swarms/examples/swarm_router.md) |
| **MultiAgentRouter** | Route tasks to specialized individual agents | [View Example](../swarms/examples/multi_agent_router_minimal.md) |

### Collaborative & Consensus

| Example | Description | Link |
|---------|-------------|------|
| **LLM Council Quickstart** | Collaborative decision-making with peer review and synthesis | [View Example](./llm_council_quickstart.md) |
| **LLM Council Examples** | Domain-specific council implementations | [View Examples](./llm_council_examples.md) |
| **DebateWithJudge Quickstart** | Two agents debate with a judge providing synthesis | [View Example](./debate_quickstart.md) |
| **Mixture of Agents** | Heterogeneous agents for diverse task handling | [View Example](../swarms/examples/moa_example.md) |

### Workflow Patterns

| Example | Description | Link |
|---------|-------------|------|
| **GraphWorkflow with Rustworkx** | High-performance graph-based workflows (5-10x faster) | [View Example](./graphworkflow_quickstart.md) |
| **Multi-Agentic Patterns with GraphWorkflow** | Advanced graph workflow patterns | [View Example](../swarms/examples/graphworkflow_rustworkx_patterns.md) |
| **SequentialWorkflow** | Linear agent pipelines | [View Example](../swarms/examples/sequential_example.md) |
| **ConcurrentWorkflow** | Parallel agent execution | [View Example](../swarms/examples/concurrent_workflow.md) |

### Group Communication

| Example | Description | Link |
|---------|-------------|------|
| **Group Chat** | Multi-agent group conversations | [View Example](../swarms/examples/groupchat_example.md) |
| **Interactive GroupChat** | Real-time interactive agent discussions | [View Example](../swarms/examples/igc_example.md) |

### Specialized Patterns

| Example | Description | Link |
|---------|-------------|------|
| **Agents as Tools** | Use agents as callable tools for other agents | [View Example](../swarms/examples/agents_as_tools.md) |
| **Aggregate Responses** | Combine outputs from multiple agents | [View Example](../swarms/examples/aggregate.md) |
| **Unique Swarms** | Experimental and specialized swarm patterns | [View Example](../swarms/examples/unique_swarms.md) |
| **BatchedGridWorkflow (Simple)** | Grid-based batch processing | [View Example](../swarms/examples/batched_grid_simple_example.md) |
| **BatchedGridWorkflow (Advanced)** | Advanced grid-based batch processing | [View Example](../swarms/examples/batched_grid_advanced_example.md) |

---

## Related Resources

- [Swarm Architectures Concept Guide](../swarms/concept/swarm_architectures.md)
- [Choosing a Multi-Agent Architecture](../swarms/concept/how_to_choose_swarms.md)
- [Custom Swarm Development](../swarms/structs/custom_swarm.md)

@ -0,0 +1,39 @@
# RAG Examples Overview

Enhance your agents with Retrieval-Augmented Generation (RAG). Connect to vector databases and knowledge bases to give agents access to your custom data.

## What You'll Learn

| Topic | Description |
|-------|-------------|
| **RAG Fundamentals** | Understanding retrieval-augmented generation |
| **Vector Databases** | Connecting to Qdrant, Pinecone, and more |
| **Document Processing** | Ingesting and indexing documents |
| **Semantic Search** | Finding relevant context for queries |
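At its core, the retrieval step ranks stored documents by similarity to the query and feeds the best match to the agent as context. A dependency-free sketch using bag-of-words cosine similarity shows the idea; production RAG systems use learned embeddings and a vector database such as Qdrant, and the document texts here are made up:

```python
import math
from collections import Counter

docs = {
    "qdrant": "Qdrant is a vector database for semantic search",
    "pricing": "Our product pricing starts at ten dollars per month",
}

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs):
    """Return the name of the document most similar to the query."""
    q = Counter(query.lower().split())
    scored = {name: cosine(q, Counter(text.lower().split())) for name, text in docs.items()}
    return max(scored, key=scored.get)

print(retrieve("which vector database supports semantic search", docs))
# qdrant
```

Swapping the word-count vectors for embedding vectors, and the dictionary for an indexed vector store, gives you the retrieval half of a real RAG pipeline.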
---

## RAG Examples

| Example | Description | Vector DB | Link |
|---------|-------------|-----------|------|
| **RAG with Qdrant** | Complete RAG implementation with Qdrant | Qdrant | [View Example](../swarms/RAG/qdrant_rag.md) |

---

## Use Cases

| Use Case | Description |
|----------|-------------|
| **Document Q&A** | Answer questions about your documents |
| **Knowledge Base** | Query internal company knowledge |
| **Research Assistant** | Search through research papers |
| **Code Documentation** | Query codebase documentation |
| **Customer Support** | Access product knowledge |

---

## Related Resources

- [Memory Documentation](../swarms/memory/diy_memory.md) - Building custom memory
- [Agent Long-term Memory](../swarms/structs/agent.md#long-term-memory) - Agent memory configuration

@ -0,0 +1,55 @@
# Tools & Integrations Overview

Extend your agents with powerful integrations. Connect to web search, browser automation, financial data, and Model Context Protocol (MCP) servers.

## What You'll Learn

| Topic | Description |
|-------|-------------|
| **Web Search** | Integrate real-time web search capabilities |
| **Browser Automation** | Control web browsers programmatically |
| **Financial Data** | Access stock and market information |
| **Web Scraping** | Extract data from websites |
| **MCP Integration** | Connect to Model Context Protocol servers |

---

## Integration Examples

### Web Search

| Integration | Description | Link |
|-------------|-------------|------|
| **Exa Search** | AI-powered web search for agents | [View Example](./exa_search.md) |

### Browser Automation

| Integration | Description | Link |
|-------------|-------------|------|
| **Browser Use** | Automated browser control with agents | [View Example](./browser_use.md) |

### Financial Data

| Integration | Description | Link |
|-------------|-------------|------|
| **Yahoo Finance** | Stock data, quotes, and market info | [View Example](../swarms/examples/yahoo_finance.md) |

### Web Scraping

| Integration | Description | Link |
|-------------|-------------|------|
| **Firecrawl** | AI-powered web scraping | [View Example](../developer_guides/firecrawl.md) |

### MCP (Model Context Protocol)

| Integration | Description | Link |
|-------------|-------------|------|
| **Multi-MCP Agent** | Connect agents to multiple MCP servers | [View Example](../swarms/examples/multi_mcp_agent.md) |

---

## Related Resources

- [Tools Documentation](../swarms/tools/main.md) - Building custom tools
- [MCP Integration Guide](../swarms/structs/agent_mcp.md) - Detailed MCP setup
- [swarms-tools Package](../swarms_tools/overview.md) - Pre-built tool collection

@ -0,0 +1,242 @@
# CLI Agent Guide: Create Agents from Command Line
|
||||
|
||||
Create, configure, and run AI agents directly from your terminal without writing Python code.
|
||||
|
||||
## Basic Agent Creation
|
||||
|
||||
### Step 1: Define Your Agent
|
||||
|
||||
Create an agent with required parameters:
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Research-Agent" \
|
||||
--description "An AI agent that researches topics and provides summaries" \
|
||||
--system-prompt "You are an expert researcher. Provide comprehensive, well-structured summaries with key insights." \
|
||||
--task "Research the current state of quantum computing and its applications"
|
||||
```
|
||||
|
||||
### Step 2: Customize Model Settings
|
||||
|
||||
Add model configuration options:
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Code-Reviewer" \
|
||||
--description "Expert code review assistant" \
|
||||
--system-prompt "You are a senior software engineer. Review code for best practices, bugs, and improvements." \
|
||||
--task "Review this Python function for efficiency: def fib(n): return fib(n-1) + fib(n-2) if n > 1 else n" \
|
||||
--model-name "gpt-4o-mini" \
|
||||
--temperature 0.1 \
|
||||
--max-loops 3
|
||||
```
|
||||
|
||||
### Step 3: Enable Advanced Features
|
||||
|
||||
Add streaming, dashboard, and autosave:
|
||||
|
||||
```bash
|
||||
swarms agent \
|
||||
--name "Analysis-Agent" \
|
||||
--description "Data analysis specialist" \
|
||||
--system-prompt "You are a data analyst. Provide detailed statistical analysis and insights." \
|
||||
--task "Analyze market trends for electric vehicles in 2024" \
|
||||
--model-name "gpt-4" \
|
||||
--streaming-on \
|
||||
--verbose \
|
||||
--autosave \
|
||||
--saved-state-path "./agent_states/analysis_agent.json"
|
||||
```
|
||||
|
||||
---
|
||||
|
||||

## Complete Parameter Reference

### Required Parameters

| Parameter | Description | Example |
|-----------|-------------|---------|
| `--name` | Agent name | `"Research-Agent"` |
| `--description` | Agent description | `"AI research assistant"` |
| `--system-prompt` | Agent's system instructions | `"You are an expert..."` |
| `--task` | Task for the agent | `"Analyze this data"` |

### Model Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| `--model-name` | `"gpt-4"` | LLM model to use |
| `--temperature` | `None` | Creativity (0.0-2.0) |
| `--max-loops` | `None` | Maximum execution loops |
| `--context-length` | `None` | Context window size |

### Behavior Parameters

| Parameter | Default | Description |
|-----------|---------|-------------|
| `--auto-generate-prompt` | `False` | Auto-generate prompts |
| `--dynamic-temperature-enabled` | `False` | Dynamic temperature adjustment |
| `--dynamic-context-window` | `False` | Dynamic context window |
| `--streaming-on` | `False` | Enable streaming output |
| `--verbose` | `False` | Verbose mode |

### State Management

| Parameter | Default | Description |
|-----------|---------|-------------|
| `--autosave` | `False` | Enable autosave |
| `--saved-state-path` | `None` | Path to save state |
| `--dashboard` | `False` | Enable dashboard |
| `--return-step-meta` | `False` | Return step metadata |

### Integration

| Parameter | Default | Description |
|-----------|---------|-------------|
| `--mcp-url` | `None` | MCP server URL |
| `--user-name` | `None` | Username for agent |
| `--output-type` | `None` | Output format (str, json) |
| `--retry-attempts` | `None` | Retry attempts on failure |
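
The integration flags combine freely with the core flags above. As an illustrative sketch only (the localhost MCP URL, agent name, and task text are placeholder assumptions, not real endpoints), an agent wired to an MCP server that returns JSON and retries transient failures might look like:

```shell
swarms agent \
    --name "MCP-Tool-Agent" \
    --description "Agent that uses tools exposed by an MCP server" \
    --system-prompt "You are a tool-using assistant. Prefer calling tools over guessing." \
    --task "List the tools available to you and summarize what each does" \
    --model-name "gpt-4o-mini" \
    --mcp-url "http://localhost:8000/mcp" \
    --output-type "json" \
    --retry-attempts 2
```

Every flag used here comes from the tables above; only the URL and task text are invented for the example.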

---

## Use Case Examples

### Financial Analyst Agent

```bash
swarms agent \
    --name "Financial-Analyst" \
    --description "Expert financial analysis and market insights" \
    --system-prompt "You are a CFA-certified financial analyst. Provide detailed market analysis with data-driven insights. Include risk assessments and recommendations." \
    --task "Analyze Apple (AAPL) stock performance and provide investment outlook for Q4 2024" \
    --model-name "gpt-4" \
    --temperature 0.2 \
    --max-loops 5 \
    --verbose
```

### Code Generation Agent

```bash
swarms agent \
    --name "Code-Generator" \
    --description "Expert Python developer and code generator" \
    --system-prompt "You are an expert Python developer. Write clean, efficient, well-documented code following PEP 8 guidelines. Include type hints and docstrings." \
    --task "Create a Python class for managing a task queue with priority scheduling" \
    --model-name "gpt-4" \
    --temperature 0.1 \
    --streaming-on
```

### Creative Writing Agent

```bash
swarms agent \
    --name "Creative-Writer" \
    --description "Professional content writer and storyteller" \
    --system-prompt "You are a professional writer with expertise in engaging content. Write compelling, creative content with strong narrative flow." \
    --task "Write a short story about a scientist who discovers time travel" \
    --model-name "gpt-4" \
    --temperature 0.8 \
    --max-loops 2
```

### Research Summarizer Agent

```bash
swarms agent \
    --name "Research-Summarizer" \
    --description "Academic research summarization specialist" \
    --system-prompt "You are an academic researcher. Summarize research topics with key findings, methodologies, and implications. Cite sources when available." \
    --task "Summarize recent advances in CRISPR gene editing technology" \
    --model-name "gpt-4o-mini" \
    --temperature 0.3 \
    --verbose \
    --autosave
```

---

## Scripting Examples

### Bash Script with Multiple Agents

```bash
#!/bin/bash
# run_agents.sh

# Research phase
swarms agent \
    --name "Researcher" \
    --description "Research specialist" \
    --system-prompt "You are a researcher. Gather comprehensive information on topics." \
    --task "Research the impact of AI on healthcare" \
    --model-name "gpt-4o-mini" \
    --output-type "json" > research_output.json

# Analysis phase
swarms agent \
    --name "Analyst" \
    --description "Data analyst" \
    --system-prompt "You are an analyst. Analyze data and provide insights." \
    --task "Analyze the research findings from: $(cat research_output.json)" \
    --model-name "gpt-4o-mini" \
    --output-type "json" > analysis_output.json

echo "Pipeline complete!"
```

### Loop Through Tasks

```bash
#!/bin/bash
# batch_analysis.sh

TOPICS=("renewable energy" "electric vehicles" "smart cities" "AI ethics")

for topic in "${TOPICS[@]}"; do
    echo "Analyzing: $topic"
    swarms agent \
        --name "Topic-Analyst" \
        --description "Topic analysis specialist" \
        --system-prompt "You are an expert analyst. Provide concise analysis of topics." \
        --task "Analyze current trends in: $topic" \
        --model-name "gpt-4o-mini" \
        >> "analysis_results.txt"
    echo "---" >> "analysis_results.txt"
done
```

---

## Tips and Best Practices

!!! tip "System Prompt Tips"
    - Be specific about the agent's role and expertise
    - Include output format preferences
    - Specify any constraints or guidelines

!!! tip "Temperature Settings"
    - Use **0.1-0.3** for factual/analytical tasks
    - Use **0.5-0.7** for balanced responses
    - Use **0.8-1.0** for creative tasks

!!! tip "Performance Optimization"
    - Use `gpt-4o-mini` for simpler tasks (faster, cheaper)
    - Use `gpt-4` for complex reasoning tasks
    - Set appropriate `--max-loops` to control execution time

!!! warning "Common Issues"
    - Ensure API key is set: `export OPENAI_API_KEY="..."`
    - Wrap multi-word arguments in quotes
    - Use `--verbose` to debug issues

---

## Next Steps

- [CLI YAML Configuration](./cli_yaml_guide.md) - Run agents from YAML files
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - LLM Council and Heavy Swarm
- [CLI Reference](./cli_reference.md) - Complete command documentation
@ -0,0 +1,383 @@

# CLI Heavy Swarm Guide: Comprehensive Task Analysis

Run Heavy Swarm from the command line for complex task decomposition and comprehensive analysis with specialized agents.

## Overview

Heavy Swarm follows a structured workflow:

1. **Task Decomposition**: Breaks down tasks into specialized questions
2. **Parallel Execution**: Executes specialized agents in parallel
3. **Result Synthesis**: Integrates and synthesizes results
4. **Comprehensive Reporting**: Generates detailed final reports

---

## Basic Usage

### Step 1: Run a Simple Analysis

```bash
swarms heavy-swarm --task "Analyze the current state of quantum computing"
```

### Step 2: Customize with Options

```bash
swarms heavy-swarm \
    --task "Research renewable energy market trends" \
    --loops-per-agent 2 \
    --verbose
```

### Step 3: Use Custom Models

```bash
swarms heavy-swarm \
    --task "Analyze cryptocurrency regulation globally" \
    --question-agent-model-name gpt-4 \
    --worker-model-name gpt-4 \
    --loops-per-agent 3 \
    --verbose
```

---

## Command Options

| Option | Default | Description |
|--------|---------|-------------|
| `--task` | **Required** | The task to analyze |
| `--loops-per-agent` | 1 | Execution loops per agent |
| `--question-agent-model-name` | gpt-4o-mini | Model for question generation |
| `--worker-model-name` | gpt-4o-mini | Model for worker agents |
| `--random-loops-per-agent` | False | Randomize loops (1-10) |
| `--verbose` | False | Enable detailed output |

---

## Specialized Agents

Heavy Swarm includes specialized agents for different aspects:

| Agent | Role | Focus |
|-------|------|-------|
| **Question Agent** | Decomposes tasks | Generates targeted questions |
| **Research Agent** | Gathers information | Fast, trustworthy research |
| **Analysis Agent** | Processes data | Statistical analysis, insights |
| **Writing Agent** | Creates reports | Clear, structured documentation |

---

## Use Case Examples

### Market Research

```bash
swarms heavy-swarm \
    --task "Comprehensive market analysis of the electric vehicle industry in North America" \
    --loops-per-agent 3 \
    --question-agent-model-name gpt-4 \
    --worker-model-name gpt-4 \
    --verbose
```

### Technology Assessment

```bash
swarms heavy-swarm \
    --task "Evaluate the technical feasibility and ROI of implementing AI-powered customer service automation" \
    --loops-per-agent 2 \
    --verbose
```

### Competitive Analysis

```bash
swarms heavy-swarm \
    --task "Analyze competitive landscape for cloud computing services: AWS vs Azure vs Google Cloud" \
    --loops-per-agent 2 \
    --question-agent-model-name gpt-4 \
    --verbose
```

### Investment Research

```bash
swarms heavy-swarm \
    --task "Research investment opportunities in AI infrastructure companies for 2024-2025" \
    --loops-per-agent 3 \
    --worker-model-name gpt-4 \
    --verbose
```

### Policy Analysis

```bash
swarms heavy-swarm \
    --task "Analyze the impact of proposed AI regulations on tech startups in the United States" \
    --loops-per-agent 2 \
    --verbose
```

### Due Diligence

```bash
swarms heavy-swarm \
    --task "Conduct technology due diligence for acquiring a fintech startup focusing on payment processing" \
    --loops-per-agent 3 \
    --question-agent-model-name gpt-4 \
    --worker-model-name gpt-4 \
    --verbose
```

---

## Workflow Visualization

```
┌─────────────────────────────────────────────────────────────────┐
│                            User Task                            │
│            "Analyze the impact of AI on healthcare"             │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                         Question Agent                          │
│   Decomposes task into specialized questions:                   │
│   - What are current AI applications in healthcare?             │
│   - What are the regulatory challenges?                         │
│   - What is the market size and growth?                         │
│   - What are the key players and competitors?                   │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
     ┌─────────────┬─────────────┬─────────────┬─────────────┐
     │  Research   │  Analysis   │  Research   │   Writing   │
     │   Agent 1   │    Agent    │   Agent 2   │    Agent    │
     └─────────────┴─────────────┴─────────────┴─────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                     Synthesis & Integration                     │
│                   Combines all agent outputs                    │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                      Comprehensive Report                       │
│   - Executive Summary                                           │
│   - Detailed Findings                                           │
│   - Analysis & Insights                                         │
│   - Recommendations                                             │
└─────────────────────────────────────────────────────────────────┘
```

---

## Scripting Examples

### Research Pipeline

```bash
#!/bin/bash
# research_pipeline.sh

TOPICS=(
    "AI in manufacturing"
    "Blockchain in supply chain"
    "Edge computing in IoT"
)

for topic in "${TOPICS[@]}"; do
    echo "Researching: $topic"
    OUTPUT_FILE="research_$(echo "$topic" | tr ' ' '_').txt"

    swarms heavy-swarm \
        --task "Comprehensive analysis of $topic: market size, key players, trends, and opportunities" \
        --loops-per-agent 2 \
        --verbose > "$OUTPUT_FILE"

    echo "Saved to: $OUTPUT_FILE"
done
```

### Daily Market Analysis

```bash
#!/bin/bash
# daily_market.sh

DATE=$(date +%Y-%m-%d)
OUTPUT_FILE="market_analysis_$DATE.txt"

echo "Daily Market Analysis - $DATE" > "$OUTPUT_FILE"
echo "==============================" >> "$OUTPUT_FILE"

swarms heavy-swarm \
    --task "Analyze today's key market movements, notable news, and outlook for tomorrow. Focus on tech, healthcare, and energy sectors." \
    --loops-per-agent 2 \
    --question-agent-model-name gpt-4 \
    --worker-model-name gpt-4 \
    --verbose >> "$OUTPUT_FILE"

echo "Analysis complete: $OUTPUT_FILE"
```

### CI/CD Integration

```yaml
# .github/workflows/heavy-swarm-research.yml
name: Weekly Heavy Swarm Research

on:
  schedule:
    - cron: '0 6 * * 1'  # Every Monday at 6 AM

jobs:
  research:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install Swarms
        run: pip install swarms

      - name: Run Heavy Swarm Research
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          swarms heavy-swarm \
            --task "Weekly technology trends and market analysis report" \
            --loops-per-agent 3 \
            --question-agent-model-name gpt-4 \
            --worker-model-name gpt-4 \
            --verbose > weekly_research.txt

      - name: Upload Results
        uses: actions/upload-artifact@v3
        with:
          name: weekly-research
          path: weekly_research.txt
```

---

## Configuration Recommendations

### Quick Analysis (Cost-Effective)

```bash
swarms heavy-swarm \
    --task "Quick overview of [topic]" \
    --loops-per-agent 1 \
    --question-agent-model-name gpt-4o-mini \
    --worker-model-name gpt-4o-mini
```

### Standard Research

```bash
swarms heavy-swarm \
    --task "Detailed analysis of [topic]" \
    --loops-per-agent 2 \
    --verbose
```

### Deep Dive (Comprehensive)

```bash
swarms heavy-swarm \
    --task "Comprehensive research on [topic]" \
    --loops-per-agent 3 \
    --question-agent-model-name gpt-4 \
    --worker-model-name gpt-4 \
    --verbose
```

### Exploratory (Variable Depth)

```bash
swarms heavy-swarm \
    --task "Explore [topic] with varying depth" \
    --random-loops-per-agent \
    --verbose
```

---

## Output Processing

### Save to File

```bash
swarms heavy-swarm --task "Your task" > report.txt 2>&1
```
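
If you want to watch the report stream by while still keeping a copy, you can pipe through `tee` instead of a plain redirect (this is standard coreutils behavior, not a Swarms feature):

```shell
# Stream output to the terminal and save a copy at the same time
swarms heavy-swarm --task "Your task" 2>&1 | tee report.txt
```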

### Extract Sections

```bash
# Get executive summary
swarms heavy-swarm --task "Your task" | grep -A 50 "Executive Summary"

# Get recommendations
swarms heavy-swarm --task "Your task" | grep -A 20 "Recommendations"
```

### Timestamp Output

```bash
swarms heavy-swarm --task "Your task" | while IFS= read -r line; do
    echo "[$(date '+%H:%M:%S')] $line"
done
```

---

## Best Practices

!!! tip "Task Formulation"
    - Be specific about what you want analyzed
    - Include scope and constraints
    - Specify desired output format

!!! tip "Loop Configuration"
    - Use `--loops-per-agent 1` for quick overviews
    - Use `--loops-per-agent 2-3` for detailed analysis
    - Higher loops = more comprehensive but slower

!!! tip "Model Selection"
    - Use `gpt-4o-mini` for cost-effective analysis
    - Use `gpt-4` for complex, nuanced topics
    - Match model to task complexity

!!! warning "Performance Notes"
    - Deep analysis (3+ loops) may take several minutes
    - Higher loops increase API costs
    - Use `--verbose` to monitor progress

---

## Comparison: LLM Council vs Heavy Swarm

| Feature | LLM Council | Heavy Swarm |
|---------|-------------|-------------|
| **Focus** | Collaborative decision-making | Comprehensive task analysis |
| **Workflow** | Parallel responses + peer review | Task decomposition + parallel research |
| **Best For** | Questions with multiple viewpoints | Complex research and analysis tasks |
| **Output** | Synthesized consensus | Detailed research report |
| **Speed** | Faster | More thorough but slower |

---

## Next Steps

- [CLI LLM Council Guide](./cli_llm_council_guide.md) - Collaborative decisions
- [CLI Reference](./cli_reference.md) - Complete command documentation
- [Heavy Swarm Python API](../structs/heavy_swarm.md) - Programmatic usage
@ -0,0 +1,272 @@

# CLI LLM Council Guide: Collaborative Multi-Agent Decisions

Run the LLM Council directly from the command line for collaborative decision-making with multiple AI agents through peer review and synthesis.

## Overview

The LLM Council creates a collaborative environment where:

1. **Multiple Perspectives**: Each council member (GPT-5.1, Gemini, Claude, Grok) independently responds
2. **Peer Review**: Members evaluate and rank each other's anonymized responses
3. **Synthesis**: A Chairman synthesizes the best elements into a final answer

---

## Basic Usage

### Step 1: Run a Simple Query

```bash
swarms llm-council --task "What are the best practices for code review?"
```

### Step 2: Enable Verbose Output

```bash
swarms llm-council --task "How should we approach microservices architecture?" --verbose
```

### Step 3: Process the Results

The council returns:

- Individual member responses
- Peer review rankings
- Synthesized final answer

---

## Use Case Examples

### Strategic Business Decisions

```bash
swarms llm-council --task "Should our SaaS startup prioritize product-led growth or sales-led growth? Consider market size, CAC, and scalability."
```

### Technology Evaluation

```bash
swarms llm-council --task "Compare Kubernetes vs Docker Swarm for a startup with 10 microservices. Consider cost, complexity, and scalability."
```

### Investment Analysis

```bash
swarms llm-council --task "Evaluate investment opportunities in AI infrastructure companies. Consider market size, competition, and growth potential."
```

### Policy Analysis

```bash
swarms llm-council --task "What are the implications of implementing AI regulation similar to the EU AI Act in the United States?"
```

### Research Questions

```bash
swarms llm-council --task "What are the most promising approaches to achieving AGI? Evaluate different research paradigms."
```

---

## Council Members

The default council includes:

| Member | Model | Strengths |
|--------|-------|-----------|
| **GPT-5.1 Councilor** | gpt-5.1 | Analytical, comprehensive |
| **Gemini 3 Pro Councilor** | gemini-3-pro | Concise, well-processed |
| **Claude Sonnet 4.5 Councilor** | claude-sonnet-4.5 | Thoughtful, balanced |
| **Grok-4 Councilor** | grok-4 | Creative, innovative |
| **Chairman** | gpt-5.1 | Synthesizes final answer |

---

## Workflow Visualization

```
┌─────────────────────────────────────────────────────────────────┐
│                           User Query                            │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
     ┌─────────────┬─────────────┬─────────────┬─────────────┐
     │   GPT-5.1   │  Gemini 3   │ Claude 4.5  │   Grok-4    │
     │  Councilor  │  Councilor  │  Councilor  │  Councilor  │
     └─────────────┴─────────────┴─────────────┴─────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                     Anonymized Peer Review                      │
│          Each member ranks all responses (anonymized)           │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                            Chairman                             │
│          Synthesizes best elements from all responses           │
└─────────────────────────────────────────────────────────────────┘
                                 │
                                 ▼
┌─────────────────────────────────────────────────────────────────┐
│                    Final Synthesized Answer                     │
└─────────────────────────────────────────────────────────────────┘
```

---

## Scripting Examples

### Batch Processing

```bash
#!/bin/bash
# council_batch.sh

QUESTIONS=(
    "What is the future of remote work?"
    "How will AI impact healthcare in 5 years?"
    "What are the risks of cryptocurrency adoption?"
)

for question in "${QUESTIONS[@]}"; do
    echo "=== Processing: $question ===" >> council_results.txt
    swarms llm-council --task "$question" >> council_results.txt
    echo "" >> council_results.txt
done
```

### Weekly Analysis Script

```bash
#!/bin/bash
# weekly_council.sh

DATE=$(date +%Y-%m-%d)
OUTPUT_FILE="council_analysis_$DATE.txt"

echo "Weekly Market Analysis - $DATE" > "$OUTPUT_FILE"
echo "================================" >> "$OUTPUT_FILE"

swarms llm-council \
    --task "Analyze current tech sector market conditions and provide outlook for the coming week" \
    --verbose >> "$OUTPUT_FILE"

echo "Analysis complete: $OUTPUT_FILE"
```

### CI/CD Integration

```yaml
# .github/workflows/council-analysis.yml
name: Weekly Council Analysis

on:
  schedule:
    - cron: '0 8 * * 1'  # Every Monday at 8 AM

jobs:
  council:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install Swarms
        run: pip install swarms

      - name: Run Council Analysis
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          swarms llm-council \
            --task "Provide weekly technology trends analysis" \
            --verbose > weekly_analysis.txt

      - name: Upload Results
        uses: actions/upload-artifact@v3
        with:
          name: council-analysis
          path: weekly_analysis.txt
```

---

## Output Processing

### Capture to File

```bash
swarms llm-council --task "Your question" > council_output.txt 2>&1
```

### Extract Sections

```bash
# Get just the final synthesis
swarms llm-council --task "Your question" | grep -A 100 "FINAL SYNTHESIS"
```

### JSON Processing

```bash
# Pipe to Python for processing
swarms llm-council --task "Your question" | python3 -c "
import sys
content = sys.stdin.read()
# Process content as needed
print(content)
"
```

---

## Best Practices

!!! tip "Query Formulation"
    - Be specific and detailed in your queries
    - Include context and constraints
    - Ask for specific types of analysis

!!! tip "When to Use LLM Council"
    - Complex decisions requiring multiple perspectives
    - Research questions needing comprehensive analysis
    - Strategic planning and evaluation
    - Questions with trade-offs to consider

!!! tip "Performance Tips"
    - Use `--verbose` for detailed progress tracking
    - Expect responses to take 30-60 seconds
    - Complex queries may take longer

!!! warning "Limitations"
    - Requires multiple API calls (higher cost)
    - Not suitable for simple factual queries
    - Response time is longer than single-agent queries

---

## Command Reference

```bash
swarms llm-council --task "<query>" [--verbose]
```

| Option | Type | Default | Description |
|--------|------|---------|-------------|
| `--task` | string | **Required** | Query for the council |
| `--verbose` | flag | False | Enable detailed output |

---

## Next Steps

- [CLI Heavy Swarm Guide](./cli_heavy_swarm_guide.md) - Complex task analysis
- [CLI Reference](./cli_reference.md) - Complete command documentation
- [LLM Council Python API](../examples/llm_council_quickstart.md) - Programmatic usage
@ -0,0 +1,115 @@

# CLI Quickstart: Getting Started in 3 Steps

Get up and running with the Swarms CLI in minutes. This guide covers installation, setup verification, and running your first commands.

## Step 1: Install Swarms

Install the Swarms package, which includes the CLI:

```bash
pip install swarms
```

Verify installation:

```bash
swarms --help
```

You should see the Swarms CLI banner with available commands.

---

## Step 2: Configure Environment

Set up your API keys and workspace:

```bash
# Set your OpenAI API key (or other provider)
export OPENAI_API_KEY="your-openai-api-key"

# Optional: Set workspace directory
export WORKSPACE_DIR="./agent_workspace"
```
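
Before moving on, you can confirm the key is actually visible to your current shell session with a plain POSIX check (independent of the Swarms CLI):

```shell
# Report whether the key is present in the current shell environment
if [ -z "$OPENAI_API_KEY" ]; then
    echo "OPENAI_API_KEY is not set" >&2
else
    echo "API key detected"
fi
```

If this prints the warning, re-run the `export` above (or open a new terminal after editing your shell profile).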

Or create a `.env` file in your project directory:

```
OPENAI_API_KEY=your-openai-api-key
WORKSPACE_DIR=./agent_workspace
```

Verify your setup:

```bash
swarms setup-check --verbose
```

Expected output:

```
🔍 Running Swarms Environment Setup Check

┌─────────────────────────────────────────────────────────────────────────────┐
│                          Environment Check Results                          │
├─────────┬─────────────────────────┬─────────────────────────────────────────┤
│  Status │ Check                   │ Details                                 │
├─────────┼─────────────────────────┼─────────────────────────────────────────┤
│    ✓    │ Python Version          │ Python 3.11.5                           │
│    ✓    │ Swarms Version          │ Current version: 8.7.0                  │
│    ✓    │ API Keys                │ API keys found: OPENAI_API_KEY          │
│    ✓    │ Dependencies            │ All required dependencies available     │
└─────────┴─────────────────────────┴─────────────────────────────────────────┘
```

---

## Step 3: Run Your First Command

Try these commands to verify everything works:

### View All Features

```bash
swarms features
```

### Create a Simple Agent

```bash
swarms agent \
    --name "Assistant" \
    --description "A helpful AI assistant" \
    --system-prompt "You are a helpful assistant that provides clear, concise answers." \
    --task "What are the benefits of renewable energy?" \
    --model-name "gpt-4o-mini"
```

### Run LLM Council

```bash
swarms llm-council --task "What are the best practices for code review?"
```

---

## Quick Reference

| Command | Description |
|---------|-------------|
| `swarms --help` | Show all available commands |
| `swarms features` | Display all CLI features |
| `swarms setup-check` | Verify environment setup |
| `swarms onboarding` | Interactive setup wizard |
| `swarms agent` | Create and run a custom agent |
| `swarms llm-council` | Run collaborative LLM council |
| `swarms heavy-swarm` | Run comprehensive analysis swarm |

---

## Next Steps

- [CLI Agent Guide](./cli_agent_guide.md) - Create custom agents from CLI
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - Run LLM Council and Heavy Swarm
- [CLI Reference](./cli_reference.md) - Complete command documentation
@ -0,0 +1,354 @@
|
||||
# CLI YAML Configuration Guide: Run Agents from Config Files
|
||||
|
||||
Run multiple agents from YAML configuration files for reproducible, version-controlled agent deployments.
|
||||
|
||||
## Basic YAML Configuration
|
||||
|
||||
### Step 1: Create YAML Config File
|
||||
|
||||
Create a file named `agents.yaml`:
|
||||
|
||||
```yaml
agents:
  - name: "Research-Agent"
    description: "AI research specialist"
    model_name: "gpt-4o-mini"
    system_prompt: |
      You are an expert researcher.
      Provide comprehensive, well-structured research summaries.
      Include key insights and data points.
    temperature: 0.3
    max_loops: 2
    task: "Research current trends in renewable energy"

  - name: "Analysis-Agent"
    description: "Data analysis specialist"
    model_name: "gpt-4o-mini"
    system_prompt: |
      You are a data analyst.
      Provide detailed statistical analysis and insights.
      Use data-driven reasoning.
    temperature: 0.2
    max_loops: 3
    task: "Analyze market opportunities in the EV sector"
```

### Step 2: Run Agents from YAML

```bash
swarms run-agents --yaml-file agents.yaml
```

### Step 3: View Results

Results are displayed in the terminal with formatted output for each agent.

---
## Complete YAML Schema

### Agent Configuration Options

```yaml
agents:
  - name: "Agent-Name"                      # Required: agent identifier
    description: "Agent description"        # Required: what the agent does
    model_name: "gpt-4o-mini"               # Model to use
    system_prompt: "Your instructions"      # Agent's system prompt
    temperature: 0.5                        # Creativity (0.0-2.0)
    max_loops: 3                            # Maximum execution loops
    task: "Task to execute"                 # Task for this agent

    # Optional settings
    context_length: 8192                    # Context window size
    streaming_on: true                      # Enable streaming
    verbose: true                           # Verbose output
    autosave: true                          # Auto-save state
    saved_state_path: "./states/agent.json" # State file path
    output_type: "json"                     # Output format
    retry_attempts: 3                       # Retries on failure
```
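A config that violates this schema usually fails only at run time, so a quick pre-flight check can save an API call. The sketch below is a hypothetical helper, not part of the Swarms CLI: it checks an already-parsed config dict against the required fields and the documented `temperature` range (in practice you would obtain the dict with `yaml.safe_load`).

```python
# Hypothetical pre-flight check for the schema above -- not part of the
# Swarms CLI. Operates on an already-parsed dict (e.g. from yaml.safe_load).
REQUIRED_FIELDS = {"name", "description"}

def validate_config(config: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means OK."""
    agents = config.get("agents")
    if not isinstance(agents, list) or not agents:
        return ["config must contain a non-empty 'agents' list"]
    problems = []
    for i, agent in enumerate(agents):
        missing = REQUIRED_FIELDS - set(agent)
        if missing:
            problems.append(f"agent #{i}: missing {sorted(missing)}")
        temp = agent.get("temperature")
        if temp is not None and not 0.0 <= temp <= 2.0:
            problems.append(f"agent #{i}: temperature {temp} outside 0.0-2.0")
    return problems

# Example: one valid agent, one missing its description
config = {
    "agents": [
        {"name": "Research-Agent", "description": "AI research specialist",
         "temperature": 0.3},
        {"name": "Broken-Agent", "temperature": 3.0},
    ]
}
print(validate_config(config))
```

Running the validator before `swarms run-agents` catches missing fields without spending tokens.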

---

## Use Case Examples

### Multi-Agent Research Pipeline

```yaml
# research_pipeline.yaml
agents:
  - name: "Data-Collector"
    description: "Collects and organizes research data"
    model_name: "gpt-4o-mini"
    system_prompt: |
      You are a research data collector.
      Gather comprehensive information on the given topic.
      Organize findings into structured categories.
    temperature: 0.3
    max_loops: 2
    task: "Collect data on AI applications in healthcare"

  - name: "Trend-Analyst"
    description: "Analyzes trends and patterns"
    model_name: "gpt-4o-mini"
    system_prompt: |
      You are a trend analyst.
      Identify emerging patterns and trends from data.
      Provide statistical insights and projections.
    temperature: 0.2
    max_loops: 2
    task: "Analyze AI healthcare adoption trends from 2020-2024"

  - name: "Report-Writer"
    description: "Creates comprehensive reports"
    model_name: "gpt-4"
    system_prompt: |
      You are a professional report writer.
      Create comprehensive, well-structured reports.
      Include executive summaries and key recommendations.
    temperature: 0.4
    max_loops: 1
    task: "Write an executive summary on AI in healthcare"
```

Run:

```bash
swarms run-agents --yaml-file research_pipeline.yaml
```

### Financial Analysis Team

```yaml
# financial_team.yaml
agents:
  - name: "Market-Analyst"
    description: "Analyzes market conditions"
    model_name: "gpt-4"
    system_prompt: |
      You are a CFA-certified market analyst.
      Provide detailed market analysis with technical indicators.
      Include risk assessments and market outlook.
    temperature: 0.2
    max_loops: 3
    task: "Analyze current S&P 500 market conditions"

  - name: "Risk-Assessor"
    description: "Evaluates investment risks"
    model_name: "gpt-4"
    system_prompt: |
      You are a risk management specialist.
      Evaluate investment risks and provide mitigation strategies.
      Use quantitative risk metrics.
    temperature: 0.1
    max_loops: 2
    task: "Assess risks in current tech sector investments"

  - name: "Portfolio-Advisor"
    description: "Provides portfolio recommendations"
    model_name: "gpt-4"
    system_prompt: |
      You are a portfolio advisor.
      Provide asset allocation recommendations.
      Consider risk tolerance and market conditions.
    temperature: 0.3
    max_loops: 2
    task: "Recommend portfolio adjustments for Q4 2024"
```

### Content Creation Pipeline

```yaml
# content_pipeline.yaml
agents:
  - name: "Topic-Researcher"
    description: "Researches content topics"
    model_name: "gpt-4o-mini"
    system_prompt: |
      You are a content researcher.
      Research topics thoroughly and identify key angles.
      Find unique perspectives and data points.
    temperature: 0.4
    max_loops: 2
    task: "Research content angles for 'Future of Remote Work'"

  - name: "Content-Writer"
    description: "Writes engaging content"
    model_name: "gpt-4"
    system_prompt: |
      You are a professional content writer.
      Write engaging, SEO-friendly content.
      Use clear structure with headers and bullet points.
    temperature: 0.7
    max_loops: 2
    task: "Write a blog post about remote work trends"

  - name: "Editor"
    description: "Edits and polishes content"
    model_name: "gpt-4o-mini"
    system_prompt: |
      You are a professional editor.
      Review content for clarity, grammar, and style.
      Suggest improvements and optimize for readability.
    temperature: 0.2
    max_loops: 1
    task: "Edit and polish the blog post for publication"
```

---

## Advanced Configuration

### Environment Variables in YAML

You can reference environment variables:

```yaml
agents:
  - name: "API-Agent"
    description: "Agent with API access"
    model_name: "${MODEL_NAME:-gpt-4o-mini}"  # Default if not set
    system_prompt: "You are an API integration specialist."
    task: "Test API integration"
```
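How the CLI resolves these placeholders is not spelled out here, but the `${VAR:-default}` convention itself is easy to illustrate. The snippet below is a sketch of that convention using a small regex, not the actual Swarms resolver:

```python
import os
import re

# Illustrative expansion of ${VAR} / ${VAR:-default} placeholders.
# This shows the convention used above; it is NOT the Swarms implementation.
_PATTERN = re.compile(r"\$\{(\w+)(?::-([^}]*))?\}")

def expand(value: str) -> str:
    """Replace each placeholder with the environment value or its default."""
    def repl(match: re.Match) -> str:
        name, default = match.group(1), match.group(2)
        return os.environ.get(name, default if default is not None else "")
    return _PATTERN.sub(repl, value)

os.environ.pop("MODEL_NAME", None)
print(expand("${MODEL_NAME:-gpt-4o-mini}"))  # prints "gpt-4o-mini" (fallback)

os.environ["MODEL_NAME"] = "gpt-4"
print(expand("${MODEL_NAME:-gpt-4o-mini}"))  # prints "gpt-4"
```

The same pattern lets one YAML file serve development (cheap model) and production (stronger model) by exporting a single variable.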

### Multiple Config Files

Organize agents by purpose:

```bash
# Run different configurations
swarms run-agents --yaml-file research_agents.yaml
swarms run-agents --yaml-file analysis_agents.yaml
swarms run-agents --yaml-file reporting_agents.yaml
```

### Pipeline Script

```bash
#!/bin/bash
# run_pipeline.sh

echo "Starting research pipeline..."
swarms run-agents --yaml-file configs/research.yaml

echo "Starting analysis pipeline..."
swarms run-agents --yaml-file configs/analysis.yaml

echo "Starting reporting pipeline..."
swarms run-agents --yaml-file configs/reporting.yaml

echo "Pipeline complete!"
```

---

## Markdown Configuration

### Alternative: Load from Markdown

Create agents using markdown with YAML frontmatter:

```markdown
---
name: Research Agent
description: AI research specialist
model_name: gpt-4o-mini
temperature: 0.3
max_loops: 2
---

You are an expert researcher specializing in technology trends.
Provide comprehensive research summaries with:
- Key findings and insights
- Data points and statistics
- Recommendations and implications

Always cite sources when available and maintain objectivity.
```

Load from markdown:

```bash
# Load single file
swarms load-markdown --markdown-path ./agents/research_agent.md

# Load directory (concurrent processing)
swarms load-markdown --markdown-path ./agents/ --concurrent
```
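A markdown agent file like the one above is just `---`-delimited YAML frontmatter followed by the prompt body. The sketch below shows a minimal split of that format; it is an illustration of the file layout, not how `load-markdown` is implemented, and it parses the frontmatter as plain `key: value` lines where a real loader would use `yaml.safe_load`:

```python
def split_frontmatter(text: str) -> tuple[dict, str]:
    """Split '---'-delimited frontmatter from the markdown body.

    Sketch only: frontmatter is read as simple 'key: value' lines;
    a real loader would parse it with yaml.safe_load.
    """
    lines = text.strip().splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text  # no frontmatter block at all
    end = lines.index("---", 1)  # locate the closing delimiter
    meta = {}
    for line in lines[1:end]:
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    body = "\n".join(lines[end + 1:]).strip()
    return meta, body

doc = """---
name: Research Agent
model_name: gpt-4o-mini
---

You are an expert researcher."""
meta, prompt = split_frontmatter(doc)
print(meta["name"])  # prints "Research Agent"
print(prompt)        # prints "You are an expert researcher."
```

The frontmatter carries the agent configuration while everything after the second `---` becomes the system prompt.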

---

## Best Practices

!!! tip "Configuration Management"

    - Version control your YAML files
    - Use descriptive agent names
    - Document each agent's purpose in its description field

!!! tip "Template Organization"

    ```
    configs/
    ├── research/
    │   ├── tech_research.yaml
    │   └── market_research.yaml
    ├── analysis/
    │   ├── financial_analysis.yaml
    │   └── data_analysis.yaml
    └── production/
        └── prod_agents.yaml
    ```

!!! tip "Testing Configurations"

    - Test with the `--verbose` flag first
    - Use lower `max_loops` values for testing
    - Start with `gpt-4o-mini` for cost efficiency

!!! warning "Common Pitfalls"

    - Ensure proper YAML indentation (2 spaces)
    - Quote strings that contain special characters
    - Use `|` for multi-line prompts

---

## CI/CD Integration

### GitHub Actions

```yaml
# .github/workflows/run-agents.yml
name: Run Agent Pipeline

on:
  schedule:
    - cron: '0 9 * * 1'  # Every Monday at 9 AM

jobs:
  run-agents:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install Swarms
        run: pip install swarms

      - name: Run Agents
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: swarms run-agents --yaml-file agents.yaml
```

---

## Next Steps

- [CLI Agent Guide](./cli_agent_guide.md) - Create agents from command line
- [CLI Multi-Agent Guide](../examples/cli_multi_agent_quickstart.md) - LLM Council and Heavy Swarm
- [CLI Reference](./cli_reference.md) - Complete command documentation

import re

from swarms.structs.maker import MAKER

# Define task-specific functions for a counting task
def format_counting_prompt(
    task, state, step_idx, previous_result
):
    """Format prompt for counting task."""
    if previous_result is None:
        return f"{task}\nThis is step 1. What is the first number? Reply with just the number."
    return f"{task}\nThe previous number was {previous_result}. What is the next number? Reply with just the number."


def parse_counting_response(response):
    """Parse the counting response to extract the number."""
    numbers = re.findall(r"\d+", response)
    if numbers:
        return int(numbers[0])
    return response.strip()


def validate_counting_response(response, max_tokens):
    """Validate counting response."""
    if len(response) > max_tokens * 4:
        return False
    return bool(re.search(r"\d+", response))


# Create MAKER instance
maker = MAKER(
    name="CountingExample",
    description="MAKER example: counting numbers",
    model_name="gpt-4o-mini",
    system_prompt="You are a helpful assistant. When asked to count, respond with just the number, nothing else.",
    format_prompt=format_counting_prompt,
    parse_response=parse_counting_response,
    validate_response=validate_counting_response,
    k=2,
    max_tokens=100,
    temperature=0.1,
    verbose=True,
)

# Run the solver with the task as the main input
results = maker.run(
    task="Count from 1 to 10, one number at a time",
    max_steps=5,
)

print(results)

# Show statistics
stats = maker.get_statistics()
print(stats)