# CLI Multi-Agent Features: 3-Step Quickstart Guide
Run LLM Council and Heavy Swarm directly from the command line for seamless DevOps integration. Execute sophisticated multi-agent workflows without writing Python code.
## Overview
| Feature | Description |
|---|---|
| LLM Council CLI | Run collaborative decision-making from terminal |
| Heavy Swarm CLI | Execute comprehensive research swarms |
| DevOps Ready | Integrate into CI/CD pipelines and scripts |
| Configurable | Full parameter control from command line |
## Step 1: Install and Verify

Ensure Swarms is installed and verify CLI access:

```bash
# Install swarms
pip install swarms

# Verify CLI is available
swarms --help
```

You should see the Swarms CLI banner and available commands.
## Step 2: Set Environment Variables

Configure your API keys:

```bash
# Set your OpenAI API key (or other provider)
export OPENAI_API_KEY="your-openai-api-key"

# Optional: Set workspace directory
export WORKSPACE_DIR="./agent_workspace"
```

Or add them to your `.env` file:

```bash
OPENAI_API_KEY=your-openai-api-key
WORKSPACE_DIR=./agent_workspace
```
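Note that a plain shell does not read `.env` files automatically. One way to load the file before calling the CLI is a small helper like the following (a sketch; `load_env` is a local function, not part of the swarms CLI, and it assumes simple `KEY=value` lines):

```bash
# Export every KEY=value pair from a .env-style file into the current shell.
# Assumes plain assignments with no spaces around '=' and no shell metacharacters.
load_env() {
    [ -f "$1" ] || return 0   # silently skip if the file does not exist
    set -a                    # auto-export all variables assigned while sourcing
    . "$1"
    set +a
}

load_env .env
```

Call `load_env .env` at the top of a script so the exported variables are visible to any `swarms` command that follows.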
## Step 3: Run Multi-Agent Commands

### LLM Council

Run a collaborative council of AI agents:

```bash
# Basic usage
swarms llm-council --task "What is the best approach to implement microservices architecture?"

# With verbose output
swarms llm-council --task "Evaluate investment opportunities in AI startups" --verbose
```
### Heavy Swarm

Run comprehensive research and analysis:

```bash
# Basic usage
swarms heavy-swarm --task "Analyze the current state of quantum computing"

# With configuration options
swarms heavy-swarm \
    --task "Research renewable energy market trends" \
    --loops-per-agent 2 \
    --question-agent-model-name gpt-4o-mini \
    --worker-model-name gpt-4o-mini \
    --verbose
```
## Complete CLI Reference

### LLM Council Command

```bash
swarms llm-council --task "<your query>" [options]
```

| Option | Description |
|---|---|
| `--task` | Required. The query or question for the council |
| `--verbose` | Enable detailed output logging |

**Examples:**

```bash
# Strategic decision
swarms llm-council --task "Should our startup pivot from B2B to B2C?"

# Technical evaluation
swarms llm-council --task "Compare React vs Vue for enterprise applications"

# Business analysis
swarms llm-council --task "What are the risks of expanding to European markets?"
```
### Heavy Swarm Command

```bash
swarms heavy-swarm --task "<your task>" [options]
```

| Option | Default | Description |
|---|---|---|
| `--task` | - | Required. The research task |
| `--loops-per-agent` | 1 | Number of loops per agent |
| `--question-agent-model-name` | gpt-4o-mini | Model for the question agent |
| `--worker-model-name` | gpt-4o-mini | Model for worker agents |
| `--random-loops-per-agent` | False | Randomize the number of loops per agent |
| `--verbose` | False | Enable detailed output |
**Examples:**

```bash
# Comprehensive research
swarms heavy-swarm --task "Research the impact of AI on healthcare diagnostics" --verbose

# With custom models
swarms heavy-swarm \
    --task "Analyze cryptocurrency regulation trends globally" \
    --question-agent-model-name gpt-4 \
    --worker-model-name gpt-4 \
    --loops-per-agent 3

# Quick analysis
swarms heavy-swarm --task "Summarize recent advances in battery technology"
```
## Integration Examples

### Bash Script Integration

```bash
#!/bin/bash
# research_script.sh

TOPICS=(
    "AI in manufacturing"
    "Autonomous vehicles market"
    "Edge computing trends"
)

for topic in "${TOPICS[@]}"; do
    echo "Researching: $topic"
    swarms heavy-swarm --task "Analyze $topic" --verbose >> research_output.txt
    echo "---" >> research_output.txt
done
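A variation on the loop above writes each topic to its own file instead of one combined log, which is easier to archive from a pipeline. This is a sketch: `slugify` is a local helper defined here, not a swarms command, and the loop is guarded so it is a no-op on machines without the CLI installed.

```bash
#!/bin/bash
# Turn an arbitrary topic string into a safe lowercase filename fragment.
slugify() {
    echo "$1" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | sed 's/-*$//'
}

TOPICS=("AI in manufacturing" "Edge computing trends")

# Only run the swarm loop when the swarms CLI is actually on PATH.
if command -v swarms >/dev/null 2>&1; then
    for topic in "${TOPICS[@]}"; do
        swarms heavy-swarm --task "Analyze $topic" > "research_$(slugify "$topic").txt"
    done
fi
```

For example, `slugify "AI in manufacturing"` yields `ai-in-manufacturing`, so the output lands in `research_ai-in-manufacturing.txt`.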
### CI/CD Pipeline (GitHub Actions)

```yaml
name: AI Research Pipeline

on:
  schedule:
    - cron: '0 9 * * 1'  # Every Monday at 9 AM

jobs:
  research:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        run: pip install swarms

      - name: Run LLM Council
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          swarms llm-council \
            --task "Weekly market analysis for tech sector" \
            --verbose > weekly_analysis.txt

      - name: Upload results
        uses: actions/upload-artifact@v3
        with:
          name: analysis-results
          path: weekly_analysis.txt
```
### Docker Integration

```dockerfile
FROM python:3.10-slim

RUN pip install swarms

ENV OPENAI_API_KEY=""
ENV WORKSPACE_DIR="/workspace"

WORKDIR /workspace

ENTRYPOINT ["swarms"]
CMD ["--help"]
```

```bash
# Build and run
docker build -t swarms-cli .
docker run -e OPENAI_API_KEY="your-key" swarms-cli \
    llm-council --task "Analyze market trends"
```
## Other Useful CLI Commands

### Setup Check

Verify your environment is properly configured:

```bash
swarms setup-check --verbose
```

### Run Single Agent

Execute a single agent task:

```bash
swarms agent \
    --name "Research-Agent" \
    --task "Summarize recent AI developments" \
    --model "gpt-4o-mini" \
    --max-loops 1
```

### Auto Swarm

Automatically generate and run a swarm configuration:

```bash
swarms autoswarm --task "Build a content analysis pipeline" --model gpt-4
```

### Show All Commands

Display all available CLI features:

```bash
swarms show-all
```
## Output Handling

### Capture Output to File

```bash
swarms llm-council --task "Evaluate cloud providers" > analysis.txt 2>&1
```

### JSON Output Processing

```bash
swarms llm-council --task "Compare databases" | python -c "
import sys
import json

# Pretty-print any JSON lines; pass plain-text lines through unchanged
for line in sys.stdin:
    line = line.strip()
    try:
        print(json.dumps(json.loads(line), indent=2))
    except json.JSONDecodeError:
        print(line)
"
```

### Pipe to Other Tools

```bash
swarms heavy-swarm --task "Research topic" | tee research.log | grep "RESULT"
```
## Troubleshooting

### Common Issues

| Issue | Solution |
|---|---|
| "Command not found" | Ensure `pip install swarms` completed successfully |
| "API key not set" | Export the `OPENAI_API_KEY` environment variable |
| "Task cannot be empty" | Always provide the `--task` argument |
| Timeout errors | Check network connectivity and API rate limits |
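The first two issues above can be diagnosed from a script before any swarm runs. A minimal sketch (`check_cli` and `check_key` are local helpers defined here, not swarms commands):

```bash
# Report the two most common setup failures without aborting the script.
check_cli() {
    command -v swarms >/dev/null 2>&1 \
        && echo "swarms CLI: found" \
        || echo "swarms CLI: NOT on PATH (try: pip install swarms)"
}

check_key() {
    [ -n "$OPENAI_API_KEY" ] \
        && echo "OPENAI_API_KEY: set" \
        || echo "OPENAI_API_KEY: NOT set (try: export OPENAI_API_KEY=...)"
}

check_cli
check_key
```

Run both checks at the top of any integration script so failures surface immediately rather than midway through a pipeline.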
### Debug Mode

Run with verbose output for debugging:

```bash
swarms llm-council --task "Your query" --verbose 2>&1 | tee debug.log
```
## Next Steps
- Explore CLI Reference Documentation for all commands
- See CLI Examples for more use cases
- Learn about LLM Council Python API
- Try Heavy Swarm Documentation for advanced configuration