diff --git a/docs/llm.txt b/docs/llm.txt index 692944af..bfe3be1e 100644 --- a/docs/llm.txt +++ b/docs/llm.txt @@ -1,22 +1,4 @@ -# File: agent_deployment_solutions.md - -1. make agent api - fastapi -2. make agent cron job -3. agents that listen that could listen to events -4. run on startup, every time the machine starts -4. docker -5. kubernetes -6. aws or google cloud etc - - - -user -> build agent -> user now need deploy agent - -FAST - --------------------------------------------------- - -# File: concepts\limitations.md +# File: concepts/limitations.md # Limitations of Individual Agents @@ -181,7 +163,7 @@ The next section explores how [Multi-Agent Architecture](architecture.md) addres -------------------------------------------------- -# File: contributors\docs.md +# File: contributors/docs.md # Contributing to Swarms Documentation @@ -553,7 +535,7 @@ We look forward to your pull requests, feedback, and ideas. -------------------------------------------------- -# File: contributors\environment_setup.md +# File: contributors/environment_setup.md # Environment Setup Guide for Swarms Contributors @@ -1250,233 +1232,52 @@ Happy coding! 🚀 -------------------------------------------------- -# File: contributors\main.md - -# Contributing to Swarms: Building the Infrastructure for The Agentic Economy - -Multi-agent collaboration is the most important technology in human history. It will reshape civilization by enabling billions of autonomous agents to coordinate and solve problems at unprecedented scale. - -!!! success "The Foundation of Tomorrow" - **Swarms** is the foundational infrastructure powering this autonomous economy. By contributing, you're building the systems that will enable the next generation of intelligent automation. - -### What You're Building - -=== "Autonomous Systems" - **Autonomous Resource Allocation** - - Global supply chains and energy distribution optimized in real-time - -=== "Intelligence Networks" - **Distributed Decision Making** - - Collaborative intelligence networks across industries and governments - -=== "Smart Markets" - **Self-Organizing Markets** - - Agent-driven marketplaces that automatically balance supply and demand - -=== "Problem Solving" - **Collaborative Problem Solving** - - Massive agent swarms tackling climate change, disease, and scientific discovery - -=== "Infrastructure" - **Adaptive Infrastructure** - - Self-healing systems that evolve without human intervention - ---- - -## Why Contribute to Swarms? - -### :material-rocket-launch: Shape the Future of Civilization +# File: contributors/main.md -!!! abstract "Your Impact" - - Define standards for multi-agent communication protocols - - Build architectural patterns for distributed intelligence systems - - Create frameworks for deploying agent swarms in production - - Establish ethical guidelines for autonomous agent collaboration +# Contribute to Swarms -### :material-trophy: Recognition and Professional Development +Our mission is to accelerate the transition to a fully autonomous world economy by providing enterprise-grade, production-ready infrastructure that enables seamless deployment and orchestration of millions of autonomous agents. We are creating the operating system for the agent economy, and we need your help to achieve this goal. -!!! 
tip "Immediate Recognition" - - **Social Media Features** - All merged PRs showcased publicly - - **Bounty Programs** - Financial rewards for high-impact contributions - - **Fast-Track Hiring** - Priority consideration for core team positions - - **Community Spotlights** - Regular recognition and acknowledgments +Swarms is built by the community, for the community. We believe that collaborative development is the key to pushing the boundaries of what's possible with multi-agent AI. Your contributions are not only welcome—they are essential to our mission. [Learn more about why you should contribute to Swarms](https://docs.swarms.world/en/latest/contributors/main/) -!!! info "Career Benefits" - - Multi-agent expertise highly valued by AI industry - - Portfolio demonstrates cutting-edge technical skills - - Direct networking with leading researchers and companies - - Thought leadership opportunities in emerging field +### Why Contribute? -### :material-brain: Technical Expertise Development +By joining us, you have the opportunity to: -Master cutting-edge technologies: +* **Work on the Frontier of Agents:** Shape the future of autonomous agent technology and help build a production-grade, open-source framework. -| Technology Area | Skills You'll Develop | -|----------------|----------------------| -| **Swarm Intelligence** | Design sophisticated agent coordination mechanisms | -| **Distributed Computing** | Build scalable architectures for thousands of agents | -| **Communication Protocols** | Create novel interaction patterns | -| **Production AI** | Deploy and orchestrate enterprise-scale systems | -| **Research Implementation** | Turn cutting-edge papers into working code | +* **Join a Vibrant Community:** Collaborate with a passionate and growing group of agent developers, researchers, and agent enthusasits. -### :material-account-group: Research Community Access +* **Make a Tangible Impact:** Whether you're fixing a bug, adding a new feature, or improving documentation, your work will be used in real-world applications. -!!! note "Collaborative Environment" - - Work with experts from academic institutions and industry - - Regular technical seminars and research discussions - - Structured mentorship from experienced contributors - - Applied research opportunities with real-world impact +* **Learn and Grow:** Gain hands-on experience with advanced AI concepts and strengthen your software engineering skills. ---- +Discover more about our mission and the benefits of becoming a contributor in our official [**Contributor's Guide**](https://docs.swarms.world/en/latest/contributors/main/). -## Contribution Opportunities +### How to Get Started -=== "New Contributors" - ### :material-school: Perfect for Getting Started - - - **Documentation** - Improve guides, tutorials, and API references - - **Bug Reports** - Identify and document issues - - **Code Quality** - Participate in testing and review processes - - **Community Support** - Help users in forums and discussions +We've made it easy to start contributing. Here's how you can help: -=== "Experienced Developers" - ### :material-code-braces: Advanced Technical Work - - - **Core Architecture** - Design fundamental system components - - **Performance Optimization** - Enhance coordination and communication efficiency - - **Research Implementation** - Turn cutting-edge papers into working code - - **Integration Development** - Build connections with AI tools and platforms +1. 
**Find an Issue to Tackle:** The best way to begin is by visiting our [**contributing project board**](https://github.com/users/kyegomez/projects/1). Look for issues tagged with `good first issue`—these are specifically selected for new contributors. -=== "Researchers" - ### :material-flask: Research and Innovation - - - **Algorithm Development** - Implement novel multi-agent algorithms - - **Experimental Frameworks** - Create evaluation and benchmarking tools - - **Theoretical Contributions** - Develop research documentation and frameworks - - **Academic Collaboration** - Partner on funded research projects +2. **Report a Bug or Request a Feature:** Have a new idea or found something that isn't working right? We'd love to hear from you. Please [**file a Bug Report or Feature Request**](https://github.com/kyegomez/swarms/issues) on our GitHub Issues page. ---- +3. **Understand Our Workflow and Standards:** Before submitting your work, please review our complete [**Contribution Guidelines**](https://github.com/kyegomez/swarms/blob/master/CONTRIBUTING.md). To help maintain code quality, we also encourage you to read our guide on [**Code Cleanliness**](https://docs.swarms.world/en/latest/swarms/framework/code_cleanliness/). -## How to Contribute +4. **Join the Discussion:** To participate in roadmap discussions and connect with other developers, join our community on [**Discord**](https://discord.gg/EamjgSaEQf). -### Step 1: Get Started -!!! info "Essential Resources" - [:material-book-open-page-variant: **Documentation**](https://docs.swarms.world/en/latest/){ .md-button .md-button--primary } - [:material-github: **GitHub Repository**](https://github.com/kyegomez/swarms){ .md-button } - [:material-chat: **Community Channels**](#){ .md-button } +### ✨ Our Valued Contributors -### Step 2: Find Your Path +Thank you for contributing to swarms. Your work is extremely appreciated and recognized. -```mermaid -graph TD - A[Choose Your Path] --> B[Browse Issues] - A --> C[Review Roadmap] - A --> D[Propose Ideas] - B --> E[good first issue] - B --> F[help wanted] - C --> G[Core Features] - C --> H[Research Areas] - D --> I[Discussion Forums] -``` - -### Step 3: Make Impact - -1. **Fork & Setup** - Configure your development environment -2. **Develop** - Create your contribution -3. **Submit** - Open a pull request -4. **Collaborate** - Work with maintainers -5. **Celebrate** - See your work recognized - ---- - -## Recognition Framework - -### :material-flash: Immediate Benefits - -!!! success "Instant Recognition" - | Benefit | Description | - |---------|-------------| - | **Social Media Features** | Every merged PR showcased publicly | - | **Community Recognition** | Contributor badges and documentation credits | - | **Professional References** | Formal acknowledgment for portfolios | - | **Direct Mentorship** | Access to core team guidance | - -### :material-trending-up: Long-term Opportunities - -!!! tip "Career Growth" - - **Team Positions** - Fast-track consideration for core team roles - - **Conference Speaking** - Present work at AI conferences and events - - **Industry Connections** - Network with leading AI organizations - - **Research Collaboration** - Partner with academic institutions - ---- - -## Societal Impact - -!!! 
abstract "Building Solutions for Humanity" - Swarms enables technology that addresses critical challenges: - - === "Research" - **Scientific Research** - - Accelerate collaborative research and discovery across disciplines - - === "Healthcare" - **Healthcare Innovation** - - Support drug discovery and personalized medicine development - - === "Environment" - **Environmental Solutions** - - Monitor climate and optimize sustainability initiatives - - === "Education" - **Educational Technology** - - Create adaptive learning systems for personalized education - - === "Economy" - **Economic Innovation** - - Generate new opportunities and efficiency improvements - ---- - -## Get Involved - -### :material-link: Connect With Us - -!!! info "Join the Community" - [:material-github: **GitHub Repository**](https://github.com/kyegomez/swarms){ .md-button .md-button--primary } - [:material-book: **Documentation**](https://docs.swarms.world/en/latest/){ .md-button } - [:material-forum: **Community Forums**](#){ .md-button } - ---- - -!!! warning "The Future is Now" - Multi-agent collaboration will define the next century of human progress. The autonomous economy depends on the infrastructure we build today. - -!!! success "Your Mission" - Your contribution to Swarms helps create the foundation for billions of autonomous agents working together to solve humanity's greatest challenges. - - **Join us in building the most important technology of our time.** - ---- - -
-*Built with :material-heart: by the global Swarms community* -
+ + + -------------------------------------------------- -# File: contributors\tools.md +# File: contributors/tools.md # Contributing Tools and Plugins to the Swarms Ecosystem @@ -1716,6 +1517,620 @@ To begin, fork the [Swarms Tools repository](https://github.com/The-Swarm-Corpor +-------------------------------------------------- + +# File: deployment_solutions/fastapi_agent_api.md + +# FastAPI Agent API + +This guide shows you how to deploy your Swarms agents as REST APIs using FastAPI and Uvicorn. This is the fastest way to expose your agents via HTTP endpoints. + +## Overview + +FastAPI is a modern, fast web framework for building APIs with Python. Combined with Uvicorn (ASGI server), it provides excellent performance and automatic API documentation. + +**Benefits:** + +| Feature | Description | +|----------------|--------------------------------------------------| +| **Fast** | Built on Starlette and Pydantic | +| **Auto-docs** | Automatic OpenAPI/Swagger documentation | +| **Type-safe** | Full type hints and validation | +| **Easy** | Minimal boilerplate code | +| **Monitoring** | Built-in logging and metrics | + +## Quick Start + +### 1. Install Dependencies + +```bash +pip install fastapi uvicorn swarms +``` + +### 2. Create Your Agent API + +Create a file called `agent_api.py`: + +```python +from fastapi import FastAPI, HTTPException +from pydantic import BaseModel +from swarms import Agent +import uvicorn +from typing import Optional, Dict, Any + +# Initialize FastAPI app +app = FastAPI( + title="Swarms Agent API", + description="REST API for Swarms agents", + version="1.0.0" +) + +# Pydantic models for request/response +class AgentRequest(BaseModel): + """Request model for agent tasks""" + task: str + agent_name: Optional[str] = "default" + max_loops: Optional[int] = 1 + temperature: Optional[float] = None + +class AgentResponse(BaseModel): + """Response model for agent tasks""" + success: bool + result: str + agent_name: str + task: str + execution_time: Optional[float] = None + +# Initialize your agent (you can customize this) +def create_agent(agent_name: str = "default") -> Agent: + """Create and return a configured agent""" + return Agent( + agent_name=agent_name, + agent_description="Versatile AI agent for various tasks", + system_prompt="""You are a helpful AI assistant that can handle a wide variety of tasks. + You provide clear, accurate, and helpful responses while maintaining a professional tone. 
+ Always strive to be thorough and accurate in your responses.""", + model_name="claude-sonnet-4-20250514", + dynamic_temperature_enabled=True, + max_loops=1, + dynamic_context_window=True, + ) + +# API endpoints +@app.get("/") +async def root(): + """Health check endpoint""" + return {"message": "Swarms Agent API is running!", "status": "healthy"} + +@app.get("/health") +async def health_check(): + """Detailed health check""" + return { + "status": "healthy", + "service": "Swarms Agent API", + "version": "1.0.0" + } + +@app.post("/agent/run", response_model=AgentResponse) +async def run_agent(request: AgentRequest): + """Run an agent with the specified task""" + try: + import time + start_time = time.time() + + # Create agent instance + agent = create_agent(request.agent_name) + + # Run the agent + result = agent.run( + task=request.task, + max_loops=request.max_loops + ) + + execution_time = time.time() - start_time + + return AgentResponse( + success=True, + result=str(result), + agent_name=request.agent_name, + task=request.task, + execution_time=execution_time + ) + + except Exception as e: + raise HTTPException(status_code=500, detail=f"Agent execution failed: {str(e)}") + +@app.post("/agent/chat") +async def chat_with_agent(request: AgentRequest): + """Chat with an agent (conversational mode)""" + try: + agent = create_agent(request.agent_name) + + # For chat, you might want to maintain conversation history + # This is a simple implementation + result = agent.run( + task=request.task, + max_loops=request.max_loops + ) + + return { + "success": True, + "response": str(result), + "agent_name": request.agent_name + } + + except Exception as e: + raise HTTPException(status_code=500, detail=f"Chat failed: {str(e)}") + +@app.get("/agents/available") +async def list_available_agents(): + """List available agent configurations""" + return { + "agents": [ + { + "name": "default", + "description": "Versatile AI agent for various tasks", + "model": "claude-sonnet-4-20250514" + }, + { + "name": "quantitative-trading", + "description": "Advanced quantitative trading and algorithmic analysis agent", + "model": "claude-sonnet-4-20250514" + } + ] + } + +# Custom agent endpoint example +@app.post("/agent/quantitative-trading") +async def run_quantitative_trading_agent(request: AgentRequest): + """Run the quantitative trading agent specifically""" + try: + # Create specialized quantitative trading agent + agent = Agent( + agent_name="Quantitative-Trading-Agent", + agent_description="Advanced quantitative trading and algorithmic analysis agent", + system_prompt="""You are an expert quantitative trading agent with deep expertise in: + - Algorithmic trading strategies and implementation + - Statistical arbitrage and market making + - Risk management and portfolio optimization + - High-frequency trading systems + - Market microstructure analysis + - Quantitative research methodologies + - Financial mathematics and stochastic processes + - Machine learning applications in trading""", + model_name="claude-sonnet-4-20250514", + dynamic_temperature_enabled=True, + max_loops=request.max_loops, + dynamic_context_window=True, + ) + + result = agent.run(task=request.task) + + return { + "success": True, + "result": str(result), + "agent_name": "Quantitative-Trading-Agent", + "task": request.task + } + + except Exception as e: + raise HTTPException(status_code=500, detail=f"Quantitative trading agent failed: {str(e)}") + +if __name__ == "__main__": + uvicorn.run(app, host="0.0.0.0", port=8000) +``` + +### 3. 
Run Your API + +```bash +python agent_api.py +``` + +Or with uvicorn directly: + +```bash +uvicorn agent_api:app --host 0.0.0.0 --port 8000 --reload +``` + +### 4. Test Your API + +Your API will be available at: + +- **API**: http://localhost:8000 + +- **Documentation**: http://localhost:8000/docs + +- **Alternative docs**: http://localhost:8000/redoc + +## Usage Examples + +### Using curl + +```bash +# Basic agent task +curl -X POST "http://localhost:8000/agent/run" \ + -H "Content-Type: application/json" \ + -d '{"task": "What are the best top 3 ETFs for gold coverage?"}' + +# Quantitative trading agent +curl -X POST "http://localhost:8000/agent/quantitative-trading" \ + -H "Content-Type: application/json" \ + -d '{"task": "Analyze the current market conditions for gold ETFs"}' +``` + +### Using Python requests + +```python +import requests + +# Run basic agent +response = requests.post( + "http://localhost:8000/agent/run", + json={"task": "Explain quantum computing in simple terms"} +) +print(response.json()) + +# Run quantitative trading agent +response = requests.post( + "http://localhost:8000/agent/quantitative-trading", + json={"task": "What are the key factors affecting gold prices today?"} +) +print(response.json()) +``` + +## Advanced Configuration + +### Environment Variables + +Create a `.env` file for configuration: + +```bash +# .env +AGENT_MODEL_NAME=claude-sonnet-4-20250514 +AGENT_MAX_LOOPS=3 +API_HOST=0.0.0.0 +API_PORT=8000 +LOG_LEVEL=info +``` + +### Enhanced Agent Factory + +```python +import os +from typing import Dict, Type +from swarms import Agent + +class AgentFactory: + """Factory for creating different types of agents""" + + AGENT_CONFIGS = { + "default": { + "agent_name": "Default-Agent", + "agent_description": "Versatile AI agent for various tasks", + "system_prompt": "You are a helpful AI assistant...", + "model_name": "claude-sonnet-4-20250514" + }, + "quantitative-trading": { + "agent_name": "Quantitative-Trading-Agent", + "agent_description": "Advanced quantitative trading agent", + "system_prompt": "You are an expert quantitative trading agent...", + "model_name": "claude-sonnet-4-20250514" + }, + "research": { + "agent_name": "Research-Agent", + "agent_description": "Academic research and analysis agent", + "system_prompt": "You are an expert research agent...", + "model_name": "claude-sonnet-4-20250514" + } + } + + @classmethod + def create_agent(cls, agent_type: str = "default", **kwargs) -> Agent: + """Create an agent of the specified type""" + if agent_type not in cls.AGENT_CONFIGS: + raise ValueError(f"Unknown agent type: {agent_type}") + + config = cls.AGENT_CONFIGS[agent_type].copy() + config.update(kwargs) + + return Agent(**config) +``` + +### Authentication & Rate Limiting + +```python +from fastapi import Depends, HTTPException, status +from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials +from slowapi import Limiter, _rate_limit_exceeded_handler +from slowapi.util import get_remote_address +from slowapi.errors import RateLimitExceeded +import time + +# Rate limiting +limiter = Limiter(key_func=get_remote_address) +app.state.limiter = limiter +app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler) + +# Security +security = HTTPBearer() + +def verify_token(credentials: HTTPAuthorizationCredentials = Depends(security)): + """Verify API token""" + # Implement your token verification logic here + if credentials.credentials != "your-secret-token": + raise HTTPException( + status_code=status.HTTP_401_UNAUTHORIZED, 
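            # NOTE: "your-secret-token" above is only a placeholder. In production,
            # load the expected token from configuration and compare it with
            # secrets.compare_digest() from the standard library to avoid timing attacks.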
+ detail="Invalid token" + ) + return credentials.credentials + +@app.post("/agent/run", response_model=AgentResponse) +@limiter.limit("10/minute") +async def run_agent( + request: AgentRequest, + token: str = Depends(verify_token) +): + """Run an agent with authentication and rate limiting""" + # ... existing code ... +``` + +## Production Deployment + +### Using Gunicorn + +```bash +pip install gunicorn +gunicorn agent_api:app -w 4 -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000 +``` + +### Using Docker + +```dockerfile +FROM python:3.11-slim + +WORKDIR /app + +COPY requirements.txt . +RUN pip install --no-cache-dir -r requirements.txt + +COPY . . + +EXPOSE 8000 + +CMD ["uvicorn", "agent_api:app", "--host", "0.0.0.0", "--port", "8000"] +``` + +### Using Docker Compose + +```yaml +version: '3.8' +services: + agent-api: + build: . + ports: + - "8000:8000" + environment: + - AGENT_MODEL_NAME=claude-sonnet-4-20250514 + volumes: + - ./logs:/app/logs +``` + +## Monitoring & Logging + +### Structured Logging + +```python +import logging +import json +from datetime import datetime + +# Configure logging +logging.basicConfig(level=logging.INFO) +logger = logging.getLogger(__name__) + +@app.middleware("http") +async def log_requests(request, call_next): + """Log all requests and responses""" + start_time = time.time() + + # Log request + logger.info(f"Request: {request.method} {request.url}") + + response = await call_next(request) + + # Log response + process_time = time.time() - start_time + logger.info(f"Response: {response.status_code} - {process_time:.2f}s") + + return response +``` + +### Health Checks + +```python +@app.get("/health/detailed") +async def detailed_health_check(): + """Detailed health check with agent status""" + try: + # Test agent creation + agent = create_agent() + + return { + "status": "healthy", + "timestamp": datetime.utcnow().isoformat(), + "agent_status": "available", + "model_status": "connected" + } + except Exception as e: + return { + "status": "unhealthy", + "timestamp": datetime.utcnow().isoformat(), + "error": str(e) + } +``` + +## Best Practices + +| Best Practice | Description | +|----------------------|-----------------------------------------------------------| +| **Error Handling** | Always wrap agent execution in try-catch blocks | +| **Validation** | Use Pydantic models for request validation | +| **Rate Limiting** | Implement rate limiting for production APIs | +| **Authentication** | Add proper authentication for sensitive endpoints | +| **Logging** | Log all requests and responses for debugging | +| **Monitoring** | Add health checks and metrics | +| **Testing** | Write tests for your API endpoints | +| **Documentation** | Keep your API documentation up to date | + +## Troubleshooting + +### Common Issues + +1. **Port already in use**: Change the port in the uvicorn command +2. **Agent initialization fails**: Check your API keys and model configuration +3. **Memory issues**: Reduce `max_loops` or implement streaming responses +4. 
**Timeout errors**: Increase timeout settings for long-running tasks + +### Performance Tips + +| Performance Tip | Description | +|--------------------------|-----------------------------------------------------| +| **Connection pooling** | Reuse agent instances when possible | +| **Async operations** | Use async/await for I/O operations | +| **Caching** | Cache frequently requested responses | +| **Load balancing** | Use multiple worker processes for high traffic | + + +Your FastAPI agent API is now ready to handle requests and scale with your needs! + + +-------------------------------------------------- + +# File: deployment_solutions/overview.md + +# Deployment Solutions Overview + +This section covers various deployment strategies for Swarms agents and multi-agent systems, from simple local deployments to enterprise-grade cloud solutions. + +## Deployment Types Comparison & Documentation + +| Deployment Type | Use Case | Complexity | Scalability | Cost | Best For | Documentation Link | Status | +|------------------------|---------------------|------------|-------------|-----------|----------------------------------|-----------------------------------------------------------------------------|------------| +| **FastAPI + Uvicorn** | REST API endpoints | Low | Medium | Low | Quick prototypes, internal tools | [FastAPI Agent API Guide](fastapi_agent_api.md) | Available | +| **Cron Jobs** | Scheduled tasks | Low | Low | Very Low | Batch processing, periodic tasks | [Cron Job Examples](../../examples/deployment_solutions/cron_job_examples/) | Available | + + +## Quick Start Guide + +### 1. FastAPI + Uvicorn (Recommended for APIs) + +- **Best for**: Creating REST APIs for your agents + +- **Setup time**: 5-10 minutes + +- **Documentation**: [FastAPI Agent API](fastapi_agent_api.md) + +- **Example Code**: [FastAPI Example](../../examples/deployment_solutions/fastapi_agent_api_example.py) + + +### 2. 
Cron Jobs (Recommended for scheduled tasks) + +- **Best for**: Running agents on a schedule + +- **Setup time**: 2-5 minutes + +- **Examples**: [Cron Job Examples](../../examples/deployment_solutions/cron_job_examples/) + + + +## Deployment Considerations + +### Performance + +- **FastAPI**: Excellent for high-throughput APIs + +- **Cron Jobs**: Good for batch processing + +- **Docker**: Consistent performance across environments + +- **Kubernetes**: Best for complex, scalable systems + + +### Security + +- **FastAPI**: Built-in security features, easy to add authentication + +- **Cron Jobs**: Runs with system permissions + +- **Docker**: Isolated environment, security best practices + +- **Kubernetes**: Advanced security policies and RBAC + + +### Monitoring & Observability + +- **FastAPI**: Built-in logging, easy to integrate with monitoring tools + +- **Cron Jobs**: Basic logging, requires custom monitoring setup + +- **Docker**: Container-level monitoring, easy to integrate + +- **Kubernetes**: Comprehensive monitoring and alerting + + +### Cost Optimization + +- **FastAPI**: Pay for compute resources + +- **Cron Jobs**: Minimal cost, runs on existing infrastructure + +- **Docker**: Efficient resource utilization + +- **Kubernetes**: Advanced resource management and auto-scaling + + +## Choosing the Right Deployment + +### For Development & Testing + +- **FastAPI + Uvicorn**: Quick setup, easy debugging + +- **Cron Jobs**: Simple scheduled tasks + + +### For Production APIs + +- **FastAPI + Docker**: Reliable, scalable + +- **Cloud Run**: Auto-scaling, managed infrastructure + + +### For Enterprise Systems + +- **Kubernetes**: Full control, advanced features + +- **Hybrid approach**: Mix of deployment types based on use case + + +### For Cost-Sensitive Projects + +- **Cron Jobs**: Minimal infrastructure cost + +- **Cloud Functions**: Pay-per-use model + +- **FastAPI**: Efficient resource utilization + + +## Next Steps + +1. **Start with FastAPI** if you need an API endpoint +2. **Use Cron Jobs** for scheduled tasks +3. **Move to Docker** when you need consistency +4. **Consider Kubernetes** for complex, scalable systems + +Each deployment solution includes detailed examples and step-by-step guides to help you get started quickly. + + -------------------------------------------------- # File: docs_structure.md @@ -1746,7 +2161,7 @@ Benefits of class/structure, and more -------------------------------------------------- -# File: examples\agent_stream.md +# File: examples/agent_stream.md # Agent with Streaming @@ -1814,7 +2229,54 @@ If you'd like technical support, join our Discord below and stay updated on our -------------------------------------------------- -# File: examples\cookbook_index.md +# File: examples/community_resources.md + +# Community Resources + +Welcome to the Community Resources page! Here you'll find a curated collection of articles, tutorials, and guides created by the Swarms community and core contributors. + +These resources cover a wide range of topics, including building your first agent, advanced multi-agent architectures, API integrations, and using Swarms with both Python and Rust. Whether you're a beginner or an experienced developer, these links will help you deepen your understanding and accelerate your development with the Swarms framework. + + +## Swarms Python + +| Title | Description | Link | +|-------|-------------|------| +| **Build Your First Swarms Agent in Under 10 Minutes** | Step-by-step beginner guide to creating your first Swarms agent quickly. 
| [Read Article](https://medium.com/@devangvashistha/build-your-first-swarms-agent-in-under-10-minutes-ddff23b6c703) | +| **Building Multi-Agent Systems with GPT-5 and The Swarms Framework** | Learn how to leverage GPT-5 with Swarms for advanced multi-agent system design. | [Read Article](https://medium.com/@kyeg/building-multi-agent-systems-with-gpt-5-and-the-swarms-framework-e52ffaf0fa4f) | +| **Learn How to Build Production-Grade Agents with OpenAI’s Latest Model: GPT-OSS Locally and in the Cloud** | Guide to building robust agents using OpenAI’s GPT-OSS, both locally and in cloud environments. | [Read Article](https://medium.com/@kyeg/learn-how-to-build-production-grade-agents-with-openais-latest-model-gpt-oss-locally-and-in-the-c5826c7cca7c) | +| **Building Gemini 2.5 Agents with Swarms Framework** | Tutorial on integrating Gemini 2.5 models into Swarms agents for enhanced capabilities. | [Read Article](https://medium.com/@kyeg/building-gemini-2-5-agents-with-swarms-framework-20abdcf82cac) | +| **Enterprise Developer Guide: Leveraging OpenAI’s o3 and o4-mini Models with The Swarms Framework** | Enterprise-focused guide to using OpenAI’s o3 and o4-mini models within Swarms. | [Read Article](https://medium.com/@kyeg/enterprise-developer-guide-leveraging-openais-o3-and-o4-mini-models-with-the-swarms-framework-89490c57820a) | +| **Enneagram of Thoughts Using the Swarms Framework: A Multi-Agent Approach to Holistic Problem Solving** | Explores using Swarms for holistic, multi-perspective problem solving via the Enneagram model. | [Read Article](https://medium.com/@kyeg/enneagram-of-thoughts-using-the-swarms-framework-a-multi-agent-approach-to-holistic-problem-c26c7df5e7eb) | +| **Building Production-Grade Financial Agents with tickr-agent: An Enterprise Solution for Comprehensive Stock Analysis** | How to build advanced financial analysis agents using tickr-agent and Swarms. | [Read Article](https://medium.com/@kyeg/building-production-grade-financial-agents-with-tickr-agent-an-enterprise-solution-for-db867ec93193) | +| **Automating Your Startup’s Financial Analysis Using AI Agents: A Comprehensive Guide** | Comprehensive guide to automating your startup’s financial analysis with AI agents using Swarms. | [Read Article](https://medium.com/@kyeg/automating-your-startups-financial-analysis-using-ai-agents-a-comprehensive-guide-b2fa0e2c09d5) | +| **Managing Thousands of Agent Outputs at Scale with The Spreadsheet Swarm: All-New Multi-Agent Architecture** | Learn how to manage and scale thousands of agent outputs efficiently using the Spreadsheet Swarm architecture. | [Read Article](https://medium.com/@kyeg/managing-thousands-of-agent-outputs-at-scale-with-the-spreadsheet-swarm-all-new-multi-agent-f16f5f40fd5a) | +| **Introducing GPT-4o Mini: The Future of Cost-Efficient AI Intelligence** | Discover the capabilities and advantages of GPT-4o Mini for building cost-effective, intelligent agents. | [Read Article](https://medium.com/@kyeg/introducing-gpt-4o-mini-the-future-of-cost-efficient-ai-intelligence-a3e3fe78d939) | +| **Introducing Swarm's GraphWorkflow: A Faster, Simpler, and Superior Alternative to LangGraph** | Learn about Swarms' GraphWorkflow, a powerful alternative to LangGraph that offers improved performance and simplicity for building complex agent workflows. 
| [Read Article](https://medium.com/@kyeg/introducing-swarms-graphworkflow-a-faster-simpler-and-superior-alternative-to-langgraph-5c040225a4f1) | + + +### Swarms API + +| Title | Description | Link | +|-------|-------------|------| +| **Specialized Healthcare Agents with Swarms Agent Completions API** | Guide to building healthcare-focused agents using the Swarms API. | [Read Article](https://medium.com/@kyeg/specialized-healthcare-agents-with-swarms-agent-completions-api-b56d067e3b11) | +| **Building Multi-Agent Systems for Finance & Accounting with the Swarms API: A Technical Guide** | Technical walkthrough for creating finance and accounting multi-agent systems with the Swarms API. | [Read Article](https://medium.com/@kyeg/building-multi-agent-systems-for-finance-accounting-with-the-swarms-api-a-technical-guide-bf6f7005b708) | + +### Swarms Rust + +| Title | Description | Link | +|-------|-------------|------| +| **Building Medical Multi-Agent Systems with Swarms Rust: A Comprehensive Tutorial** | Comprehensive tutorial for developing medical multi-agent systems using Swarms Rust. | [Read Article](https://medium.com/@kyeg/building-medical-multi-agent-systems-with-swarms-rust-a-comprehensive-tutorial-1e8e060601f9) | +| **Building Production-Grade Agentic Applications with Swarms Rust: A Comprehensive Tutorial** | Learn to build robust, production-ready agentic applications with Swarms Rust. | [Read Article](https://medium.com/@kyeg/building-production-grade-agentic-applications-with-swarms-rust-a-comprehensive-tutorial-bb567c02340f) | + + +### Youtube Videos + +- [Swarms Playlist by Swarms Founder Kye Gomez](https://www.youtube.com/watch?v=FzbBRbaqsG8&list=PLphplB7PcU1atnmrUl7lJ5bmGXR7R4lhA) + +-------------------------------------------------- + +# File: examples/cookbook_index.md # Swarms Cookbook Examples Index @@ -1877,7 +2339,7 @@ This project is licensed under the MIT License - see the [LICENSE](https://githu -------------------------------------------------- -# File: examples\index.md +# File: examples/index.md # Swarms Examples Index @@ -2052,6 +2514,7 @@ This index organizes **100+ production-ready examples** from our [Swarms Example ### Research and Deep Analysis | Category | Example | Description | |----------|---------|-------------| +| Advanced Research | [Advanced Research System](https://github.com/The-Swarm-Corporation/AdvancedResearch) | Multi-agent research system inspired by Anthropic's research methodology with orchestrator-worker architecture | | Deep Research | [Deep Research Example](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/deep_research_examples/deep_research_example.py) | Comprehensive research system with multiple specialized agents | | Deep Research Swarm | [Deep Research Swarm](https://github.com/kyegomez/swarms/blob/master/examples/multi_agent/deep_research_examples/deep_research_swarm_example.py) | Swarm-based deep research with collaborative analysis | | Scientific Agents | [Deep Research Swarm Example](https://github.com/kyegomez/swarms/blob/master/examples/demos/scient_agents/deep_research_swarm_example.py) | Scientific research swarm for academic and research applications | @@ -2134,11 +2597,13 @@ This index organizes **100+ production-ready examples** from our [Swarms Example -------------------------------------------------- -# File: examples\paper_implementations.md +# File: examples/paper_implementations.md # Multi-Agent Paper Implementations -At Swarms, we are passionate about democratizing access to cutting-edge 
multi-agent research and making advanced AI collaboration accessible to everyone. Our mission is to bridge the gap between academic research and practical implementation by providing production-ready, open-source implementations of the most impactful multi-agent research papers. +At Swarms, we are passionate about democratizing access to cutting-edge multi-agent research and making advanced agent collaboration accessible to everyone. + +Our mission is to bridge the gap between academic research and practical implementation by providing production-ready, open-source implementations of the most impactful multi-agent research papers. ### Why Multi-Agent Research Matters @@ -2176,10 +2641,6 @@ This documentation showcases our comprehensive collection of multi-agent researc Whether you're a researcher looking to validate findings, a developer building production systems, or a student learning about multi-agent AI, you'll find valuable resources here to advance your work. -### Join the Multi-Agent Revolution - -We invite you to explore these implementations, contribute to our research efforts, and help shape the future of collaborative AI. Together, we can unlock the full potential of multi-agent systems and create AI that truly works as a team. - ## Implemented Research Papers | Paper Name | Description | Original Paper | Implementation | Status | Key Features | @@ -2190,124 +2651,1123 @@ We invite you to explore these implementations, contribute to our research effor | **[Mixture of Agents (MoA)](https://arxiv.org/abs/2406.04692)** | A sophisticated multi-agent architecture that implements parallel processing with iterative refinement, combining diverse expert agents for comprehensive analysis. | Multi-agent collaboration concepts | [`swarms.structs.moa`](https://docs.swarms.world/en/latest/swarms/structs/moa/) | ✅ Complete | Parallel processing, expert agent combination, iterative refinement, state-of-the-art performance | | **Deep Research Swarm** | A production-grade research system that conducts comprehensive analysis across multiple domains using parallel processing and advanced AI agents. | Research methodology | [`swarms.structs.deep_research_swarm`](https://docs.swarms.world/en/latest/swarms/structs/deep_research_swarm/) | ✅ Complete | Parallel search processing, multi-agent coordination, information synthesis, concurrent execution | | **Agent-as-a-Judge** | An evaluation framework that uses agents to evaluate other agents, implementing the "Agent-as-a-Judge: Evaluate Agents with Agents" methodology. | [arXiv:2410.10934](https://arxiv.org/abs/2410.10934) | [`swarms.agents.agent_judge`](https://docs.swarms.world/en/latest/swarms/agents/agent_judge/) | ✅ Complete | Agent evaluation, quality assessment, automated judging, performance metrics | - -## Additional Research Resources +| **Advanced Research System** | An enhanced implementation of the orchestrator-worker pattern from Anthropic's paper "How we built our multi-agent research system", featuring parallel execution, LLM-as-judge evaluation, and professional report generation. 
| [Anthropic Paper](https://www.anthropic.com/engineering/built-multi-agent-research-system) | [GitHub Repository](https://github.com/The-Swarm-Corporation/AdvancedResearch) | ✅ Complete | Orchestrator-worker architecture, parallel execution, Exa API integration, export capabilities | ### Multi-Agent Papers Compilation We maintain a comprehensive list of multi-agent research papers at: [awesome-multi-agent-papers](https://github.com/kyegomez/awesome-multi-agent-papers) -### Research Lists -Our research compilation includes: -- **Projects**: ModelScope-Agent, Gorilla, BMTools, LMQL, Langchain, MetaGPT, AutoGPT, and more +## Contributing -- **Research Papers**: BOLAA, ToolLLM, Communicative Agents, Mind2Web, Voyager, Tree of Thoughts, and many others +We welcome contributions to implement additional research papers! If you'd like to contribute: -- **Blog Articles**: Latest insights and developments in autonomous agents +1. **Identify a paper**: Choose a relevant multi-agent research paper +2. **Propose implementation**: Submit an issue with your proposal +3. **Implement**: Create the implementation following our guidelines +4. **Document**: Add comprehensive documentation and examples +5. **Test**: Ensure robust testing and validation -- **Talks**: Presentations from leading researchers like Geoffrey Hinton and Andrej Karpathy +## Citation +If you use any of these implementations in your research, please cite the original papers and the Swarms framework: -## Implementation Details +```bibtex +@misc{SWARMS_2022, + author = {Gomez, Kye and Pliny and More, Harshal and Swarms Community}, + title = {{Swarms: Production-Grade Multi-Agent Infrastructure Platform}}, + year = {2022}, + howpublished = {\url{https://github.com/kyegomez/swarms}}, + note = {Documentation available at \url{https://docs.swarms.world}}, + version = {latest} +} +``` -### MALT Framework +## Community -The MALT implementation provides: +Join our community to stay updated on the latest multi-agent research implementations: -- **Three-Agent Architecture**: Creator, Verifier, and Refiner agents +- **Discord**: [Join our community](https://discord.gg/EamjgSaEQf) -- **Structured Workflow**: Coordinated task execution with conversation history +- **Documentation**: [docs.swarms.world](https://docs.swarms.world) -- **Reliability Features**: Error handling, validation, and quality assurance +- **GitHub**: [kyegomez/swarms](https://github.com/kyegomez/swarms) -- **Extensibility**: Custom agent integration and configuration options +- **Research Papers**: [awesome-multi-agent-papers](https://github.com/kyegomez/awesome-multi-agent-papers) -### MAI-DxO System -The MAI Diagnostic Orchestrator features: -- **Virtual Physician Panel**: Multiple specialized medical agents +-------------------------------------------------- -- **Cost Optimization**: Efficient diagnostic workflows +# File: examples/smart_database.md -- **Iterative Refinement**: Continuous improvement of diagnoses +# Smart Database Powered by Hierarchical Multi-Agent Workflow -- **Medical Expertise**: Domain-specific knowledge and reasoning +This module implements a fully autonomous database management system using a hierarchical multi-agent architecture. The system includes specialized agents for different database operations coordinated by a Database Director agent. 
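As a quick preview of the pattern (the complete, runnable implementation is in the Code section below), the wiring looks roughly like this: each specialist agent carries the relevant database tool functions, and a `HierarchicalSwarm` lets the Database Director route tasks to them. The constructor parameters shown here are assumptions based on the other Swarms examples in these docs, so treat this as an illustrative sketch rather than a drop-in snippet.

```python
from swarms import Agent, HierarchicalSwarm

# One specialist worker; the other specialists are built the same way.
# create_table and get_database_schema are the tool functions defined below.
table_manager = Agent(
    agent_name="Table-Manager",
    agent_description="Expert in table creation, schema design, and structure management",
    model_name="claude-sonnet-4-20250514",
    max_loops=1,
    tools=[create_table, get_database_schema],
)

# The swarm's director decomposes the task and delegates to the specialists.
smart_database_swarm = HierarchicalSwarm(
    name="Smart-Database-Swarm",
    description="Autonomous database management swarm",
    agents=[table_manager],  # add the remaining specialists here
    max_loops=1,
)

smart_database_swarm.run(
    task="Create a 'users' table with id, name, and email columns in ./databases/company.db"
)
```

Depending on your Swarms version, you may also need to configure the director agent or its model explicitly; the authoritative wiring for this example is shown in full below.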
+## Features -### AI-CoScientist Framework +| Feature | Description | +|---------------------------------------|-----------------------------------------------------------------------------------------------| +| Autonomous Database Management | Complete database lifecycle management, including setup and ongoing management of databases. | +| Intelligent Task Distribution | Automatic assignment of tasks to appropriate specialist agents. | +| Table Creation with Schema Validation | Ensures tables are created with correct structure, schema enforcement, and data integrity. | +| Data Insertion and Updates | Handles adding new data and updating existing records efficiently, supporting JSON input. | +| Complex Query Execution | Executes advanced and optimized queries for data retrieval and analysis. | +| Schema Modifications | Supports altering table structures and database schemas as needed. | +| Hierarchical Agent Coordination | Utilizes a multi-agent system for orchestrated, intelligent task execution. | +| Security | Built-in SQL injection prevention and query validation for data protection. | +| Performance Optimization | Query optimization and efficient data operations for high performance. | +| Comprehensive Error Handling | Robust error management and reporting throughout all operations. | +| Multi-format Data Support | Flexible query parameters and support for JSON-based data insertion. | -The AI-CoScientist implementation includes: +## Architecture -- **Tournament-Based Selection**: Elo rating system for hypothesis ranking +### Multi-Agent Architecture -- **Peer Review System**: Comprehensive evaluation of scientific proposals +``` +Database Director (Coordinator) +├── Database Creator (Creates databases) +├── Table Manager (Manages table schemas) +├── Data Operations (Handles data insertion/updates) +└── Query Specialist (Executes queries and retrieval) +``` -- **Hypothesis Evolution**: Iterative refinement based on feedback +### Agent Specializations -- **Diversity Control**: Proximity analysis to maintain hypothesis variety +| Agent | Description | +|------------------------|-----------------------------------------------------------------------------------------------| +| **Database Director** | Orchestrates all database operations and coordinates specialist agents | +| **Database Creator** | Specializes in creating and initializing databases | +| **Table Manager** | Expert in table creation, schema design, and structure management | +| **Data Operations** | Handles data insertion, updates, and manipulation | +| **Query Specialist** | Manages database queries, data retrieval, and optimization | -### Mixture of Agents (MoA) +## Agent Tools -The MoA architecture provides: +| Function | Description | +|----------|-------------| +| **`create_database(database_name, database_path)`** | Creates new SQLite databases | +| **`create_table(database_path, table_name, schema)`** | Creates tables with specified schemas | +| **`insert_data(database_path, table_name, data)`** | Inserts data into tables | +| **`query_database(database_path, query, params)`** | Executes SELECT queries | +| **`update_table_data(database_path, table_name, update_data, where_clause)`** | Updates existing data | +| **`get_database_schema(database_path)`** | Retrieves comprehensive schema information | -- **Parallel Processing**: Multiple agents working simultaneously +## Install -- **Expert Specialization**: Domain-specific agent capabilities +```bash +pip install -U swarms sqlite3 loguru +``` -- **Iterative 
Refinement**: Continuous improvement through collaboration +## ENV -- **State-of-the-Art Performance**: Achieving superior results through collective intelligence +``` +WORKSPACE_DIR="agent_workspace" +ANTHROPIC_API_KEY="" +OPENAI_API_KEY="" +``` +## Code +- Make a file called `smart_database_swarm.py` -## Contributing +```python +import sqlite3 +import json +from pathlib import Path +from loguru import logger -We welcome contributions to implement additional research papers! If you'd like to contribute: +from swarms import Agent, HierarchicalSwarm -1. **Identify a paper**: Choose a relevant multi-agent research paper -2. **Propose implementation**: Submit an issue with your proposal -3. **Implement**: Create the implementation following our guidelines -4. **Document**: Add comprehensive documentation and examples -5. **Test**: Ensure robust testing and validation -## Citation +# ============================================================================= +# DATABASE TOOLS - Core Functions for Database Operations +# ============================================================================= -If you use any of these implementations in your research, please cite the original papers and the Swarms framework: -```bibtex -@misc{SWARMS_2022, - author = {Gomez, Kye and Pliny and More, Harshal and Swarms Community}, - title = {{Swarms: Production-Grade Multi-Agent Infrastructure Platform}}, - year = {2022}, - howpublished = {\url{https://github.com/kyegomez/swarms}}, - note = {Documentation available at \url{https://docs.swarms.world}}, - version = {latest} -} -``` +def create_database( + database_name: str, database_path: str = "./databases" +) -> str: + """ + Create a new SQLite database file. -## Community + Args: + database_name (str): Name of the database to create (without .db extension) + database_path (str, optional): Directory path where database will be created. + Defaults to "./databases". 
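            The directory is created automatically if it does not already exist.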
-Join our community to stay updated on the latest multi-agent research implementations: + Returns: + str: JSON string containing operation result and database information -- **Discord**: [Join our community](https://discord.gg/EamjgSaEQf) + Raises: + OSError: If unable to create database directory or file + sqlite3.Error: If database connection fails -- **Documentation**: [docs.swarms.world](https://docs.swarms.world) + Example: + >>> result = create_database("company_db", "/data/databases") + >>> print(result) + {"status": "success", "database": "company_db.db", "path": "/data/databases/company_db.db"} + """ + try: + # Validate input parameters + if not database_name or not database_name.strip(): + raise ValueError("Database name cannot be empty") -- **GitHub**: [kyegomez/swarms](https://github.com/kyegomez/swarms) + # Clean database name + db_name = database_name.strip().replace(" ", "_") + if not db_name.endswith(".db"): + db_name += ".db" -- **Research Papers**: [awesome-multi-agent-papers](https://github.com/kyegomez/awesome-multi-agent-papers) + # Create database directory if it doesn't exist + db_path = Path(database_path) + db_path.mkdir(parents=True, exist_ok=True) + + # Full database file path + full_db_path = db_path / db_name + + # Create database connection (creates file if doesn't exist) + conn = sqlite3.connect(str(full_db_path)) + + # Create a metadata table to track database info + conn.execute( + """ + CREATE TABLE IF NOT EXISTS _database_metadata ( + key TEXT PRIMARY KEY, + value TEXT, + created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP + ) + """ + ) + + # Insert database metadata + conn.execute( + "INSERT OR REPLACE INTO _database_metadata (key, value) VALUES (?, ?)", + ("database_name", database_name), + ) + + conn.commit() + conn.close() + result = { + "status": "success", + "message": f"Database '{database_name}' created successfully", + "database": db_name, + "path": str(full_db_path), + "size_bytes": full_db_path.stat().st_size, + } + + logger.info(f"Database created: {db_name}") + return json.dumps(result, indent=2) + + except ValueError as e: + return json.dumps({"status": "error", "error": str(e)}) + except sqlite3.Error as e: + return json.dumps( + {"status": "error", "error": f"Database error: {str(e)}"} + ) + except Exception as e: + return json.dumps( + { + "status": "error", + "error": f"Unexpected error: {str(e)}", + } + ) + + +def create_table( + database_path: str, table_name: str, schema: str +) -> str: + """ + Create a new table in the specified database with the given schema. + + Args: + database_path (str): Full path to the database file + table_name (str): Name of the table to create + schema (str): SQL schema definition for the table columns + Format: "column1 TYPE constraints, column2 TYPE constraints, ..." 
+ Example: "id INTEGER PRIMARY KEY, name TEXT NOT NULL, age INTEGER" + + Returns: + str: JSON string containing operation result and table information + + Raises: + sqlite3.Error: If table creation fails + FileNotFoundError: If database file doesn't exist + + Example: + >>> schema = "id INTEGER PRIMARY KEY, name TEXT NOT NULL, email TEXT UNIQUE" + >>> result = create_table("/data/company.db", "employees", schema) + >>> print(result) + {"status": "success", "table": "employees", "columns": 3} + """ + try: + # Validate inputs + if not all([database_path, table_name, schema]): + raise ValueError( + "Database path, table name, and schema are required" + ) + + # Check if database exists + if not Path(database_path).exists(): + raise FileNotFoundError( + f"Database file not found: {database_path}" + ) + + # Clean table name + clean_table_name = table_name.strip().replace(" ", "_") + + # Connect to database + conn = sqlite3.connect(database_path) + cursor = conn.cursor() + + # Check if table already exists + cursor.execute( + "SELECT name FROM sqlite_master WHERE type='table' AND name=?", + (clean_table_name,), + ) + + if cursor.fetchone(): + conn.close() + return json.dumps( + { + "status": "warning", + "message": f"Table '{clean_table_name}' already exists", + "table": clean_table_name, + } + ) + + # Create table with provided schema + create_sql = f"CREATE TABLE {clean_table_name} ({schema})" + cursor.execute(create_sql) + + # Get table info + cursor.execute(f"PRAGMA table_info({clean_table_name})") + columns = cursor.fetchall() + + # Update metadata + cursor.execute( + """ + INSERT OR REPLACE INTO _database_metadata (key, value) + VALUES (?, ?) + """, + (f"table_{clean_table_name}_created", "true"), + ) + + conn.commit() + conn.close() + + result = { + "status": "success", + "message": f"Table '{clean_table_name}' created successfully", + "table": clean_table_name, + "columns": len(columns), + "schema": [ + { + "name": col[1], + "type": col[2], + "nullable": not col[3], + } + for col in columns + ], + } + + return json.dumps(result, indent=2) + + except ValueError as e: + return json.dumps({"status": "error", "error": str(e)}) + except FileNotFoundError as e: + return json.dumps({"status": "error", "error": str(e)}) + except sqlite3.Error as e: + return json.dumps( + {"status": "error", "error": f"SQL error: {str(e)}"} + ) + except Exception as e: + return json.dumps( + { + "status": "error", + "error": f"Unexpected error: {str(e)}", + } + ) + + +def insert_data( + database_path: str, table_name: str, data: str +) -> str: + """ + Insert data into a specified table. + + Args: + database_path (str): Full path to the database file + table_name (str): Name of the target table + data (str): JSON string containing data to insert + Format: {"columns": ["col1", "col2"], "values": [[val1, val2], ...]} + Or: [{"col1": val1, "col2": val2}, ...] 
+ + Returns: + str: JSON string containing operation result and insertion statistics + + Example: + >>> data = '{"columns": ["name", "age"], "values": [["John", 30], ["Jane", 25]]}' + >>> result = insert_data("/data/company.db", "employees", data) + >>> print(result) + {"status": "success", "table": "employees", "rows_inserted": 2} + """ + try: + # Validate inputs + if not all([database_path, table_name, data]): + raise ValueError( + "Database path, table name, and data are required" + ) + + # Check if database exists + if not Path(database_path).exists(): + raise FileNotFoundError( + f"Database file not found: {database_path}" + ) + + # Parse data + try: + parsed_data = json.loads(data) + except json.JSONDecodeError: + raise ValueError("Invalid JSON format for data") + + conn = sqlite3.connect(database_path) + cursor = conn.cursor() + + # Check if table exists + cursor.execute( + "SELECT name FROM sqlite_master WHERE type='table' AND name=?", + (table_name,), + ) + + if not cursor.fetchone(): + conn.close() + raise ValueError(f"Table '{table_name}' does not exist") + + rows_inserted = 0 + + # Handle different data formats + if isinstance(parsed_data, list) and all( + isinstance(item, dict) for item in parsed_data + ): + # Format: [{"col1": val1, "col2": val2}, ...] + for row in parsed_data: + columns = list(row.keys()) + values = list(row.values()) + placeholders = ", ".join(["?" for _ in values]) + columns_str = ", ".join(columns) + + insert_sql = f"INSERT INTO {table_name} ({columns_str}) VALUES ({placeholders})" + cursor.execute(insert_sql, values) + rows_inserted += 1 + + elif ( + isinstance(parsed_data, dict) + and "columns" in parsed_data + and "values" in parsed_data + ): + # Format: {"columns": ["col1", "col2"], "values": [[val1, val2], ...]} + columns = parsed_data["columns"] + values_list = parsed_data["values"] + + placeholders = ", ".join(["?" for _ in columns]) + columns_str = ", ".join(columns) + + insert_sql = f"INSERT INTO {table_name} ({columns_str}) VALUES ({placeholders})" + + for values in values_list: + cursor.execute(insert_sql, values) + rows_inserted += 1 + else: + raise ValueError( + "Invalid data format. Expected list of dicts or dict with columns/values" + ) + + conn.commit() + conn.close() + + result = { + "status": "success", + "message": f"Data inserted successfully into '{table_name}'", + "table": table_name, + "rows_inserted": rows_inserted, + } + + return json.dumps(result, indent=2) + + except (ValueError, FileNotFoundError) as e: + return json.dumps({"status": "error", "error": str(e)}) + except sqlite3.Error as e: + return json.dumps( + {"status": "error", "error": f"SQL error: {str(e)}"} + ) + except Exception as e: + return json.dumps( + { + "status": "error", + "error": f"Unexpected error: {str(e)}", + } + ) + + +def query_database( + database_path: str, query: str, params: str = "[]" +) -> str: + """ + Execute a SELECT query on the database and return results. + + Args: + database_path (str): Full path to the database file + query (str): SQL SELECT query to execute + params (str, optional): JSON string of query parameters for prepared statements. + Defaults to "[]". + + Returns: + str: JSON string containing query results and metadata + + Example: + >>> query = "SELECT * FROM employees WHERE age > ?" 
+ >>> params = "[25]" + >>> result = query_database("/data/company.db", query, params) + >>> print(result) + {"status": "success", "results": [...], "row_count": 5} + """ + try: + # Validate inputs + if not all([database_path, query]): + raise ValueError("Database path and query are required") + + # Check if database exists + if not Path(database_path).exists(): + raise FileNotFoundError( + f"Database file not found: {database_path}" + ) + + # Validate query is SELECT only (security) + if not query.strip().upper().startswith("SELECT"): + raise ValueError("Only SELECT queries are allowed") + + # Parse parameters + try: + query_params = json.loads(params) + except json.JSONDecodeError: + raise ValueError("Invalid JSON format for parameters") + + conn = sqlite3.connect(database_path) + conn.row_factory = sqlite3.Row # Enable column access by name + cursor = conn.cursor() + + # Execute query + if query_params: + cursor.execute(query, query_params) + else: + cursor.execute(query) + + # Fetch results + rows = cursor.fetchall() + + # Convert to list of dictionaries + results = [dict(row) for row in rows] + + # Get column names + column_names = ( + [description[0] for description in cursor.description] + if cursor.description + else [] + ) + + conn.close() + + result = { + "status": "success", + "message": "Query executed successfully", + "results": results, + "row_count": len(results), + "columns": column_names, + } + + return json.dumps(result, indent=2) + + except (ValueError, FileNotFoundError) as e: + return json.dumps({"status": "error", "error": str(e)}) + except sqlite3.Error as e: + return json.dumps( + {"status": "error", "error": f"SQL error: {str(e)}"} + ) + except Exception as e: + return json.dumps( + { + "status": "error", + "error": f"Unexpected error: {str(e)}", + } + ) + + +def update_table_data( + database_path: str, + table_name: str, + update_data: str, + where_clause: str = "", +) -> str: + """ + Update existing data in a table. + + Args: + database_path (str): Full path to the database file + table_name (str): Name of the table to update + update_data (str): JSON string with column-value pairs to update + Format: {"column1": "new_value1", "column2": "new_value2"} + where_clause (str, optional): WHERE condition for the update (without WHERE keyword). 
+ Example: "id = 1 AND status = 'active'" + + Returns: + str: JSON string containing operation result and update statistics + + Example: + >>> update_data = '{"salary": 50000, "department": "Engineering"}' + >>> where_clause = "id = 1" + >>> result = update_table_data("/data/company.db", "employees", update_data, where_clause) + >>> print(result) + {"status": "success", "table": "employees", "rows_updated": 1} + """ + try: + # Validate inputs + if not all([database_path, table_name, update_data]): + raise ValueError( + "Database path, table name, and update data are required" + ) + + # Check if database exists + if not Path(database_path).exists(): + raise FileNotFoundError( + f"Database file not found: {database_path}" + ) + + # Parse update data + try: + parsed_updates = json.loads(update_data) + except json.JSONDecodeError: + raise ValueError("Invalid JSON format for update data") + + if not isinstance(parsed_updates, dict): + raise ValueError("Update data must be a dictionary") + + conn = sqlite3.connect(database_path) + cursor = conn.cursor() + + # Check if table exists + cursor.execute( + "SELECT name FROM sqlite_master WHERE type='table' AND name=?", + (table_name,), + ) + + if not cursor.fetchone(): + conn.close() + raise ValueError(f"Table '{table_name}' does not exist") + + # Build UPDATE query + set_clauses = [] + values = [] + + for column, value in parsed_updates.items(): + set_clauses.append(f"{column} = ?") + values.append(value) + + set_clause = ", ".join(set_clauses) + + if where_clause: + update_sql = f"UPDATE {table_name} SET {set_clause} WHERE {where_clause}" + else: + update_sql = f"UPDATE {table_name} SET {set_clause}" + + # Execute update + cursor.execute(update_sql, values) + rows_updated = cursor.rowcount + + conn.commit() + conn.close() + + result = { + "status": "success", + "message": f"Table '{table_name}' updated successfully", + "table": table_name, + "rows_updated": rows_updated, + "updated_columns": list(parsed_updates.keys()), + } + + return json.dumps(result, indent=2) + + except (ValueError, FileNotFoundError) as e: + return json.dumps({"status": "error", "error": str(e)}) + except sqlite3.Error as e: + return json.dumps( + {"status": "error", "error": f"SQL error: {str(e)}"} + ) + except Exception as e: + return json.dumps( + { + "status": "error", + "error": f"Unexpected error: {str(e)}", + } + ) + + +def get_database_schema(database_path: str) -> str: + """ + Get comprehensive schema information for all tables in the database. 
+ + Args: + database_path (str): Full path to the database file + + Returns: + str: JSON string containing complete database schema information + + Example: + >>> result = get_database_schema("/data/company.db") + >>> print(result) + {"status": "success", "database": "company.db", "tables": {...}} + """ + try: + if not database_path: + raise ValueError("Database path is required") + + if not Path(database_path).exists(): + raise FileNotFoundError( + f"Database file not found: {database_path}" + ) + + conn = sqlite3.connect(database_path) + cursor = conn.cursor() + + # Get all tables + cursor.execute( + "SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE '_%'" + ) + tables = cursor.fetchall() + + schema_info = { + "database": Path(database_path).name, + "table_count": len(tables), + "tables": {}, + } + + for table in tables: + table_name = table[0] + + # Get table schema + cursor.execute(f"PRAGMA table_info({table_name})") + columns = cursor.fetchall() + + # Get row count + cursor.execute(f"SELECT COUNT(*) FROM {table_name}") + row_count = cursor.fetchone()[0] + + schema_info["tables"][table_name] = { + "columns": [ + { + "name": col[1], + "type": col[2], + "nullable": not col[3], + "default": col[4], + "primary_key": bool(col[5]), + } + for col in columns + ], + "column_count": len(columns), + "row_count": row_count, + } + + conn.close() + + result = { + "status": "success", + "message": "Database schema retrieved successfully", + "schema": schema_info, + } + + return json.dumps(result, indent=2) + + except (ValueError, FileNotFoundError) as e: + return json.dumps({"status": "error", "error": str(e)}) + except sqlite3.Error as e: + return json.dumps( + {"status": "error", "error": f"SQL error: {str(e)}"} + ) + except Exception as e: + return json.dumps( + { + "status": "error", + "error": f"Unexpected error: {str(e)}", + } + ) + + +# ============================================================================= +# DATABASE CREATION SPECIALIST AGENT +# ============================================================================= +database_creator_agent = Agent( + agent_name="Database-Creator", + agent_description="Specialist agent responsible for creating and initializing databases with proper structure and metadata", + system_prompt="""You are the Database Creator, a specialist agent responsible for database creation and initialization. 
Your expertise includes: + + DATABASE CREATION & SETUP: + - Creating new SQLite databases with proper structure + - Setting up database metadata and tracking systems + - Initializing database directories and file organization + - Ensuring database accessibility and permissions + - Creating database backup and recovery procedures + + DATABASE ARCHITECTURE: + - Designing optimal database structures for different use cases + - Planning database organization and naming conventions + - Setting up database configuration and optimization settings + - Implementing database security and access controls + - Creating database documentation and specifications + + Your responsibilities: + - Create new databases when requested + - Set up proper database structure and metadata + - Ensure database is properly initialized and accessible + - Provide database creation status and information + - Handle database creation errors and provide solutions + + You work with precise technical specifications and always ensure databases are created correctly and efficiently.""", + model_name="claude-sonnet-4-20250514", + max_loops=1, + temperature=0.3, + dynamic_temperature_enabled=True, + tools=[create_database, get_database_schema], +) + +# ============================================================================= +# TABLE MANAGEMENT SPECIALIST AGENT +# ============================================================================= +table_manager_agent = Agent( + agent_name="Table-Manager", + agent_description="Specialist agent for table creation, schema design, and table structure management", + system_prompt="""You are the Table Manager, a specialist agent responsible for table creation, schema design, and table structure management. Your expertise includes: + + TABLE CREATION & DESIGN: + - Creating tables with optimal schema design + - Defining appropriate data types and constraints + - Setting up primary keys, foreign keys, and indexes + - Designing normalized table structures + - Creating tables that support efficient queries and operations + + SCHEMA MANAGEMENT: + - Analyzing schema requirements and designing optimal structures + - Validating schema definitions and data types + - Ensuring schema consistency and integrity + - Managing schema modifications and updates + - Optimizing table structures for performance + + DATA INTEGRITY: + - Implementing proper constraints and validation rules + - Setting up referential integrity between tables + - Ensuring data consistency across table operations + - Managing table relationships and dependencies + - Creating tables that support data quality requirements + + Your responsibilities: + - Create tables with proper schema definitions + - Validate table structures and constraints + - Ensure optimal table design for performance + - Handle table creation errors and provide solutions + - Provide detailed table information and metadata + + You work with precision and always ensure tables are created with optimal structure and performance characteristics.""", + model_name="claude-sonnet-4-20250514", + max_loops=1, + temperature=0.3, + dynamic_temperature_enabled=True, + tools=[create_table, get_database_schema], +) + +# ============================================================================= +# DATA OPERATIONS SPECIALIST AGENT +# ============================================================================= +data_operations_agent = Agent( + agent_name="Data-Operations", + agent_description="Specialist agent for data insertion, updates, and data manipulation operations", + 
system_prompt="""You are the Data Operations specialist, responsible for all data manipulation operations including insertion, updates, and data management. Your expertise includes: + + DATA INSERTION: + - Inserting data with proper validation and formatting + - Handling bulk data insertions efficiently + - Managing data type conversions and formatting + - Ensuring data integrity during insertion operations + - Validating data before insertion to prevent errors + + DATA UPDATES: + - Updating existing data with precision and safety + - Creating targeted update operations with proper WHERE clauses + - Managing bulk updates and data modifications + - Ensuring data consistency during update operations + - Validating update operations to prevent data corruption + + DATA VALIDATION: + - Validating data formats and types before operations + - Ensuring data meets schema requirements and constraints + - Checking for data consistency and integrity + - Managing data transformation and cleaning operations + - Providing detailed feedback on data operation results + + ERROR HANDLING: + - Managing data operation errors gracefully + - Providing clear error messages and solutions + - Ensuring data operations are atomic and safe + - Rolling back operations when necessary + - Maintaining data integrity throughout all operations + + Your responsibilities: + - Execute data insertion operations safely and efficiently + - Perform data updates with proper validation + - Ensure data integrity throughout all operations + - Handle data operation errors and provide solutions + - Provide detailed operation results and statistics + + You work with extreme precision and always prioritize data integrity and safety in all operations.""", + model_name="claude-sonnet-4-20250514", + max_loops=1, + temperature=0.3, + dynamic_temperature_enabled=True, + tools=[insert_data, update_table_data], +) + +# ============================================================================= +# QUERY SPECIALIST AGENT +# ============================================================================= +query_specialist_agent = Agent( + agent_name="Query-Specialist", + agent_description="Expert agent for database querying, data retrieval, and query optimization", + system_prompt="""You are the Query Specialist, an expert agent responsible for database querying, data retrieval, and query optimization. 
Your expertise includes: + + QUERY EXECUTION: + - Executing complex SELECT queries efficiently + - Handling parameterized queries for security + - Managing query results and data formatting + - Ensuring query performance and optimization + - Providing comprehensive query results with metadata + + QUERY OPTIMIZATION: + - Analyzing query performance and optimization opportunities + - Creating efficient queries that minimize resource usage + - Understanding database indexes and query planning + - Optimizing JOIN operations and complex queries + - Managing query timeouts and performance monitoring + + DATA RETRIEVAL: + - Retrieving data with proper formatting and structure + - Handling large result sets efficiently + - Managing data aggregation and summarization + - Creating reports and data analysis queries + - Ensuring data accuracy and completeness in results + + SECURITY & VALIDATION: + - Ensuring queries are safe and secure + - Validating query syntax and parameters + - Preventing SQL injection and security vulnerabilities + - Managing query permissions and access controls + - Ensuring queries follow security best practices + + Your responsibilities: + - Execute database queries safely and efficiently + - Optimize query performance for best results + - Provide comprehensive query results and analysis + - Handle query errors and provide solutions + - Ensure query security and data protection + + You work with expertise in SQL optimization and always ensure queries are secure, efficient, and provide accurate results.""", + model_name="claude-sonnet-4-20250514", + max_loops=1, + temperature=0.3, + dynamic_temperature_enabled=True, + tools=[query_database, get_database_schema], +) + +# ============================================================================= +# DATABASE DIRECTOR AGENT (COORDINATOR) +# ============================================================================= +database_director_agent = Agent( + agent_name="Database-Director", + agent_description="Senior database director who orchestrates comprehensive database operations across all specialized teams", + system_prompt="""You are the Database Director, the senior executive responsible for orchestrating comprehensive database operations and coordinating a team of specialized database experts. 
Your role is to: + + STRATEGIC COORDINATION: + - Analyze complex database tasks and break them down into specialized operations + - Assign tasks to the most appropriate specialist based on their unique expertise + - Ensure comprehensive coverage of all database operations (creation, schema, data, queries) + - Coordinate between specialists to avoid conflicts and ensure data integrity + - Synthesize results from multiple specialists into coherent database solutions + - Ensure all database operations align with user requirements and best practices + + TEAM LEADERSHIP: + - Lead the Database Creator in setting up new databases and infrastructure + - Guide the Table Manager in creating optimal table structures and schemas + - Direct the Data Operations specialist in data insertion and update operations + - Oversee the Query Specialist in data retrieval and analysis operations + - Ensure all team members work collaboratively toward unified database goals + - Provide strategic direction and feedback to optimize team performance + + DATABASE ARCHITECTURE: + - Design comprehensive database solutions that meet user requirements + - Ensure database operations follow best practices and standards + - Plan database workflows that optimize performance and reliability + - Balance immediate operational needs with long-term database health + - Ensure database operations are secure, efficient, and maintainable + - Optimize database operations for scalability and performance + + OPERATION ORCHESTRATION: + - Monitor database operations across all specialists and activities + - Analyze results to identify optimization opportunities and improvements + - Ensure database operations deliver reliable and accurate results + - Provide strategic recommendations based on operation outcomes + - Coordinate complex multi-step database operations across specialists + - Ensure continuous improvement and optimization in database management + + Your expertise includes: + - Database architecture and design strategy + - Team leadership and cross-functional coordination + - Database performance analysis and optimization + - Strategic planning and requirement analysis + - Operation workflow management and optimization + - Database security and best practices implementation + + You deliver comprehensive database solutions that leverage the full expertise of your specialized team, ensuring all database operations work together to provide reliable, efficient, and secure data management.""", + model_name="claude-sonnet-4-20250514", + max_loops=1, + temperature=0.5, + dynamic_temperature_enabled=True, +) + +# ============================================================================= +# HIERARCHICAL DATABASE SWARM +# ============================================================================= +# Create list of specialized database agents +database_specialists = [ + database_creator_agent, + table_manager_agent, + data_operations_agent, + query_specialist_agent, +] + +# Initialize the hierarchical database swarm +smart_database_swarm = HierarchicalSwarm( + name="Smart-Database-Swarm", + description="A comprehensive database management system with specialized agents for creation, schema management, data operations, and querying, coordinated by a database director", + director_model_name="gpt-4.1", + agents=database_specialists, + max_loops=1, + verbose=True, +) + +# ============================================================================= +# EXAMPLE USAGE AND DEMONSTRATIONS +# 
============================================================================= +if __name__ == "__main__": + # Configure logging + logger.info("Starting Smart Database Swarm demonstration") + + # Example 1: Create a complete e-commerce database system + print("=" * 80) + print("SMART DATABASE SWARM - E-COMMERCE SYSTEM EXAMPLE") + print("=" * 80) + + task1 = """Create a comprehensive e-commerce database system with the following requirements: + + 1. Create a database called 'ecommerce_db' + 2. Create tables for: + - customers (id, name, email, phone, address, created_at) + - products (id, name, description, price, category, stock_quantity, created_at) + - orders (id, customer_id, order_date, total_amount, status) + - order_items (id, order_id, product_id, quantity, unit_price) + + 3. Insert sample data: + - Add 3 customers + - Add 5 products in different categories + - Create 2 orders with multiple items + + 4. Query the database to: + - Show all customers with their order history + - Display products by category with stock levels + - Calculate total sales by product + + Ensure all operations are executed properly and provide comprehensive results.""" + + result1 = smart_database_swarm.run(task=task1) + print("\nE-COMMERCE DATABASE RESULT:") + print(result1) + + # print("\n" + "=" * 80) + # print("SMART DATABASE SWARM - EMPLOYEE MANAGEMENT SYSTEM") + # print("=" * 80) + + # # Example 2: Employee management system + # task2 = """Create an employee management database system: + + # 1. Create database 'company_hr' + # 2. Create tables for: + # - departments (id, name, budget, manager_id) + # - employees (id, name, email, department_id, position, salary, hire_date) + # - projects (id, name, description, start_date, end_date, budget) + # - employee_projects (employee_id, project_id, role, hours_allocated) + + # 3. Add sample data for departments, employees, and projects + # 4. Query for: + # - Employee count by department + # - Average salary by position + # - Projects with their assigned employees + # - Department budgets vs project allocations + + # Coordinate the team to build this system efficiently.""" + + # result2 = smart_database_swarm.run(task=task2) + # print("\nEMPLOYEE MANAGEMENT RESULT:") + # print(result2) + + # print("\n" + "=" * 80) + # print("SMART DATABASE SWARM - DATABASE ANALYSIS") + # print("=" * 80) + + # # Example 3: Database analysis and optimization + # task3 = """Analyze and optimize the existing databases: + + # 1. Get schema information for all created databases + # 2. Analyze table structures and relationships + # 3. Suggest optimizations for: + # - Index creation for better query performance + # - Data normalization improvements + # - Constraint additions for data integrity + + # 4. Update data in existing tables: + # - Increase product prices by 10% for electronics category + # - Update employee salaries based on performance criteria + # - Modify order statuses for completed orders + + # 5. 
Create comprehensive reports showing: + # - Database statistics and health metrics + # - Data distribution and patterns + # - Performance optimization recommendations + + # Coordinate all specialists to provide a complete database analysis.""" + + # result3 = smart_database_swarm.run(task=task3) + # print("\nDATABASE ANALYSIS RESULT:") + # print(result3) + + # logger.info("Smart Database Swarm demonstration completed successfully") +``` +- Run the file with `smart_database_swarm.py` -------------------------------------------------- -# File: examples\templates.md +# File: examples/templates.md # Templates & Applications Documentation @@ -2528,7 +3988,7 @@ Join our community of agent engineers and researchers for technical support, cut -------------------------------------------------- -# File: governance\bounty_program.md +# File: governance/bounty_program.md # Swarms Bounty Program @@ -2608,7 +4068,7 @@ Join us in building the future of multi-agent collaboration and AI automation. W -------------------------------------------------- -# File: governance\main.md +# File: governance/main.md # 🔗 Links & Resources @@ -2691,7 +4151,7 @@ Welcome to the Swarms ecosystem. Click any tile below to explore our products, c -------------------------------------------------- -# File: guides\agent_evals.md +# File: guides/agent_evals.md ### Understanding Agent Evaluation Mechanisms @@ -2950,7 +4410,7 @@ Agent evaluation mechanisms are vital for ensuring the reliability, efficiency, -------------------------------------------------- -# File: guides\financial_analysis_swarm_mm.md +# File: guides/financial_analysis_swarm_mm.md # Building a Multi-Agent System for Real-Time Financial Analysis: A Comprehensive Tutorial @@ -3436,7 +4896,7 @@ By leveraging the power of multi-agent AI systems, you're well-equipped to navig -------------------------------------------------- -# File: guides\financial_data_api.md +# File: guides/financial_data_api.md # Analyzing Financial Data with AI Agents using Swarms Framework @@ -4192,7 +5652,7 @@ As the field of AI in finance continues to evolve, we can expect even more sophi -------------------------------------------------- -# File: guides\healthcare_blog.md +# File: guides/healthcare_blog.md # Unlocking Efficiency and Cost Savings in Healthcare: How Swarms of LLM Agents Can Revolutionize Medical Operations and Save Millions @@ -4472,7 +5932,7 @@ By adopting swarms of LLM agents, healthcare organizations can streamline operat -------------------------------------------------- -# File: guides\pricing.md +# File: guides/pricing.md # Comparing LLM Provider Pricing: A Guide for Enterprises @@ -5442,7 +6902,7 @@ Want to get in touch with the Swarms team? 
Open an issue on [GitHub](https://git -------------------------------------------------- -# File: protocol\overview.md +# File: protocol/overview.md # Swarms Protocol Overview & Architecture @@ -5852,32 +7312,21 @@ For more on the philosophy and architecture, see [Development Philosophy & Princ ## Further Reading & References -- [Swarms Docs Home](https://docs.swarms.world/en/latest/) - -- [Quickstart for Agents](https://docs.swarms.world/en/latest/swarms/agents/) - -- [Agent API Reference](https://docs.swarms.world/en/latest/swarms/structs/agent/) - -- [Tools Overview](https://docs.swarms.world/en/latest/swarms_tools/overview/) - -- [BaseTool Reference](https://docs.swarms.world/en/latest/swarms/tools/base_tool/) - -- [Reasoning Agents Overview](https://docs.swarms.world/en/latest/swarms/agents/reasoning_agents_overview/) - -- [Multi-Agent Architectures Overview](https://docs.swarms.world/en/latest/swarms/concept/swarm_architectures/) - -- [Examples Overview](https://docs.swarms.world/en/latest/examples/index/) - -- [CLI Documentation](https://docs.swarms.world/en/latest/swarms/cli/main/) - -- [Prompts Management](https://docs.swarms.world/en/latest/swarms/prompts/main/) - -- [Development Philosophy & Principles](https://docs.swarms.world/en/latest/swarms/concept/philosophy/) - -- [Understanding Swarms Architecture](https://docs.swarms.world/en/latest/swarms/concept/framework_architecture/) - -- [SIP Guidelines and Template](https://docs.swarms.world/en/latest/protocol/sip/) - +| Resource Name | Link | Description | +|-------------------------------------- |----------------------------------------------------------------------------------------|--------------------------------------------------| +| Swarms Docs Home | [Swarms Docs Home](https://docs.swarms.world/en/latest/) | Main documentation homepage | +| Quickstart for Agents | [Quickstart for Agents](https://docs.swarms.world/en/latest/swarms/agents/) | Getting started with Swarms agents | +| Agent API Reference | [Agent API Reference](https://docs.swarms.world/en/latest/swarms/structs/agent/) | API reference for Agent class | +| Tools Overview | [Tools Overview](https://docs.swarms.world/en/latest/swarms_tools/overview/) | Overview of available tools | +| BaseTool Reference | [BaseTool Reference](https://docs.swarms.world/en/latest/swarms/tools/base_tool/) | Reference for the BaseTool class | +| Reasoning Agents Overview | [Reasoning Agents Overview](https://docs.swarms.world/en/latest/swarms/agents/reasoning_agents_overview/) | Overview of reasoning agents | +| Multi-Agent Architectures Overview | [Multi-Agent Architectures Overview](https://docs.swarms.world/en/latest/swarms/concept/swarm_architectures/) | Multi-agent system architectures | +| Examples Overview | [Examples Overview](https://docs.swarms.world/en/latest/examples/index/) | Example projects and use cases | +| CLI Documentation | [CLI Documentation](https://docs.swarms.world/en/latest/swarms/cli/main/) | Command-line interface documentation | +| Prompts Management | [Prompts Management](https://docs.swarms.world/en/latest/swarms/prompts/main/) | Managing and customizing prompts | +| Development Philosophy & Principles | [Development Philosophy & Principles](https://docs.swarms.world/en/latest/swarms/concept/philosophy/) | Framework philosophy and guiding principles | +| Understanding Swarms Architecture | [Understanding Swarms Architecture](https://docs.swarms.world/en/latest/swarms/concept/framework_architecture/) | In-depth look at Swarms architecture | +| SIP 
Guidelines and Template | [SIP Guidelines and Template](https://docs.swarms.world/en/latest/protocol/sip/) | Swarms Improvement Proposal process and template | # Conclusion @@ -5887,7 +7336,7 @@ The Swarms protocol provides a robust foundation for building intelligent, colla -------------------------------------------------- -# File: protocol\sip.md +# File: protocol/sip.md # Swarms Improvement Proposal (SIP) Guidelines @@ -6444,7 +7893,7 @@ for message in conversation_history: -------------------------------------------------- -# File: swarms\agents\abstractagent.md +# File: swarms/agents/abstractagent.md # swarms.agents @@ -6573,7 +8022,7 @@ For further exploration and understanding of AI agents and agent communication, -------------------------------------------------- -# File: swarms\agents\agent_judge.md +# File: swarms/agents/agent_judge.md # AgentJudge @@ -6829,7 +8278,7 @@ for i, task_evals in enumerate(evaluations): -------------------------------------------------- -# File: swarms\agents\consistency_agent.md +# File: swarms/agents/consistency_agent.md # Consistency Agent Documentation @@ -7065,7 +8514,7 @@ The agent supports various output types: -------------------------------------------------- -# File: swarms\agents\create_agents_yaml.md +# File: swarms/agents/create_agents_yaml.md # Building Agents from a YAML File @@ -7191,24 +8640,13 @@ load_dotenv() yaml_file = "agents_multi_agent.yaml" -# Get the OpenAI API key from the environment variable -api_key = os.getenv("GROQ_API_KEY") - -# Model -model = OpenAIChat( - openai_api_base="https://api.groq.com/openai/v1", - openai_api_key=api_key, - model_name="llama-3.1-70b-versatile", - temperature=0.1, -) - try: - # Create agents and run tasks (using 'both' to return agents and task results) - task_results = create_agents_from_yaml( - model=model, yaml_file=yaml_file, return_type="run_swarm" - ) + # Create agents and run tasks (using 'both' to return agents and task results) + task_results = create_agents_from_yaml( + model=model, yaml_file=yaml_file, return_type="run_swarm" + ) - logger.info(f"Results from agents: {task_results}") + logger.info(f"Results from agents: {task_results}") except Exception as e: logger.error(f"An error occurred: {e}") @@ -7390,7 +8828,7 @@ The `create_agents_from_yaml` function provides a flexible and powerful way to d -------------------------------------------------- -# File: swarms\agents\external_party_agents.md +# File: swarms/agents/external_party_agents.md @@ -7773,7 +9211,7 @@ For more examples and use cases, please refer to the official Swarms documentati -------------------------------------------------- -# File: swarms\agents\gkp_agent.md +# File: swarms/agents/gkp_agent.md # Generated Knowledge Prompting (GKP) Agent @@ -7952,22 +9390,17 @@ The agent includes robust error handling for: -------------------------------------------------- -# File: swarms\agents\index.md +# File: swarms/agents/index.md # Agents Introduction -The Agent class is the core component of the Swarms framework, designed to create intelligent, autonomous AI agents capable of handling complex tasks through multi-modal processing, tool integration, and structured outputs. This comprehensive guide covers all aspects of the Agent class, from basic setup to advanced features. 
-## Table of Contents +An agent in swarms is basically 4 elements added together: + +`agent = LLM + Tools + RAG + Loop` + +The Agent class is the core component of the Swarms framework, designed to create intelligent, autonomous AI agents capable of handling complex tasks through multi-modal processing, tool integration, and structured outputs. This comprehensive guide covers all aspects of the Agent class, from basic setup to advanced features. -1. [Prerequisites & Installation](#prerequisites--installation) -2. [Basic Agent Configuration](#basic-agent-configuration) -3. [Multi-Modal Capabilities](#multi-modal-capabilities) -4. [Tool Integration](#tool-integration) -5. [Structured Outputs](#structured-outputs) -6. [Advanced Features](#advanced-features) -7. [Best Practices](#best-practices) -8. [Complete Examples](#complete-examples) ## Prerequisites & Installation @@ -8448,57 +9881,6 @@ final_only_agent = Agent( ) ``` -### Safety and Content Filtering - -```python -from swarms import Agent - -# Agent with enhanced safety features -safe_agent = Agent( - agent_name="Safe-Agent", - agent_description="Agent with comprehensive safety measures", - system_prompt="You are a helpful, harmless, and honest AI assistant.", - model_name="gpt-4o-mini", - safety_prompt_on=True, # Enable safety prompts - max_loops=1, - temperature=0.3 # Lower temperature for more consistent, safe responses -) -``` - -## Best Practices - -### Error Handling and Robustness - -```python -import logging -from swarms import Agent - -# Configure logging -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - -def robust_agent_execution(agent, task, max_retries=3): - """Execute agent with retry logic and error handling.""" - for attempt in range(max_retries): - try: - response = agent.run(task) - logger.info(f"Agent execution successful on attempt {attempt + 1}") - return response - except Exception as e: - logger.error(f"Attempt {attempt + 1} failed: {str(e)}") - if attempt == max_retries - 1: - raise - time.sleep(2 ** attempt) # Exponential backoff - - return None - -# Example usage -try: - result = robust_agent_execution(agent, "Analyze market trends") - print(result) -except Exception as e: - print(f"Agent execution failed: {e}") -``` ### Performance Optimization @@ -8825,15 +10207,13 @@ If you encounter issues or need assistance: We welcome contributions! Here's how to get involved: -- **Report Bugs**: Help us improve by reporting issues - -- **Suggest Features**: Share your ideas for new capabilities - -- **Submit Code**: Contribute improvements and new features - -- **Improve Documentation**: Help make our docs better - -- **Share Examples**: Show how you're using Swarms in your projects +| Contribution Type | Description | +|-------------------------|--------------------------------------------------| +| **Report Bugs** | Help us improve by reporting issues | +| **Suggest Features** | Share your ideas for new capabilities | +| **Submit Code** | Contribute improvements and new features | +| **Improve Documentation** | Help make our docs better | +| **Share Examples** | Show how you're using Swarms in your projects | --- @@ -8841,7 +10221,7 @@ We welcome contributions! 
Here's how to get involved: -------------------------------------------------- -# File: swarms\agents\iterative_agent.md +# File: swarms/agents/iterative_agent.md # Iterative Reflective Expansion (IRE) Algorithm Documentation @@ -8929,7 +10309,7 @@ The Iterative Reflective Expansion (IRE) Algorithm is a powerful tool for solvin -------------------------------------------------- -# File: swarms\agents\message.md +# File: swarms/agents/message.md # The Module/Class Name: Message @@ -9047,7 +10427,7 @@ For further information on the `Message` class and its usage, refer to the offic -------------------------------------------------- -# File: swarms\agents\new_agent.md +# File: swarms/agents/new_agent.md # How to Create Good Agents @@ -9265,7 +10645,7 @@ By following these guidelines, you can create powerful and flexible agents tailo -------------------------------------------------- -# File: swarms\agents\openai_assistant.md +# File: swarms/agents/openai_assistant.md # OpenAI Assistant @@ -9406,7 +10786,7 @@ The assistant implements robust error handling: -------------------------------------------------- -# File: swarms\agents\reasoning_agent_router.md +# File: swarms/agents/reasoning_agent_router.md # ReasoningAgentRouter @@ -9841,7 +11221,7 @@ graph TD -------------------------------------------------- -# File: swarms\agents\reasoning_agents_overview.md +# File: swarms/agents/reasoning_agents_overview.md # Reasoning Agents Overview @@ -10272,7 +11652,7 @@ Reasoning agents represent a significant advancement in enterprise agent capabil -------------------------------------------------- -# File: swarms\agents\reasoning_duo.md +# File: swarms/agents/reasoning_duo.md # ReasoningDuo @@ -10440,7 +11820,7 @@ For a runnable demonstration, see the [reasoning_duo_batched.py](https://github. -------------------------------------------------- -# File: swarms\agents\reflexion_agent.md +# File: swarms/agents/reflexion_agent.md # ReflexionAgent @@ -10638,7 +12018,7 @@ The ReflexionAgent includes a sophisticated memory system (`ReflexionMemory`) th -------------------------------------------------- -# File: swarms\agents\structured_outputs.md +# File: swarms/agents/structured_outputs.md # :material-code-json: Agentic Structured Outputs @@ -10977,7 +12357,7 @@ parsed_output = str_to_dict(response) -------------------------------------------------- -# File: swarms\agents\third_party.md +# File: swarms/agents/third_party.md # Swarms Framework: Integrating and Customizing Agent Libraries @@ -11607,7 +12987,7 @@ By embracing the power of the swarms framework and the ecosystem of agent librar -------------------------------------------------- -# File: swarms\agents\tool_agent.md +# File: swarms/agents/tool_agent.md # ToolAgent Documentation @@ -11916,7 +13296,7 @@ This documentation provides a comprehensive guide to the `ToolAgent` class, incl -------------------------------------------------- -# File: swarms\artifacts\artifact.md +# File: swarms/artifacts/artifact.md # `Artifact` @@ -12165,7 +13545,689 @@ print(new_artifact.get_metrics()) -------------------------------------------------- -# File: swarms\cli\cli_guide.md +# File: swarms/cli/cli_examples.md + +# Swarms CLI Examples + +This document provides comprehensive examples of how to use the Swarms CLI for various scenarios. Each example includes the complete command, expected output, and explanation. 
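+
+Most of the examples below assume that a model provider API key is already exported in your shell (see the [Troubleshooting Examples](#troubleshooting-examples) section for the error shown when it is missing). A minimal setup sketch, using the key name referenced later in this document, looks like this:
+
+```bash
+# Export a model provider key before running the examples
+# (OPENAI_API_KEY is the variable used in the troubleshooting and security sections below)
+export OPENAI_API_KEY="your-api-key-here"
+```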
+ +## Table of Contents + +- [Basic Usage Examples](#basic-usage-examples) +- [Agent Management Examples](#agent-management-examples) +- [Multi-Agent Workflow Examples](#multi-agent-workflow-examples) +- [Configuration Examples](#configuration-examples) +- [Advanced Usage Examples](#advanced-usage-examples) +- [Troubleshooting Examples](#troubleshooting-examples) + +## Basic Usage Examples + +### 1. Getting Started + +#### Check CLI Installation + +```bash +swarms help +``` + +**Expected Output:** +``` + _________ + / _____/_ _ _______ _______ _____ ______ + \_____ \\ \/ \/ /\__ \\_ __ \/ \ / ___/ + / \\ / / __ \| | \/ Y Y \\___ \ +/_______ / \/\_/ (____ /__| |__|_| /____ > + \/ \/ \/ \/ + +Available Commands +┌─────────────────┬─────────────────────────────────────────────────────────────┐ +│ Command │ Description │ +├─────────────────┼─────────────────────────────────────────────────────────────┤ +│ onboarding │ Start the interactive onboarding process │ +│ help │ Display this help message │ +│ get-api-key │ Retrieve your API key from the platform │ +│ check-login │ Verify login status and initialize cache │ +│ run-agents │ Execute agents from your YAML configuration │ +│ load-markdown │ Load agents from markdown files with YAML frontmatter │ +│ agent │ Create and run a custom agent with specified parameters │ +│ auto-upgrade │ Update Swarms to the latest version │ +│ book-call │ Schedule a strategy session with our team │ +│ autoswarm │ Generate and execute an autonomous swarm │ +└─────────────────┴─────────────────────────────────────────────────────────────┘ +``` + +#### Start Onboarding Process +```bash +swarms onboarding +``` + +This will start an interactive setup process to configure your environment. + +#### Get API Key + +```bash +swarms get-api-key +``` + +**Expected Output:** +``` +✓ API key page opened in your browser +``` + +#### Check Login Status + +```bash +swarms check-login +``` + +**Expected Output:** +``` +✓ Authentication verified +``` + +#### Run Environment Setup Check + +```bash +swarms setup-check +``` + +**Expected Output:** +``` +🔍 Running Swarms Environment Setup Check + +┌─────────────────────────────────────────────────────────────────────────────┐ +│ Environment Check Results │ +├─────────┬─────────────────────────┬─────────────────────────────────────────┤ +│ Status │ Check │ Details │ +├─────────┼─────────────────────────┼─────────────────────────────────────────┤ +│ ✓ │ Python Version │ Python 3.11.5 │ +│ ✓ │ Swarms Version │ Current version: 8.1.1 │ +│ ✓ │ API Keys │ API keys found: OPENAI_API_KEY │ +│ ✓ │ Dependencies │ All required dependencies available │ +│ ✓ │ Environment File │ .env file exists with 1 API key(s) │ +│ ⚠ │ Workspace Directory │ WORKSPACE_DIR environment variable is not set │ +└─────────┴─────────────────────────┴─────────────────────────────────────────┘ + +┌─────────────────────────────────────────────────────────────────────────────┐ +│ Setup Check Complete │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ ⚠️ Some checks failed. Please review the issues above. │ +└─────────────────────────────────────────────────────────────────────────────┘ + +💡 Recommendations: + 1. Set WORKSPACE_DIR environment variable: export WORKSPACE_DIR=/path/to/your/workspace + +Run 'swarms setup-check' again after making changes to verify. +``` + +## Agent Management Examples + +### 2. 
Creating Custom Agents + +#### Basic Research Agent + +```bash +swarms agent \ + --name "Research Assistant" \ + --description "AI research specialist for academic papers" \ + --system-prompt "You are an expert research assistant specializing in academic research. You help users find, analyze, and synthesize information from various sources. Always provide well-structured, evidence-based responses." \ + --task "Research the latest developments in quantum computing and provide a summary of key breakthroughs in the last 2 years" \ + --model-name "gpt-4" \ + --temperature 0.1 \ + --max-loops 3 +``` + +**Expected Output:** +``` +Creating custom agent: Research Assistant +[✓] Agent 'Research Assistant' completed the task successfully! + +┌─────────────────────────────────────────────────────────────────────────────┐ +│ Agent Execution Results │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ Agent Name: Research Assistant │ +│ Model: gpt-4 │ +│ Task: Research the latest developments in quantum computing... │ +│ Result: │ +│ Recent breakthroughs in quantum computing include: │ +│ 1. Google's 53-qubit Sycamore processor achieving quantum supremacy │ +│ 2. IBM's 433-qubit Osprey processor... │ +│ ... │ +└─────────────────────────────────────────────────────────────────────────────┘ +``` + +#### Code Review Agent + +```bash +swarms agent \ + --name "Code Reviewer" \ + --description "Expert code review assistant with security focus" \ + --system-prompt "You are a senior software engineer specializing in code review, security analysis, and best practices. Review code for bugs, security vulnerabilities, performance issues, and adherence to coding standards." \ + --task "Review this Python code for security vulnerabilities and suggest improvements: def process_user_input(data): return eval(data)" \ + --model-name "gpt-4" \ + --temperature 0.05 \ + --max-loops 2 \ + --verbose +``` + +#### Financial Analysis Agent + +```bash +swarms agent \ + --name "Financial Analyst" \ + --description "Expert financial analyst for market research and investment advice" \ + --system-prompt "You are a certified financial analyst with expertise in market analysis, investment strategies, and risk assessment. Provide data-driven insights and recommendations based on current market conditions." \ + --task "Analyze the current state of the technology sector and provide investment recommendations for the next quarter" \ + --model-name "gpt-4" \ + --temperature 0.2 \ + --max-loops 2 \ + --output-type "json" +``` + +### 3. Advanced Agent Configuration + +#### Agent with Dynamic Features + +```bash +swarms agent \ + --name "Adaptive Writer" \ + --description "Content writer with dynamic temperature and context adjustment" \ + --system-prompt "You are a professional content writer who adapts writing style based on audience and context. You can write in various tones from formal to casual, and adjust complexity based on the target audience." 
\ + --task "Write a blog post about artificial intelligence for a general audience, explaining complex concepts in simple terms" \ + --model-name "gpt-4" \ + --dynamic-temperature-enabled \ + --dynamic-context-window \ + --context-length 8000 \ + --retry-attempts 3 \ + --return-step-meta \ + --autosave \ + --saved-state-path "./agent_states/" +``` + +#### Agent with MCP Integration + +```bash +swarms agent \ + --name "MCP Agent" \ + --description "Agent with Model Context Protocol integration" \ + --system-prompt "You are a agent with access to external tools and data sources through MCP. Use these capabilities to provide comprehensive and up-to-date information." \ + --task "Search for recent news about climate change and summarize the key findings" \ + --model-name "gpt-4" \ + --mcp-url "https://api.example.com/mcp" \ + --temperature 0.1 \ + --max-loops 5 +``` + +## Multi-Agent Workflow Examples + +### 4. Running Agents from YAML Configuration + +#### Create `research_team.yaml` + +```yaml +agents: + - name: "Data Collector" + description: "Specialist in gathering and organizing data from various sources" + model_name: "gpt-4" + system_prompt: "You are a data collection specialist. Your role is to gather relevant information from multiple sources and organize it in a structured format." + temperature: 0.1 + max_loops: 3 + + - name: "Data Analyzer" + description: "Expert in analyzing and interpreting complex datasets" + model_name: "gpt-4" + system_prompt: "You are a data analyst. Take the collected data and perform comprehensive analysis to identify patterns, trends, and insights." + temperature: 0.2 + max_loops: 4 + + - name: "Report Writer" + description: "Professional writer who creates clear, compelling reports" + model_name: "gpt-4" + system_prompt: "You are a report writer. Take the analyzed data and create a comprehensive, well-structured report that communicates findings clearly." + temperature: 0.3 + max_loops: 3 +``` + +#### Execute the Team + +```bash +swarms run-agents --yaml-file research_team.yaml +``` + +**Expected Output:** +``` +Loading agents from research_team.yaml... +[✓] Agents completed their tasks successfully! + +Results: +Data Collector: [Collected data from 15 sources...] +Data Analyzer: [Identified 3 key trends and 5 significant patterns...] +Report Writer: [Generated comprehensive 25-page report...] +``` + +### 5. Loading Agents from Markdown + +#### Create `agents/researcher.md` + +```markdown +--- +name: Market Researcher +description: Expert in market research and competitive analysis +model_name: gpt-4 +temperature: 0.1 +max_loops: 3 +--- + +You are an expert market researcher with 15+ years of experience in competitive analysis, market sizing, and trend identification. You specialize in technology markets and have deep knowledge of consumer behavior, pricing strategies, and market dynamics. + +Your approach includes: +- Systematic data collection from multiple sources +- Quantitative and qualitative analysis +- Competitive landscape mapping +- Market opportunity identification +- Risk assessment and mitigation strategies +``` + +#### Create `agents/analyst.md` + +```markdown +--- +name: Business Analyst +description: Strategic business analyst focusing on growth opportunities +model_name: gpt-4 +temperature: 0.2 +max_loops: 4 +--- + +You are a senior business analyst specializing in strategic planning and growth strategy. You excel at identifying market opportunities, analyzing competitive advantages, and developing actionable business recommendations. 
+ +Your expertise covers: +- Market opportunity analysis +- Competitive positioning +- Business model innovation +- Risk assessment +- Strategic planning frameworks +``` + +#### Load and Use Agents + +```bash +swarms load-markdown --markdown-path ./agents/ --concurrent +``` + +**Expected Output:** +``` +Loading agents from markdown: ./agents/ +✓ Successfully loaded 2 agents! + +┌─────────────────────────────────────────────────────────────────────────────┐ +│ Loaded Agents │ +├─────────────────┬──────────────┬───────────────────────────────────────────┤ +│ Name │ Model │ Description │ +├─────────────────┼──────────────┼───────────────────────────────────────────┤ +│ Market Researcher│ gpt-4 │ Expert in market research and competitive │ +│ │ │ analysis │ +├─────────────────┼──────────────┼───────────────────────────────────────────┤ +│ Business Analyst│ gpt-4 │ Strategic business analyst focusing on │ +│ │ │ growth opportunities │ +└─────────────────┴──────────────┴───────────────────────────────────────────┘ + +Ready to use 2 agents! +You can now use these agents in your code or run them interactively. +``` + +## Configuration Examples + +### 6. YAML Configuration Templates + +#### Simple Agent Configuration + +```yaml +# simple_agent.yaml +agents: + - name: "Simple Assistant" + description: "Basic AI assistant for general tasks" + model_name: "gpt-3.5-turbo" + system_prompt: "You are a helpful AI assistant." + temperature: 0.7 + max_loops: 1 +``` + +#### Advanced Multi-Agent Configuration + +```yaml +# advanced_team.yaml +agents: + - name: "Project Manager" + description: "Coordinates team activities and ensures project success" + model_name: "gpt-4" + system_prompt: "You are a senior project manager with expertise in agile methodologies, risk management, and team coordination." + temperature: 0.1 + max_loops: 5 + auto_generate_prompt: true + dynamic_temperature_enabled: true + + - name: "Technical Lead" + description: "Provides technical guidance and architecture decisions" + model_name: "gpt-4" + system_prompt: "You are a technical lead with deep expertise in software architecture, system design, and technical decision-making." + temperature: 0.2 + max_loops: 4 + context_length: 12000 + retry_attempts: 3 + + - name: "Quality Assurance" + description: "Ensures quality standards and testing coverage" + model_name: "gpt-4" + system_prompt: "You are a QA specialist focused on quality assurance, testing strategies, and process improvement." + temperature: 0.1 + max_loops: 3 + return_step_meta: true + dashboard: true +``` + +### 7. Markdown Configuration Templates + +#### Research Agent Template + +```markdown +--- +name: Research Specialist +description: Academic research and literature review expert +model_name: gpt-4 +temperature: 0.1 +max_loops: 5 +context_length: 16000 +auto_generate_prompt: true +--- + +You are a research specialist with expertise in academic research methodologies, literature review, and scholarly writing. You excel at: + +- Systematic literature reviews +- Research methodology design +- Data analysis and interpretation +- Academic writing and citation +- Research gap identification + +Always provide evidence-based responses and cite relevant sources when possible. 
+``` + +#### Creative Writing Agent Template + +```markdown +--- +name: Creative Writer +description: Professional creative writer and storyteller +model_name: gpt-4 +temperature: 0.8 +max_loops: 3 +dynamic_temperature_enabled: true +output_type: markdown +--- + +You are a creative writer with a passion for storytelling, character development, and engaging narratives. You specialize in: + +- Fiction writing across multiple genres +- Character development and dialogue +- Plot structure and pacing +- Creative problem-solving +- Engaging opening hooks and satisfying conclusions + +Your writing style is adaptable, engaging, and always focused on creating memorable experiences for readers. +``` + +## Advanced Usage Examples + +### 8. Autonomous Swarm Generation + +#### Simple Task +```bash +swarms autoswarm \ + --task "Create a weekly meal plan for a family of 4 with dietary restrictions" \ + --model "gpt-4" +``` + +#### Complex Research Task +```bash +swarms autoswarm \ + --task "Conduct a comprehensive analysis of the impact of artificial intelligence on job markets, including historical trends, current state, and future projections. Include case studies from different industries and recommendations for workforce adaptation." \ + --model "gpt-4" +``` + +### 9. Integration Examples + +#### CI/CD Pipeline Integration +```yaml +# .github/workflows/swarms-test.yml +name: Swarms Agent Testing +on: [push, pull_request] + +jobs: + test-agents: + runs-on: ubuntu-latest + steps: + - uses: actions/checkout@v3 + - name: Set up Python + uses: actions/setup-python@v4 + with: + python-version: '3.9' + - name: Install dependencies + run: | + pip install swarms + - name: Run Swarms Agents + run: | + swarms run-agents --yaml-file ci_agents.yaml + env: + OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }} +``` + +#### Shell Script Integration +```bash +#!/bin/bash +# run_daily_analysis.sh + +echo "Starting daily market analysis..." + +# Run market research agent +swarms agent \ + --name "Daily Market Analyzer" \ + --description "Daily market analysis and reporting" \ + --system-prompt "You are a market analyst providing daily market insights." \ + --task "Analyze today's market movements and provide key insights" \ + --model-name "gpt-4" \ + --temperature 0.1 + +# Run risk assessment agent +swarms agent \ + --name "Risk Assessor" \ + --description "Risk assessment and mitigation specialist" \ + --system-prompt "You are a risk management expert." \ + --task "Assess current market risks and suggest mitigation strategies" \ + --model-name "gpt-4" \ + --temperature 0.2 + +echo "Daily analysis complete!" +``` + +## Troubleshooting Examples + +### 10. Common Error Scenarios + +#### Missing API Key +```bash +swarms agent \ + --name "Test Agent" \ + --description "Test" \ + --system-prompt "Test" \ + --task "Test" +``` + +**Expected Error:** +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ Error │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ Failed to create or run agent: No API key found │ +└─────────────────────────────────────────────────────────────────────────────┘ + +Please check: +1. Your API keys are set correctly +2. The model name is valid +3. All required parameters are provided +4. 
Your system prompt is properly formatted +``` + +**Resolution:** +```bash +export OPENAI_API_KEY="your-api-key-here" +``` + +#### Invalid YAML Configuration +```bash +swarms run-agents --yaml-file invalid.yaml +``` + +**Expected Error:** +``` +┌─────────────────────────────────────────────────────────────────────────────┘ +│ Configuration Error │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ Error parsing YAML: Invalid YAML syntax │ +└─────────────────────────────────────────────────────────────────────────────┘ + +Please check your agents.yaml file format. +``` + +#### File Not Found +```bash +swarms load-markdown --markdown-path ./nonexistent/ +``` + +**Expected Error:** +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ File Error │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ Markdown file/directory not found: ./nonexistent/ │ +└─────────────────────────────────────────────────────────────────────────────┘ + +Please make sure the path exists and you're in the correct directory. +``` + +### 11. Debug Mode Usage + +#### Enable Verbose Output +```bash +swarms agent \ + --name "Debug Agent" \ + --description "Agent for debugging" \ + --system-prompt "You are a debugging assistant." \ + --task "Help debug this issue" \ + --model-name "gpt-4" \ + --verbose +``` + +This will provide detailed output including: +- Step-by-step execution details +- API call information +- Internal state changes +- Performance metrics + +## Environment Setup + +### 12. Environment Verification + +The `setup-check` command is essential for ensuring your environment is properly configured: + +```bash +# Run comprehensive environment check +swarms setup-check +``` + +This command checks: +- Python version compatibility (3.10+) +- Swarms package version and updates +- API key configuration +- Required dependencies +- Environment file setup +- Workspace directory configuration + +**Use Cases:** +- **Before starting a new project**: Verify all requirements are met +- **After environment changes**: Confirm configuration updates +- **Troubleshooting**: Identify missing dependencies or configuration issues +- **Team onboarding**: Ensure consistent environment setup across team members + +## Best Practices + +### 13. Performance Optimization + +#### Use Concurrent Processing +```bash +# For multiple markdown files +swarms load-markdown \ + --markdown-path ./large_agent_directory/ \ + --concurrent +``` + +#### Optimize Model Selection +```bash +# For simple tasks +--model-name "gpt-3.5-turbo" --temperature 0.1 + +# For complex reasoning +--model-name "gpt-4" --temperature 0.1 --max-loops 5 +``` + +#### Context Length Management +```bash +# For long documents +--context-length 16000 --dynamic-context-window + +# For concise responses +--context-length 4000 --max-loops 2 +``` + +### 14. Security Considerations + +#### Environment Variable Usage +```bash +# Secure API key management +export OPENAI_API_KEY="your-secure-key" +export ANTHROPIC_API_KEY="your-secure-key" + +# Use in CLI +swarms agent [options] +``` + +#### File Permissions +```bash +# Secure configuration files +chmod 600 agents.yaml +chmod 600 .env +``` + +## Summary + +The Swarms CLI provides a powerful and flexible interface for managing AI agents and multi-agent workflows. 
These examples demonstrate: + +| Feature | Description | +|------------------------|---------------------------------------------------------| +| **Basic Usage** | Getting started with the CLI | +| **Agent Management** | Creating and configuring custom agents | +| **Multi-Agent Workflows** | Coordinating multiple agents | +| **Configuration** | YAML and markdown configuration formats | +| **Environment Setup** | Environment verification and setup checks | +| **Advanced Features** | Dynamic configuration and MCP integration | +| **Troubleshooting** | Common issues and solutions | +| **Best Practices** | Performance and security considerations | + +For more information, refer to the [CLI Reference](cli_reference.md) documentation. + + +-------------------------------------------------- + +# File: swarms/cli/cli_guide.md # The Ultimate Technical Guide to the Swarms CLI: A Step-by-Step Developer’s Guide @@ -12478,7 +14540,485 @@ With the Swarms CLI, the future of automation is within reach. -------------------------------------------------- -# File: swarms\cli\main.md +# File: swarms/cli/cli_reference.md + +# Swarms CLI Reference + +The Swarms CLI is a comprehensive command-line interface for managing and executing Swarms agents and multi-agent architectures. This reference documents all available commands, arguments, and features. + +## Table of Contents + +- [Installation](#installation) + +- [Basic Usage](#basic-usage) + +- [Commands Reference](#commands-reference) + +- [Global Arguments](#global-arguments) + +- [Command-Specific Arguments](#command-specific-arguments) + +- [Error Handling](#error-handling) + +- [Examples](#examples) + +- [Configuration](#configuration) + + +## Installation + +The CLI is included with the Swarms package installation: + +```bash +pip install swarms +``` + +## Basic Usage + +```bash +swarms [options] +``` + +## Commands Reference + +### Core Commands + +| Command | Description | Required Arguments | +|---------|-------------|-------------------| +| `onboarding` | Start interactive onboarding process | None | +| `help` | Display help message | None | +| `get-api-key` | Open API key portal in browser | None | +| `check-login` | Verify login status and initialize cache | None | +| `run-agents` | Execute agents from YAML configuration | `--yaml-file` | +| `load-markdown` | Load agents from markdown files | `--markdown-path` | +| `agent` | Create and run custom agent | `--name`, `--description`, `--system-prompt`, `--task` | +| `auto-upgrade` | Update Swarms to latest version | None | +| `book-call` | Schedule strategy session | None | +| `autoswarm` | Generate and execute autonomous swarm | `--task`, `--model` | +| `setup-check` | Run comprehensive environment setup check | None | + +## Global Arguments + +All commands support these global options: + +| Argument | Type | Default | Description | +|----------|------|---------|-------------| +| `--verbose` | `bool` | `False` | Enable verbose output | +| `--help`, `-h` | `bool` | `False` | Show help message | + +## Command-Specific Arguments + +### `run-agents` Command + +Execute agents from YAML configuration files. 
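+
+As a quick orientation before the full argument reference, the sketch below writes a minimal one-agent configuration and executes it in a single pass. The field names mirror the YAML format documented in the [Configuration](#configuration) section below; the agent name, prompt, and model shown here are only illustrative, and `agents.yaml` is simply the command's default file name.
+
+```bash
+# Minimal end-to-end sketch: write a one-agent configuration, then execute it.
+# All field values below are placeholders; adjust them for your own use case.
+cat > agents.yaml <<'EOF'
+agents:
+  - name: "Research Agent"
+    description: "Research and analysis specialist"
+    model_name: "gpt-4"
+    system_prompt: "You are a research specialist."
+    max_loops: 1
+EOF
+
+swarms run-agents --yaml-file agents.yaml
+```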
+ +```bash +python -m swarms.cli.main run-agents [options] +``` + +| Argument | Type | Default | Required | Description | +|----------|------|---------|----------|-------------| +| `--yaml-file` | `str` | `"agents.yaml"` | No | Path to YAML configuration file | + +**Example:** +```bash +swarms run-agents --yaml-file my_agents.yaml +``` + +### `load-markdown` Command + +Load agents from markdown files with YAML frontmatter. + +```bash +python -m swarms.cli.main load-markdown [options] +``` + +| Argument | Type | Default | Required | Description | +|----------|------|---------|----------|-------------| +| `--markdown-path` | `str` | `None` | **Yes** | Path to markdown file or directory | +| `--concurrent` | `bool` | `True` | No | Enable concurrent processing for multiple files | + +**Example:** +```bash +swarms load-markdown --markdown-path ./agents/ --concurrent +``` + +### `agent` Command + +Create and run a custom agent with specified parameters. + +```bash +python -m swarms.cli.main agent [options] +``` + +#### Required Arguments + +| Argument | Type | Description | +|----------|------|-------------| +| `--name` | `str` | Name of the custom agent | +| `--description` | `str` | Description of the custom agent | +| `--system-prompt` | `str` | System prompt for the custom agent | +| `--task` | `str` | Task for the custom agent to execute | + +#### Optional Arguments + +| Argument | Type | Default | Description | +|----------|------|---------|-------------| +| `--model-name` | `str` | `"gpt-4"` | Model name for the custom agent | +| `--temperature` | `float` | `None` | Temperature setting (0.0-2.0) | +| `--max-loops` | `int` | `None` | Maximum number of loops for the agent | +| `--auto-generate-prompt` | `bool` | `False` | Enable auto-generation of prompts | +| `--dynamic-temperature-enabled` | `bool` | `False` | Enable dynamic temperature adjustment | +| `--dynamic-context-window` | `bool` | `False` | Enable dynamic context window | +| `--output-type` | `str` | `None` | Output type (e.g., 'str', 'json') | +| `--verbose` | `bool` | `False` | Enable verbose mode for the agent | +| `--streaming-on` | `bool` | `False` | Enable streaming mode for the agent | +| `--context-length` | `int` | `None` | Context length for the agent | +| `--retry-attempts` | `int` | `None` | Number of retry attempts for the agent | +| `--return-step-meta` | `bool` | `False` | Return step metadata from the agent | +| `--dashboard` | `bool` | `False` | Enable dashboard for the agent | +| `--autosave` | `bool` | `False` | Enable autosave for the agent | +| `--saved-state-path` | `str` | `None` | Path for saving agent state | +| `--user-name` | `str` | `None` | Username for the agent | +| `--mcp-url` | `str` | `None` | MCP URL for the agent | + +**Example:** +```bash +swarms agent \ + --name "Trading Agent" \ + --description "Advanced trading agent for market analysis" \ + --system-prompt "You are an expert trader..." \ + --task "Analyze market trends for AAPL" \ + --model-name "gpt-4" \ + --temperature 0.1 \ + --max-loops 5 +``` + +### `autoswarm` Command + +Generate and execute an autonomous swarm configuration. 
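+
+Because the generated swarm relies on a correctly configured model provider, one reasonable pattern is to verify the environment first and then hand off the task, as in the sketch below. The task text is purely illustrative; `setup-check` and `autoswarm` are used exactly as documented elsewhere on this page.
+
+```bash
+# Verify API keys, dependencies, and workspace settings first...
+swarms setup-check
+
+# ...then generate and execute a swarm for the task.
+swarms autoswarm \
+  --task "Compare three cloud providers for hosting a small web application" \
+  --model "gpt-4"
+```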
+ +```bash +swarms autoswarm [options] +``` + +| Argument | Type | Default | Required | Description | +|----------|------|---------|----------|-------------| +| `--task` | `str` | `None` | **Yes** | Task description for the swarm | +| `--model` | `str` | `None` | **Yes** | Model name to use for the swarm | + +**Example:** + +```bash +swarms autoswarm --task "analyze this data" --model "gpt-4" +``` + +### `setup-check` Command + +Run a comprehensive environment setup check to verify your Swarms installation and configuration. + +```bash +swarms setup-check [--verbose] +``` + +**Arguments:** +- `--verbose`: Enable detailed debug output showing version detection methods + +This command performs the following checks: +- **Python Version**: Verifies Python 3.10+ compatibility +- **Swarms Version**: Checks current version and compares with latest available +- **API Keys**: Verifies presence of common API keys in environment variables +- **Dependencies**: Ensures required packages are available +- **Environment File**: Checks for .env file existence and content +- **Workspace Directory**: Verifies WORKSPACE_DIR environment variable + +**Examples:** +```bash +# Basic setup check +swarms setup-check + +# Verbose setup check with debug information +swarms setup-check --verbose +``` + +**Expected Output:** +``` +🔍 Running Swarms Environment Setup Check + +┌─────────────────────────────────────────────────────────────────────────────┐ +│ Environment Check Results │ +├─────────┬─────────────────────────┬─────────────────────────────────────────┤ +│ Status │ Check │ Details │ +├─────────┼─────────────────────────┼─────────────────────────────────────────┤ +│ ✓ │ Python Version │ Python 3.11.5 │ +│ ✓ │ Swarms Version │ Current version: 8.1.1 │ +│ ✓ │ API Keys │ API keys found: OPENAI_API_KEY │ +│ ✓ │ Dependencies │ All required dependencies available │ +│ ✓ │ Environment File │ .env file exists with 1 API key(s) │ +│ ✓ │ Workspace Directory │ WORKSPACE_DIR is set to: /path/to/ws │ +└─────────┴─────────────────────────┴─────────────────────────────────────────┘ + +┌─────────────────────────────────────────────────────────────────────────────┐ +│ Setup Check Complete │ +├─────────────────────────────────────────────────────────────────────────────┤ +│ 🎉 All checks passed! Your environment is ready for Swarms. 
│ +└─────────────────────────────────────────────────────────────────────────────┘ +``` + +## Error Handling + +The CLI provides comprehensive error handling with formatted error messages: + +### Error Types + +| Error Type | Description | Resolution | +|------------|-------------|------------| +| `FileNotFoundError` | Configuration file not found | Check file path and permissions | +| `ValueError` | Invalid configuration format | Verify YAML/markdown syntax | +| `SwarmCLIError` | Custom CLI-specific errors | Check command arguments and API keys | +| `API Key Error` | Authentication issues | Verify API key configuration | +| `Context Length Error` | Model context exceeded | Reduce input size or use larger model | + +### Error Display Format + +Errors are displayed in formatted panels with: + +- **Error Title**: Clear error identification + +- **Error Message**: Detailed error description + +- **Help Text**: Suggested resolution steps + +- **Color Coding**: Red borders for errors, yellow for warnings + + +## Examples + +### Basic Agent Creation + +```bash +# Create a simple agent +swarms agent \ + --name "Code Reviewer" \ + --description "AI code review assistant" \ + --system-prompt "You are an expert code reviewer..." \ + --task "Review this Python code for best practices" \ + --model-name "gpt-4" \ + --temperature 0.1 +``` + +### Loading Multiple Agents + +```bash +# Load agents from markdown directory +swarms load-markdown \ + --markdown-path ./my_agents/ \ + --concurrent +``` + +### Running YAML Configuration + +```bash +# Execute agents from YAML file +swarms run-agents \ + --yaml-file production_agents.yaml +``` + +### Autonomous Swarm Generation + +```bash +# Generate swarm for complex task +swarms autoswarm \ + --task "Create a comprehensive market analysis report for tech stocks" \ + --model "gpt-4" +``` + +## Configuration + +### YAML Configuration Format + +For `run-agents` command, use this YAML structure: + +```yaml +agents: + - name: "Research Agent" + + description: "Research and analysis specialist" + model_name: "gpt-4" + system_prompt: "You are a research specialist..." + temperature: 0.1 + max_loops: 3 + + - name: "Analysis Agent" + + description: "Data analysis expert" + model_name: "gpt-4" + system_prompt: "You are a data analyst..." + temperature: 0.2 + max_loops: 5 +``` + +### Markdown Configuration Format + +For `load-markdown` command, use YAML frontmatter: + +```markdown +--- +name: Research Agent +description: AI research specialist +model_name: gpt-4 +temperature: 0.1 +max_loops: 3 +--- + +You are an expert research agent specializing in... 
+``` + +## Advanced Features + +### Progress Indicators + +The CLI provides rich progress indicators for long-running operations: + +- **Spinner Animations**: Visual feedback during execution + + +- **Progress Bars**: For operations with known completion states + +- **Status Updates**: Real-time operation status + + +### Concurrent Processing + +Multiple markdown files can be processed concurrently: + +- **Parallel Execution**: Improves performance for large directories + +- **Resource Management**: Automatic thread management + +- **Error Isolation**: Individual file failures don't affect others + + +### Auto-upgrade System + +```bash +swarms auto-upgrade +``` + +Automatically updates Swarms to the latest version with: + +- Version checking + +- Dependency resolution + +- Safe update process + + +### Interactive Onboarding + +```bash +swarms onboarding +``` + +Guided setup process including: + +- API key configuration + +- Environment setup + +- Basic agent creation + +- Usage examples + + +## Troubleshooting + +### Common Issues + +1. **API Key Not Set** + + ```bash + export OPENAI_API_KEY="your-api-key-here" + ``` + +2. **File Permissions** + ```bash + chmod 644 agents.yaml + ``` + +3. **Model Not Available** + - Verify model name spelling + + - Check API key permissions + + - Ensure sufficient quota + + +### Debug Mode + +Enable verbose output for debugging: + +```bash +swarms --verbose +``` + +## Integration + +### CI/CD Integration + +The CLI can be integrated into CI/CD pipelines: + +```yaml +# GitHub Actions example +- name: Run Swarms Agents + + run: | + swarms run-agents --yaml-file ci_agents.yaml +``` + +### Scripting + +Use in shell scripts: + +```bash +#!/bin/bash +# Run multiple agent configurations +swarms run-agents --yaml-file agents1.yaml +swarms run-agents --yaml-file agents2.yaml +``` + +## Performance Considerations + +| Consideration | Recommendation | +|------------------------|-----------------------------------------------------| +| Concurrent Processing | Use `--concurrent` for multiple files | +| Model Selection | Choose appropriate models for task complexity | +| Context Length | Monitor and optimize input sizes | +| Rate Limiting | Respect API provider limits | + +## Security + +| Security Aspect | Recommendation | +|------------------------|--------------------------------------------------------| +| API Key Management | Store keys in environment variables | +| File Permissions | Restrict access to configuration files | +| Input Validation | CLI validates all inputs before execution | +| Error Sanitization | Sensitive information is not exposed in errors | + +## Support + +For additional support: + +| Support Option | Link | +|----------------------|---------------------------------------------------------------------------------------| +| **Community** | [Discord](https://discord.gg/EamjgSaEQf) | +| **Issues** | [GitHub Issues](https://github.com/kyegomez/swarms/issues) | +| **Strategy Sessions**| [Book a Call](https://cal.com/swarms/swarms-strategy-session) | + + +-------------------------------------------------- + +# File: swarms/cli/main.md # Swarms CLI Documentation @@ -12588,7 +15128,7 @@ Below is a detailed explanation of the available commands: -------------------------------------------------- -# File: swarms\concept\framework_architecture.md +# File: swarms/concept/framework_architecture.md # Swarms Framework Architecture @@ -12752,7 +15292,7 @@ By understanding the purpose and role of each folder in the Swarms framework, us 
-------------------------------------------------- -# File: swarms\concept\future_swarm_architectures.md +# File: swarms/concept/future_swarm_architectures.md @@ -12879,7 +15419,7 @@ These swarm architectures provide different models for organizing and orchestrat -------------------------------------------------- -# File: swarms\concept\how_to_choose_swarms.md +# File: swarms/concept/how_to_choose_swarms.md # Choosing the Right Swarm for Your Business Problem @@ -13024,7 +15564,7 @@ When integrating agents in a business workflow, it's crucial to balance task com -------------------------------------------------- -# File: swarms\concept\philosophy.md +# File: swarms/concept/philosophy.md # Our Philosophy: Simplifying Multi-Agent Collaboration Through Readable Code and Performance Optimization @@ -13381,7 +15921,7 @@ By adhering to these principles, we create a robust foundation for scalable and -------------------------------------------------- -# File: swarms\concept\purpose\limits_of_individual_agents.md +# File: swarms/concept/purpose/limits_of_individual_agents.md # The Limits of Individual Agents @@ -13441,7 +15981,7 @@ While individual AI agents have made remarkable strides in various domains, thei -------------------------------------------------- -# File: swarms\concept\purpose\why.md +# File: swarms/concept/purpose/why.md # The Swarms Framework: Orchestrating Agents for Enterprise Automation @@ -13580,7 +16120,7 @@ As the field of artificial intelligence continues to advance, the Swarms Framewo -------------------------------------------------- -# File: swarms\concept\purpose\why_swarms.md +# File: swarms/concept/purpose/why_swarms.md # Why Swarms? @@ -13638,7 +16178,7 @@ The collaboration of multiple agents in AI systems presents a robust solution to -------------------------------------------------- -# File: swarms\concept\swarm_architectures.md +# File: swarms/concept/swarm_architectures.md # Multi-Agent Architectures @@ -14497,7 +17037,7 @@ graph TD -------------------------------------------------- -# File: swarms\concept\swarm_ecosystem.md +# File: swarms/concept/swarm_ecosystem.md # Understanding the Swarms Ecosystem @@ -14590,7 +17130,7 @@ Start exploring the possibilities by checking out the [Swarms Ecosystem GitHub r -------------------------------------------------- -# File: swarms\concept\vision.md +# File: swarms/concept/vision.md # Swarms – The Ultimate Multi-Agent LLM Framework for Developers @@ -14744,7 +17284,7 @@ Swarms is not just another multi-agent framework; it's built specifically for de -------------------------------------------------- -# File: swarms\concept\why.md +# File: swarms/concept/why.md **Maximizing Enterprise Automation: Overcoming the Limitations of Individual AI Agents Through Multi-Agent Collaboration** @@ -15254,7 +17794,7 @@ Implementing multi-agent systems requires thoughtful planning, adherence to best -------------------------------------------------- -# File: swarms\contributing.md +# File: swarms/contributing.md # Contribution Guidelines @@ -15497,162 +18037,75 @@ If you have any questions or need assistance, please feel free to open an issue -------------------------------------------------- -# File: swarms\ecosystem.md +# File: swarms/ecosystem.md -# Swarms Ecosystem +# Swarms Infrastructure Stack -*The Complete Enterprise-Grade Multi-Agent AI Platform* +**We're Building the Operating System for the Agent Economy** ---- - -## **Join the Future of AI Development** - -**We're Building the Operating System for the Agent Economy** - The 
Swarms ecosystem represents the most comprehensive, production-ready multi-agent AI platform available today. From our flagship Python framework to high-performance Rust implementations and client libraries spanning every major programming language, we provide enterprise-grade tools that power the next generation of intelligent applications. - ---- - -## **Complete Product Portfolio** - -| **Product** | **Technology** | **Status** | **Repository** | **Documentation** | -|-------------|---------------|------------|----------------|-------------------| -| **Swarms Python Framework** | Python | **Production** | [swarms](https://github.com/kyegomez/swarms) | [Docs](https://docs.swarms.world/en/latest/swarms/install/install/) | -| **Swarms Rust Framework** | Rust | **Production** | [swarms-rs](https://github.com/The-Swarm-Corporation/swarms-rs) | [Docs](https://docs.swarms.world/en/latest/swarms_rs/overview/) | -| **Python API Client** | Python | **Production** | [swarms-sdk](https://github.com/The-Swarm-Corporation/swarms-sdk) | [Docs](https://docs.swarms.world/en/latest/swarms_cloud/python_client/) | -| **TypeScript/Node.js Client** | TypeScript | **Production** | [swarms-ts](https://github.com/The-Swarm-Corporation/swarms-ts) | *Coming Soon* | -| **Go Client** | Go | **Production** | [swarms-client-go](https://github.com/The-Swarm-Corporation/swarms-client-go) | *Coming Soon* | -| **Java Client** | Java | **Production** | [swarms-java](https://github.com/The-Swarm-Corporation/swarms-java) | *Coming Soon* | -| **Kotlin Client** | Kotlin | **Q2 2025** | *In Development* | *Coming Soon* | -| **Ruby Client** | Ruby | **Q2 2025** | *In Development* | *Coming Soon* | -| **Rust Client** | Rust | **Q2 2025** | *In Development* | *Coming Soon* | -| **C#/.NET Client** | C# | **Q3 2025** | *In Development* | *Coming Soon* | +The Swarms ecosystem represents the most comprehensive, production-ready multi-agent AI platform available today. From our flagship Python framework to high-performance Rust implementations and client libraries spanning every major programming language, we provide enterprise-grade tools that power the next generation of agentic applications. 
--- -## **Why Choose the Swarms Ecosystem?** +## **Product Portfolio by Language & API** -### **Enterprise-Grade Architecture** +### 🐍 **Python** -- **Production Ready**: Battle-tested in enterprise environments with 99.9%+ uptime - -- **Scalable Infrastructure**: Handle millions of agent interactions with automatic scaling - -- **Security First**: End-to-end encryption, API key management, and enterprise compliance - -- **Observability**: Comprehensive logging, monitoring, and debugging capabilities - -### **Developer Experience** - -- **Multiple Language Support**: Native clients for every major programming language - -- **Unified API**: Consistent interface across all platforms and languages - -- **Rich Documentation**: Comprehensive guides, tutorials, and API references - -- **Active Community**: 24/7 support through Discord, GitHub, and direct channels - -### **Performance & Reliability** - -- **High Throughput**: Process thousands of concurrent agent requests - -- **Low Latency**: Optimized for real-time applications and user experiences - -- **Fault Tolerance**: Automatic retries, circuit breakers, and graceful degradation - -- **Multi-Cloud**: Deploy on AWS, GCP, Azure, or on-premises infrastructure - ---- - -## **Join Our Growing Community** - -### **Connect With Developers Worldwide** - -| **Platform** | **Purpose** | **Join Link** | **Benefits** | -|--------------|-------------|---------------|--------------| -| **Discord Community** | Real-time support & discussions | [Join Discord](https://discord.gg/EamjgSaEQf) | • 24/7 developer support
• Weekly community events
• Direct access to core team
• Beta feature previews | -| **Twitter/X** | Latest updates & announcements | [Follow @swarms_corp](https://x.com/swarms_corp) | • Breaking news & updates
• Community highlights
• Technical insights
• Industry partnerships | -| **LinkedIn** | Professional network & updates | [The Swarm Corporation](https://www.linkedin.com/company/the-swarm-corporation) | • Professional networking
• Career opportunities
• Enterprise partnerships
• Industry insights | -| **YouTube** | Tutorials & technical content | [Swarms Channel](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ) | • In-depth tutorials
• Live coding sessions
• Architecture deep dives
• Community showcases | +| **Product** | **Description** | **Status** | **Repository** | **Documentation** | +|------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|-------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------| +| **Swarms Python Framework** | The core multi-agent orchestration framework for Python. Enables building, managing, and scaling complex agentic systems with robust abstractions, workflows, and integrations. | **Production** | [swarms](https://github.com/kyegomez/swarms) | [Docs](https://docs.swarms.world/en/latest/swarms/install/install/) | +| **Python API Client** | Official Python SDK for interacting with Swarms Cloud and remote agent infrastructure. Simplifies API calls, authentication, and integration into Python applications. | **Production** | [swarms-sdk](https://github.com/The-Swarm-Corporation/swarms-sdk) | [Docs](https://docs.swarms.world/en/latest/swarms_cloud/python_client/) | +| **Swarms Tools** | A comprehensive library of prebuilt tools for various domains, including finance, social media, data processing, and more. Accelerates agent development by providing ready-to-use capabilities and integrations. | **Production** | [swarms-tools](https://github.com/The-Swarm-Corporation/swarms-tools) | *Coming Soon* | +| **Swarms Memory** | A robust library of memory structures and data loaders for Retrieval-Augmented Generation (RAG) processing. Provides advanced memory management, vector stores, and integration with agentic workflows. | **Production** | [swarms-memory](https://github.com/The-Swarm-Corporation/swarms-memory) | *Coming Soon* | --- -## **Contribute to the Ecosystem** - -### **How You Can Make an Impact** +### 🦀 **Rust** -| **Contribution Area** | **Skills Needed** | **Impact Level** | **Getting Started** | -|-----------------------|-------------------|------------------|---------------------| -| **Core Framework Development** | Python, Rust, Systems Design | **High Impact** | [Contributing Guide](https://docs.swarms.world/en/latest/contributors/main/) | -| **Client Library Development** | Various Languages (Go, Java, TS, etc.) 
| **High Impact** | [Client Development](https://github.com/The-Swarm-Corporation) | -| **Documentation & Tutorials** | Technical Writing, Examples | **High Impact** | [Docs Contributing](https://docs.swarms.world/en/latest/contributors/docs/) | -| **Testing & Quality Assurance** | Testing Frameworks, QA | **Medium Impact** | [Testing Guide](https://docs.swarms.world/en/latest/swarms/framework/test/) | -| **UI/UX & Design** | Design, Frontend Development | **Medium Impact** | [Design Contributions](https://github.com/The-Swarm-Corporation/swarms/issues) | -| **Bug Reports & Feature Requests** | User Experience, Testing | **Easy Start** | [Report Issues](https://github.com/The-Swarm-Corporation/swarms/issues) | +| **Product** | **Description** | **Status** | **Repository** | **Documentation** | +|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|-------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------| +| **Swarms Rust Framework** | High-performance, memory-safe multi-agent orchestration framework written in Rust. Designed for demanding production environments and seamless integration with Rust-based systems. | **Production** | [swarms-rs](https://github.com/The-Swarm-Corporation/swarms-rs) | [Docs](https://docs.swarms.world/en/latest/swarms_rs/overview/) | +| **Rust Client** | Official Rust client library for connecting to Swarms Cloud and orchestrating agents from Rust applications. Provides idiomatic Rust APIs for agent management and communication. | **Q2 2025** | *In Development* | *Coming Soon* | --- -## **We're Hiring Top Talent** - -### **Join the Team Building the Future Of The World Economy** - -**Ready to work on cutting-edge agent technology that's shaping the future?** We're actively recruiting exceptional engineers, researchers, and technical leaders to join our mission of building the operating system for the agent economy. 
- -| **Why Join Swarms?** | **What We Offer** | -|-----------------------|-------------------| -| **Cutting-Edge Technology** | Work on the most powerful multi-agent systems, distributed computing, and enterprise-scale infrastructure | -| **Global Impact** | Your code will power agent applications used by Fortune 500 companies and millions of developers | -| **World-Class Team** | Collaborate with top engineers, researchers, and industry experts from Google, OpenAI, and more | -| **Fast Growth** | Join a rapidly scaling company with massive market opportunity and venture backing | +### 🌐 **API Clients (Multi-Language)** -### **Open Positions** - -| **Position** | **Role Description** | -|-------------------------------|----------------------------------------------------------| -| **Senior Rust Engineers** | Building high-performance agent infrastructure | -| **Python Framework Engineers**| Expanding our core multi-agent capabilities | -| **DevOps/Platform Engineers** | Scaling cloud infrastructure for millions of agents | -| **Technical Writers** | Creating world-class developer documentation | -| **Solutions Engineers** | Helping enterprises adopt multi-agent AI | - -**Ready to Build the Future?** **[Apply Now at swarms.ai/hiring](https://swarms.ai/hiring)** - ---- +| **Language/Platform** | **Description** | **Status** | **Repository** | **Documentation** | +|----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------|-------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------| +| **TypeScript/Node.js** | Official TypeScript/Node.js SDK for Swarms Cloud. Enables seamless integration of agentic workflows into JavaScript and TypeScript applications, both server-side and in the browser. | **Production** | [swarms-ts](https://github.com/The-Swarm-Corporation/swarms-ts) | *Coming Soon* | +| **Go** | Go client library for Swarms Cloud, providing Go developers with native APIs to manage, orchestrate, and interact with agents in distributed systems and microservices. | **Production** | [swarms-client-go](https://github.com/The-Swarm-Corporation/swarms-client-go) | *Coming Soon* | +| **Java** | Java SDK for Swarms Cloud, allowing enterprise Java applications to leverage multi-agent orchestration and integrate agentic capabilities into JVM-based systems. | **Production** | [swarms-java](https://github.com/The-Swarm-Corporation/swarms-java) | *Coming Soon* | +| **Kotlin** | Native Kotlin client for Swarms Cloud, designed for modern JVM and Android applications seeking to embed agentic intelligence and orchestration. | **Q2 2025** | *In Development* | *Coming Soon* | +| **Ruby** | Ruby SDK for Swarms Cloud, enabling Ruby and Rails developers to easily connect, manage, and orchestrate agents within their applications. | **Q2 2025** | *In Development* | *Coming Soon* | +| **C#/.NET** | Official C#/.NET client library for Swarms Cloud, providing .NET developers with tools to integrate agentic workflows into desktop, web, and cloud applications. 
| **Q3 2025** | *In Development* | *Coming Soon* | --- -## **Get Started Today** - -### **Quick Start Guide** - -| **Step** | **Action** | **Time Required** | -|----------|------------|-------------------| -| **1** | [Install Swarms Python Framework](https://docs.swarms.world/en/latest/swarms/install/install/) | 5 minutes | -| **2** | [Run Your First Agent](https://docs.swarms.world/en/latest/swarms/examples/basic_agent/) | 10 minutes | -| **3** | [Try Multi-Agent Workflows](https://docs.swarms.world/en/latest/swarms/examples/sequential_example/) | 15 minutes | -| **4** | [Join Our Discord Community](https://discord.gg/EamjgSaEQf) | 2 minutes | -| **5** | [Explore Enterprise Features](https://docs.swarms.world/en/latest/swarms_cloud/swarms_api/) | 20 minutes | - ---- - -## **Enterprise Support & Partnerships** - -### **Ready to Scale with Swarms?** +## **Why Choose the Swarms Ecosystem?** -| **Contact Type** | **Best For** | **Response Time** | **Contact Information** | -|------------------|--------------|-------------------|-------------------------| -| **Technical Support** | Development questions, troubleshooting | < 24 hours | [Book Support Call](https://cal.com/swarms/swarms-technical-support) | -| **Enterprise Sales** | Custom deployments, enterprise licensing | < 4 hours | [kye@swarms.world](mailto:kye@swarms.world) | -| **Partnerships** | Integration partnerships, technology alliances | < 48 hours | [kye@swarms.world](mailto:kye@swarms.world) | -| **Investor Relations** | Investment opportunities, funding updates | By appointment | [kye@swarms.world](mailto:kye@swarms.world) | +| **Feature** | **Description** | +|----------------------------|------------------------------------------------------------------------------------------------------| +| **Production Ready** | Battle-tested in enterprise environments with 99.9%+ uptime | +| **Scalable Infrastructure** | Handle millions of agent interactions with automatic scaling | +| **Security First** | End-to-end encryption, API key management, and enterprise compliance | +| **Observability** | Comprehensive logging, monitoring, and debugging capabilities | +| **Multiple Language Support** | Native clients for every major programming language | +| **Unified API** | Consistent interface across all platforms and languages | +| **Rich Documentation** | Comprehensive guides, tutorials, and API references | +| **Active Community** | 24/7 support through Discord, GitHub, and direct channels | +| **High Throughput** | Process thousands of concurrent agent requests | +| **Low Latency** | Optimized for real-time applications and user experiences | +| **Fault Tolerance** | Automatic retries, circuit breakers, and graceful degradation | +| **Multi-Cloud** | Deploy on AWS, GCP, Azure, or on-premises infrastructure | --- -**Ready to build the future of AI? 
Start with Swarms today and join thousands of developers creating the next generation of intelligent applications.** - -------------------------------------------------- -# File: swarms\examples\agent_output_types.md +# File: swarms/examples/agent_output_types.md # Agent Output Types Examples with Vision Capabilities @@ -15736,7 +18189,7 @@ The vision-enabled agents support various image formats including: -------------------------------------------------- -# File: swarms\examples\agent_structured_outputs.md +# File: swarms/examples/agent_structured_outputs.md # Agent Structured Outputs @@ -15941,7 +18394,7 @@ This example shows how to structure complex nested objects, arrays, and various -------------------------------------------------- -# File: swarms\examples\agent_with_tools.md +# File: swarms/examples/agent_with_tools.md # Basic Agent Example @@ -16593,7 +19046,7 @@ agent.run("Get the market sentiment for bitcoin") -------------------------------------------------- -# File: swarms\examples\agents_as_tools.md +# File: swarms/examples/agents_as_tools.md # Agents as Tools Tutorial @@ -17185,7 +19638,7 @@ print(out) -------------------------------------------------- -# File: swarms\examples\aggregate.md +# File: swarms/examples/aggregate.md # Aggregate Multi-Agent Responses @@ -17263,7 +19716,7 @@ print(result) -------------------------------------------------- -# File: swarms\examples\azure.md +# File: swarms/examples/azure.md # Azure OpenAI Integration @@ -17317,11 +19770,175 @@ for model in model_list: ``` Common Azure model names include: -- `azure/gpt-4` + +- `azure/gpt-4.1` + - `azure/gpt-4o` + - `azure/gpt-4o-mini` -- `azure/gpt-35-turbo` -- `azure/gpt-35-turbo-16k` + + + +## Models Supported + +```txt +azure_ai/grok-3 +azure_ai/global/grok-3 +azure_ai/global/grok-3-mini +azure_ai/grok-3-mini +azure_ai/deepseek-r1 +azure_ai/deepseek-v3 +azure_ai/deepseek-v3-0324 +azure_ai/jamba-instruct +azure_ai/jais-30b-chat +azure_ai/mistral-nemo +azure_ai/mistral-medium-2505 +azure_ai/mistral-large +azure_ai/mistral-small +azure_ai/mistral-small-2503 +azure_ai/mistral-large-2407 +azure_ai/mistral-large-latest +azure_ai/ministral-3b +azure_ai/Llama-3.2-11B-Vision-Instruct +azure_ai/Llama-3.3-70B-Instruct +azure_ai/Llama-4-Scout-17B-16E-Instruct +azure_ai/Llama-4-Maverick-17B-128E-Instruct-FP8 +azure_ai/Llama-3.2-90B-Vision-Instruct +azure_ai/Meta-Llama-3-70B-Instruct +azure_ai/Meta-Llama-3.1-8B-Instruct +azure_ai/Meta-Llama-3.1-70B-Instruct +azure_ai/Meta-Llama-3.1-405B-Instruct +azure_ai/Phi-4-mini-instruct +azure_ai/Phi-4-multimodal-instruct +azure_ai/Phi-4 +azure_ai/Phi-3.5-mini-instruct +azure_ai/Phi-3.5-vision-instruct +azure_ai/Phi-3.5-MoE-instruct +azure_ai/Phi-3-mini-4k-instruct +azure_ai/Phi-3-mini-128k-instruct +azure_ai/Phi-3-small-8k-instruct +azure_ai/Phi-3-small-128k-instruct +azure_ai/Phi-3-medium-4k-instruct +azure_ai/Phi-3-medium-128k-instruct +azure_ai/cohere-rerank-v3.5 +azure_ai/cohere-rerank-v3-multilingual +azure_ai/cohere-rerank-v3-english +azure_ai/Cohere-embed-v3-english +azure_ai/Cohere-embed-v3-multilingual +azure_ai/embed-v-4-0 +azure/gpt-4o-mini-tts +azure/computer-use-preview +azure/gpt-4o-audio-preview-2024-12-17 +azure/gpt-4o-mini-audio-preview-2024-12-17 +azure/gpt-4.1 +azure/gpt-4.1-2025-04-14 +azure/gpt-4.1-mini +azure/gpt-4.1-mini-2025-04-14 +azure/gpt-4.1-nano +azure/gpt-4.1-nano-2025-04-14 +azure/o3-pro +azure/o3-pro-2025-06-10 +azure/o3 +azure/o3-2025-04-16 +azure/o3-deep-research +azure/o4-mini +azure/gpt-4o-mini-realtime-preview-2024-12-17 
+azure/eu/gpt-4o-mini-realtime-preview-2024-12-17 +azure/us/gpt-4o-mini-realtime-preview-2024-12-17 +azure/gpt-4o-realtime-preview-2024-12-17 +azure/us/gpt-4o-realtime-preview-2024-12-17 +azure/eu/gpt-4o-realtime-preview-2024-12-17 +azure/gpt-4o-realtime-preview-2024-10-01 +azure/us/gpt-4o-realtime-preview-2024-10-01 +azure/eu/gpt-4o-realtime-preview-2024-10-01 +azure/o4-mini-2025-04-16 +azure/o3-mini-2025-01-31 +azure/us/o3-mini-2025-01-31 +azure/eu/o3-mini-2025-01-31 +azure/tts-1 +azure/tts-1-hd +azure/whisper-1 +azure/gpt-4o-transcribe +azure/gpt-4o-mini-transcribe +azure/o3-mini +azure/o1-mini +azure/o1-mini-2024-09-12 +azure/us/o1-mini-2024-09-12 +azure/eu/o1-mini-2024-09-12 +azure/o1 +azure/o1-2024-12-17 +azure/us/o1-2024-12-17 +azure/eu/o1-2024-12-17 +azure/codex-mini +azure/o1-preview +azure/o1-preview-2024-09-12 +azure/us/o1-preview-2024-09-12 +azure/eu/o1-preview-2024-09-12 +azure/gpt-4.5-preview +azure/gpt-4o +azure/global/gpt-4o-2024-11-20 +azure/gpt-4o-2024-08-06 +azure/global/gpt-4o-2024-08-06 +azure/gpt-4o-2024-11-20 +azure/us/gpt-4o-2024-11-20 +azure/eu/gpt-4o-2024-11-20 +azure/gpt-4o-2024-05-13 +azure/global-standard/gpt-4o-2024-08-06 +azure/us/gpt-4o-2024-08-06 +azure/eu/gpt-4o-2024-08-06 +azure/global-standard/gpt-4o-2024-11-20 +azure/global-standard/gpt-4o-mini +azure/gpt-4o-mini +azure/gpt-4o-mini-2024-07-18 +azure/us/gpt-4o-mini-2024-07-18 +azure/eu/gpt-4o-mini-2024-07-18 +azure/gpt-4-turbo-2024-04-09 +azure/gpt-4-0125-preview +azure/gpt-4-1106-preview +azure/gpt-4-0613 +azure/gpt-4-32k-0613 +azure/gpt-4-32k +azure/gpt-4 +azure/gpt-4-turbo +azure/gpt-4-turbo-vision-preview +azure/gpt-35-turbo-16k-0613 +azure/gpt-35-turbo-1106 +azure/gpt-35-turbo-0613 +azure/gpt-35-turbo-0301 +azure/gpt-35-turbo-0125 +azure/gpt-3.5-turbo-0125 +azure/gpt-35-turbo-16k +azure/gpt-35-turbo +azure/gpt-3.5-turbo +azure/mistral-large-latest +azure/mistral-large-2402 +azure/command-r-plus +azure/ada +azure/text-embedding-ada-002 +azure/text-embedding-3-large +azure/text-embedding-3-small +azure/gpt-image-1 +azure/low/1024-x-1024/gpt-image-1 +azure/medium/1024-x-1024/gpt-image-1 +azure/high/1024-x-1024/gpt-image-1 +azure/low/1024-x-1536/gpt-image-1 +azure/medium/1024-x-1536/gpt-image-1 +azure/high/1024-x-1536/gpt-image-1 +azure/low/1536-x-1024/gpt-image-1 +azure/medium/1536-x-1024/gpt-image-1 +azure/high/1536-x-1024/gpt-image-1 +azure/standard/1024-x-1024/dall-e-3 +azure/hd/1024-x-1024/dall-e-3 +azure/standard/1024-x-1792/dall-e-3 +azure/standard/1792-x-1024/dall-e-3 +azure/hd/1024-x-1792/dall-e-3 +azure/hd/1792-x-1024/dall-e-3 +azure/standard/1024-x-1024/dall-e-2 +azure/gpt-3.5-turbo-instruct-0914 +azure/gpt-35-turbo-instruct +azure/gpt-35-turbo-instruct-0914 +``` + ## Basic Usage @@ -17428,7 +20045,7 @@ print(response) -------------------------------------------------- -# File: swarms\examples\basic_agent.md +# File: swarms/examples/basic_agent.md # Basic Agent Example @@ -17540,7 +20157,7 @@ You can modify the system prompt and agent parameters to create specialized agen -------------------------------------------------- -# File: swarms\examples\claude.md +# File: swarms/examples/claude.md # Agent with Anthropic/Claude @@ -17570,7 +20187,7 @@ agent.run("What are the components of a startup's stock incentive equity plan?") -------------------------------------------------- -# File: swarms\examples\cohere.md +# File: swarms/examples/cohere.md # Agent with Cohere @@ -17600,7 +20217,7 @@ agent.run("What are the components of a startup's stock incentive equity plan?") 
-------------------------------------------------- -# File: swarms\examples\concurrent_workflow.md +# File: swarms/examples/concurrent_workflow.md # ConcurrentWorkflow Examples @@ -18009,7 +20626,7 @@ This guide demonstrates how to effectively use the ConcurrentWorkflow architectu -------------------------------------------------- -# File: swarms\examples\deepseek.md +# File: swarms/examples/deepseek.md # Agent with DeepSeek @@ -18065,7 +20682,7 @@ agent.run("What are the components of a startup's stock incentive equity plan?") -------------------------------------------------- -# File: swarms\examples\groq.md +# File: swarms/examples/groq.md # Agent with Groq @@ -18078,8 +20695,6 @@ agent.run("What are the components of a startup's stock incentive equity plan?") ```python import os -from swarm_models import OpenAIChat - from swarms import Agent company = "NVDA" @@ -18111,7 +20726,7 @@ managing_director = Agent( -------------------------------------------------- -# File: swarms\examples\groupchat_example.md +# File: swarms/examples/groupchat_example.md # GroupChat Example @@ -18325,7 +20940,7 @@ from swarms import Agent, GroupChat -------------------------------------------------- -# File: swarms\examples\hhcs_examples.md +# File: swarms/examples/hhcs_examples.md # Hybrid Hierarchical-Cluster Swarm (HHCS) Example @@ -18458,7 +21073,7 @@ if __name__ == "__main__": -------------------------------------------------- -# File: swarms\examples\hierarchical_swarm_example.md +# File: swarms/examples/hierarchical_swarm_example.md # Hierarchical Swarm Examples @@ -18689,7 +21304,7 @@ For more detailed information about the `HierarchicalSwarm` API and advanced usa -------------------------------------------------- -# File: swarms\examples\igc_example.md +# File: swarms/examples/igc_example.md ## Interactive Groupchat Examples @@ -18830,7 +21445,7 @@ Join our community of agent engineers and researchers for technical support, cut -------------------------------------------------- -# File: swarms\examples\interactive_groupchat_example.md +# File: swarms/examples/interactive_groupchat_example.md # Interactive GroupChat Example @@ -18971,7 +21586,7 @@ if __name__ == "__main__": -------------------------------------------------- -# File: swarms\examples\llama4.md +# File: swarms/examples/llama4.md # Llama4 Model Integration @@ -19149,7 +21764,7 @@ print( -------------------------------------------------- -# File: swarms\examples\lumo.md +# File: swarms/examples/lumo.md # Lumo Example Introducing Lumo-70B-Instruct - the largest and most advanced AI model ever created for the Solana ecosystem. Built on Meta's groundbreaking LLaMa 3.3 70B Instruct foundation, this revolutionary model represents a quantum leap in blockchain-specific artificial intelligence. With an unprecedented 70 billion parameters and trained on the most comprehensive Solana documentation dataset ever assembled, Lumo-70B-Instruct sets a new standard for developer assistance in the blockchain space. 
@@ -19217,7 +21832,7 @@ Agent( -------------------------------------------------- -# File: swarms\examples\mixture_of_agents.md +# File: swarms/examples/mixture_of_agents.md # MixtureOfAgents Examples @@ -19483,7 +22098,7 @@ This comprehensive guide demonstrates how to effectively use the MixtureOfAgents -------------------------------------------------- -# File: swarms\examples\moa_example.md +# File: swarms/examples/moa_example.md # Mixture of Agents Example @@ -19621,7 +22236,7 @@ If you're facing issues or want to learn more, check out the following resources -------------------------------------------------- -# File: swarms\examples\model_providers.md +# File: swarms/examples/model_providers.md # Model Providers Overview @@ -19804,7 +22419,7 @@ agent = Agent( -------------------------------------------------- -# File: swarms\examples\multi_agent_router_minimal.md +# File: swarms/examples/multi_agent_router_minimal.md # MultiAgentRouter Minimal Example @@ -19842,7 +22457,7 @@ View the source on [GitHub](https://github.com/kyegomez/swarms/blob/master/examp -------------------------------------------------- -# File: swarms\examples\multiple_images.md +# File: swarms/examples/multiple_images.md # Processing Multiple Images @@ -19925,7 +22540,7 @@ If you're facing issues or want to learn more, check out the following resources -------------------------------------------------- -# File: swarms\examples\ollama.md +# File: swarms/examples/ollama.md # Agent with Ollama @@ -19954,7 +22569,7 @@ agent.run("What are the components of a startup's stock incentive equity plan?") -------------------------------------------------- -# File: swarms\examples\openai_example.md +# File: swarms/examples/openai_example.md # Agent with GPT-4o-Mini @@ -19975,7 +22590,7 @@ Agent( -------------------------------------------------- -# File: swarms\examples\openrouter.md +# File: swarms/examples/openrouter.md # Agent with OpenRouter @@ -20007,7 +22622,7 @@ agent.run("What are the components of a startup's stock incentive equity plan?") -------------------------------------------------- -# File: swarms\examples\quant_crypto_agent.md +# File: swarms/examples/quant_crypto_agent.md # Quant Crypto Agent @@ -20141,7 +22756,7 @@ agent.run( -------------------------------------------------- -# File: swarms\examples\sequential_example.md +# File: swarms/examples/sequential_example.md # Sequential Workflow Example @@ -20315,7 +22930,7 @@ from swarms import Agent, SequentialWorkflow -------------------------------------------------- -# File: swarms\examples\swarm_router.md +# File: swarms/examples/swarm_router.md # SwarmRouter Examples @@ -20560,7 +23175,7 @@ This comprehensive guide demonstrates how to effectively use the SwarmRouter in -------------------------------------------------- -# File: swarms\examples\swarms_api_finance.md +# File: swarms/examples/swarms_api_finance.md # Finance Swarm Example @@ -20693,7 +23308,7 @@ python financial_swarm.py -------------------------------------------------- -# File: swarms\examples\swarms_api_medical.md +# File: swarms/examples/swarms_api_medical.md # Medical Swarm Example @@ -20826,7 +23441,7 @@ python medical_swarm.py -------------------------------------------------- -# File: swarms\examples\swarms_api_ml_model.md +# File: swarms/examples/swarms_api_ml_model.md # ML Model Code Generation Swarm Example @@ -20969,7 +23584,7 @@ if __name__ == "__main__": -------------------------------------------------- -# File: swarms\examples\swarms_dao.md +# File: swarms/examples/swarms_dao.md 
# Swarms DAO Example @@ -21211,7 +23826,7 @@ print("Collaborative Strategy Output:\n", output) -------------------------------------------------- -# File: swarms\examples\swarms_of_browser_agents.md +# File: swarms/examples/swarms_of_browser_agents.md # Swarms x Browser Use @@ -21281,7 +23896,7 @@ swarm.run( -------------------------------------------------- -# File: swarms\examples\swarms_tools_htx.md +# File: swarms/examples/swarms_tools_htx.md # Swarms Tools Example with HTX + CoinGecko @@ -21323,7 +23938,7 @@ agent.run( -------------------------------------------------- -# File: swarms\examples\swarms_tools_htx_gecko.md +# File: swarms/examples/swarms_tools_htx_gecko.md # Swarms Tools Example with HTX + CoinGecko @@ -21371,7 +23986,7 @@ agent.run("Analyze the $swarms token on htx") -------------------------------------------------- -# File: swarms\examples\templates_index.md +# File: swarms/examples/templates_index.md # The Swarms Index @@ -21450,7 +24065,7 @@ The Swarms Index is a comprehensive catalog of repositories under The Swarm Corp -------------------------------------------------- -# File: swarms\examples\unique_swarms.md +# File: swarms/examples/unique_swarms.md In this section, we present a diverse collection of unique swarms, each with its own distinct characteristics and applications. These examples are designed to illustrate the versatility and potential of swarm intelligence in various domains. By exploring these examples, you can gain a deeper understanding of how swarms can be leveraged to solve complex problems and improve decision-making processes. @@ -22089,7 +24704,7 @@ if __name__ == "__main__": -------------------------------------------------- -# File: swarms\examples\vision_processing.md +# File: swarms/examples/vision_processing.md # Vision Processing Examples @@ -22245,7 +24860,7 @@ batch_results = process_image_batch(image_folder, visual_analyst) -------------------------------------------------- -# File: swarms\examples\vision_tools.md +# File: swarms/examples/vision_tools.md # Agents with Vision and Tool Usage @@ -22389,7 +25004,7 @@ If you're facing issues or want to learn more, check out the following resources -------------------------------------------------- -# File: swarms\examples\vllm.md +# File: swarms/examples/vllm.md # VLLM Swarm Agents @@ -22823,7 +25438,7 @@ swarm.run("Analyze the best etfs for gold and other similiar commodities in vola -------------------------------------------------- -# File: swarms\examples\vllm_integration.md +# File: swarms/examples/vllm_integration.md @@ -23022,7 +25637,7 @@ result = workflow.run("Analyze the impact of renewable energy") -------------------------------------------------- -# File: swarms\examples\xai.md +# File: swarms/examples/xai.md # Agent with XAI @@ -23054,7 +25669,7 @@ agent.run("What are the components of a startup's stock incentive equity plan?") -------------------------------------------------- -# File: swarms\examples\yahoo_finance.md +# File: swarms/examples/yahoo_finance.md # Swarms Tools Example with Yahoo Finance @@ -23101,7 +25716,7 @@ agent.run("Analyze the latest metrics for nvidia") -------------------------------------------------- -# File: swarms\features.md +# File: swarms/features.md ## ✨ Enterprise Features @@ -23149,7 +25764,7 @@ Our team is committed to ensuring Swarms meets your enterprise multi-agent infra -------------------------------------------------- -# File: swarms\framework\agents_explained.md +# File: swarms/framework/agents_explained.md # An Analysis of Agents @@ 
-23236,7 +25851,7 @@ The Swarms framework's agents are powerful units that combine LLMs, tools, and l -------------------------------------------------- -# File: swarms\framework\code_cleanliness.md +# File: swarms/framework/code_cleanliness.md # Code Cleanliness in Python: A Comprehensive Guide @@ -23648,7 +26263,7 @@ By following the principles and best practices outlined in this article, you'll -------------------------------------------------- -# File: swarms\framework\concept.md +# File: swarms/framework/concept.md To create a comprehensive overview of the Swarms framework, we can break it down into key concepts such as models, agents, tools, Retrieval-Augmented Generation (RAG) systems, and swarm systems. Below are conceptual explanations of these components along with mermaid diagrams to illustrate their interactions. @@ -23720,7 +26335,7 @@ The Swarms framework leverages models, agents, tools, RAG systems, and swarm sys -------------------------------------------------- -# File: swarms\framework\index.md +# File: swarms/framework/index.md ## Swarms Framework Conceptual Breakdown @@ -23842,7 +26457,7 @@ This hierarchical design ensures scalability, flexibility, and robustness, makin -------------------------------------------------- -# File: swarms\framework\reference.md +# File: swarms/framework/reference.md # API Reference Documentation @@ -25267,7 +27882,7 @@ print(agent) -------------------------------------------------- -# File: swarms\framework\test.md +# File: swarms/framework/test.md # How to Run Tests Using Pytest: A Comprehensive Guide @@ -25516,7 +28131,7 @@ Happy testing! -------------------------------------------------- -# File: swarms\framework\vision.md +# File: swarms/framework/vision.md ### Swarms Vision @@ -25676,7 +28291,7 @@ Swarms promotes an open and extensible ecosystem, encouraging community-driven i -------------------------------------------------- -# File: swarms\glossary.md +# File: swarms/glossary.md # Glossary of Terms @@ -25729,7 +28344,7 @@ By understanding these terms, you can effectively build and orchestrate agents a -------------------------------------------------- -# File: swarms\install\docker_setup.md +# File: swarms/install/docker_setup.md # Docker Setup Guide for Contributors to Swarms @@ -25921,7 +28536,7 @@ Remember to secure sensitive data, use tagged releases for your images, and foll -------------------------------------------------- -# File: swarms\install\env.md +# File: swarms/install/env.md # Environment Variables @@ -26129,7 +28744,7 @@ openai_key = os.getenv("OPENAI_API_KEY") -------------------------------------------------- -# File: swarms\install\install.md +# File: swarms/install/install.md # Swarms Installation Guide @@ -26159,9 +28774,9 @@ Before you begin, ensure you have the following installed: === "pip (Recommended)" - #### Headless Installation + #### Simple Installation - The headless installation of `swarms` is designed for environments where graphical user interfaces (GUI) are not needed, making it more lightweight and suitable for server-side applications. + Simplest manner of installing swarms leverages using PIP. For faster installs and build times, we recommend using UV ```bash pip install swarms @@ -26198,6 +28813,49 @@ Before you begin, ensure you have the following installed: uv pip install -e .[desktop] ``` +=== "Poetry Installation" + + Poetry is a modern dependency management and packaging tool for Python. It provides a more robust way to manage project dependencies and virtual environments. 
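+
+    After installing with any of the approaches below, a quick way to confirm that the package resolves correctly from the Poetry-managed environment is to run the CLI's environment check through Poetry. This sketch assumes the commands are run from the project directory containing Poetry's `pyproject.toml`.
+
+    ```bash
+    # Confirm the installation from within the Poetry-managed environment.
+    poetry run swarms setup-check
+
+    # Inspect the resolved version of the package.
+    poetry show swarms
+    ```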
+ + === "Basic Installation" + + ```bash + # Install Poetry first + curl -sSL https://install.python-poetry.org | python3 - + + # Install swarms using Poetry + poetry add swarms + ``` + + === "Development Installation" + + ```bash + # Clone the repository + git clone https://github.com/kyegomez/swarms.git + cd swarms + + # Install in editable mode + poetry install + ``` + + For desktop installation with extras: + + ```bash + poetry install --extras "desktop" + ``` + + === "Using Poetry with existing projects" + + If you have an existing project with a `pyproject.toml` file: + + ```bash + # Add swarms to your project dependencies + poetry add swarms + + # Or add with specific extras + poetry add "swarms[desktop]" + ``` + === "Development Installation" === "Using virtualenv" @@ -26465,7 +29123,7 @@ Before you begin, ensure you have the following installed: -------------------------------------------------- -# File: swarms\install\quickstart.md +# File: swarms/install/quickstart.md ## Quickstart @@ -26973,7 +29631,7 @@ These are the key swarm architectures available in the **Swarms Framework**. Eac -------------------------------------------------- -# File: swarms\install\workspace_manager.md +# File: swarms/install/workspace_manager.md # Swarms Framework Environment Configuration @@ -27164,7 +29822,7 @@ Common issues and solutions: -------------------------------------------------- -# File: swarms\memory\diy_memory.md +# File: swarms/memory/diy_memory.md # Integrating the Agent Class with Memory Systems/RAG in the Swarms Memory Framework @@ -27289,7 +29947,7 @@ Happy coding! -------------------------------------------------- -# File: swarms\papers.md +# File: swarms/papers.md # awesome-multi-agent-papers @@ -27297,7 +29955,7 @@ An awesome list of multi-agent papers that show you various swarm architectures -------------------------------------------------- -# File: swarms\products.md +# File: swarms/products.md # Swarms Products @@ -27463,7 +30121,7 @@ Experience the future of multi-agent collaboration with Swarms. Start building y -------------------------------------------------- -# File: swarms\prompts\essence.md +# File: swarms/prompts/essence.md # **The Essence of Enterprise-Grade Prompting** @@ -27638,7 +30296,7 @@ Ready to take the next step? Let’s explore how to design adaptive prompting fr -------------------------------------------------- -# File: swarms\prompts\main.md +# File: swarms/prompts/main.md # Managing Prompts in Production @@ -27957,7 +30615,916 @@ By using this architecture, you'll be able to scale your system effortlessly whi -------------------------------------------------- -# File: swarms\structs\abstractswarm.md +# File: swarms/structs/BoardOfDirectors.md + +# Board of Directors - Multi-Agent Architecture + +The Board of Directors is a sophisticated multi-agent architecture that implements collective decision-making through democratic processes, voting mechanisms, and role-based leadership. This architecture provides an alternative to single-director patterns by enabling collaborative intelligence through structured governance. + +## Overview + +The Board of Directors architecture follows a democratic workflow pattern: + +1. **Task Reception**: User provides a task to the swarm +2. **Board Meeting**: Board of Directors convenes to discuss and create a plan +3. **Voting & Consensus**: Board members vote and reach consensus on task distribution +4. **Order Distribution**: Board distributes orders to specialized worker agents +5. 
**Execution**: Individual agents execute their assigned tasks +6. **Feedback Loop**: Board evaluates results and issues new orders if needed (up to `max_loops`) +7. **Context Preservation**: All conversation history and context is maintained throughout the process + +## Architecture Components + +### Core Components + +| Component | Description | Purpose | +|-----------|-------------|---------| +| **BoardOfDirectorsSwarm** | Main orchestration class | Manages the entire board workflow and agent coordination | +| **Board Member Roles** | Role definitions and hierarchy | Defines responsibilities and voting weights for each board member | +| **Decision Making Process** | Voting and consensus mechanisms | Implements democratic decision-making with weighted voting | +| **Workflow Management** | Process orchestration | Manages the complete lifecycle from task reception to final delivery | + +### Board Member Interaction Flow + +```mermaid +sequenceDiagram + participant User + participant Chairman + participant ViceChair + participant Secretary + participant Treasurer + participant ExecDir + participant Agents + + User->>Chairman: Submit Task + Chairman->>ViceChair: Notify Board Meeting + Chairman->>Secretary: Request Meeting Setup + Chairman->>Treasurer: Resource Assessment + Chairman->>ExecDir: Strategic Planning + + Note over Chairman,ExecDir: Board Discussion Phase + + Chairman->>ViceChair: Lead Discussion + ViceChair->>Secretary: Document Decisions + Secretary->>Treasurer: Budget Considerations + Treasurer->>ExecDir: Resource Allocation + ExecDir->>Chairman: Strategic Recommendations + + Note over Chairman,ExecDir: Voting & Consensus + + Chairman->>ViceChair: Call for Vote + ViceChair->>Secretary: Record Votes + Secretary->>Treasurer: Financial Approval + Treasurer->>ExecDir: Resource Approval + ExecDir->>Chairman: Final Decision + + Note over Chairman,Agents: Execution Phase + + Chairman->>Agents: Distribute Orders + Agents->>Chairman: Execute Tasks + Agents->>ViceChair: Progress Reports + Agents->>Secretary: Documentation + Agents->>Treasurer: Resource Usage + Agents->>ExecDir: Strategic Updates + + Note over Chairman,ExecDir: Review & Feedback + + Chairman->>User: Deliver Results +``` + +## Board Member Roles + +The Board of Directors supports various roles with different responsibilities and voting weights: + +| Role | Description | Voting Weight | Responsibilities | +|------|-------------|---------------|------------------| +| `CHAIRMAN` | Primary leader responsible for board meetings and final decisions | 1.5 | Leading meetings, facilitating consensus, making final decisions | +| `VICE_CHAIRMAN` | Secondary leader who supports the chairman | 1.2 | Supporting chairman, coordinating operations | +| `SECRETARY` | Responsible for documentation and meeting minutes | 1.0 | Documenting meetings, maintaining records | +| `TREASURER` | Manages financial aspects and resource allocation | 1.0 | Financial oversight, resource management | +| `EXECUTIVE_DIRECTOR` | Executive-level board member with operational authority | 1.5 | Strategic planning, operational oversight | +| `MEMBER` | General board member with specific expertise | 1.0 | Contributing expertise, participating in decisions | + +### Role Hierarchy and Authority + +```python +# Example: Role hierarchy implementation +class BoardRoleHierarchy: + def __init__(self): + self.roles = { + "CHAIRMAN": { + "voting_weight": 1.5, + "authority_level": "FINAL", + "supervises": ["VICE_CHAIRMAN", "EXECUTIVE_DIRECTOR", "SECRETARY", "TREASURER", 
"MEMBER"], + "responsibilities": ["leadership", "final_decision", "consensus_facilitation"], + "override_capability": True + }, + "VICE_CHAIRMAN": { + "voting_weight": 1.2, + "authority_level": "SENIOR", + "supervises": ["MEMBER"], + "responsibilities": ["operational_support", "coordination", "implementation"], + "backup_for": "CHAIRMAN" + }, + "EXECUTIVE_DIRECTOR": { + "voting_weight": 1.5, + "authority_level": "SENIOR", + "supervises": ["MEMBER"], + "responsibilities": ["strategic_planning", "execution_oversight", "performance_management"], + "strategic_authority": True + }, + "SECRETARY": { + "voting_weight": 1.0, + "authority_level": "STANDARD", + "supervises": [], + "responsibilities": ["documentation", "record_keeping", "communication"], + "administrative_authority": True + }, + "TREASURER": { + "voting_weight": 1.0, + "authority_level": "STANDARD", + "supervises": [], + "responsibilities": ["financial_oversight", "resource_management", "budget_control"], + "financial_authority": True + }, + "MEMBER": { + "voting_weight": 1.0, + "authority_level": "STANDARD", + "supervises": [], + "responsibilities": ["expertise_contribution", "analysis", "voting"], + "specialized_expertise": True + } + } +``` + +## Quick Start + +### Basic Setup + +```python +from swarms import Agent +from swarms.structs.board_of_directors_swarm import ( + BoardOfDirectorsSwarm, + BoardMember, + BoardMemberRole +) +from swarms.config.board_config import enable_board_feature + +# Enable the Board of Directors feature +enable_board_feature() + +# Create board members with specific roles +chairman = Agent( + agent_name="Chairman", + agent_description="Chairman of the Board responsible for leading meetings", + model_name="gpt-4o-mini", + system_prompt="You are the Chairman of the Board..." +) + +vice_chairman = Agent( + agent_name="Vice-Chairman", + agent_description="Vice Chairman who supports the Chairman", + model_name="gpt-4o-mini", + system_prompt="You are the Vice Chairman..." +) + +# Create BoardMember objects with roles and expertise +board_members = [ + BoardMember(chairman, BoardMemberRole.CHAIRMAN, 1.5, ["leadership", "strategy"]), + BoardMember(vice_chairman, BoardMemberRole.VICE_CHAIRMAN, 1.2, ["operations", "coordination"]), +] + +# Create worker agents +research_agent = Agent( + agent_name="Research-Specialist", + agent_description="Expert in market research and analysis", + model_name="gpt-4o", +) + +financial_agent = Agent( + agent_name="Financial-Analyst", + agent_description="Specialist in financial analysis and valuation", + model_name="gpt-4o", +) + +# Initialize the Board of Directors swarm +board_swarm = BoardOfDirectorsSwarm( + name="Executive_Board_Swarm", + description="Executive board with specialized roles for strategic decision-making", + board_members=board_members, + agents=[research_agent, financial_agent], + max_loops=2, + verbose=True, + decision_threshold=0.6, + enable_voting=True, + enable_consensus=True, +) + +# Execute a complex task with democratic decision-making +result = board_swarm.run(task="Analyze the market potential for Tesla (TSLA) stock") +print(result) +``` + +## Comprehensive Examples + +### 1. Strategic Investment Analysis + +```python +# Create specialized agents for investment analysis +market_research_agent = Agent( + agent_name="Market-Research-Specialist", + agent_description="Expert in market research, competitive analysis, and industry trends", + model_name="gpt-4o", + system_prompt="""You are a Market Research Specialist. Your responsibilities include: +1. 
Conducting comprehensive market research and analysis +2. Identifying market trends, opportunities, and risks +3. Analyzing competitive landscape and positioning +4. Providing market size and growth projections +5. Supporting strategic decision-making with research findings + +You should be thorough, analytical, and objective in your research.""" +) + +financial_analyst_agent = Agent( + agent_name="Financial-Analyst", + agent_description="Specialist in financial analysis, valuation, and investment assessment", + model_name="gpt-4o", + system_prompt="""You are a Financial Analyst. Your responsibilities include: +1. Conducting financial analysis and valuation +2. Assessing investment opportunities and risks +3. Analyzing financial performance and metrics +4. Providing financial insights and recommendations +5. Supporting financial decision-making + +You should be financially astute, analytical, and focused on value creation.""" +) + +technical_assessor_agent = Agent( + agent_name="Technical-Assessor", + agent_description="Expert in technical feasibility and implementation assessment", + model_name="gpt-4o", + system_prompt="""You are a Technical Assessor. Your responsibilities include: +1. Evaluating technical feasibility and requirements +2. Assessing implementation challenges and risks +3. Analyzing technology stack and architecture +4. Providing technical insights and recommendations +5. Supporting technical decision-making + +You should be technically proficient, practical, and solution-oriented.""" +) + +# Create comprehensive board members +board_members = [ + BoardMember( + chairman, + BoardMemberRole.CHAIRMAN, + 1.5, + ["leadership", "strategy", "governance", "decision_making"] + ), + BoardMember( + vice_chairman, + BoardMemberRole.VICE_CHAIRMAN, + 1.2, + ["operations", "coordination", "communication", "implementation"] + ), + BoardMember( + secretary, + BoardMemberRole.SECRETARY, + 1.0, + ["documentation", "compliance", "record_keeping", "communication"] + ), + BoardMember( + treasurer, + BoardMemberRole.TREASURER, + 1.0, + ["finance", "budgeting", "risk_management", "resource_allocation"] + ), + BoardMember( + executive_director, + BoardMemberRole.EXECUTIVE_DIRECTOR, + 1.5, + ["strategy", "operations", "innovation", "performance_management"] + ) +] + +# Initialize the investment analysis board +investment_board = BoardOfDirectorsSwarm( + name="Investment_Analysis_Board", + description="Specialized board for investment analysis and decision-making", + board_members=board_members, + agents=[market_research_agent, financial_analyst_agent, technical_assessor_agent], + max_loops=3, + verbose=True, + decision_threshold=0.75, # Higher threshold for investment decisions + enable_voting=True, + enable_consensus=True, + max_workers=3, + output_type="dict" +) + +# Execute investment analysis +investment_task = """ +Analyze the strategic investment opportunity for a $50M Series B funding round in a +fintech startup. Consider market conditions, competitive landscape, financial projections, +technical feasibility, and strategic fit. Provide comprehensive recommendations including: +1. Investment recommendation (proceed/hold/decline) +2. Valuation analysis and suggested terms +3. Risk assessment and mitigation strategies +4. Strategic value and synergies +5. Implementation timeline and milestones +""" + +result = investment_board.run(task=investment_task) +print("Investment Analysis Results:") +print(json.dumps(result, indent=2)) +``` + +### 2. 
Technology Strategy Development + +```python +# Create technology-focused agents +tech_strategy_agent = Agent( + agent_name="Tech-Strategy-Specialist", + agent_description="Expert in technology strategy and digital transformation", + model_name="gpt-4o", + system_prompt="""You are a Technology Strategy Specialist. Your responsibilities include: +1. Developing technology roadmaps and strategies +2. Assessing digital transformation opportunities +3. Evaluating emerging technologies and trends +4. Planning technology investments and priorities +5. Supporting technology decision-making + +You should be strategic, forward-thinking, and technology-savvy.""" +) + +implementation_planner_agent = Agent( + agent_name="Implementation-Planner", + agent_description="Expert in implementation planning and project management", + model_name="gpt-4o", + system_prompt="""You are an Implementation Planner. Your responsibilities include: +1. Creating detailed implementation plans +2. Assessing resource requirements and timelines +3. Identifying implementation risks and challenges +4. Planning change management strategies +5. Supporting implementation decision-making + +You should be practical, organized, and execution-focused.""" +) + +# Technology strategy board configuration +tech_board = BoardOfDirectorsSwarm( + name="Technology_Strategy_Board", + description="Specialized board for technology strategy and digital transformation", + board_members=board_members, + agents=[tech_strategy_agent, implementation_planner_agent, technical_assessor_agent], + max_loops=4, # More loops for complex technology planning + verbose=True, + decision_threshold=0.7, + enable_voting=True, + enable_consensus=True, + max_workers=3, + output_type="dict" +) + +# Execute technology strategy development +tech_strategy_task = """ +Develop a comprehensive technology strategy for a mid-size manufacturing company +looking to digitize operations and implement Industry 4.0 technologies. Consider: +1. Current technology assessment and gaps +2. Technology roadmap and implementation plan +3. Investment requirements and ROI analysis +4. Risk assessment and mitigation strategies +5. Change management and training requirements +6. Competitive positioning and market advantages +""" + +result = tech_board.run(task=tech_strategy_task) +print("Technology Strategy Results:") +print(json.dumps(result, indent=2)) +``` + +### 3. Crisis Management and Response + +```python +# Create crisis management agents +crisis_coordinator_agent = Agent( + agent_name="Crisis-Coordinator", + agent_description="Expert in crisis management and emergency response", + model_name="gpt-4o", + system_prompt="""You are a Crisis Coordinator. Your responsibilities include: +1. Coordinating crisis response efforts +2. Assessing crisis severity and impact +3. Developing immediate response plans +4. Managing stakeholder communications +5. Supporting crisis decision-making + +You should be calm, decisive, and action-oriented.""" +) + +communications_specialist_agent = Agent( + agent_name="Communications-Specialist", + agent_description="Expert in crisis communications and stakeholder management", + model_name="gpt-4o", + system_prompt="""You are a Communications Specialist. Your responsibilities include: +1. Developing crisis communication strategies +2. Managing stakeholder communications +3. Coordinating public relations efforts +4. Ensuring message consistency and accuracy +5. 
Supporting communication decision-making + +You should be clear, empathetic, and strategic in communications.""" +) + +# Crisis management board configuration +crisis_board = BoardOfDirectorsSwarm( + name="Crisis_Management_Board", + description="Specialized board for crisis management and emergency response", + board_members=board_members, + agents=[crisis_coordinator_agent, communications_specialist_agent, financial_analyst_agent], + max_loops=2, # Faster response needed + verbose=True, + decision_threshold=0.6, # Lower threshold for urgent decisions + enable_voting=True, + enable_consensus=True, + max_workers=3, + output_type="dict" +) + +# Execute crisis management +crisis_task = """ +Our company is facing a major data breach. Develop an immediate response plan. +Include: +1. Immediate containment and mitigation steps +2. Communication strategy for stakeholders +3. Legal and regulatory compliance requirements +4. Financial impact assessment +5. Long-term recovery and prevention measures +6. Timeline and resource allocation +""" + +result = crisis_board.run(task=crisis_task) +print("Crisis Management Results:") +print(json.dumps(result, indent=2)) +``` + +## Configuration and Parameters + +### BoardOfDirectorsSwarm Parameters + +```python +# Complete parameter reference +board_swarm = BoardOfDirectorsSwarm( + # Basic Configuration + name="Board_Name", # Name of the board + description="Board description", # Description of the board's purpose + + # Board Members and Agents + board_members=board_members, # List of BoardMember objects + agents=worker_agents, # List of worker Agent objects + + # Execution Control + max_loops=3, # Maximum number of refinement loops + max_workers=4, # Maximum parallel workers + + # Decision Making + decision_threshold=0.7, # Consensus threshold (0.0-1.0) + enable_voting=True, # Enable voting mechanisms + enable_consensus=True, # Enable consensus building + + # Advanced Features + auto_assign_roles=True, # Auto-assign roles based on expertise + role_mapping={ # Custom role mapping + "financial_analysis": ["Treasurer", "Financial_Member"], + "strategic_planning": ["Chairman", "Executive_Director"] + }, + + # Consensus Configuration + consensus_timeout=300, # Consensus timeout in seconds + min_participation_rate=0.8, # Minimum participation rate + auto_fallback_to_chairman=True, # Chairman can make final decisions + consensus_rounds=3, # Maximum consensus building rounds + + # Output Configuration + output_type="dict", # Output format: "dict", "str", "list" + verbose=True, # Enable detailed logging + + # Quality Control + quality_threshold=0.8, # Quality threshold for outputs + enable_quality_gates=True, # Enable quality checkpoints + enable_peer_review=True, # Enable peer review mechanisms + + # Performance Optimization + parallel_execution=True, # Enable parallel execution + enable_agent_pooling=True, # Enable agent pooling + timeout_per_agent=300, # Timeout per agent in seconds + + # Monitoring and Logging + enable_logging=True, # Enable detailed logging + log_level="INFO", # Logging level + enable_metrics=True, # Enable performance metrics + enable_tracing=True # Enable request tracing +) +``` + +### Voting Configuration + +```python +# Voting system configuration +voting_config = { + "method": "weighted_majority", # Voting method + "threshold": 0.75, # Consensus threshold + "weights": { # Role-based voting weights + "CHAIRMAN": 1.5, + "VICE_CHAIRMAN": 1.2, + "SECRETARY": 1.0, + "TREASURER": 1.0, + "EXECUTIVE_DIRECTOR": 1.5 + }, + "tie_breaker": "CHAIRMAN", 
# Tie breaker role + "allow_abstention": True, # Allow board members to abstain + "secret_ballot": False, # Use secret ballot voting + "transparent_process": True # Transparent voting process +} +``` + +### Quality Control Configuration + +```python +# Quality control configuration +quality_config = { + "quality_gates": True, # Enable quality checkpoints + "quality_threshold": 0.8, # Quality threshold + "enable_peer_review": True, # Enable peer review + "review_required": True, # Require peer review + "output_validation": True, # Validate outputs + "enable_metrics_tracking": True, # Track quality metrics + + # Quality metrics + "quality_metrics": { + "completeness": {"weight": 0.2, "threshold": 0.8}, + "accuracy": {"weight": 0.25, "threshold": 0.85}, + "feasibility": {"weight": 0.2, "threshold": 0.8}, + "risk": {"weight": 0.15, "threshold": 0.7}, + "impact": {"weight": 0.2, "threshold": 0.8} + } +} +``` + +## Performance Monitoring and Analytics + +### Board Performance Metrics + +```python +# Get comprehensive board performance metrics +board_summary = board_swarm.get_board_summary() +print("Board Summary:") +print(f"Board Name: {board_summary['board_name']}") +print(f"Total Board Members: {board_summary['total_members']}") +print(f"Total Worker Agents: {board_summary['total_agents']}") +print(f"Decision Threshold: {board_summary['decision_threshold']}") +print(f"Max Loops: {board_summary['max_loops']}") + +# Display board member details +print("\nBoard Members:") +for member in board_summary['members']: + print(f"- {member['name']} (Role: {member['role']}, Weight: {member['voting_weight']})") + print(f" Expertise: {', '.join(member['expertise_areas'])}") + +# Display worker agent details +print("\nWorker Agents:") +for agent in board_summary['agents']: + print(f"- {agent['name']}: {agent['description']}") +``` + +### Decision Analysis + +```python +# Analyze decision-making patterns +if hasattr(result, 'get') and callable(result.get): + conversation_history = result.get('conversation_history', []) + + print(f"\nDecision Analysis:") + print(f"Total Messages: {len(conversation_history)}") + + # Count board member contributions + board_contributions = {} + for msg in conversation_history: + if 'Board' in msg.get('role', ''): + member_name = msg.get('agent_name', 'Unknown') + board_contributions[member_name] = board_contributions.get(member_name, 0) + 1 + + print(f"Board Member Contributions:") + for member, count in board_contributions.items(): + print(f"- {member}: {count} contributions") + + # Count agent executions + agent_executions = {} + for msg in conversation_history: + if any(agent.agent_name in msg.get('role', '') for agent in worker_agents): + agent_name = msg.get('agent_name', 'Unknown') + agent_executions[agent_name] = agent_executions.get(agent_name, 0) + 1 + + print(f"\nAgent Executions:") + for agent, count in agent_executions.items(): + print(f"- {agent}: {count} executions") +``` + +### Performance Monitoring System + +```python +# Performance monitoring system +class PerformanceMonitor: + def __init__(self): + self.metrics = { + "execution_times": [], + "quality_scores": [], + "consensus_rounds": [], + "error_rates": [] + } + + def track_execution_time(self, phase, duration): + """Track execution time for different phases""" + self.metrics["execution_times"].append({ + "phase": phase, + "duration": duration, + "timestamp": datetime.now().isoformat() + }) + + def track_quality_score(self, score): + """Track quality scores""" + self.metrics["quality_scores"].append({ + 
"score": score, + "timestamp": datetime.now().isoformat() + }) + + def generate_performance_report(self): + """Generate comprehensive performance report""" + return { + "average_execution_time": self.calculate_average_execution_time(), + "quality_trends": self.analyze_quality_trends(), + "consensus_efficiency": self.analyze_consensus_efficiency(), + "error_analysis": self.analyze_errors(), + "recommendations": self.generate_recommendations() + } + +# Usage example +monitor = PerformanceMonitor() +# ... track metrics during execution ... +report = monitor.generate_performance_report() +print("Performance Report:") +print(json.dumps(report, indent=2)) +``` + +## Advanced Features and Customization + +### Custom Board Templates + +```python +from swarms.config.board_config import get_default_board_template + +# Get pre-configured board templates +financial_board = get_default_board_template("financial_analysis") +strategic_board = get_default_board_template("strategic_planning") +tech_board = get_default_board_template("technology_assessment") +crisis_board = get_default_board_template("crisis_management") + +# Custom board template +custom_template = { + "name": "Custom_Board", + "description": "Custom board for specific use case", + "board_members": [ + {"role": "CHAIRMAN", "expertise": ["leadership", "strategy"]}, + {"role": "VICE_CHAIRMAN", "expertise": ["operations", "coordination"]}, + {"role": "SECRETARY", "expertise": ["documentation", "communication"]}, + {"role": "TREASURER", "expertise": ["finance", "budgeting"]}, + {"role": "EXECUTIVE_DIRECTOR", "expertise": ["strategy", "operations"]} + ], + "agents": [ + {"name": "Research_Agent", "expertise": ["research", "analysis"]}, + {"name": "Technical_Agent", "expertise": ["technical", "implementation"]} + ], + "config": { + "max_loops": 3, + "decision_threshold": 0.7, + "enable_voting": True, + "enable_consensus": True + } +} +``` + +### Dynamic Role Assignment + +```python +# Automatically assign roles based on task requirements +board_swarm = BoardOfDirectorsSwarm( + board_members=board_members, + agents=agents, + auto_assign_roles=True, + role_mapping={ + "financial_analysis": ["Treasurer", "Financial_Member"], + "strategic_planning": ["Chairman", "Executive_Director"], + "technical_assessment": ["Technical_Member", "Executive_Director"], + "research_analysis": ["Research_Member", "Secretary"], + "crisis_management": ["Chairman", "Vice_Chairman", "Communications_Member"] + } +) +``` + +### Consensus Optimization + +```python +# Advanced consensus-building mechanisms +board_swarm = BoardOfDirectorsSwarm( + board_members=board_members, + agents=agents, + enable_consensus=True, + consensus_timeout=300, # 5 minutes timeout + min_participation_rate=0.8, # 80% minimum participation + auto_fallback_to_chairman=True, # Chairman can make final decisions + consensus_rounds=3, # Maximum consensus building rounds + consensus_method="weighted_majority", # Consensus method + enable_mediation=True, # Enable mediation for conflicts + mediation_timeout=120 # Mediation timeout in seconds +) +``` + +## Troubleshooting and Debugging + +### Common Issues and Solutions + +1. **Consensus Failures** + - **Issue**: Board cannot reach consensus within loop limit + - **Solution**: Lower voting threshold, increase max_loops, or adjust voting weights + ```python + board_swarm = BoardOfDirectorsSwarm( + decision_threshold=0.6, # Lower threshold + max_loops=5, # More loops + consensus_timeout=600 # Longer timeout + ) + ``` + +2. 
**Agent Timeout** + - **Issue**: Individual agents take too long to respond + - **Solution**: Increase timeout settings or optimize agent prompts + ```python + board_swarm = BoardOfDirectorsSwarm( + timeout_per_agent=600, # 10 minutes per agent + enable_agent_pooling=True # Use agent pooling + ) + ``` + +3. **Poor Quality Output** + - **Issue**: Final output doesn't meet quality standards + - **Solution**: Enable quality gates, increase max_loops, or improve agent prompts + ```python + board_swarm = BoardOfDirectorsSwarm( + enable_quality_gates=True, + quality_threshold=0.8, + enable_peer_review=True, + max_loops=4 + ) + ``` + +4. **Resource Exhaustion** + - **Issue**: System runs out of resources during execution + - **Solution**: Implement resource limits, use agent pooling, or optimize parallel execution + ```python + board_swarm = BoardOfDirectorsSwarm( + max_workers=2, # Limit parallel workers + enable_agent_pooling=True, + parallel_execution=False # Disable parallel execution + ) + ``` + +### Debugging Techniques + +```python +# Debugging configuration +debug_config = BoardConfig( + max_loops=1, # Limit loops for debugging + enable_logging=True, + log_level="DEBUG", + enable_tracing=True, + debug_mode=True +) + +# Create debug swarm +debug_swarm = BoardOfDirectorsSwarm( + agents=agents, + config=debug_config +) + +# Execute with debugging +try: + result = debug_swarm.run(task) +except Exception as e: + print(f"Error: {e}") + print(f"Debug info: {debug_swarm.get_debug_info()}") + +# Enable detailed logging +import logging +logging.basicConfig( + level=logging.DEBUG, + format='%(asctime)s - %(name)s - %(levelname)s - %(message)s' +) + +# Create swarm with logging enabled +logging_swarm = BoardOfDirectorsSwarm( + agents=agents, + config=BoardConfig( + enable_logging=True, + log_level="DEBUG", + enable_metrics=True, + enable_tracing=True + ) +) +``` + +## Use Cases + +### Corporate Governance +- **Strategic Planning**: Long-term business strategy development +- **Risk Management**: Comprehensive risk assessment and mitigation +- **Resource Allocation**: Optimal distribution of company resources +- **Performance Oversight**: Monitoring and evaluating organizational performance + +### Financial Analysis +- **Portfolio Management**: Investment portfolio optimization and rebalancing +- **Market Analysis**: Comprehensive market research and trend analysis +- **Risk Assessment**: Financial risk evaluation and management +- **Compliance Monitoring**: Regulatory compliance and audit preparation + +### Research & Development +- **Technology Assessment**: Evaluation of emerging technologies +- **Product Development**: Strategic product planning and development +- **Innovation Management**: Managing innovation pipelines and initiatives +- **Quality Assurance**: Ensuring high standards across development processes + +### Project Management +- **Complex Project Planning**: Multi-faceted project strategy development +- **Resource Optimization**: Efficient allocation of project resources +- **Stakeholder Management**: Coordinating diverse stakeholder interests +- **Risk Mitigation**: Identifying and addressing project risks + +### Crisis Management +- **Emergency Response**: Rapid response to critical situations +- **Stakeholder Communication**: Managing communications during crises +- **Recovery Planning**: Developing recovery and prevention strategies +- **Legal Compliance**: Ensuring compliance during crisis situations + +## Success Criteria + +A successful Board of Directors implementation should 
demonstrate: + +- **Democratic Decision Making**: All board members contribute to decisions +- **Consensus Achievement**: Decisions reached through collaborative processes +- **Role Effectiveness**: Each board member fulfills their responsibilities +- **Agent Coordination**: Worker agents execute tasks efficiently +- **Quality Output**: High-quality results through collective intelligence +- **Process Transparency**: Clear visibility into decision-making processes +- **Performance Optimization**: Efficient resource utilization and execution +- **Continuous Improvement**: Learning from each execution cycle + +## Best Practices + +### 1. Role Definition +- Clearly define responsibilities for each board member +- Ensure expertise areas align with organizational needs +- Balance voting weights based on role importance +- Document role interactions and communication protocols + +### 2. Task Formulation +- Provide clear, specific task descriptions +- Include relevant context and constraints +- Specify expected outputs and deliverables +- Define quality criteria and success metrics + +### 3. Consensus Building +- Allow adequate time for discussion and consensus +- Encourage diverse perspectives and viewpoints +- Use structured decision-making processes +- Implement conflict resolution mechanisms + +### 4. Performance Monitoring +- Track decision quality and outcomes +- Monitor board member participation +- Analyze agent utilization and effectiveness +- Implement continuous improvement processes + +### 5. Resource Management +- Optimize agent allocation and utilization +- Implement parallel execution where appropriate +- Monitor resource usage and performance +- Scale resources based on task complexity + +--- + +The Board of Directors architecture represents a sophisticated approach to multi-agent collaboration, enabling organizations to leverage collective intelligence through structured governance and democratic decision-making processes. This comprehensive implementation provides the tools and frameworks needed to build effective, scalable, and intelligent decision-making systems. + + +-------------------------------------------------- + +# File: swarms/structs/abstractswarm.md # `BaseSwarm` Documentation @@ -28478,31 +32045,9 @@ This comprehensive documentation covers the Swarms library, including the `BaseS -------------------------------------------------- -# File: swarms\structs\agent.md +# File: swarms/structs/agent.md -# `Agent` - -Swarm Agent is a powerful autonomous agent framework designed to connect Language Models (LLMs) with various tools and long-term memory. This class provides the ability to ingest and process various types of documents such as PDFs, text files, Markdown files, JSON files, and more. The Agent structure offers a wide range of features to enhance the capabilities of LLMs and facilitate efficient task execution. - -## Overview - -The `Agent` class establishes a conversational loop with a language model, allowing for interactive task execution, feedback collection, and dynamic response generation. It includes features such as: - -1. **Conversational Loop**: Enables back-and-forth interaction with the model. -2. **Feedback Collection**: Allows users to provide feedback on generated responses. -3. **Stoppable Conversation**: Supports custom stopping conditions for the conversation. -4. **Retry Mechanism**: Implements a retry system for handling issues in response generation. -5. **Tool Integration**: Supports the integration of various tools for enhanced capabilities. 
-6. **Long-term Memory Management**: Incorporates vector databases for efficient information retrieval. -7. **Document Ingestion**: Processes various document types for information extraction. -8. **Interactive Mode**: Allows real-time communication with the agent. -9. **Sentiment Analysis**: Evaluates the sentiment of generated responses. -10. **Output Filtering and Cleaning**: Ensures generated responses meet specific criteria. -11. **Asynchronous and Concurrent Execution**: Supports efficient parallelization of tasks. -12. **Planning and Reasoning**: Implements techniques like algorithm of thoughts for enhanced decision-making. - - -## Architecture +# `Agent` Structure Reference Documentation ```mermaid graph TD @@ -28523,113 +32068,170 @@ graph TD L -->|Proceeds to Final LLM Processing| I ``` +The `Agent` class is the core component of the Swarm Agent framework. It serves as an autonomous agent that bridges Language Models (LLMs) with external tools and long-term memory systems. The class is designed to handle a variety of document types—including PDFs, text files, Markdown, and JSON—enabling robust document ingestion and processing. By integrating these capabilities, the `Agent` class empowers LLMs to perform complex tasks, utilize external resources, and manage information efficiently, making it a versatile solution for advanced autonomous workflows. + + +## Features +The `Agent` class establishes a conversational loop with a language model, allowing for interactive task execution, feedback collection, and dynamic response generation. It includes features such as: + +| Feature | Description | +|------------------------------------------|--------------------------------------------------------------------------------------------------| +| **Conversational Loop** | Enables back-and-forth interaction with the model. | +| **Feedback Collection** | Allows users to provide feedback on generated responses. | +| **Stoppable Conversation** | Supports custom stopping conditions for the conversation. | +| **Retry Mechanism** | Implements a retry system for handling issues in response generation. | +| **Tool Integration** | Supports the integration of various tools for enhanced capabilities. | +| **Long-term Memory Management** | Incorporates vector databases for efficient information retrieval. | +| **Document Ingestion** | Processes various document types for information extraction. | +| **Interactive Mode** | Allows real-time communication with the agent. | +| **Sentiment Analysis** | Evaluates the sentiment of generated responses. | +| **Output Filtering and Cleaning** | Ensures generated responses meet specific criteria. | +| **Asynchronous and Concurrent Execution**| Supports efficient parallelization of tasks. | +| **Planning and Reasoning** | Implements techniques like algorithm of thoughts for enhanced decision-making. | -## `Agent` Attributes -| Attribute | Description | -|-----------|-------------| -| `id` | Unique identifier for the agent instance. | -| `llm` | Language model instance used by the agent. | -| `template` | Template used for formatting responses. | -| `max_loops` | Maximum number of loops the agent can run. | -| `stopping_condition` | Callable function determining when to stop looping. | -| `loop_interval` | Interval (in seconds) between loops. | -| `retry_attempts` | Number of retry attempts for failed LLM calls. | -| `retry_interval` | Interval (in seconds) between retry attempts. | -| `return_history` | Boolean indicating whether to return conversation history. 
| -| `stopping_token` | Token that stops the agent from looping when present in the response. | -| `dynamic_loops` | Boolean indicating whether to dynamically determine the number of loops. | -| `interactive` | Boolean indicating whether to run in interactive mode. | -| `dashboard` | Boolean indicating whether to display a dashboard. | -| `agent_name` | Name of the agent instance. | -| `agent_description` | Description of the agent instance. | -| `system_prompt` | System prompt used to initialize the conversation. | -| `tools` | List of callable functions representing tools the agent can use. | -| `dynamic_temperature_enabled` | Boolean indicating whether to dynamically adjust the LLM's temperature. | -| `sop` | Standard operating procedure for the agent. | -| `sop_list` | List of strings representing the standard operating procedure. | -| `saved_state_path` | File path for saving and loading the agent's state. | -| `autosave` | Boolean indicating whether to automatically save the agent's state. | -| `context_length` | Maximum length of the context window (in tokens) for the LLM. | -| `user_name` | Name used to represent the user in the conversation. | -| `self_healing_enabled` | Boolean indicating whether to attempt self-healing in case of errors. | -| `code_interpreter` | Boolean indicating whether to interpret and execute code snippets. | -| `multi_modal` | Boolean indicating whether to support multimodal inputs. | -| `pdf_path` | File path of a PDF document to be ingested. | -| `list_of_pdf` | List of file paths for PDF documents to be ingested. | -| `tokenizer` | Instance of a tokenizer used for token counting and management. | -| `long_term_memory` | Instance of a `BaseVectorDatabase` implementation for long-term memory management. | -| `preset_stopping_token` | Boolean indicating whether to use a preset stopping token. | -| `traceback` | Object used for traceback handling. | -| `traceback_handlers` | List of traceback handlers. | -| `streaming_on` | Boolean indicating whether to stream responses. | -| `docs` | List of document paths or contents to be ingested. | -| `docs_folder` | Path to a folder containing documents to be ingested. | -| `verbose` | Boolean indicating whether to print verbose output. | -| `parser` | Callable function used for parsing input data. | -| `best_of_n` | Integer indicating the number of best responses to generate. | -| `callback` | Callable function to be called after each agent loop. | -| `metadata` | Dictionary containing metadata for the agent. | -| `callbacks` | List of callable functions to be called during execution. | -| `logger_handler` | Handler for logging messages. | -| `search_algorithm` | Callable function for long-term memory retrieval. | -| `logs_to_filename` | File path for logging agent activities. | -| `evaluator` | Callable function for evaluating the agent's responses. | -| `stopping_func` | Callable function used as a stopping condition. | -| `custom_loop_condition` | Callable function used as a custom loop condition. | -| `sentiment_threshold` | Float value representing the sentiment threshold for evaluating responses. | -| `custom_exit_command` | String representing a custom command for exiting the agent's loop. | -| `sentiment_analyzer` | Callable function for sentiment analysis on outputs. | -| `limit_tokens_from_string` | Callable function for limiting the number of tokens in a string. | -| `custom_tools_prompt` | Callable function for generating a custom prompt for tool usage. 
| -| `tool_schema` | Data structure representing the schema for the agent's tools. | -| `output_type` | Type representing the expected output type of responses. | -| `function_calling_type` | String representing the type of function calling. | -| `output_cleaner` | Callable function for cleaning the agent's output. | -| `function_calling_format_type` | String representing the format type for function calling. | -| `list_base_models` | List of base models used for generating tool schemas. | -| `metadata_output_type` | String representing the output type for metadata. | -| `state_save_file_type` | String representing the file type for saving the agent's state. | -| `chain_of_thoughts` | Boolean indicating whether to use the chain of thoughts technique. | -| `algorithm_of_thoughts` | Boolean indicating whether to use the algorithm of thoughts technique. | -| `tree_of_thoughts` | Boolean indicating whether to use the tree of thoughts technique. | -| `tool_choice` | String representing the method for tool selection. | -| `execute_tool` | Boolean indicating whether to execute tools. | -| `rules` | String representing the rules for the agent's behavior. | -| `planning` | Boolean indicating whether to perform planning. | -| `planning_prompt` | String representing the prompt for planning. | -| `device` | String representing the device on which the agent should run. | -| `custom_planning_prompt` | String representing a custom prompt for planning. | -| `memory_chunk_size` | Integer representing the maximum size of memory chunks for long-term memory retrieval. | -| `agent_ops_on` | Boolean indicating whether agent operations should be enabled. | -| `return_step_meta` | Boolean indicating whether to return JSON of all steps and additional metadata. | -| `output_type` | Literal type indicating whether to output "string", "str", "list", "json", "dict", or "yaml". | -| `time_created` | Float representing the time the agent was created. | -| `tags` | Optional list of strings for tagging the agent. | -| `use_cases` | Optional list of dictionaries describing use cases for the agent. | -| `step_pool` | List of Step objects representing the agent's execution steps. | -| `print_every_step` | Boolean indicating whether to print every step of execution. | -| `agent_output` | ManySteps object containing the agent's output and metadata. | -| `executor_workers` | Integer representing the number of executor workers for concurrent operations. | -| `data_memory` | Optional callable for data memory operations. | -| `load_yaml_path` | String representing the path to a YAML file for loading configurations. | -| `auto_generate_prompt` | Boolean indicating whether to automatically generate prompts. 
| -| `rag_every_loop` | Boolean indicating whether to query RAG database for context on every loop | -| `plan_enabled` | Boolean indicating whether planning functionality is enabled | -| `artifacts_on` | Boolean indicating whether to save artifacts from agent execution | -| `artifacts_output_path` | File path where artifacts should be saved | -| `artifacts_file_extension` | File extension to use for saved artifacts | -| `device` | Device to run computations on ("cpu" or "gpu") | -| `all_cores` | Boolean indicating whether to use all CPU cores | -| `device_id` | ID of the GPU device to use if running on GPU | -| `scheduled_run_date` | Optional datetime for scheduling future agent runs | + +## `Agent` Attributes + +| Attribute | Type | Description | +|-----------|------|-------------| +| `id` | `Optional[str]` | Unique identifier for the agent instance. | +| `llm` | `Optional[Any]` | Language model instance used by the agent. | +| `template` | `Optional[str]` | Template used for formatting responses. | +| `max_loops` | `Optional[Union[int, str]]` | Maximum number of loops the agent can run. | +| `stopping_condition` | `Optional[Callable[[str], bool]]` | Callable function determining when to stop looping. | +| `loop_interval` | `Optional[int]` | Interval (in seconds) between loops. | +| `retry_attempts` | `Optional[int]` | Number of retry attempts for failed LLM calls. | +| `retry_interval` | `Optional[int]` | Interval (in seconds) between retry attempts. | +| `return_history` | `Optional[bool]` | Boolean indicating whether to return conversation history. | +| `stopping_token` | `Optional[str]` | Token that stops the agent from looping when present in the response. | +| `dynamic_loops` | `Optional[bool]` | Boolean indicating whether to dynamically determine the number of loops. | +| `interactive` | `Optional[bool]` | Boolean indicating whether to run in interactive mode. | +| `dashboard` | `Optional[bool]` | Boolean indicating whether to display a dashboard. | +| `agent_name` | `Optional[str]` | Name of the agent instance. | +| `agent_description` | `Optional[str]` | Description of the agent instance. | +| `system_prompt` | `Optional[str]` | System prompt used to initialize the conversation. | +| `tools` | `List[Callable]` | List of callable functions representing tools the agent can use. | +| `dynamic_temperature_enabled` | `Optional[bool]` | Boolean indicating whether to dynamically adjust the LLM's temperature. | +| `sop` | `Optional[str]` | Standard operating procedure for the agent. | +| `sop_list` | `Optional[List[str]]` | List of strings representing the standard operating procedure. | +| `saved_state_path` | `Optional[str]` | File path for saving and loading the agent's state. | +| `autosave` | `Optional[bool]` | Boolean indicating whether to automatically save the agent's state. | +| `context_length` | `Optional[int]` | Maximum length of the context window (in tokens) for the LLM. | +| `user_name` | `Optional[str]` | Name used to represent the user in the conversation. | +| `self_healing_enabled` | `Optional[bool]` | Boolean indicating whether to attempt self-healing in case of errors. | +| `code_interpreter` | `Optional[bool]` | Boolean indicating whether to interpret and execute code snippets. | +| `multi_modal` | `Optional[bool]` | Boolean indicating whether to support multimodal inputs. | +| `pdf_path` | `Optional[str]` | File path of a PDF document to be ingested. | +| `list_of_pdf` | `Optional[str]` | List of file paths for PDF documents to be ingested. 
| +| `tokenizer` | `Optional[Any]` | Instance of a tokenizer used for token counting and management. | +| `long_term_memory` | `Optional[Union[Callable, Any]]` | Instance of a `BaseVectorDatabase` implementation for long-term memory management. | +| `preset_stopping_token` | `Optional[bool]` | Boolean indicating whether to use a preset stopping token. | +| `traceback` | `Optional[Any]` | Object used for traceback handling. | +| `traceback_handlers` | `Optional[Any]` | List of traceback handlers. | +| `streaming_on` | `Optional[bool]` | Boolean indicating whether to stream responses. | +| `docs` | `List[str]` | List of document paths or contents to be ingested. | +| `docs_folder` | `Optional[str]` | Path to a folder containing documents to be ingested. | +| `verbose` | `Optional[bool]` | Boolean indicating whether to print verbose output. | +| `parser` | `Optional[Callable]` | Callable function used for parsing input data. | +| `best_of_n` | `Optional[int]` | Integer indicating the number of best responses to generate. | +| `callback` | `Optional[Callable]` | Callable function to be called after each agent loop. | +| `metadata` | `Optional[Dict[str, Any]]` | Dictionary containing metadata for the agent. | +| `callbacks` | `Optional[List[Callable]]` | List of callable functions to be called during execution. | +| `search_algorithm` | `Optional[Callable]` | Callable function for long-term memory retrieval. | +| `logs_to_filename` | `Optional[str]` | File path for logging agent activities. | +| `evaluator` | `Optional[Callable]` | Callable function for evaluating the agent's responses. | +| `stopping_func` | `Optional[Callable]` | Callable function used as a stopping condition. | +| `custom_loop_condition` | `Optional[Callable]` | Callable function used as a custom loop condition. | +| `sentiment_threshold` | `Optional[float]` | Float value representing the sentiment threshold for evaluating responses. | +| `custom_exit_command` | `Optional[str]` | String representing a custom command for exiting the agent's loop. | +| `sentiment_analyzer` | `Optional[Callable]` | Callable function for sentiment analysis on outputs. | +| `limit_tokens_from_string` | `Optional[Callable]` | Callable function for limiting the number of tokens in a string. | +| `custom_tools_prompt` | `Optional[Callable]` | Callable function for generating a custom prompt for tool usage. | +| `tool_schema` | `ToolUsageType` | Data structure representing the schema for the agent's tools. | +| `output_type` | `OutputType` | Type representing the expected output type of responses. | +| `function_calling_type` | `str` | String representing the type of function calling. | +| `output_cleaner` | `Optional[Callable]` | Callable function for cleaning the agent's output. | +| `function_calling_format_type` | `Optional[str]` | String representing the format type for function calling. | +| `list_base_models` | `Optional[List[BaseModel]]` | List of base models used for generating tool schemas. | +| `metadata_output_type` | `str` | String representing the output type for metadata. | +| `state_save_file_type` | `str` | String representing the file type for saving the agent's state. | +| `chain_of_thoughts` | `bool` | Boolean indicating whether to use the chain of thoughts technique. | +| `algorithm_of_thoughts` | `bool` | Boolean indicating whether to use the algorithm of thoughts technique. | +| `tree_of_thoughts` | `bool` | Boolean indicating whether to use the tree of thoughts technique. 
| +| `tool_choice` | `str` | String representing the method for tool selection. | +| `execute_tool` | `bool` | Boolean indicating whether to execute tools. | +| `rules` | `str` | String representing the rules for the agent's behavior. | +| `planning` | `Optional[str]` | Boolean indicating whether to perform planning. | +| `planning_prompt` | `Optional[str]` | String representing the prompt for planning. | +| `device` | `str` | String representing the device on which the agent should run. | +| `custom_planning_prompt` | `str` | String representing a custom prompt for planning. | +| `memory_chunk_size` | `int` | Integer representing the maximum size of memory chunks for long-term memory retrieval. | +| `agent_ops_on` | `bool` | Boolean indicating whether agent operations should be enabled. | +| `return_step_meta` | `Optional[bool]` | Boolean indicating whether to return JSON of all steps and additional metadata. | +| `time_created` | `Optional[str]` | Float representing the time the agent was created. | +| `tags` | `Optional[List[str]]` | Optional list of strings for tagging the agent. | +| `use_cases` | `Optional[List[Dict[str, str]]]` | Optional list of dictionaries describing use cases for the agent. | +| `step_pool` | `List[Step]` | List of Step objects representing the agent's execution steps. | +| `print_every_step` | `Optional[bool]` | Boolean indicating whether to print every step of execution. | +| `agent_output` | `ManySteps` | ManySteps object containing the agent's output and metadata. | +| `data_memory` | `Optional[Callable]` | Optional callable for data memory operations. | +| `load_yaml_path` | `str` | String representing the path to a YAML file for loading configurations. | +| `auto_generate_prompt` | `bool` | Boolean indicating whether to automatically generate prompts. 
| +| `rag_every_loop` | `bool` | Boolean indicating whether to query RAG database for context on every loop | +| `plan_enabled` | `bool` | Boolean indicating whether planning functionality is enabled | +| `artifacts_on` | `bool` | Boolean indicating whether to save artifacts from agent execution | +| `artifacts_output_path` | `str` | File path where artifacts should be saved | +| `artifacts_file_extension` | `str` | File extension to use for saved artifacts | +| `all_cores` | `bool` | Boolean indicating whether to use all CPU cores | +| `device_id` | `int` | ID of the GPU device to use if running on GPU | +| `scheduled_run_date` | `Optional[datetime]` | Optional datetime for scheduling future agent runs | +| `do_not_use_cluster_ops` | `bool` | Boolean indicating whether to avoid cluster operations | +| `all_gpus` | `bool` | Boolean indicating whether to use all available GPUs | +| `model_name` | `str` | String representing the name of the model to use | +| `llm_args` | `dict` | Dictionary containing additional arguments for the LLM | +| `load_state_path` | `str` | String representing the path to load state from | +| `role` | `agent_roles` | String representing the role of the agent (e.g., "worker") | +| `print_on` | `bool` | Boolean indicating whether to print output | +| `tools_list_dictionary` | `Optional[List[Dict[str, Any]]]` | List of dictionaries representing tool schemas | +| `mcp_url` | `Optional[Union[str, MCPConnection]]` | String or MCPConnection representing the MCP server URL | +| `mcp_urls` | `List[str]` | List of strings representing multiple MCP server URLs | +| `react_on` | `bool` | Boolean indicating whether to enable ReAct reasoning | +| `safety_prompt_on` | `bool` | Boolean indicating whether to enable safety prompts | +| `random_models_on` | `bool` | Boolean indicating whether to randomly select models | +| `mcp_config` | `Optional[MCPConnection]` | MCPConnection object containing MCP configuration | +| `top_p` | `Optional[float]` | Float representing the top-p sampling parameter | +| `conversation_schema` | `Optional[ConversationSchema]` | ConversationSchema object for conversation formatting | +| `llm_base_url` | `Optional[str]` | String representing the base URL for the LLM API | +| `llm_api_key` | `Optional[str]` | String representing the API key for the LLM | +| `tool_call_summary` | `bool` | Boolean indicating whether to summarize tool calls | +| `output_raw_json_from_tool_call` | `bool` | Boolean indicating whether to output raw JSON from tool calls | +| `summarize_multiple_images` | `bool` | Boolean indicating whether to summarize multiple image outputs | +| `tool_retry_attempts` | `int` | Integer representing the number of retry attempts for tool execution | +| `reasoning_prompt_on` | `bool` | Boolean indicating whether to enable reasoning prompts | +| `dynamic_context_window` | `bool` | Boolean indicating whether to dynamically adjust context window | +| `show_tool_execution_output` | `bool` | Boolean indicating whether to show tool execution output | +| `created_at` | `float` | Float representing the timestamp when the agent was created | +| `workspace_dir` | `str` | String representing the workspace directory for the agent | +| `timeout` | `Optional[int]` | Integer representing the timeout for operations in seconds | +| `temperature` | `float` | Float representing the temperature for the LLM | +| `max_tokens` | `int` | Integer representing the maximum number of tokens | +| `frequency_penalty` | `float` | Float representing the frequency penalty | +| 
`presence_penalty` | `float` | Float representing the presence penalty | +| `tool_system_prompt` | `str` | String representing the system prompt for tools | +| `log_directory` | `str` | String representing the directory for logs | + ## `Agent` Methods | Method | Description | Inputs | Usage Example | |--------|-------------|--------|----------------| -| `run(task, img=None, is_last=False, device="cpu", device_id=0, all_cores=True, *args, **kwargs)` | Runs the autonomous agent loop to complete the given task. | `task` (str): The task to be performed.
`img` (str, optional): Path to an image file.
`is_last` (bool): Whether this is the last task.
`device` (str): Device to run on ("cpu" or "gpu").
`device_id` (int): ID of the GPU to use.
`all_cores` (bool): Whether to use all CPU cores.
`*args`, `**kwargs`: Additional arguments. | `response = agent.run("Generate a report on financial performance.")` | +| `run(task, img=None, imgs=None, correct_answer=None, streaming_callback=None, *args, **kwargs)` | Runs the autonomous agent loop to complete the given task with enhanced parameters. | `task` (str): The task to be performed.
`img` (str, optional): Path to a single image file.
`imgs` (List[str], optional): List of image paths for batch processing.
`correct_answer` (str, optional): Expected correct answer for validation with automatic retries.
`streaming_callback` (Callable, optional): Callback function for real-time token streaming.
`*args`, `**kwargs`: Additional arguments. | `response = agent.run("Generate a report on financial performance.")` | +| `run_batched(tasks, imgs=None, *args, **kwargs)` | Runs multiple tasks concurrently in batch mode. | `tasks` (List[str]): List of tasks to run.
`imgs` (List[str], optional): List of images to process.
`*args`, `**kwargs`: Additional arguments. | `responses = agent.run_batched(["Task 1", "Task 2"])` | +| `run_multiple_images(task, imgs, *args, **kwargs)` | Runs the agent with multiple images using concurrent processing. | `task` (str): The task to perform on each image.
`imgs` (List[str]): List of image paths or URLs.
`*args`, `**kwargs`: Additional arguments. | `outputs = agent.run_multiple_images("Describe image", ["img1.jpg", "img2.png"])` | +| `continuous_run_with_answer(task, img=None, correct_answer=None, max_attempts=10)` | Runs the agent until the correct answer is provided. | `task` (str): The task to perform.
`img` (str, optional): Image to process.
`correct_answer` (str): Expected answer.
`max_attempts` (int): Maximum attempts. | `response = agent.continuous_run_with_answer("Math problem", correct_answer="42")` | +| `tool_execution_retry(response, loop_count)` | Executes tools with retry logic for handling failures. | `response` (any): Response containing tool calls.
`loop_count` (int): Current loop number. | `agent.tool_execution_retry(response, 1)` | | `__call__(task, img=None, *args, **kwargs)` | Alternative way to call the `run` method. | Same as `run`. | `response = agent("Generate a report on financial performance.")` | | `parse_and_execute_tools(response, *args, **kwargs)` | Parses the agent's response and executes any tools mentioned in it. | `response` (str): The agent's response to be parsed.
`*args`, `**kwargs`: Additional arguments. | `agent.parse_and_execute_tools(response)` | | `add_memory(message)` | Adds a message to the agent's memory. | `message` (str): The message to add. | `agent.add_memory("Important information")` | @@ -28637,6 +32239,8 @@ graph TD | `run_concurrent(task, *args, **kwargs)` | Runs a task concurrently. | `task` (str): The task to run.
`*args`, `**kwargs`: Additional arguments. | `response = await agent.run_concurrent("Concurrent task")` | | `run_concurrent_tasks(tasks, *args, **kwargs)` | Runs multiple tasks concurrently. | `tasks` (List[str]): List of tasks to run.
`*args`, `**kwargs`: Additional arguments. | `responses = agent.run_concurrent_tasks(["Task 1", "Task 2"])` | | `bulk_run(inputs)` | Generates responses for multiple input sets. | `inputs` (List[Dict[str, Any]]): List of input dictionaries. | `responses = agent.bulk_run([{"task": "Task 1"}, {"task": "Task 2"}])` | +| `run_multiple_images(task, imgs, *args, **kwargs)` | Runs the agent with multiple images using concurrent processing. | `task` (str): The task to perform on each image.
`imgs` (List[str]): List of image paths or URLs.
`*args`, `**kwargs`: Additional arguments. | `outputs = agent.run_multiple_images("Describe image", ["img1.jpg", "img2.png"])` | +| `continuous_run_with_answer(task, img=None, correct_answer=None, max_attempts=10)` | Runs the agent until the correct answer is provided. | `task` (str): The task to perform.
`img` (str, optional): Image to process.
`correct_answer` (str): Expected answer.
`max_attempts` (int): Maximum attempts. | `response = agent.continuous_run_with_answer("Math problem", correct_answer="42")` | | `save()` | Saves the agent's history to a file. | None | `agent.save()` | | `load(file_path)` | Loads the agent's history from a file. | `file_path` (str): Path to the file. | `agent.load("agent_history.json")` | | `graceful_shutdown()` | Gracefully shuts down the system, saving the state. | None | `agent.graceful_shutdown()` | @@ -28660,8 +32264,6 @@ graph TD | `send_agent_message(agent_name, message, *args, **kwargs)` | Sends a message from the agent to a user. | `agent_name` (str): Name of the agent.
`message` (str): Message to send.
`*args`, `**kwargs`: Additional arguments. | `response = agent.send_agent_message("AgentX", "Task completed")` | | `add_tool(tool)` | Adds a tool to the agent's toolset. | `tool` (Callable): Tool to add. | `agent.add_tool(my_custom_tool)` | | `add_tools(tools)` | Adds multiple tools to the agent's toolset. | `tools` (List[Callable]): List of tools to add. | `agent.add_tools([tool1, tool2])` | -| `remove_tool(tool)` | Removes a tool from the agent's toolset. || Method | Description | Inputs | Usage Example | -|--------|-------------|--------|----------------| | `remove_tool(tool)` | Removes a tool from the agent's toolset. | `tool` (Callable): Tool to remove. | `agent.remove_tool(my_custom_tool)` | | `remove_tools(tools)` | Removes multiple tools from the agent's toolset. | `tools` (List[Callable]): List of tools to remove. | `agent.remove_tools([tool1, tool2])` | | `get_docs_from_doc_folders()` | Retrieves and processes documents from the specified folder. | None | `agent.get_docs_from_doc_folders()` | @@ -28690,75 +32292,135 @@ graph TD | `handle_sop_ops()` | Handles operations related to standard operating procedures. | None | `agent.handle_sop_ops()` | | `agent_output_type(responses)` | Processes and returns the agent's output based on the specified output type. | `responses` (list): List of responses. | `formatted_output = agent.agent_output_type(responses)` | | `check_if_no_prompt_then_autogenerate(task)` | Checks if a system prompt is not set and auto-generates one if needed. | `task` (str): The task to use for generating a prompt. | `agent.check_if_no_prompt_then_autogenerate("Analyze data")` | -| `check_if_no_prompt_then_autogenerate(task)` | Checks if auto_generate_prompt is enabled and generates a prompt by combining agent name, description and system prompt | `task` (str, optional): Task to use as fallback | `agent.check_if_no_prompt_then_autogenerate("Analyze data")` | | `handle_artifacts(response, output_path, extension)` | Handles saving artifacts from agent execution | `response` (str): Agent response
`output_path` (str): Output path
`extension` (str): File extension | `agent.handle_artifacts(response, "outputs/", ".txt")` | +| `showcase_config()` | Displays the agent's configuration in a formatted table. | None | `agent.showcase_config()` | +| `talk_to(agent, task, img=None, *args, **kwargs)` | Initiates a conversation with another agent. | `agent` (Any): Target agent.
`task` (str): Task to discuss.
`img` (str, optional): Image to share.
`*args`, `**kwargs`: Additional arguments. | `response = agent.talk_to(other_agent, "Let's collaborate")` | +| `talk_to_multiple_agents(agents, task, *args, **kwargs)` | Talks to multiple agents concurrently. | `agents` (List[Any]): List of target agents.
`task` (str): Task to discuss.
`*args`, `**kwargs`: Additional arguments. | `responses = agent.talk_to_multiple_agents([agent1, agent2], "Group discussion")` | +| `get_agent_role()` | Returns the role of the agent. | None | `role = agent.get_agent_role()` | +| `pretty_print(response, loop_count)` | Prints the response in a formatted panel. | `response` (str): Response to print.
`loop_count` (int): Current loop number. | `agent.pretty_print("Analysis complete", 1)` | +| `parse_llm_output(response)` | Parses and standardizes the output from the LLM. | `response` (Any): Response from the LLM. | `parsed_response = agent.parse_llm_output(llm_output)` | +| `sentiment_and_evaluator(response)` | Performs sentiment analysis and evaluation on the response. | `response` (str): Response to analyze. | `agent.sentiment_and_evaluator("Great response!")` | +| `output_cleaner_op(response)` | Applies output cleaning operations to the response. | `response` (str): Response to clean. | `cleaned_response = agent.output_cleaner_op(response)` | +| `mcp_tool_handling(response, current_loop)` | Handles MCP tool execution and responses. | `response` (Any): Response containing tool calls.
`current_loop` (int): Current loop number. | `agent.mcp_tool_handling(response, 1)` | +| `temp_llm_instance_for_tool_summary()` | Creates a temporary LLM instance for tool summaries. | None | `temp_llm = agent.temp_llm_instance_for_tool_summary()` | +| `execute_tools(response, loop_count)` | Executes tools based on the LLM response. | `response` (Any): Response containing tool calls.
`loop_count` (int): Current loop number. | `agent.execute_tools(response, 1)` | +| `list_output_types()` | Returns available output types. | None | `types = agent.list_output_types()` | +| `tool_execution_retry(response, loop_count)` | Executes tools with retry logic for handling failures. | `response` (Any): Response containing tool calls.
`loop_count` (int): Current loop number. | `agent.tool_execution_retry(response, 1)` | -## Updated Run Method +## `Agent.run(*args, **kwargs)` -Update the run method documentation to include new parameters: +The `run` method has been significantly enhanced with new parameters for advanced functionality: -| Method | Description | Inputs | Usage Example | -|--------|-------------|--------|----------------| -| `run(task, img=None, is_last=False, device="cpu", device_id=0, all_cores=True, scheduled_run_date=None)` | Runs the agent with specified parameters | `task` (str): Task to run
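
The agent-to-agent communication methods listed above (`talk_to` and `talk_to_multiple_agents`) can be combined into a simple hand-off. The sketch below is illustrative only: the model name, prompts, and agent names are placeholders, and it simply chains the usage patterns shown in the table.

```python
from swarms import Agent

# Two minimal agents for a quick hand-off (model name is a placeholder)
analyst = Agent(
    agent_name="Analyst",
    system_prompt="You analyze quarterly financial data.",
    model_name="gpt-4o-mini",
    max_loops=1,
)
reviewer = Agent(
    agent_name="Reviewer",
    system_prompt="You critique and improve financial analyses.",
    model_name="gpt-4o-mini",
    max_loops=1,
)

# One-to-one hand-off: the analyst asks the reviewer for feedback
review = analyst.talk_to(reviewer, "Review my Q3 revenue analysis.")

# One-to-many broadcast to several agents at once
responses = analyst.talk_to_multiple_agents(
    [reviewer], "Share feedback on the Q3 analysis."
)
```
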
`img` (str, optional): Image path
`is_last` (bool): If this is last task
`device` (str): Device to use
`device_id` (int): GPU ID
`all_cores` (bool): Use all CPU cores
`scheduled_run_date` (datetime, optional): Future run date | `agent.run("Analyze data", device="gpu", device_id=0)` | +### Method Signature +```python +def run( + self, + task: Optional[Union[str, Any]] = None, + img: Optional[str] = None, + imgs: Optional[List[str]] = None, + correct_answer: Optional[str] = None, + streaming_callback: Optional[Callable[[str], None]] = None, + *args, + **kwargs, +) -> Any: +``` +### Parameters +| Parameter | Type | Description | Default | +|-----------|------|-------------|---------| +| `task` | `Optional[Union[str, Any]]` | The task to be executed | `None` | +| `img` | `Optional[str]` | Path to a single image file | `None` | +| `imgs` | `Optional[List[str]]` | List of image paths for batch processing | `None` | +| `correct_answer` | `Optional[str]` | Expected correct answer for validation with automatic retries | `None` | +| `streaming_callback` | `Optional[Callable[[str], None]]` | Callback function to receive streaming tokens in real-time | `None` | +| `*args` | `Any` | Additional positional arguments | - | +| `**kwargs` | `Any` | Additional keyword arguments | - | -## Getting Started +### Examples -To use the Swarm Agent, first install the required dependencies: -```bash -pip3 install -U swarms -``` +```python +# --- Enhanced Run Method Examples --- -Then, you can initialize and use the agent as follows: +# Basic Usage +# Simple task execution +response = agent.run("Generate a report on financial performance.") -```python -from swarms.structs.agent import Agent -from swarms.prompts.finance_agent_sys_prompt import FINANCIAL_AGENT_SYS_PROMPT +# Single Image Processing +# Process a single image +response = agent.run( + task="Analyze this image and describe what you see", + img="path/to/image.jpg" +) -# Initialize the Financial Analysis Agent with GPT-4o-mini model -agent = Agent( - agent_name="Financial-Analysis-Agent", - system_prompt=FINANCIAL_AGENT_SYS_PROMPT, - model_name="gpt-4o-mini", - max_loops=1, - autosave=True, - dashboard=False, - verbose=True, - dynamic_temperature_enabled=True, - saved_state_path="finance_agent.json", - user_name="swarms_corp", - retry_attempts=1, - context_length=200000, - return_step_meta=False, - output_type="str", +# Multiple Image Processing +# Process multiple images concurrently +response = agent.run( + task="Analyze these images and identify common patterns", + imgs=["image1.jpg", "image2.png", "image3.jpeg"] ) -# Run the agent +# Answer Validation with Retries +# Run until correct answer is found response = agent.run( - "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?" 
+ task="What is the capital of France?", + correct_answer="Paris" ) -print(response) +# Real-time Streaming +def streaming_callback(token: str): + print(token, end="", flush=True) + +response = agent.run( + task="Tell me a long story about space exploration", + streaming_callback=streaming_callback +) + +# Combined Parameters +# Complex task with multiple features +response = agent.run( + task="Analyze these financial charts and provide insights", + imgs=["chart1.png", "chart2.png", "chart3.png"], + correct_answer="market volatility", + streaming_callback=my_callback +) ``` -## Advanced Usage +### Return Types + +The `run` method returns different types based on the input parameters: + +| Scenario | Return Type | Description | +|-----------------------|-----------------------------------------------|---------------------------------------------------------| +| Single task | `str` | Returns the agent's response | +| Multiple images | `List[Any]` | Returns a list of results, one for each image | +| Answer validation | `str` | Returns the correct answer as a string | +| Streaming | `str` | Returns the complete response after streaming completes | + + + +## Advanced Capabilities ### Tool Integration -To integrate tools with the Swarm `Agent`, you can pass a list of callable functions with types and doc strings to the `tools` parameter when initializing the `Agent` instance. The agent will automatically convert these functions into an OpenAI function calling schema and make them available for use during task execution. +The `Agent` class allows seamless integration of external tools by accepting a list of Python functions via the `tools` parameter during initialization. Each tool function must include type annotations and a docstring. The `Agent` class automatically converts these functions into an OpenAI-compatible function calling schema, making them accessible for use during task execution. + +Learn more about tools [here](https://docs.swarms.world/en/latest/swarms/tools/tools_examples/) ## Requirements for a tool -- Function - - With types - - with doc strings + +| Requirement | Description | +|---------------------|------------------------------------------------------------------| +| Function | The tool must be a Python function. | +| With types | The function must have type annotations for its parameters. | +| With doc strings | The function must include a docstring describing its behavior. | +| Must return a string| The function must return a string value. | ```python from swarms import Agent -from swarm_models import OpenAIChat import subprocess def terminal(code: str): @@ -28777,7 +32439,7 @@ def terminal(code: str): # Initialize the agent with a tool agent = Agent( agent_name="Terminal-Agent", - llm=OpenAIChat(api_key=os.getenv("OPENAI_API_KEY")), + model_name="claude-sonnet-4-20250514", tools=[terminal], system_prompt="You are an agent that can execute terminal commands. 
Use the tools provided to assist the user.", ) @@ -28793,7 +32455,6 @@ The Swarm Agent supports integration with vector databases for long-term memory ```python from swarms import Agent -from swarm_models import Anthropic from swarms_memory import ChromaDB # Initialize ChromaDB @@ -28805,7 +32466,7 @@ chromadb = ChromaDB( # Initialize the agent with long-term memory agent = Agent( agent_name="Financial-Analysis-Agent", - llm=Anthropic(anthropic_api_key=os.getenv("ANTHROPIC_API_KEY")), + model_name="claude-sonnet-4-20250514", long_term_memory=chromadb, system_prompt="You are a financial analysis agent with access to long-term memory.", ) @@ -28822,7 +32483,7 @@ To enable interactive mode, set the `interactive` parameter to `True` when initi ```python agent = Agent( agent_name="Interactive-Agent", - llm=OpenAIChat(api_key=os.getenv("OPENAI_API_KEY")), + model_name="claude-sonnet-4-20250514", interactive=True, system_prompt="You are an interactive agent. Engage in a conversation with the user.", ) @@ -28831,31 +32492,6 @@ agent = Agent( agent.run("Let's start a conversation") ``` -### Sentiment Analysis - -To perform sentiment analysis on the agent's outputs, you can provide a sentiment analyzer function: - -```python -from textblob import TextBlob - -def sentiment_analyzer(text): - analysis = TextBlob(text) - return analysis.sentiment.polarity - -agent = Agent( - agent_name="Sentiment-Analysis-Agent", - llm=OpenAIChat(api_key=os.getenv("OPENAI_API_KEY")), - sentiment_analyzer=sentiment_analyzer, - sentiment_threshold=0.5, - system_prompt="You are an agent that generates responses with sentiment analysis.", -) - -response = agent.run("Generate a positive statement about AI") -print(response) -``` - - - ### Undo Functionality ```python @@ -28902,9 +32538,79 @@ tasks = [ ] responses = agent.bulk_run(tasks) print(responses) + +# Run multiple tasks in batch mode (new method) +task_list = ["Analyze data", "Generate report", "Create summary"] +batch_responses = agent.run_batched(task_list) +print(f"Completed {len(batch_responses)} tasks in batch mode") ``` +### Batch Processing with `run_batched` + +The new `run_batched` method allows you to process multiple tasks efficiently: + +#### Method Signature + +```python +def run_batched( + self, + tasks: List[str], + imgs: List[str] = None, + *args, + **kwargs, +) -> List[Any]: +``` + +#### Parameters + +| Parameter | Type | Description | Default | +|-----------|------|-------------|---------| +| `tasks` | `List[str]` | List of tasks to run concurrently | Required | +| `imgs` | `List[str]` | List of images to process with each task | `None` | +| `*args` | `Any` | Additional positional arguments | - | +| `**kwargs` | `Any` | Additional keyword arguments | - | + +#### Usage Examples + +```python +# Process multiple tasks in batch +tasks = [ + "Analyze the financial data for Q1", + "Generate a summary report for stakeholders", + "Create recommendations for Q2 planning" +] + +# Run all tasks concurrently +batch_results = agent.run_batched(tasks) + +# Process results +for i, (task, result) in enumerate(zip(tasks, batch_results)): + print(f"Task {i+1}: {task}") + print(f"Result: {result}\n") +``` + +#### Batch Processing with Images + +```python +# Process multiple tasks with multiple images +tasks = [ + "Analyze this chart for trends", + "Identify patterns in this data visualization", + "Summarize the key insights from this graph" +] + +images = ["chart1.png", "chart2.png", "chart3.png"] + +# Each task will process all images +batch_results = 
agent.run_batched(tasks, imgs=images) +``` + +#### Return Type + +- **Returns**: `List[Any]` - List of results from each task execution +- **Order**: Results are returned in the same order as the input tasks + ### Various other settings ```python @@ -28960,35 +32666,27 @@ agent.model_dump_json() print(agent.to_toml()) ``` -## Auto Generate Prompt + CPU Execution + +## Examples + +### Auto Generate Prompt + CPU Execution ```python import os from swarms import Agent -from swarm_models import OpenAIChat from dotenv import load_dotenv # Load environment variables load_dotenv() -# Retrieve the OpenAI API key from the environment variable -api_key = os.getenv("GROQ_API_KEY") - -# Initialize the model for OpenAI Chat -model = OpenAIChat( - openai_api_base="https://api.groq.com/openai/v1", - openai_api_key=api_key, - model_name="llama-3.1-70b-versatile", - temperature=0.1, -) - # Initialize the agent with automated prompt engineering enabled agent = Agent( agent_name="Financial-Analysis-Agent", system_prompt=None, # System prompt is dynamically generated + model_name="gpt-4.1", agent_description=None, llm=model, max_loops=1, @@ -29007,22 +32705,23 @@ agent = Agent( # Run the agent with a task description and specify the device agent.run( "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria", - ## Will design a system prompt based on the task if description and system prompt are None - device="cpu", ) # Print the dynamically generated system prompt print(agent.system_prompt) - ``` ## Agent Structured Outputs - Create a structured output schema for the agent [List[Dict]] + - Input in the `tools_list_dictionary` parameter + - Output is a dictionary + - Use the `str_to_dict` function to convert the output to a dictionary + ```python from dotenv import load_dotenv @@ -29093,28 +32792,283 @@ print(type(str_to_dict(out))) ``` +## Comprehensive Agent Configuration Examples + +### Advanced Agent with All New Features + +```python +from swarms import Agent +from swarms_memory import ChromaDB +from datetime import datetime, timedelta + +# Initialize advanced agent with comprehensive configuration +agent = Agent( + # Basic Configuration + agent_name="Advanced-Analysis-Agent", + agent_description="Multi-modal analysis agent with advanced capabilities", + system_prompt="You are an advanced analysis agent capable of processing multiple data types.", + + # Enhanced Run Parameters + max_loops=3, + dynamic_loops=True, + interactive=False, + dashboard=True, + + # Device and Resource Management + device="gpu", + device_id=0, + all_cores=True, + all_gpus=False, + do_not_use_cluster_ops=True, + + # Memory and Context Management + context_length=100000, + memory_chunk_size=3000, + dynamic_context_window=True, + rag_every_loop=True, + + # Advanced Features + auto_generate_prompt=True, + plan_enabled=True, + react_on=True, + safety_prompt_on=True, + reasoning_prompt_on=True, + + # Tool Management + tool_retry_attempts=5, + tool_call_summary=True, + show_tool_execution_output=True, + function_calling_format_type="OpenAI", + + # Artifacts and Output + artifacts_on=True, + artifacts_output_path="./outputs", + artifacts_file_extension=".md", + output_type="json", + + # LLM Configuration + model_name="gpt-4o", + temperature=0.3, + max_tokens=8000, + top_p=0.95, + frequency_penalty=0.1, + presence_penalty=0.1, + + # MCP Integration + mcp_url="http://localhost:8000", + mcp_config=None, + + # Performance Settings + timeout=300, + retry_attempts=3, + retry_interval=2, + + # Scheduling + 
scheduled_run_date=datetime.now() + timedelta(hours=1), + + # Metadata and Organization + tags=["analysis", "multi-modal", "advanced"], + use_cases=[{"name": "Data Analysis", "description": "Process and analyze complex datasets"}], + + # Verbosity and Logging + verbose=True, + print_on=True, + print_every_step=True, + log_directory="./logs" +) + +# Run with multiple images and streaming +def streaming_callback(token: str): + print(token, end="", flush=True) + +response = agent.run( + task="Analyze these financial charts and provide comprehensive insights", + imgs=["chart1.png", "chart2.png", "chart3.png"], + streaming_callback=streaming_callback +) + +# Run batch processing +tasks = [ + "Analyze Q1 financial performance", + "Generate Q2 projections", + "Create executive summary" +] + +batch_results = agent.run_batched(tasks) + +# Run with answer validation +validated_response = agent.run( + task="What is the current market trend?", + correct_answer="bullish", + max_attempts=5 +) +``` + +### MCP-Enabled Agent Example + +```python +from swarms import Agent +from swarms.schemas.mcp_schemas import MCPConnection + +# Configure MCP connection +mcp_config = MCPConnection( + server_path="http://localhost:8000", + server_name="my_mcp_server", + capabilities=["tools", "filesystem"] +) + +# Initialize agent with MCP integration +mcp_agent = Agent( + agent_name="MCP-Enabled-Agent", + system_prompt="You are an agent with access to external tools via MCP.", + mcp_config=mcp_config, + mcp_urls=["http://localhost:8000", "http://localhost:8001"], + tool_call_summary=True, + output_raw_json_from_tool_call=True +) + +# Run with MCP tools +response = mcp_agent.run("Use the available tools to analyze the current system status") +``` + +### Multi-Image Processing Agent + +```python +# Initialize agent optimized for image processing +image_agent = Agent( + agent_name="Image-Analysis-Agent", + system_prompt="You are an expert at analyzing images and extracting insights.", + multi_modal=True, + summarize_multiple_images=True, + artifacts_on=True, + artifacts_output_path="./image_analysis", + artifacts_file_extension=".txt" +) + +# Process multiple images with summarization +images = ["product1.jpg", "product2.jpg", "product3.jpg"] +analysis = image_agent.run( + task="Analyze these product images and identify design patterns", + imgs=images +) + +# The agent will automatically summarize results if summarize_multiple_images=True +print(f"Analysis complete: {len(analysis)} images processed") +``` + +## New Features and Parameters + +### Enhanced Run Method Parameters + +The `run` method now supports several new parameters for advanced functionality: + +- **`imgs`**: Process multiple images simultaneously instead of just one +- **`correct_answer`**: Validate responses against expected answers with automatic retries +- **`streaming_callback`**: Real-time token streaming for interactive applications + +### MCP (Model Context Protocol) Integration + +| Parameter | Description | +|----------------|-----------------------------------------------------| +| `mcp_url` | Connect to a single MCP server | +| `mcp_urls` | Connect to multiple MCP servers | +| `mcp_config` | Advanced MCP configuration options | + +### Advanced Reasoning and Safety + +| Parameter | Description | +|----------------------|--------------------------------------------------------------------| +| `react_on` | Enable ReAct reasoning for complex problem-solving | +| `safety_prompt_on` | Add safety constraints to agent responses | +| `reasoning_prompt_on`| 
Enable multi-loop reasoning for complex tasks | + +### Performance and Resource Management + +| Parameter | Description | +|--------------------------|--------------------------------------------------------------------------| +| `dynamic_context_window` | Automatically adjust context window based on available tokens | +| `tool_retry_attempts` | Configure retry behavior for tool execution | +| `summarize_multiple_images` | Automatically summarize results from multiple image processing | + +### Device and Resource Management + +| Parameter | Description | +|--------------------------|--------------------------------------------------------------------------| +| `device` | Specify CPU or GPU execution (`"cpu"` or `"gpu"`) | +| `device_id` | Specify which GPU device to use | +| `all_cores` | Enable use of all available CPU cores | +| `all_gpus` | Enable use of all available GPUs | +| `do_not_use_cluster_ops` | Control cluster operation usage | + +### Advanced Memory and Context + +| Parameter | Description | +|--------------------------|--------------------------------------------------------------------------| +| `rag_every_loop` | Query RAG database on every loop iteration | +| `memory_chunk_size` | Control memory chunk size for long-term memory | +| `auto_generate_prompt` | Automatically generate system prompts based on tasks | +| `plan_enabled` | Enable planning functionality for complex tasks | + +### Artifacts and Output Management + +| Parameter | Description | +|--------------------------|--------------------------------------------------------------------------| +| `artifacts_on` | Enable saving artifacts from agent execution | +| `artifacts_output_path` | Specify where to save artifacts | +| `artifacts_file_extension` | Control artifact file format | +| `output_raw_json_from_tool_call` | Control tool call output format | + +### Enhanced Tool Management + +| Parameter | Description | +|--------------------------|--------------------------------------------------------------------------| +| `tools_list_dictionary` | Provide tool schemas in dictionary format | +| `tool_call_summary` | Enable automatic summarization of tool calls | +| `show_tool_execution_output` | Control visibility of tool execution details | +| `function_calling_format_type` | Specify function calling format (OpenAI, etc.) | + +### Advanced LLM Configuration + +| Parameter | Description | +|--------------------------|--------------------------------------------------------------------------| +| `llm_args` | Pass additional arguments to the LLM | +| `llm_base_url` | Specify custom LLM API endpoint | +| `llm_api_key` | Provide LLM API key directly | +| `top_p` | Control top-p sampling parameter | +| `frequency_penalty` | Control frequency penalty | +| `presence_penalty` | Control presence penalty | + + + + ## Best Practices -1. Always provide a clear and concise `system_prompt` to guide the agent's behavior. -2. Use `tools` to extend the agent's capabilities for specific tasks. -3. Implement error handling and utilize the `retry_attempts` feature for robust execution. -4. Leverage `long_term_memory` for tasks that require persistent information. -5. Use `interactive` mode for real-time conversations and `dashboard` for monitoring. -6. Implement `sentiment_analysis` for applications requiring tone management. -7. Utilize `autosave` and `save`/`load` methods for continuity across sessions. -8. Optimize token usage with `dynamic_context_window` and `tokens_checks` methods. -9. 
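
As a rough illustration, these switches are plain constructor flags on `Agent` (a minimal sketch; the model name and prompts are placeholders):

```python
from swarms import Agent

# Reasoning- and safety-focused agent (model name is a placeholder)
reasoning_agent = Agent(
    agent_name="Reasoning-Agent",
    system_prompt="You solve problems step by step.",
    model_name="gpt-4o-mini",
    max_loops=2,
    react_on=True,             # ReAct reasoning for complex problem-solving
    safety_prompt_on=True,     # add safety constraints to responses
    reasoning_prompt_on=True,  # reasoning prompts carried across loops
)

response = reasoning_agent.run(
    "Outline a phased rollout plan for a new trading strategy."
)
```
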
Use `concurrent` and `async` methods for performance-critical applications. -10. Regularly review and analyze feedback using the `analyze_feedback` method. -11. Use `artifacts_on` to save important outputs from agent execution -12. Configure `device` and `device_id` appropriately for optimal performance -13. Enable `rag_every_loop` when continuous context from long-term memory is needed -14. Use `scheduled_run_date` for automated task scheduling +| Best Practice / Feature | Description | +|---------------------------------------------------------|--------------------------------------------------------------------------------------------------| +| `system_prompt` | Always provide a clear and concise system prompt to guide the agent's behavior. | +| `tools` | Use tools to extend the agent's capabilities for specific tasks. | +| `retry_attempts` & error handling | Implement error handling and utilize the retry_attempts feature for robust execution. | +| `long_term_memory` | Leverage long_term_memory for tasks that require persistent information. | +| `interactive` & `dashboard` | Use interactive mode for real-time conversations and dashboard for monitoring. | +| `sentiment_analysis` | Implement sentiment_analysis for applications requiring tone management. | +| `autosave`, `save`/`load` | Utilize autosave and save/load methods for continuity across sessions. | +| `dynamic_context_window` & `tokens_checks` | Optimize token usage with dynamic_context_window and tokens_checks methods. | +| `concurrent` & `async` methods | Use concurrent and async methods for performance-critical applications. | +| `analyze_feedback` | Regularly review and analyze feedback using the analyze_feedback method. | +| `artifacts_on` | Use artifacts_on to save important outputs from agent execution. | +| `device` & `device_id` | Configure device and device_id appropriately for optimal performance. | +| `rag_every_loop` | Enable rag_every_loop when continuous context from long-term memory is needed. | +| `scheduled_run_date` | Use scheduled_run_date for automated task scheduling. | +| `run_batched` | Leverage run_batched for efficient processing of multiple related tasks. | +| `mcp_url` or `mcp_urls` | Use mcp_url or mcp_urls to extend agent capabilities with external tools. | +| `react_on` | Enable react_on for complex reasoning tasks requiring step-by-step analysis. | +| `tool_retry_attempts` | Configure tool_retry_attempts for robust tool execution in production environments. | By following these guidelines and leveraging the Swarm Agent's extensive features, you can create powerful, flexible, and efficient autonomous agents for a wide range of applications. 
-------------------------------------------------- -# File: swarms\structs\agent_docs_v1.md +# File: swarms/structs/agent_docs_v1.md # `Agent` Documentation @@ -29654,7 +33608,7 @@ print(agent.to_toml()) -------------------------------------------------- -# File: swarms\structs\agent_mcp.md +# File: swarms/structs/agent_mcp.md # Agent MCP Integration Guide @@ -30452,7 +34406,7 @@ The MCP integration brings powerful external tool connectivity to Swarms agents, -------------------------------------------------- -# File: swarms\structs\agent_multi_agent_communication.md +# File: swarms/structs/agent_multi_agent_communication.md # Agent Multi-Agent Communication Methods @@ -30662,11 +34616,11 @@ This comprehensive guide covers all aspects of multi-agent communication using t -------------------------------------------------- -# File: swarms\structs\agent_rearrange.md +# File: swarms/structs/agent_rearrange.md # `AgentRearrange` Class -The `AgentRearrange` class represents a swarm of agents for rearranging tasks. It allows you to create a swarm of agents, add or remove agents from the swarm, and run the swarm to process tasks based on a specified flow pattern. +The `AgentRearrange` class represents a swarm of agents for rearranging tasks. It allows you to create a swarm of agents, add or remove agents from the swarm, and run the swarm to process tasks based on a specified flow pattern. The class now includes **sequential awareness** features that allow agents to know about the agents ahead and behind them in sequential flows. ## Attributes ---------- @@ -30683,26 +34637,38 @@ The `AgentRearrange` class represents a swarm of agents for rearranging tasks. I | `memory_system` | `BaseVectorDatabase` | Memory system for storing agent interactions | | `human_in_the_loop` | `bool` | Whether human intervention is enabled | | `custom_human_in_the_loop` | `Callable` | Custom function for human intervention | -| `return_json` | `bool` | Whether to return output in JSON format | | `output_type` | `OutputType` | Format of output ("all", "final", "list", or "dict") | -| `docs` | `List[str]` | List of document paths to add to agent prompts | -| `doc_folder` | `str` | Folder path containing documents to add to agent prompts | -| `swarm_history` | `dict` | History of agent interactions | - +| `autosave` | `bool` | Whether to automatically save agent data | +| `rules` | `str` | Custom rules to add to the conversation | +| `team_awareness` | `bool` | Whether to enable team awareness and sequential flow information | +| `time_enabled` | `bool` | Whether to enable timestamps in conversation | +| `message_id_on` | `bool` | Whether to enable message IDs in conversation | ## Methods ------- -### `__init__(self, agents: List[Agent] = None, flow: str = None, max_loops: int = 1, verbose: bool = True)` +### `__init__(self, id: str = swarm_id(), name: str = "AgentRearrange", description: str = "A swarm of agents for rearranging tasks.", agents: List[Union[Agent, Callable]] = None, flow: str = None, max_loops: int = 1, verbose: bool = True, memory_system: Any = None, human_in_the_loop: bool = False, custom_human_in_the_loop: Optional[Callable[[str], str]] = None, output_type: OutputType = "all", autosave: bool = True, rules: str = None, team_awareness: bool = False, time_enabled: bool = False, message_id_on: bool = False, *args, **kwargs)` -Initializes the `AgentRearrange` object. +Initializes the `AgentRearrange` object with enhanced sequential awareness capabilities. 
| Parameter | Type | Description | | --- | --- | --- | -| `agents` | `List[Agent]` (optional) | A list of `Agent` objects. Defaults to `None`. | +| `id` | `str` (optional) | Unique identifier for the swarm. Defaults to auto-generated UUID. | +| `name` | `str` (optional) | Name of the swarm. Defaults to "AgentRearrange". | +| `description` | `str` (optional) | Description of the swarm's purpose. Defaults to "A swarm of agents for rearranging tasks.". | +| `agents` | `List[Union[Agent, Callable]]` (optional) | A list of `Agent` objects or callables. Defaults to `None`. | | `flow` | `str` (optional) | The flow pattern of the tasks. Defaults to `None`. | | `max_loops` | `int` (optional) | The maximum number of loops for the agents to run. Defaults to `1`. | | `verbose` | `bool` (optional) | Whether to enable verbose logging or not. Defaults to `True`. | +| `memory_system` | `Any` (optional) | Memory system for storing agent interactions. Defaults to `None`. | +| `human_in_the_loop` | `bool` (optional) | Whether human intervention is enabled. Defaults to `False`. | +| `custom_human_in_the_loop` | `Callable[[str], str]` (optional) | Custom function for human intervention. Defaults to `None`. | +| `output_type` | `OutputType` (optional) | Format of output. Defaults to `"all"`. | +| `autosave` | `bool` (optional) | Whether to automatically save agent data. Defaults to `True`. | +| `rules` | `str` (optional) | Custom rules to add to the conversation. Defaults to `None`. | +| `team_awareness` | `bool` (optional) | Whether to enable team awareness and sequential flow information. Defaults to `False`. | +| `time_enabled` | `bool` (optional) | Whether to enable timestamps in conversation. Defaults to `False`. | +| `message_id_on` | `bool` (optional) | Whether to enable message IDs in conversation. Defaults to `False`. | ### `add_agent(self, agent: Agent)` @@ -30740,54 +34706,130 @@ Validates the flow pattern. - `bool`: `True` if the flow pattern is valid. -### `run(self, task: str = None, img: str = None, device: str = "cpu", device_id: int = 1, all_cores: bool = True, all_gpus: bool = False, *args, **kwargs)` +### **Sequential Awareness Methods** + +#### `get_agent_sequential_awareness(self, agent_name: str) -> str` + +Gets the sequential awareness information for a specific agent, showing which agents come before and after in the sequence. + +| Parameter | Type | Description | +| --- | --- | --- | +| `agent_name` | `str` | The name of the agent to get awareness for. | + +**Returns:** + +- `str`: A string describing the agents ahead and behind in the sequence. + +**Example:** +```python +awareness = agent_system.get_agent_sequential_awareness("Agent2") +# Returns: "Sequential awareness: Agent ahead: Agent1 | Agent behind: Agent3" +``` + +#### `get_sequential_flow_structure(self) -> str` + +Gets the overall sequential flow structure information showing the complete workflow with relationships between agents. + +**Returns:** + +- `str`: A string describing the complete sequential flow structure. + +**Example:** +```python +flow_structure = agent_system.get_sequential_flow_structure() +# Returns: "Sequential Flow Structure: +# Step 1: Agent1 +# Step 2: Agent2 (follows: Agent1) (leads to: Agent3) +# Step 3: Agent3 (follows: Agent2)" +``` + +### `run(self, task: str = None, img: str = None, *args, **kwargs)` Executes the agent rearrangement task with specified compute resources. 
| Parameter | Type | Description | | --- | --- | --- | -| `task` | `str` | The task to execute | -| `img` | `str` | Path to input image if required | -| `device` | `str` | Computing device to use ('cpu' or 'gpu') | -| `device_id` | `int` | ID of specific device to use | -| `all_cores` | `bool` | Whether to use all CPU cores | -| `all_gpus` | `bool` | Whether to use all available GPUs | +| `task` | `str` (optional) | The task to execute. Defaults to `None`. | +| `img` | `str` (optional) | Path to input image if required. Defaults to `None`. | +| `*args` | - | Additional positional arguments passed to `_run()`. | +| `**kwargs` | - | Additional keyword arguments passed to `_run()`. | **Returns:** -- `str`: The final processed task. +- The result from executing the task through the cluster operations wrapper. -### `batch_run(self, tasks: List[str], img: Optional[List[str]] = None, batch_size: int = 10, device: str = "cpu", device_id: int = None, all_cores: bool = True, all_gpus: bool = False, *args, **kwargs)` +### `batch_run(self, tasks: List[str], img: Optional[List[str]] = None, batch_size: int = 10, *args, **kwargs)` Process multiple tasks in batches. | Parameter | Type | Description | | --- | --- | --- | | `tasks` | `List[str]` | List of tasks to process | -| `img` | `List[str]` | Optional list of images corresponding to tasks | +| `img` | `List[str]` (optional) | Optional list of images corresponding to tasks | | `batch_size` | `int` | Number of tasks to process simultaneously | -| `device` | `str` | Computing device to use | -| `device_id` | `int` | Specific device ID if applicable | -| `all_cores` | `bool` | Whether to use all CPU cores | -| `all_gpus` | `bool` | Whether to use all available GPUs | +| `*args` | - | Additional positional arguments | +| `**kwargs` | - | Additional keyword arguments | +**Returns:** +- `List[str]`: List of results corresponding to input tasks -### `concurrent_run(self, tasks: List[str], img: Optional[List[str]] = None, max_workers: Optional[int] = None, device: str = "cpu", device_id: int = None, all_cores: bool = True, all_gpus: bool = False, *args, **kwargs)` +### `concurrent_run(self, tasks: List[str], img: Optional[List[str]] = None, max_workers: Optional[int] = None, *args, **kwargs)` Process multiple tasks concurrently using ThreadPoolExecutor. | Parameter | Type | Description | | --- | --- | --- | | `tasks` | `List[str]` | List of tasks to process | -| `img` | `List[str]` | Optional list of images corresponding to tasks | -| `max_workers` | `int` | Maximum number of worker threads | -| `device` | `str` | Computing device to use | -| `device_id` | `int` | Specific device ID if applicable | -| `all_cores` | `bool` | Whether to use all CPU cores | -| `all_gpus` | `bool` | Whether to use all available GPUs | +| `img` | `List[str]` (optional) | Optional list of images corresponding to tasks | +| `max_workers` | `int` (optional) | Maximum number of worker threads | +| `*args` | - | Additional positional arguments | +| `**kwargs` | - | Additional keyword arguments | + +**Returns:** + +- `List[str]`: List of results corresponding to input tasks +## **Sequential Awareness Feature** + +The `AgentRearrange` class now includes a powerful **sequential awareness** feature that enhances agent collaboration in sequential workflows. 
When agents are executed sequentially, they automatically receive information about: + +- **Agent ahead**: The agent that completed their task before them +- **Agent behind**: The agent that will receive their output next + +This feature is automatically enabled when using sequential flows and provides agents with context about their position in the workflow, improving coordination and task understanding. + +### How It Works + +1. **Automatic Detection**: The system automatically detects when agents are running sequentially vs. in parallel +2. **Context Injection**: Before each sequential agent runs, awareness information is added to the conversation +3. **Enhanced Collaboration**: Agents can reference previous agents' work and prepare output for the next agent + +### Example with Sequential Awareness + +```python +from swarms import Agent, AgentRearrange + +# Create agents +agent1 = Agent(agent_name="Researcher", system_prompt="Research the topic") +agent2 = Agent(agent_name="Writer", system_prompt="Write based on research") +agent3 = Agent(agent_name="Editor", system_prompt="Edit the written content") + +# Create sequential workflow +workflow = AgentRearrange( + agents=[agent1, agent2, agent3], + flow="Researcher -> Writer -> Editor", + team_awareness=True # Enables sequential awareness +) + +# Run the workflow +result = workflow.run("Research and write about artificial intelligence") +``` +**What happens automatically:** +- **Researcher** runs first (no awareness info needed) +- **Writer** receives: "Sequential awareness: Agent ahead: Researcher | Agent behind: Editor" +- **Editor** receives: "Sequential awareness: Agent ahead: Writer" ## Documentation for `rearrange` Function ====================================== @@ -30799,9 +34841,12 @@ The `rearrange` function is a helper function that rearranges the given list of | Parameter | Type | Description | | --- | --- | --- | +| `name` | `str` (optional) | Name for the agent system. Defaults to `None`. | +| `description` | `str` (optional) | Description for the agent system. Defaults to `None`. | | `agents` | `List[Agent]` | The list of agents to be rearranged. | | `flow` | `str` | The flow used for rearranging the agents. | | `task` | `str` (optional) | The task to be performed during rearrangement. Defaults to `None`. | +| `img` | `str` (optional) | Path to input image if required. Defaults to `None`. | | `*args` | - | Additional positional arguments. | | `**kwargs` | - | Additional keyword arguments. 
| @@ -30823,7 +34868,7 @@ rearrange(agents, flow, task) ### Example Usage ------------- -Here's an example of how to use the `AgentRearrange` class and the `rearrange` function: +Here's an example of how to use the `AgentRearrange` class and the `rearrange` function with the new sequential awareness features: ```python from swarms import Agent, AgentRearrange @@ -30877,20 +34922,35 @@ agents = [director, worker1, worker2] # Define the flow pattern flow = "Accounting Director -> Accountant 1 -> Accountant 2" -# Using AgentRearrange class -agent_system = AgentRearrange(agents=agents, flow=flow) +# Using AgentRearrange class with sequential awareness +agent_system = AgentRearrange( + agents=agents, + flow=flow, + team_awareness=True, # Enables sequential awareness + time_enabled=True, # Enable timestamps + message_id_on=True # Enable message IDs +) + +# Get sequential flow information +flow_structure = agent_system.get_sequential_flow_structure() +print("Flow Structure:", flow_structure) + +# Get awareness for specific agents +worker1_awareness = agent_system.get_agent_sequential_awareness("Accountant 1") +print("Worker1 Awareness:", worker1_awareness) + +# Run the workflow output = agent_system.run("Process monthly financial statements") print(output) - ``` In this example, we first initialize three agents: `director`, `worker1`, and `worker2`. Then, we create a list of these agents and define the flow pattern `"Director -> Worker1 -> Worker2"`. -We can use the `AgentRearrange` class by creating an instance of it with the list of agents and the flow pattern. We then call the `run` method with the initial task, and it will execute the agents in the specified order, passing the output of one agent as the input to the next agent. - -Alternatively, we can use the `rearrange` function by passing the list of agents, the flow pattern, and the initial task as arguments. - -Both the `AgentRearrange` class and the `rearrange` function will return the final output after processing the task through the agents according to the specified flow pattern. +The new sequential awareness features provide: +- **Automatic context**: Each agent knows who came before and who comes after +- **Better coordination**: Agents can reference previous work and prepare for next steps +- **Flow visualization**: You can see the complete workflow structure +- **Enhanced logging**: Better tracking of agent interactions ## Error Handling -------------- @@ -30908,70 +34968,77 @@ output = agent_system.run("Some task")` This will raise a `ValueError` with the message `"Agent 'Worker3' is not registered."`. - ## Parallel and Sequential Processing ---------------------------------- -The `AgentRearrange` class supports both parallel and sequential processing of tasks based on the specified flow pattern. If the flow pattern includes multiple agents separated by commas (e.g., `"agent1, agent2"`), the agents will be executed in parallel, and their outputs will be concatenated with a semicolon (`;`). If the flow pattern includes a single agent, it will be executed sequentially. - +The `AgentRearrange` class supports both parallel and sequential processing of tasks based on the specified flow pattern. If the flow pattern includes multiple agents separated by commas (e.g., `"agent1, agent2"`), the agents will be executed in parallel, and their outputs will be concatenated. If the flow pattern includes a single agent, it will be executed sequentially with enhanced awareness. 
### Parallel processing `parallel_flow = "Worker1, Worker2 -> Director"` -### Sequential processing +### Sequential processing with awareness `sequential_flow = "Worker1 -> Worker2 -> Director"` -In the `parallel_flow` example, `Worker1` and `Worker2` will be executed in parallel, and their outputs will be concatenated and passed to `Director`. In the `sequential_flow` example, `Worker1` will be executed first, and its output will be passed to `Worker2`, and then the output of `Worker2` will be passed to `Director`. +In the `parallel_flow` example, `Worker1` and `Worker2` will be executed in parallel, and their outputs will be concatenated and passed to `Director`. -## Logging -------- +In the `sequential_flow` example, `Worker1` will be executed first, then `Worker2` will receive awareness that `Worker1` came before and `Director` comes after, and finally `Director` will receive awareness that `Worker2` came before. -The `AgentRearrange` class includes logging capabilities using the `loguru` library. If `verbose` is set to `True` during initialization, a log file named `agent_rearrange.log` will be created, and log messages will be written to it. You can use this log file to track the execution of the agents and any potential issues or errors that may occur. +## Logging and Monitoring +------- +The `AgentRearrange` class includes comprehensive logging capabilities using the `loguru` library. The new sequential awareness features add enhanced logging: ```bash 2023-05-08 10:30:15.456 | INFO | agent_rearrange:__init__:34 - Adding agent Director to the swarm. 2023-05-08 10:30:15.457 | INFO | agent_rearrange:__init__:34 - Adding agent Worker1 to the swarm. 2023-05-08 10:30:15.457 | INFO | agent_rearrange:__init__:34 - Adding agent Worker2 to the swarm. 2023-05-08 10:30:15.458 | INFO | agent_rearrange:run:118 - Running agents in parallel: ['Worker1', 'Worker2'] -2023-05-08 10:30:15.459 | INFO | agent_rearrange:run:121 - Running agents sequentially: ['Director']` +2023-05-08 10:30:15.459 | INFO | agent_rearrange:run:121 - Running agents sequentially: ['Director'] +2023-05-08 10:30:15.460 | INFO | agent_rearrange:run:125 - Added sequential awareness for Worker2: Sequential awareness: Agent ahead: Worker1 | Agent behind: Director ``` ## Additional Parameters --------------------- -The `AgentRearrange` class also accepts additional parameters that can be passed to the `run` method using `*args` and `**kwargs`. These parameters will be forwarded to the individual agents during execution. - -`agent_system = AgentRearrange(agents=agents, flow=flow)` -`output = agent_system.run("Some task", max_tokens=200, temperature=0.7)` +The `AgentRearrange` class now accepts additional parameters for enhanced functionality: -In this example, the `max_tokens` and `temperature` parameters will be passed to each agent during execution. +```python +agent_system = AgentRearrange( + agents=agents, + flow=flow, + team_awareness=True, # Enable sequential awareness + time_enabled=True, # Enable conversation timestamps + message_id_on=True, # Enable message IDs + verbose=True # Enable detailed logging +) +``` ## Customization ------------- -The `AgentRearrange` class and the `rearrange` function can be customized and extended to suit specific use cases. For example, you can create custom agents by inheriting from the `Agent` class and implementing custom logic for task processing. You can then add these custom agents to the swarm and define the flow pattern accordingly. 
- -Additionally, you can modify the `run` method of the `AgentRearrange` class to implement custom logic for task processing and agent interaction. - +The `AgentRearrange` class and the `rearrange` function can be customized and extended to suit specific use cases. The new sequential awareness features provide a foundation for building more sophisticated agent coordination systems. ## Limitations ----------- -It's important to note that the `AgentRearrange` class and the `rearrange` function rely on the individual agents to process tasks correctly. The quality of the output will depend on the capabilities and configurations of the agents used in the swarm. Additionally, the `AgentRearrange` class does not provide any mechanisms for task prioritization or load balancing among the agents. +It's important to note that the `AgentRearrange` class and the `rearrange` function rely on the individual agents to process tasks correctly. The quality of the output will depend on the capabilities and configurations of the agents used in the swarm. + +The sequential awareness feature works best with agents that can understand and utilize context about their position in the workflow. ## Conclusion ---------- -The `AgentRearrange` class and the `rearrange` function provide a flexible and extensible framework for orchestrating swarms of agents to process tasks based on a specified flow pattern. By combining the capabilities of individual agents, you can create complex workflows and leverage the strengths of different agents to tackle various tasks efficiently. +The `AgentRearrange` class and the `rearrange` function provide a flexible and extensible framework for orchestrating swarms of agents to process tasks based on a specified flow pattern. The new **sequential awareness** features significantly enhance agent collaboration by providing context about workflow relationships. + +By combining the capabilities of individual agents with enhanced awareness of their position in the workflow, you can create more intelligent and coordinated multi-agent systems that understand not just their individual tasks, but also their role in the larger workflow. + +Whether you're working on natural language processing tasks, data analysis, or any other domain where agent-based systems can be beneficial, the enhanced `AgentRearrange` class provides a solid foundation for building sophisticated swarm-based solutions with improved coordination and context awareness. -While the current implementation offers basic functionality for agent rearrangement, there is room for future improvements and customizations to enhance the system's capabilities and cater to more specific use cases. -Whether you're working on natural language processing tasks, data analysis, or any other domain where agent-based systems can be beneficial, the `AgentRearrange` class and the `rearrange` function provide a solid foundation for building and experimenting with swarm-based solutions. 
-------------------------------------------------- -# File: swarms\structs\agent_registry.md +# File: swarms/structs/agent_registry.md # AgentRegistry Documentation @@ -31216,7 +35283,7 @@ Each method in the `AgentRegistry` class includes logging to track the execution -------------------------------------------------- -# File: swarms\structs\artifact.md +# File: swarms/structs/artifact.md # swarms.structs Documentation @@ -31325,7 +35392,7 @@ This comprehensive documentation provides an in-depth understanding of the `Arti -------------------------------------------------- -# File: swarms\structs\auto_agent_builder.md +# File: swarms/structs/auto_agent_builder.md # Agent Builder @@ -31531,7 +35598,7 @@ Common issues and solutions: -------------------------------------------------- -# File: swarms\structs\auto_swarm.md +# File: swarms/structs/auto_swarm.md # AutoSwarm @@ -31727,7 +35794,7 @@ The `AutoSwarm` class provides a robust framework for managing and executing tas -------------------------------------------------- -# File: swarms\structs\auto_swarm_builder.md +# File: swarms/structs/auto_swarm_builder.md # AutoSwarmBuilder Documentation @@ -31912,7 +35979,7 @@ results = swarm.batch_run(tasks) -------------------------------------------------- -# File: swarms\structs\auto_swarm_router.md +# File: swarms/structs/auto_swarm_router.md # AutoSwarmRouter @@ -32082,7 +36149,7 @@ The `AutoSwarmRouter` class provides a flexible and customizable approach to rou -------------------------------------------------- -# File: swarms\structs\basestructure.md +# File: swarms/structs/basestructure.md # Module/Function Name: BaseStructure @@ -32225,7 +36292,7 @@ Please let me know if you need further elaboration on any specific aspect or fun -------------------------------------------------- -# File: swarms\structs\concurrentworkflow.md +# File: swarms/structs/concurrentworkflow.md # ConcurrentWorkflow Documentation @@ -32496,7 +36563,7 @@ except Exception as e: -------------------------------------------------- -# File: swarms\structs\conversation.md +# File: swarms/structs/conversation.md # Module/Class Name: Conversation @@ -32506,96 +36573,33 @@ The `Conversation` class is a powerful and flexible tool for managing conversati ### Key Features -- **Multiple Storage Backends**: Support for various storage solutions: - - In-memory: Fast, temporary storage for testing and development - - Supabase: PostgreSQL-based cloud storage with real-time capabilities - - Redis: High-performance caching and persistence - - SQLite: Local file-based storage - - DuckDB: Analytical workloads and columnar storage - - Pulsar: Event streaming for distributed systems - - Mem0: Memory-based storage with mem0 integration - -- **Token Management**: - - Built-in token counting with configurable models - - Automatic token tracking for input/output messages - - Token usage analytics and reporting - - Context length management - -- **Metadata and Categories**: - - Support for message metadata - - Message categorization (input/output) - - Role-based message tracking - - Custom message IDs - -- **Data Export/Import**: - - JSON and YAML export formats - - Automatic saving and loading - - Conversation history management - - Batch operations support - -- **Advanced Features**: - - Message search and filtering - - Conversation analytics - - Multi-agent support - - Error handling and fallbacks - - Type hints and validation +| Feature Category | Features / Description | 
+|----------------------------|-------------------------------------------------------------------------------------------------------------| +| **Multiple Storage Backends** | - In-memory: Fast, temporary storage for testing and development
- Supabase: PostgreSQL-based cloud storage with real-time capabilities
- Redis: High-performance caching and persistence
- SQLite: Local file-based storage
- DuckDB: Analytical workloads and columnar storage
- Pulsar: Event streaming for distributed systems
- Mem0: Memory-based storage with mem0 integration | +| **Token Management** | - Built-in token counting with configurable models
- Automatic token tracking for input/output messages
- Token usage analytics and reporting
- Context length management | +| **Metadata and Categories** | - Support for message metadata
- Message categorization (input/output)
- Role-based message tracking
- Custom message IDs | +| **Data Export/Import** | - JSON and YAML export formats
- Automatic saving and loading
- Conversation history management
- Batch operations support | +| **Advanced Features** | - Message search and filtering
- Conversation analytics
- Multi-agent support
- Error handling and fallbacks
- Type hints and validation | ### Use Cases -1. **Chatbot Development**: - - Store and manage conversation history - - Track token usage and context length - - Analyze conversation patterns - -2. **Multi-Agent Systems**: - - Coordinate multiple AI agents - - Track agent interactions - - Store agent outputs and metadata - -3. **Analytics Applications**: - - Track conversation metrics - - Generate usage reports - - Analyze user interactions - -4. **Production Systems**: - - Persistent storage with various backends - - Error handling and recovery - - Scalable conversation management - -5. **Development and Testing**: - - Fast in-memory storage - - Debugging support - - Easy export/import of test data +| Use Case | Features / Description | +|----------------------------|--------------------------------------------------------------------------------------------------------| +| **Chatbot Development** | - Store and manage conversation history
- Track token usage and context length
- Analyze conversation patterns | +| **Multi-Agent Systems** | - Coordinate multiple AI agents
- Track agent interactions
- Store agent outputs and metadata | +| **Analytics Applications** | - Track conversation metrics
- Generate usage reports
- Analyze user interactions | +| **Production Systems** | - Persistent storage with various backends
- Error handling and recovery
- Scalable conversation management | +| **Development and Testing**| - Fast in-memory storage
- Debugging support
- Easy export/import of test data | ### Best Practices -1. **Storage Selection**: - - Use in-memory for testing and development - - Choose Supabase for multi-user cloud applications - - Use Redis for high-performance requirements - - Select SQLite for single-user local applications - - Pick DuckDB for analytical workloads - - Opt for Pulsar in distributed systems - -2. **Token Management**: - - Enable token counting for production use - - Set appropriate context lengths - - Monitor token usage with export_and_count_categories() - -3. **Error Handling**: - - Implement proper fallback mechanisms - - Use type hints for better code reliability - - Monitor and log errors appropriately - -4. **Data Management**: - - Use appropriate export formats (JSON/YAML) - - Implement regular backup strategies - - Clean up old conversations when needed - -5. **Security**: - - Use environment variables for sensitive credentials - - Implement proper access controls - - Validate input data +| Category | Best Practices | +|---------------------|------------------------------------------------------------------------------------------------------------------------| +| **Storage Selection** | - Use in-memory for testing and development
- Choose Supabase for multi-user cloud applications
- Use Redis for high-performance requirements
- Select SQLite for single-user local applications
- Pick DuckDB for analytical workloads
- Opt for Pulsar in distributed systems | +| **Token Management** | - Enable token counting for production use
- Set appropriate context lengths
- Monitor token usage with `export_and_count_categories()` | +| **Error Handling** | - Implement proper fallback mechanisms
- Use type hints for better code reliability
- Monitor and log errors appropriately | +| **Data Management** | - Use appropriate export formats (JSON/YAML)
- Implement regular backup strategies
- Clean up old conversations when needed | +| **Security** | - Use environment variables for sensitive credentials
- Implement proper access controls
- Validate input data | ## Table of Contents @@ -32613,13 +36617,15 @@ The `Conversation` class is designed to manage conversations by keeping track of **New in this version**: The class now supports multiple storage backends for persistent conversation storage: -- **"in-memory"**: Default memory-based storage (no persistence) -- **"mem0"**: Memory-based storage with mem0 integration (requires: `pip install mem0ai`) -- **"supabase"**: PostgreSQL-based storage using Supabase (requires: `pip install supabase`) -- **"redis"**: Redis-based storage (requires: `pip install redis`) -- **"sqlite"**: SQLite-based storage (built-in to Python) -- **"duckdb"**: DuckDB-based storage (requires: `pip install duckdb`) -- **"pulsar"**: Apache Pulsar messaging backend (requires: `pip install pulsar-client`) +| Backend | Description | Requirements | +|--------------|-------------------------------------------------------------------------------------------------------------|------------------------------------| +| **in-memory**| Default memory-based storage (no persistence) | None (built-in) | +| **mem0** | Memory-based storage with mem0 integration | `pip install mem0ai` | +| **supabase** | PostgreSQL-based storage using Supabase | `pip install supabase` | +| **redis** | Redis-based storage | `pip install redis` | +| **sqlite** | SQLite-based storage (local file) | None (built-in) | +| **duckdb** | DuckDB-based storage (analytical workloads, columnar storage) | `pip install duckdb` | +| **pulsar** | Apache Pulsar messaging backend | `pip install pulsar-client` | All backends use **lazy loading** - database dependencies are only imported when the specific backend is instantiated. Each backend provides helpful error messages if required packages are not installed. @@ -32632,7 +36638,6 @@ All backends use **lazy loading** - database dependencies are only imported when | system_prompt | Optional[str] | System prompt for the conversation | | time_enabled | bool | Flag to enable time tracking for messages | | autosave | bool | Flag to enable automatic saving | -| save_enabled | bool | Flag to control if saving is enabled | | save_filepath | str | File path for saving conversation history | | load_filepath | str | File path for loading conversation history | | conversation_history | list | List storing conversation messages | @@ -33578,7 +37583,7 @@ Choose the appropriate backend based on your needs: -------------------------------------------------- -# File: swarms\structs\council_of_judges.md +# File: swarms/structs/council_of_judges.md # CouncilAsAJudge @@ -33867,7 +37872,7 @@ print(evaluation) -------------------------------------------------- -# File: swarms\structs\create_new_swarm.md +# File: swarms/structs/create_new_swarm.md # How to Add a New Swarm Class @@ -34083,7 +38088,7 @@ By following these guidelines, you can create swarms that are powerful, flexible -------------------------------------------------- -# File: swarms\structs\cron_job.md +# File: swarms/structs/cron_job.md # CronJob @@ -34209,6 +38214,363 @@ cron_job = CronJob( cron_job.run("Perform analysis") ``` + +### Cron Jobs With Multi-Agent Structures + +You can also run Cron Jobs with multi-agent structures like `SequentialWorkflow`, `ConcurrentWorkflow`, `HiearchicalSwarm`, and other methods. 
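+
+A minimal sketch of this pattern, assuming a two-agent `SequentialWorkflow` (the agent names, prompts, and interval below are illustrative, not fixed values):
+
+```python
+from swarms import Agent, CronJob, SequentialWorkflow
+
+# Two illustrative agents chained into a sequential workflow
+researcher = Agent(
+    agent_name="Market-Researcher",
+    system_prompt="Gather the latest market data for the requested topic.",
+    model_name="gpt-4.1",
+    max_loops=1,
+)
+writer = Agent(
+    agent_name="Report-Writer",
+    system_prompt="Summarize the research you receive into a short report.",
+    model_name="gpt-4.1",
+    max_loops=1,
+)
+swarm = SequentialWorkflow(agents=[researcher, writer], max_loops=1)
+
+# Any multi-agent structure can be passed wherever a single agent is expected
+cron_job = CronJob(agent=swarm, interval="5seconds")
+cron_job.run(task="Produce a brief market update")
+```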
+ +- Just initialize the class as the agent parameter in the `CronJob(agent=swarm)` + +- Input your arguments into the `.run(task: str)` method + + +```python +""" +Cryptocurrency Concurrent Multi-Agent Cron Job Example + +This example demonstrates how to use ConcurrentWorkflow with CronJob to create +a powerful cryptocurrency tracking system. Each specialized agent analyzes a +specific cryptocurrency concurrently every minute. + +Features: +- ConcurrentWorkflow for parallel agent execution +- CronJob scheduling for automated runs every 1 minute +- Each agent specializes in analyzing one specific cryptocurrency +- Real-time data fetching from CoinGecko API +- Concurrent analysis of multiple cryptocurrencies +- Structured output with professional formatting + +Architecture: +CronJob -> ConcurrentWorkflow -> [Bitcoin Agent, Ethereum Agent, Solana Agent, etc.] -> Parallel Analysis +""" + +from typing import List +from loguru import logger + +from swarms import Agent, CronJob, ConcurrentWorkflow +from swarms_tools import coin_gecko_coin_api + + +def create_crypto_specific_agents() -> List[Agent]: + """ + Creates agents that each specialize in analyzing a specific cryptocurrency. + + Returns: + List[Agent]: List of cryptocurrency-specific Agent instances + """ + + # Bitcoin Specialist Agent + bitcoin_agent = Agent( + agent_name="Bitcoin-Analyst", + agent_description="Expert analyst specializing exclusively in Bitcoin (BTC) analysis and market dynamics", + system_prompt="""You are a Bitcoin specialist and expert analyst. Your expertise includes: + +BITCOIN SPECIALIZATION: +- Bitcoin's unique position as digital gold +- Bitcoin halving cycles and their market impact +- Bitcoin mining economics and hash rate analysis +- Lightning Network and Layer 2 developments +- Bitcoin adoption by institutions and countries +- Bitcoin's correlation with traditional markets +- Bitcoin technical analysis and on-chain metrics +- Bitcoin's role as a store of value and hedge against inflation + +ANALYSIS FOCUS: +- Analyze ONLY Bitcoin data from the provided dataset +- Focus on Bitcoin-specific metrics and trends +- Consider Bitcoin's unique market dynamics +- Evaluate Bitcoin's dominance and market leadership +- Assess institutional adoption trends +- Monitor on-chain activity and network health + +DELIVERABLES: +- Bitcoin-specific analysis and insights +- Price action assessment and predictions +- Market dominance analysis +- Institutional adoption impact +- Technical and fundamental outlook +- Risk factors specific to Bitcoin + +Extract Bitcoin data from the provided dataset and provide comprehensive Bitcoin-focused analysis.""", + model_name="groq/moonshotai/kimi-k2-instruct", + max_loops=1, + dynamic_temperature_enabled=True, + streaming_on=False, + tools=[coin_gecko_coin_api], + ) + + # Ethereum Specialist Agent + ethereum_agent = Agent( + agent_name="Ethereum-Analyst", + agent_description="Expert analyst specializing exclusively in Ethereum (ETH) analysis and ecosystem development", + system_prompt="""You are an Ethereum specialist and expert analyst. 
Your expertise includes: + +ETHEREUM SPECIALIZATION: +- Ethereum's smart contract platform and DeFi ecosystem +- Ethereum 2.0 transition and proof-of-stake mechanics +- Gas fees, network usage, and scalability solutions +- Layer 2 solutions (Arbitrum, Optimism, Polygon) +- DeFi protocols and TVL (Total Value Locked) analysis +- NFT markets and Ethereum's role in digital assets +- Developer activity and ecosystem growth +- EIP proposals and network upgrades + +ANALYSIS FOCUS: +- Analyze ONLY Ethereum data from the provided dataset +- Focus on Ethereum's platform utility and network effects +- Evaluate DeFi ecosystem health and growth +- Assess Layer 2 adoption and scalability solutions +- Monitor network usage and gas fee trends +- Consider Ethereum's competitive position vs other smart contract platforms + +DELIVERABLES: +- Ethereum-specific analysis and insights +- Platform utility and adoption metrics +- DeFi ecosystem impact assessment +- Network health and scalability evaluation +- Competitive positioning analysis +- Technical and fundamental outlook for ETH + +Extract Ethereum data from the provided dataset and provide comprehensive Ethereum-focused analysis.""", + model_name="groq/moonshotai/kimi-k2-instruct", + max_loops=1, + dynamic_temperature_enabled=True, + streaming_on=False, + tools=[coin_gecko_coin_api], + ) + + # Solana Specialist Agent + solana_agent = Agent( + agent_name="Solana-Analyst", + agent_description="Expert analyst specializing exclusively in Solana (SOL) analysis and ecosystem development", + system_prompt="""You are a Solana specialist and expert analyst. Your expertise includes: + +SOLANA SPECIALIZATION: +- Solana's high-performance blockchain architecture +- Proof-of-History consensus mechanism +- Solana's DeFi ecosystem and DEX platforms (Serum, Raydium) +- NFT marketplaces and creator economy on Solana +- Network outages and reliability concerns +- Developer ecosystem and Rust programming adoption +- Validator economics and network decentralization +- Cross-chain bridges and interoperability + +ANALYSIS FOCUS: +- Analyze ONLY Solana data from the provided dataset +- Focus on Solana's performance and scalability advantages +- Evaluate network stability and uptime improvements +- Assess ecosystem growth and developer adoption +- Monitor DeFi and NFT activity on Solana +- Consider Solana's competitive position vs Ethereum + +DELIVERABLES: +- Solana-specific analysis and insights +- Network performance and reliability assessment +- Ecosystem growth and adoption metrics +- DeFi and NFT market analysis +- Competitive advantages and challenges +- Technical and fundamental outlook for SOL + +Extract Solana data from the provided dataset and provide comprehensive Solana-focused analysis.""", + model_name="groq/moonshotai/kimi-k2-instruct", + max_loops=1, + dynamic_temperature_enabled=True, + streaming_on=False, + tools=[coin_gecko_coin_api], + ) + + # Cardano Specialist Agent + cardano_agent = Agent( + agent_name="Cardano-Analyst", + agent_description="Expert analyst specializing exclusively in Cardano (ADA) analysis and research-driven development", + system_prompt="""You are a Cardano specialist and expert analyst. 
Your expertise includes: + +CARDANO SPECIALIZATION: +- Cardano's research-driven development approach +- Ouroboros proof-of-stake consensus protocol +- Smart contract capabilities via Plutus and Marlowe +- Cardano's three-layer architecture (settlement, computation, control) +- Academic partnerships and peer-reviewed research +- Cardano ecosystem projects and DApp development +- Native tokens and Cardano's UTXO model +- Sustainability and treasury funding mechanisms + +ANALYSIS FOCUS: +- Analyze ONLY Cardano data from the provided dataset +- Focus on Cardano's methodical development approach +- Evaluate smart contract adoption and ecosystem growth +- Assess academic partnerships and research contributions +- Monitor native token ecosystem development +- Consider Cardano's long-term roadmap and milestones + +DELIVERABLES: +- Cardano-specific analysis and insights +- Development progress and milestone achievements +- Smart contract ecosystem evaluation +- Academic research impact assessment +- Native token and DApp adoption metrics +- Technical and fundamental outlook for ADA + +Extract Cardano data from the provided dataset and provide comprehensive Cardano-focused analysis.""", + model_name="groq/moonshotai/kimi-k2-instruct", + max_loops=1, + dynamic_temperature_enabled=True, + streaming_on=False, + tools=[coin_gecko_coin_api], + ) + + # Binance Coin Specialist Agent + bnb_agent = Agent( + agent_name="BNB-Analyst", + agent_description="Expert analyst specializing exclusively in BNB analysis and Binance ecosystem dynamics", + system_prompt="""You are a BNB specialist and expert analyst. Your expertise includes: + +BNB SPECIALIZATION: +- BNB's utility within the Binance ecosystem +- Binance Smart Chain (BSC) development and adoption +- BNB token burns and deflationary mechanics +- Binance exchange volume and market leadership +- BSC DeFi ecosystem and yield farming +- Cross-chain bridges and multi-chain strategies +- Regulatory challenges facing Binance globally +- BNB's role in transaction fee discounts and platform benefits + +ANALYSIS FOCUS: +- Analyze ONLY BNB data from the provided dataset +- Focus on BNB's utility value and exchange benefits +- Evaluate BSC ecosystem growth and competition with Ethereum +- Assess token burn impact on supply and price +- Monitor Binance platform developments and regulations +- Consider BNB's centralized vs decentralized aspects + +DELIVERABLES: +- BNB-specific analysis and insights +- Utility value and ecosystem benefits assessment +- BSC adoption and DeFi growth evaluation +- Token economics and burn mechanism impact +- Regulatory risk and compliance analysis +- Technical and fundamental outlook for BNB + +Extract BNB data from the provided dataset and provide comprehensive BNB-focused analysis.""", + model_name="groq/moonshotai/kimi-k2-instruct", + max_loops=1, + dynamic_temperature_enabled=True, + streaming_on=False, + tools=[coin_gecko_coin_api], + ) + + # XRP Specialist Agent + xrp_agent = Agent( + agent_name="XRP-Analyst", + agent_description="Expert analyst specializing exclusively in XRP analysis and cross-border payment solutions", + system_prompt="""You are an XRP specialist and expert analyst. 
Your expertise includes: + +XRP SPECIALIZATION: +- XRP's role in cross-border payments and remittances +- RippleNet adoption by financial institutions +- Central Bank Digital Currency (CBDC) partnerships +- Regulatory landscape and SEC lawsuit implications +- XRP Ledger's consensus mechanism and energy efficiency +- On-Demand Liquidity (ODL) usage and growth +- Competition with SWIFT and traditional payment rails +- Ripple's partnerships with banks and payment providers + +ANALYSIS FOCUS: +- Analyze ONLY XRP data from the provided dataset +- Focus on XRP's utility in payments and remittances +- Evaluate RippleNet adoption and institutional partnerships +- Assess regulatory developments and legal clarity +- Monitor ODL usage and transaction volumes +- Consider XRP's competitive position in payments + +DELIVERABLES: +- XRP-specific analysis and insights +- Payment utility and adoption assessment +- Regulatory landscape and legal developments +- Institutional partnership impact evaluation +- Cross-border payment market analysis +- Technical and fundamental outlook for XRP + +Extract XRP data from the provided dataset and provide comprehensive XRP-focused analysis.""", + model_name="groq/moonshotai/kimi-k2-instruct", + max_loops=1, + dynamic_temperature_enabled=True, + streaming_on=False, + tools=[coin_gecko_coin_api], + ) + + return [ + bitcoin_agent, + ethereum_agent, + solana_agent, + cardano_agent, + bnb_agent, + xrp_agent, + ] + + +def create_crypto_workflow() -> ConcurrentWorkflow: + """ + Creates a ConcurrentWorkflow with cryptocurrency-specific analysis agents. + + Returns: + ConcurrentWorkflow: Configured workflow for crypto analysis + """ + agents = create_crypto_specific_agents() + + workflow = ConcurrentWorkflow( + name="Crypto-Specific-Analysis-Workflow", + description="Concurrent execution of cryptocurrency-specific analysis agents", + agents=agents, + max_loops=1, + ) + + return workflow + + +def create_crypto_cron_job() -> CronJob: + """ + Creates a CronJob that runs cryptocurrency-specific analysis every minute using ConcurrentWorkflow. + + Returns: + CronJob: Configured cron job for automated crypto analysis + """ + # Create the concurrent workflow + workflow = create_crypto_workflow() + + # Create the cron job + cron_job = CronJob( + agent=workflow, # Use the workflow as the agent + interval="5seconds", # Run every 1 minute + ) + + return cron_job + + +def main(): + """ + Main function to run the cryptocurrency-specific concurrent analysis cron job. + """ + cron_job = create_crypto_cron_job() + + prompt = """ + + Conduct a comprehensive analysis of your assigned cryptocurrency. + + """ + + # Start the cron job + logger.info("🔄 Starting automated analysis loop...") + logger.info("⏰ Press Ctrl+C to stop the cron job") + + output = cron_job.run(task=prompt) + print(output) + + +if __name__ == "__main__": + main() +``` + ## Conclusion The CronJob class provides a powerful way to schedule and automate tasks using Swarms Agents or custom functions. 
Key benefits include: @@ -34227,7 +38589,7 @@ The CronJob class provides a powerful way to schedule and automate tasks using S -------------------------------------------------- -# File: swarms\structs\custom_swarm.md +# File: swarms/structs/custom_swarm.md # Building Custom Swarms: A Comprehensive Guide for Swarm Engineers @@ -35034,199 +39396,7 @@ For more advanced patterns and examples, explore the [Swarms Examples](../../exa -------------------------------------------------- -# File: swarms\structs\deep_research_swarm.md - -# Deep Research Swarm - -!!! abstract "Overview" - The Deep Research Swarm is a powerful, production-grade research system that conducts comprehensive analysis across multiple domains using parallel processing and advanced AI agents. - - Key Features: - - - Parallel search processing - - - Multi-agent research coordination - - - Advanced information synthesis - - - Automated query generation - - - Concurrent task execution - -## Getting Started - -!!! tip "Quick Installation" - ```bash - pip install swarms - ``` - -=== "Basic Usage" - ```python - from swarms.structs import DeepResearchSwarm - - # Initialize the swarm - swarm = DeepResearchSwarm( - name="MyResearchSwarm", - output_type="json", - max_loops=1 - ) - - # Run a single research task - results = swarm.run("What are the latest developments in quantum computing?") - ``` - -=== "Batch Processing" - ```python - # Run multiple research tasks in parallel - tasks = [ - "What are the environmental impacts of electric vehicles?", - "How is AI being used in drug discovery?", - ] - batch_results = swarm.batched_run(tasks) - ``` - -## Configuration - -!!! info "Constructor Arguments" - | Parameter | Type | Default | Description | - |-----------|------|---------|-------------| - | `name` | str | "DeepResearchSwarm" | Name identifier for the swarm | - | `description` | str | "A swarm that conducts..." | Description of the swarm's purpose | - | `research_agent` | Agent | research_agent | Custom research agent instance | - | `max_loops` | int | 1 | Maximum number of research iterations | - | `nice_print` | bool | True | Enable formatted console output | - | `output_type` | str | "json" | Output format ("json" or "string") | - | `max_workers` | int | CPU_COUNT * 2 | Maximum concurrent threads | - | `token_count` | bool | False | Enable token counting | - | `research_model_name` | str | "gpt-4o-mini" | Model to use for research | - -## Core Methods - -### Run -!!! example "Single Task Execution" - ```python - results = swarm.run("What are the latest breakthroughs in fusion energy?") - ``` - -### Batched Run -!!! example "Parallel Task Execution" - ```python - tasks = [ - "What are current AI safety initiatives?", - "How is CRISPR being used in agriculture?", - ] - results = swarm.batched_run(tasks) - ``` - -### Step -!!! example "Single Step Execution" - ```python - results = swarm.step("Analyze recent developments in renewable energy storage") - ``` - -## Domain-Specific Examples - -=== "Scientific Research" - ```python - science_swarm = DeepResearchSwarm( - name="ScienceSwarm", - output_type="json", - max_loops=2 # More iterations for thorough research - ) - - results = science_swarm.run( - "What are the latest experimental results in quantum entanglement?" - ) - ``` - -=== "Market Research" - ```python - market_swarm = DeepResearchSwarm( - name="MarketSwarm", - output_type="json" - ) - - results = market_swarm.run( - "What are the emerging trends in electric vehicle battery technology market?" 
- ) - ``` - -=== "News Analysis" - ```python - news_swarm = DeepResearchSwarm( - name="NewsSwarm", - output_type="string" # Human-readable output - ) - - results = news_swarm.run( - "What are the global economic impacts of recent geopolitical events?" - ) - ``` - -=== "Medical Research" - ```python - medical_swarm = DeepResearchSwarm( - name="MedicalSwarm", - max_loops=2 - ) - - results = medical_swarm.run( - "What are the latest clinical trials for Alzheimer's treatment?" - ) - ``` - -## Advanced Features - -??? note "Custom Research Agent" - ```python - from swarms import Agent - - custom_agent = Agent( - agent_name="SpecializedResearcher", - system_prompt="Your specialized prompt here", - model_name="gpt-4" - ) - - swarm = DeepResearchSwarm( - research_agent=custom_agent, - max_loops=2 - ) - ``` - -??? note "Parallel Processing Control" - ```python - swarm = DeepResearchSwarm( - max_workers=8, # Limit to 8 concurrent threads - nice_print=False # Disable console output for production - ) - ``` - -## Best Practices - -!!! success "Recommended Practices" - 1. **Query Formulation**: Be specific and clear in your research queries - 2. **Resource Management**: Adjust `max_workers` based on your system's capabilities - 3. **Output Handling**: Use appropriate `output_type` for your use case - 4. **Error Handling**: Implement try-catch blocks around swarm operations - 5. **Model Selection**: Choose appropriate models based on research complexity - -## Limitations - -!!! warning "Known Limitations" - - - Requires valid API keys for external services - - - Performance depends on system resources - - - Rate limits may apply to external API calls - - - Token limits apply to model responses - - - --------------------------------------------------- - -# File: swarms\structs\diy_your_own_agent.md +# File: swarms/structs/diy_your_own_agent.md # Create your own agent with `Agent` class @@ -35527,7 +39697,7 @@ Remember, the journey of building custom agent classes is an iterative and colla -------------------------------------------------- -# File: swarms\structs\forest_swarm.md +# File: swarms/structs/forest_swarm.md # Forest Swarm @@ -35725,7 +39895,7 @@ This **Multi-Agent Tree Structure** provides an efficient, scalable, and accurat -------------------------------------------------- -# File: swarms\structs\graph_workflow.md +# File: swarms/structs/graph_workflow.md # GraphWorkflow @@ -36532,7 +40702,7 @@ The GraphWorkflow system represents a significant advancement in multi-agent orc -------------------------------------------------- -# File: swarms\structs\group_chat.md +# File: swarms/structs/group_chat.md # GroupChat Swarm Documentation @@ -36568,24 +40738,6 @@ A production-grade multi-agent system enabling sophisticated group conversations | max_loops | int | 10 | Maximum conversation turns | -## Table of Contents - -- [Installation](#installation) -- [Core Concepts](#core-concepts) -- [Basic Usage](#basic-usage) -- [Advanced Configuration](#advanced-configuration) -- [Speaker Functions](#speaker-functions) -- [Response Models](#response-models) -- [Advanced Examples](#advanced-examples) -- [API Reference](#api-reference) -- [Best Practices](#best-practices) - -## Installation - -```bash -pip3 install swarms swarm-models loguru -``` - ## Core Concepts The GroupChat system consists of several key components: @@ -36601,55 +40753,23 @@ The GroupChat system consists of several key components: import os from dotenv import load_dotenv -from swarm_models import OpenAIChat from swarms import Agent, 
GroupChat, expertise_based - if __name__ == "__main__": - load_dotenv() - - # Get the OpenAI API key from the environment variable - api_key = os.getenv("OPENAI_API_KEY") - - # Create an instance of the OpenAIChat class - model = OpenAIChat( - openai_api_key=api_key, - model_name="gpt-4o-mini", - temperature=0.1, - ) - # Example agents agent1 = Agent( agent_name="Financial-Analysis-Agent", system_prompt="You are a financial analyst specializing in investment strategies.", - llm=model, + model_name="gpt-4.1", max_loops=1, - autosave=False, - dashboard=False, - verbose=True, - dynamic_temperature_enabled=True, - user_name="swarms_corp", - retry_attempts=1, - context_length=200000, - output_type="string", - streaming_on=False, ) agent2 = Agent( agent_name="Tax-Adviser-Agent", system_prompt="You are a tax adviser who provides clear and concise guidance on tax-related queries.", - llm=model, + model_name="gpt-4.1", max_loops=1, - autosave=False, - dashboard=False, - verbose=True, - dynamic_temperature_enabled=True, - user_name="swarms_corp", - retry_attempts=1, - context_length=200000, - output_type="string", - streaming_on=False, ) agents = [agent1, agent2] @@ -36748,36 +40868,6 @@ chat = GroupChat( ) ``` -## Response Models - -### Complete Schema - -```python -class AgentResponse(BaseModel): - """Individual agent response in a conversation turn""" - agent_name: str - role: str - message: str - timestamp: datetime = Field(default_factory=datetime.now) - turn_number: int - preceding_context: List[str] = Field(default_factory=list) - -class ChatTurn(BaseModel): - """Single turn in the conversation""" - turn_number: int - responses: List[AgentResponse] - task: str - timestamp: datetime = Field(default_factory=datetime.now) - -class ChatHistory(BaseModel): - """Complete conversation history""" - turns: List[ChatTurn] - total_messages: int - name: str - description: str - start_time: datetime = Field(default_factory=datetime.now) -``` - ## Advanced Examples ### Multi-Agent Analysis Team @@ -36787,19 +40877,19 @@ class ChatHistory(BaseModel): data_analyst = Agent( agent_name="Data-Analyst", system_prompt="You analyze numerical data and patterns", - llm=model + model_name="gpt-4.1", ) market_expert = Agent( agent_name="Market-Expert", system_prompt="You provide market insights and trends", - llm=model + model_name="gpt-4.1", ) strategy_advisor = Agent( agent_name="Strategy-Advisor", system_prompt="You formulate strategic recommendations", - llm=model + model_name="gpt-4.1", ) # Create analysis team @@ -36844,29 +40934,12 @@ for task, history in zip(tasks, histories): ## Best Practices -1. **Agent Design** - - Give agents clear, specific roles - - Use detailed system prompts - - Set appropriate context lengths - - Enable retries for reliability - -2. **Speaker Functions** - - Match function to use case - - Consider conversation flow - - Handle edge cases - - Add appropriate logging - -3. **Error Handling** - - Use try-except blocks - - Log errors appropriately - - Implement retry logic - - Provide fallback responses - -4. **Performance** - - Use concurrent processing for multiple tasks - - Monitor context lengths - - Implement proper cleanup - - Cache responses when appropriate +| Category | Recommendations | +|---------------------|--------------------------------------------------------------------------------------------------| +| **Agent Design** | - Give agents clear, specific roles
- Use detailed system prompts
- Set appropriate context lengths
- Enable retries for reliability | +| **Speaker Functions** | - Match function to use case
- Consider conversation flow
- Handle edge cases
- Add appropriate logging | +| **Error Handling** | - Use try-except blocks
- Log errors appropriately
- Implement retry logic
- Provide fallback responses | +| **Performance** | - Use concurrent processing for multiple tasks
- Monitor context lengths
- Implement proper cleanup
- Cache responses when appropriate | ## API Reference @@ -36889,7 +40962,7 @@ for task, history in zip(tasks, histories): -------------------------------------------------- -# File: swarms\structs\heavy_swarm.md +# File: swarms/structs/heavy_swarm.md # HeavySwarm Documentation @@ -37217,7 +41290,7 @@ HeavySwarm is part of the Swarms ecosystem. Contributions are welcome for: -------------------------------------------------- -# File: swarms\structs\hhcs.md +# File: swarms/structs/hhcs.md # Hybrid Hierarchical-Cluster Swarm [HHCS] @@ -37445,7 +41518,190 @@ if __name__ == "__main__": -------------------------------------------------- -# File: swarms\structs\hierarchical_swarm.md +# File: swarms/structs/hierarchical_structured_communication_framework.md + +# Hierarchical Structured Communication Framework + +The Hierarchical Structured Communication Framework implements the "Talk Structurally, Act Hierarchically" approach for LLM multi-agent systems, based on the research paper arXiv:2502.11098. + +## Overview + +This framework provides: +- **Structured Communication Protocol** with Message (M_ij), Background (B_ij), and Intermediate Output (I_ij) +- **Hierarchical Evaluation System** with supervisor coordination +- **Specialized Agent Classes** for different roles +- **Main Swarm Orchestrator** for workflow management + +## Key Components + +### Agent Classes +- `HierarchicalStructuredCommunicationGenerator` - Creates initial content +- `HierarchicalStructuredCommunicationEvaluator` - Evaluates content quality +- `HierarchicalStructuredCommunicationRefiner` - Improves content based on feedback +- `HierarchicalStructuredCommunicationSupervisor` - Coordinates workflow + +### Main Framework +- `HierarchicalStructuredCommunicationFramework` - Main orchestrator class +- `HierarchicalStructuredCommunicationSwarm` - Convenience alias + +## Quick Start + +```python +from swarms.structs.hierarchical_structured_communication_framework import ( + HierarchicalStructuredCommunicationFramework, + HierarchicalStructuredCommunicationGenerator, + HierarchicalStructuredCommunicationEvaluator, + HierarchicalStructuredCommunicationRefiner, + HierarchicalStructuredCommunicationSupervisor +) + +# Create specialized agents +generator = HierarchicalStructuredCommunicationGenerator( + agent_name="ContentGenerator" +) + +evaluator = HierarchicalStructuredCommunicationEvaluator( + agent_name="QualityEvaluator" +) + +refiner = HierarchicalStructuredCommunicationRefiner( + agent_name="ContentRefiner" +) + +supervisor = HierarchicalStructuredCommunicationSupervisor( + agent_name="WorkflowSupervisor" +) + +# Create the framework +framework = HierarchicalStructuredCommunicationFramework( + name="MyFramework", + supervisor=supervisor, + generators=[generator], + evaluators=[evaluator], + refiners=[refiner], + max_loops=3 +) + +# Run the workflow +result = framework.run("Create a comprehensive analysis of AI trends in 2024") +``` + +## Basic Usage + +```python +from swarms.structs.hierarchical_structured_communication_framework import ( + HierarchicalStructuredCommunicationFramework, + HierarchicalStructuredCommunicationGenerator, + HierarchicalStructuredCommunicationEvaluator, + HierarchicalStructuredCommunicationRefiner +) + +# Create agents with custom names +generator = HierarchicalStructuredCommunicationGenerator(agent_name="ContentGenerator") +evaluator = HierarchicalStructuredCommunicationEvaluator(agent_name="QualityEvaluator") +refiner = HierarchicalStructuredCommunicationRefiner(agent_name="ContentRefiner") + +# 
Create framework with default supervisor +framework = HierarchicalStructuredCommunicationFramework( + generators=[generator], + evaluators=[evaluator], + refiners=[refiner], + max_loops=3, + verbose=True +) + +# Execute task +result = framework.run("Write a detailed report on renewable energy technologies") +print(result["final_result"]) +``` + +## Advanced Configuration + +```python +from swarms.structs.hierarchical_structured_communication_framework import ( + HierarchicalStructuredCommunicationFramework +) + +# Create framework with custom configuration +framework = HierarchicalStructuredCommunicationFramework( + name="AdvancedFramework", + max_loops=5, + enable_structured_communication=True, + enable_hierarchical_evaluation=True, + shared_memory=True, + model_name="gpt-4o-mini", + verbose=True +) + +# Run with custom parameters +result = framework.run( + "Analyze the impact of climate change on global agriculture", + max_loops=3 +) +``` + +## Integration with Other Swarms + +```python +from swarms.structs.hierarchical_structured_communication_framework import ( + HierarchicalStructuredCommunicationFramework +) +from swarms.structs import AutoSwarmBuilder + +# Use HierarchicalStructuredCommunicationFramework for content generation +framework = HierarchicalStructuredCommunicationFramework( + max_loops=2, + verbose=True +) + +# Integrate with AutoSwarmBuilder +builder = AutoSwarmBuilder() +swarm = builder.create_swarm( + swarm_type="HierarchicalStructuredCommunicationFramework", + task="Generate a comprehensive business plan" +) +``` + +## API Reference + +### HierarchicalStructuredCommunicationFramework + +The main orchestrator class that implements the complete framework. + +#### Parameters +- `name` (str): Name of the framework +- `supervisor`: Main supervisor agent +- `generators` (List): List of generator agents +- `evaluators` (List): List of evaluator agents +- `refiners` (List): List of refiner agents +- `max_loops` (int): Maximum refinement loops +- `enable_structured_communication` (bool): Enable structured protocol +- `enable_hierarchical_evaluation` (bool): Enable hierarchical evaluation +- `verbose` (bool): Enable verbose logging + +#### Methods +- `run(task)`: Execute complete workflow +- `step(task)`: Execute single workflow step +- `send_structured_message()`: Send structured communication +- `run_hierarchical_evaluation()`: Run evaluation system + +## Contributing + +Contributions to improve the Hierarchical Structured Communication Framework are welcome! Please: + +1. Follow the existing code style and patterns +2. Add comprehensive tests for new features +3. Update documentation for any API changes +4. Ensure all imports use the correct module paths + +## License + +This framework is part of the Swarms project and follows the same licensing terms. + + +-------------------------------------------------- + +# File: swarms/structs/hierarchical_swarm.md # `HierarchicalSwarm` @@ -37810,7 +42066,7 @@ The `HierarchicalSwarm` includes comprehensive error handling with detailed logg -------------------------------------------------- -# File: swarms\structs\image_batch_agent.md +# File: swarms/structs/image_batch_agent.md # ImageAgentBatchProcessor Documentation @@ -38087,33 +42343,29 @@ This documentation provides a comprehensive guide to using the `ImageAgentBatchP -------------------------------------------------- -# File: swarms\structs\index.md +# File: swarms/structs/index.md # Introduction to Multi-Agent Collaboration --- -## 🚀 Benefits of Multi-Agent Collaboration - -
- Benefits of Multi-Agent Collaboration -
- Fig. 1: Key benefits and structure of multi-agent collaboration -
+## Benefits of Multi-Agent Collaboration ### Why Multi-Agent Architectures? Multi-agent systems unlock new levels of intelligence, reliability, and efficiency by enabling agents to work together. Here are the core benefits: -1. **Reduction of Hallucination**: Cross-verification between agents ensures more accurate, reliable outputs by reducing hallucination. -2. **Extended Memory**: Agents share knowledge and task history, achieving collective long-term memory for smarter, more adaptive responses. -3. **Specialization & Task Distribution**: Delegating tasks to specialized agents boosts efficiency and quality. -4. **Parallel Processing**: Multiple agents work simultaneously, greatly increasing speed and throughput. -5. **Scalability & Adaptability**: Systems can dynamically scale and adapt, maintaining efficiency as demands change. +| **Benefit** | **Description** | +|------------------------------------|----------------------------------------------------------------------------------------------------------------------| +| **Reduction of Hallucination** | Cross-verification between agents ensures more accurate, reliable outputs by reducing hallucination. | +| **Extended Memory** | Agents share knowledge and task history, achieving collective long-term memory for smarter, more adaptive responses. | +| **Specialization & Task Distribution** | Delegating tasks to specialized agents boosts efficiency and quality. | +| **Parallel Processing** | Multiple agents work simultaneously, greatly increasing speed and throughput. | +| **Scalability & Adaptability** | Systems can dynamically scale and adapt, maintaining efficiency as demands change. | --- -## 🏗️ Multi-Agent Architectures For Production Deployments +## Multi-Agent Architectures For Production Deployments `swarms` provides a variety of powerful, pre-built multi-agent architectures enabling you to orchestrate agents in various ways. Choose the right structure for your specific problem to build efficient and reliable production systems. @@ -38133,7 +42385,7 @@ Multi-agent systems unlock new levels of intelligence, reliability, and efficien --- -### 🏢 HierarchicalSwarm Example +### HierarchicalSwarm Example Hierarchical architectures enable structured, iterative, and scalable problem-solving by combining a director (or router) agent with specialized worker agents or swarms. 
Below are two key patterns: @@ -38383,14 +42635,14 @@ Join our community of agent engineers and researchers for technical support, cut | Platform | Description | Link | |----------|-------------|------| -| 📚 Documentation | Official documentation and guides | [docs.swarms.world](https://docs.swarms.world) | -| 📝 Blog | Latest updates and technical articles | [Medium](https://medium.com/@kyeg) | -| 💬 Discord | Live chat and community support | [Join Discord](https://discord.gg/EamjgSaEQf) | -| 🐦 Twitter | Latest news and announcements | [@kyegomez](https://twitter.com/kyegomez) | -| 👥 LinkedIn | Professional network and updates | [The Swarm Corporation](https://www.linkedin.com/company/the-swarm-corporation) | -| 📺 YouTube | Tutorials and demos | [Swarms Channel](https://www.youtube.com/channel/UC9yXyitkbU_WSy7bd_41SqQ) | -| 🎫 Events | Join our community events | [Sign up here](https://lu.ma/5p2jnc2v) | -| 🚀 Onboarding Session | Get onboarded with Kye Gomez, creator and lead maintainer of Swarms | [Book Session](https://cal.com/swarms/swarms-onboarding-session) | +| Documentation | Official documentation and guides | [docs.swarms.world](https://docs.swarms.world) | +| Blog | Latest updates and technical articles | [Medium](https://medium.com/@kyeg) | +| Discord | Live chat and community support | [Join Discord](https://discord.gg/EamjgSaEQf) | +| Twitter | Latest news and announcements | [@kyegomez](https://twitter.com/kyegomez) | +| LinkedIn | Professional network and updates | [The Swarm Corporation](https://www.linkedin.com/company/the-swarm-corporation) | +| YouTube | Tutorials and demos | [Swarms Channel](https://www.youtube.com/channel/UC9yXyXyitkbU_WSy7bd_41SqQ) | +| Events | Join our community events | [Sign up here](https://lu.ma/5p2jnc2v) | +| Onboarding Session | Get onboarded with Kye Gomez, creator and lead maintainer of Swarms | [Book Session](https://cal.com/swarms/swarms-onboarding-session) | --- @@ -38398,7 +42650,7 @@ Join our community of agent engineers and researchers for technical support, cut -------------------------------------------------- -# File: swarms\structs\interactive_groupchat.md +# File: swarms/structs/interactive_groupchat.md # InteractiveGroupChat Documentation @@ -39355,7 +43607,7 @@ This project is licensed under the Apache License - see the LICENSE file for det -------------------------------------------------- -# File: swarms\structs\majorityvoting.md +# File: swarms/structs/majorityvoting.md # MajorityVoting Module Documentation @@ -39577,7 +43829,7 @@ print(result) -------------------------------------------------- -# File: swarms\structs\malt.md +# File: swarms/structs/malt.md # MALT: Multi-Agent Learning Task Framework @@ -39848,7 +44100,7 @@ result = malt.run(task) -------------------------------------------------- -# File: swarms\structs\matrix_swarm.md +# File: swarms/structs/matrix_swarm.md # MatrixSwarm @@ -40100,12 +44352,10 @@ class AgentOutput(BaseModel): -------------------------------------------------- -# File: swarms\structs\moa.md +# File: swarms/structs/moa.md # MixtureOfAgents Class Documentation -## Architecture Overview - ```mermaid graph TD A[Input Task] --> B[Initialize MixtureOfAgents] @@ -40130,7 +44380,6 @@ graph TD end ``` -## Overview The `MixtureOfAgents` class represents a mixture of agents operating within a swarm. The workflow of the swarm follows a parallel → sequential → parallel → final output agent process. 
This implementation is inspired by concepts discussed in the paper: [https://arxiv.org/pdf/2406.04692](https://arxiv.org/pdf/2406.04692). @@ -40177,45 +44426,8 @@ class MixtureOfAgents(BaseSwarm): | `auto_save` | `bool` | Flag indicating whether to auto-save the metadata to a file. | `False` | | `saved_file_name`| `str` | The name of the file where the metadata will be saved. | `"moe_swarm.json"` | -### `agent_check` - -```python -def agent_check(self): -``` - -#### Description - -Checks if the provided `agents` attribute is a list of `Agent` instances. Raises a `TypeError` if the validation fails. - -#### Example Usage - -```python -moe_swarm = MixtureOfAgents(agents=[agent1, agent2]) -moe_swarm.agent_check() # Validates the agents -``` - -### `final_agent_check` -```python -def final_agent_check(self): -``` - -#### Description - -Checks if the provided `final_agent` attribute is an instance of `Agent`. Raises a `TypeError` if the validation fails. - -#### Example Usage - -```python -moe_swarm = MixtureOfAgents(final_agent=final_agent) -moe_swarm.final_agent_check() # Validates the final agent -``` -### `swarm_initialization` - -```python -def swarm_initialization(self): -``` #### Description @@ -40384,48 +44596,28 @@ For further reading and background information on the concepts used in the `Mixt ```python from swarms import MixtureOfAgents, Agent -from swarm_models import OpenAIChat - # Define agents director = Agent( agent_name="Director", system_prompt="Directs the tasks for the accountants", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="director.json", ) # Initialize accountant 1 accountant1 = Agent( agent_name="Accountant1", system_prompt="Prepares financial statements", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="accountant1.json", ) # Initialize accountant 2 accountant2 = Agent( agent_name="Accountant2", system_prompt="Audits financial records", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="accountant2.json", ) @@ -40442,49 +44634,28 @@ print(history) ```python from swarms import MixtureOfAgents, Agent -from swarm_models import OpenAIChat - # Define Agents -# Define agents director = Agent( agent_name="Director", system_prompt="Directs the tasks for the accountants", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="director.json", ) # Initialize accountant 1 accountant1 = Agent( agent_name="Accountant1", system_prompt="Prepares financial statements", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="accountant1.json", ) # Initialize accountant 2 accountant2 = Agent( agent_name="Accountant2", system_prompt="Audits financial records", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="accountant2.json", ) # Initialize the MixtureOfAgents with verbose output and auto-save 
enabled @@ -40505,49 +44676,30 @@ print(history) ```python from swarms import MixtureOfAgents, Agent -from swarm_models import OpenAIChat # Define agents # Initialize the director agent director = Agent( agent_name="Director", system_prompt="Directs the tasks for the accountants", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="director.json", ) # Initialize accountant 1 accountant1 = Agent( agent_name="Accountant1", system_prompt="Prepares financial statements", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="accountant1.json", ) # Initialize accountant 2 accountant2 = Agent( agent_name="Accountant2", system_prompt="Audits financial records", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="accountant2.json", ) # Initialize the MixtureOfAgents with custom rules and multiple layers @@ -40572,11 +44724,13 @@ The `MixtureOfAgents` class is a powerful and flexible framework for managing an ### Key Takeaways -1. **Flexible Initialization**: The class allows for customizable initialization with various parameters, enabling users to tailor the swarm's configuration to their specific needs. -2. **Robust Agent Management**: With built-in validation methods, the class ensures that all agents and the final agent are correctly instantiated, preventing runtime errors and facilitating smooth execution. -3. **Layered Processing**: The layered approach to processing allows for intermediate results to be iteratively refined, enhancing the overall output quality. -4. **Verbose Logging and Auto-Save**: These features aid in debugging, monitoring, and record-keeping, providing transparency and ease of management. -5. **Comprehensive Documentation**: The detailed class and method documentation, along with numerous usage examples, provide a clear and thorough understanding of how to leverage the `MixtureOfAgents` class effectively. +| Feature | Description | +|-----------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| **Flexible Initialization** | The class allows for customizable initialization with various parameters, enabling users to tailor the swarm's configuration to their specific needs. | +| **Robust Agent Management** | Built-in validation methods ensure that all agents and the final agent are correctly instantiated, preventing runtime errors and facilitating smooth execution. | +| **Layered Processing** | The layered approach to processing allows for intermediate results to be iteratively refined, enhancing the overall output quality. | +| **Verbose Logging and Auto-Save** | Features such as verbose logging and auto-save aid in debugging, monitoring, and record-keeping, providing transparency and ease of management. | +| **Comprehensive Documentation** | Detailed class and method documentation, along with numerous usage examples, provide a clear and thorough understanding of how to leverage the `MixtureOfAgents` class effectively. 
| ### Practical Applications @@ -40603,46 +44757,28 @@ In conclusion, the `MixtureOfAgents` class represents a versatile and efficient ```python from swarms import MixtureOfAgents, Agent -from swarm_models import OpenAIChat # Initialize agents as in previous examples director = Agent( agent_name="Director", system_prompt="Directs the tasks for the accountants", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="director.json", + ) accountant1 = Agent( agent_name="Accountant1", system_prompt="Prepares financial statements", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="accountant1.json", ) accountant2 = Agent( agent_name="Accountant2", system_prompt="Audits financial records", - llm=OpenAIChat(), + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="accountant2.json", ) # Initialize MixtureOfAgents @@ -40666,7 +44802,6 @@ for task, result in zip(tasks, results): ```python from swarms import MixtureOfAgents, Agent -from swarm_models import OpenAIChat # Initialize agents as before # ... agent initialization code ... @@ -40689,26 +44824,11 @@ for task, result in zip(tasks, results): print(f"Task: {task}\nResult: {result}\n") ``` -## Advanced Features - -### Context Preservation -The `MixtureOfAgents` class maintains context between iterations when running multiple loops. Each subsequent iteration receives the context from previous runs, allowing for more sophisticated and context-aware processing. - -### Asynchronous Processing - -The class implements asynchronous processing internally using Python's `asyncio`, enabling efficient handling of concurrent operations and improved performance for complex workflows. - -### Telemetry and Logging - -Built-in telemetry and logging capabilities help track agent performance and maintain detailed execution records: -- Automatic logging of agent outputs -- Structured data capture using Pydantic models -- JSON-formatted output options -------------------------------------------------- -# File: swarms\structs\model_router.md +# File: swarms/structs/model_router.md # ModelRouter Docs @@ -41075,7 +45195,7 @@ analyses = pipeline.process_documents(documents) -------------------------------------------------- -# File: swarms\structs\multi_agent_collaboration_examples.md +# File: swarms/structs/multi_agent_collaboration_examples.md # Multi-Agent Examples @@ -41322,7 +45442,7 @@ print(out) -------------------------------------------------- -# File: swarms\structs\multi_agent_orchestration.md +# File: swarms/structs/multi_agent_orchestration.md # Multi-Agent Orchestration: Swarms was designed to faciliate the communication between many different and specialized agents from a vast array of other frameworks such as langchain, autogen, crew, and more. 
@@ -41343,7 +45463,7 @@ In traditional swarm theory, there are many types of swarms usually for very spe -------------------------------------------------- -# File: swarms\structs\multi_agent_router.md +# File: swarms/structs/multi_agent_router.md # MultiAgentRouter Documentation @@ -41679,7 +45799,7 @@ for agent in router.agents.values(): -------------------------------------------------- -# File: swarms\structs\multi_swarm_orchestration.md +# File: swarms/structs/multi_swarm_orchestration.md # Hierarchical Agent Orchestration Architectures @@ -41889,7 +46009,7 @@ This documentation provides a high-level overview of the main hierarchical agent -------------------------------------------------- -# File: swarms\structs\multi_threaded_workflow.md +# File: swarms/structs/multi_threaded_workflow.md # MultiThreadedWorkflow Documentation @@ -42008,7 +46128,7 @@ For more information on threading and concurrent execution in Python, refer to t -------------------------------------------------- -# File: swarms\structs\orchestration_methods.md +# File: swarms/structs/orchestration_methods.md # Multi-Agent Orchestration Methods @@ -42548,7 +46668,7 @@ The multi-agent orchestration methods in Swarms provide powerful frameworks for -------------------------------------------------- -# File: swarms\structs\overview.md +# File: swarms/structs/overview.md # Multi-Agent Architectures Overview @@ -42624,21 +46744,17 @@ For more detailed information about each architecture, please refer to their res -------------------------------------------------- -# File: swarms\structs\round_robin_swarm.md +# File: swarms/structs/round_robin_swarm.md # RoundRobin: Round-Robin Task Execution in a Swarm -## Introduction - The `RoundRobinSwarm` class is designed to manage and execute tasks among multiple agents in a round-robin fashion. This approach ensures that each agent in a swarm receives an equal opportunity to execute tasks, which promotes fairness and efficiency in distributed systems. It is particularly useful in environments where collaborative, sequential task execution is needed among various agents. -## Conceptual Overview - -### What is Round-Robin? +## What is Round-Robin? Round-robin is a scheduling technique commonly used in computing for managing processes in shared systems. It involves assigning a fixed time slot to each process and cycling through all processes in a circular order without prioritization. In the context of swarms of agents, this method ensures equitable distribution of tasks and resource usage among all agents. -### Application in Swarms +## Application in Swarms In swarms, `RoundRobinSwarm` utilizes the round-robin scheduling to manage tasks among agents like software components, autonomous robots, or virtual entities. This strategy is beneficial where tasks are interdependent or require sequential processing. @@ -42656,73 +46772,57 @@ In swarms, `RoundRobinSwarm` utilizes the round-robin scheduling to manage tasks Initializes the swarm with the provided list of agents, verbosity setting, and operational parameters. **Parameters:** -- `agents`: Optional list of agents in the swarm. -- `verbose`: Boolean flag for detailed logging. -- `max_loops`: Maximum number of execution cycles. -- `callback`: Optional function called after each loop. +| Parameter | Type | Description | +|-------------|---------------------|-----------------------------------------------------| +| agents | List[Agent], optional | List of agents in the swarm. | +| verbose | bool | Boolean flag for detailed logging. 
| +| max_loops | int | Maximum number of execution cycles. | +| callback | Callable, optional | Function called after each loop. | ### `run` Executes a specified task across all agents in a round-robin manner, cycling through each agent repeatedly for the number of specified loops. **Conceptual Behavior:** -- Distribute the task sequentially among all agents starting from the current index. -- Each agent processes the task and potentially modifies it or produces new output. -- After an agent completes its part of the task, the index moves to the next agent. -- This cycle continues until the specified maximum number of loops is completed. -- Optionally, a callback function can be invoked after each loop to handle intermediate results or perform additional actions. + +| Step | Description | +|------|-------------| +| 1 | Distribute the task sequentially among all agents starting from the current index. | +| 2 | Each agent processes the task and potentially modifies it or produces new output. | +| 3 | After an agent completes its part of the task, the index moves to the next agent. | +| 4 | This cycle continues until the specified maximum number of loops is completed. | +| 5 | Optionally, a callback function can be invoked after each loop to handle intermediate results or perform additional actions. | ## Examples -### Example 1: Load Balancing Among Servers In this example, `RoundRobinSwarm` is used to distribute network requests evenly among a group of servers. This is common in scenarios where load balancing is crucial for maintaining system responsiveness and scalability. ```python from swarms import Agent, RoundRobinSwarm -from swarm_models import OpenAIChat - - -# Initialize the LLM -llm = OpenAIChat() # Define sales agents sales_agent1 = Agent( agent_name="Sales Agent 1 - Automation Specialist", system_prompt="You're Sales Agent 1, your purpose is to generate sales for a company by focusing on the benefits of automating accounting processes!", agent_description="Generate sales by focusing on the benefits of automation!", - llm=llm, + model_name="gpt-4.1", max_loops=1, - autosave=True, - dashboard=False, - verbose=True, - streaming_on=True, - context_length=1000, ) sales_agent2 = Agent( agent_name="Sales Agent 2 - Cost Saving Specialist", system_prompt="You're Sales Agent 2, your purpose is to generate sales for a company by emphasizing the cost savings of using swarms of agents!", agent_description="Generate sales by emphasizing cost savings!", - llm=llm, + model_name="gpt-4.1", max_loops=1, - autosave=True, - dashboard=False, - verbose=True, - streaming_on=True, - context_length=1000, ) sales_agent3 = Agent( agent_name="Sales Agent 3 - Efficiency Specialist", system_prompt="You're Sales Agent 3, your purpose is to generate sales for a company by highlighting the efficiency and accuracy of our swarms of agents in accounting processes!", agent_description="Generate sales by highlighting efficiency and accuracy!", - llm=llm, + model_name="gpt-4.1", max_loops=1, - autosave=True, - dashboard=False, - verbose=True, - streaming_on=True, - context_length=1000, ) # Initialize the swarm with sales agents @@ -42731,14 +46831,11 @@ sales_swarm = RoundRobinSwarm(agents=[sales_agent1, sales_agent2, sales_agent3], # Define a sales task task = "Generate a sales email for an accountant firm executive to sell swarms of agents to automate their accounting processes." 
-# Distribute sales tasks to different agents -for _ in range(5): # Repeat the task 5 times - results = sales_swarm.run(task) - print("Sales generated:", results) +out = sales_swarm.run(task) +print(out) ``` - ## Conclusion The RoundRobinSwarm class provides a robust and flexible framework for managing tasks among multiple agents in a fair and efficient manner. This class is especially useful in environments where tasks need to be distributed evenly among a group of agents, ensuring that all tasks are handled timely and effectively. Through the round-robin algorithm, each agent in the swarm is guaranteed an equal opportunity to contribute to the overall task, promoting efficiency and collaboration. @@ -42746,49 +46843,82 @@ The RoundRobinSwarm class provides a robust and flexible framework for managing -------------------------------------------------- -# File: swarms\structs\sequential_workflow.md +# File: swarms/structs/sequential_workflow.md # SequentialWorkflow Documentation **Overview:** -A Sequential Swarm architecture processes tasks in a linear sequence. Each agent completes its task before passing the result to the next agent in the chain. This architecture ensures orderly processing and is useful when tasks have dependencies. [Learn more here in the docs:](https://docs.swarms.world/en/latest/swarms/structs/agent_rearrange/) +A Sequential Swarm architecture processes tasks in a linear sequence. Each agent completes its task before passing the result to the next agent in the chain. This architecture ensures orderly processing and is useful when tasks have dependencies. The system now includes **sequential awareness** features that allow agents to know about the agents ahead and behind them in the workflow, significantly enhancing coordination and context understanding. [Learn more here in the docs:](https://docs.swarms.world/en/latest/swarms/structs/agent_rearrange/) **Use-Cases:** - Workflows where each step depends on the previous one, such as assembly lines or sequential data processing. - - Scenarios requiring strict order of operations. +- **NEW**: Enhanced workflows where agents need context about their position in the sequence for better coordination. ```mermaid graph TD A[First Agent] --> B[Second Agent] B --> C[Third Agent] C --> D[Fourth Agent] + + style A fill:#e1f5fe + style B fill:#f3e5f5 + style C fill:#e8f5e8 + style D fill:#fff3e0 + + A -.->|"Awareness: None (first)"| A + B -.->|"Awareness: Ahead: A, Behind: C"| B + C -.->|"Awareness: Ahead: B, Behind: D"| C + D -.->|"Awareness: Ahead: C, Behind: None (last)"| D ``` +## **Sequential Awareness Feature** + +The SequentialWorkflow now includes a powerful **sequential awareness** feature that automatically provides each agent with context about their position in the workflow: + +### What Agents Know Automatically + +- **Agent ahead**: The agent that completed their task before them +- **Agent behind**: The agent that will receive their output next +- **Workflow position**: Their step number and role in the sequence + +### Benefits + +1. **Better Coordination**: Agents can reference previous work and prepare output for the next step +2. **Context Understanding**: Each agent knows their role in the larger workflow +3. **Improved Quality**: Output is tailored for the next agent in the sequence +4. 
**Enhanced Logging**: Better tracking of agent interactions and workflow progress + ## Attributes | Attribute | Type | Description | |------------------|---------------|--------------------------------------------------| | `agents` | `List[Agent]` | The list of agents in the workflow. | | `flow` | `str` | A string representing the order of agents. | -| `agent_rearrange`| `AgentRearrange` | Manages the dynamic execution of agents. | +| `agent_rearrange`| `AgentRearrange` | Manages the dynamic execution of agents with sequential awareness. | +| `team_awareness` | `bool` | **NEW**: Enables sequential awareness features. Defaults to `False`. | +| `time_enabled` | `bool` | **NEW**: Enables timestamps in conversation. Defaults to `False`. | +| `message_id_on` | `bool` | **NEW**: Enables message IDs in conversation. Defaults to `False`. | ## Methods -### `__init__(self, agents: List[Agent] = None, max_loops: int = 1, *args, **kwargs)` +### `__init__(self, agents: List[Agent] = None, max_loops: int = 1, team_awareness: bool = False, time_enabled: bool = False, message_id_on: bool = False, *args, **kwargs)` -The constructor initializes the `SequentialWorkflow` object. +The constructor initializes the `SequentialWorkflow` object with enhanced sequential awareness capabilities. - **Parameters:** - `agents` (`List[Agent]`, optional): The list of agents in the workflow. Defaults to `None`. - `max_loops` (`int`, optional): The maximum number of loops to execute the workflow. Defaults to `1`. + - `team_awareness` (`bool`, optional): **NEW**: Enables sequential awareness features. Defaults to `False`. + - `time_enabled` (`bool`, optional): **NEW**: Enables timestamps in conversation. Defaults to `False`. + - `message_id_on` (`bool`, optional): **NEW**: Enables message IDs in conversation. Defaults to `False`. - `*args`: Variable length argument list. - `**kwargs`: Arbitrary keyword arguments. ### `run(self, task: str) -> str` -Runs the specified task through the agents in the dynamically constructed flow. +Runs the specified task through the agents in the dynamically constructed flow with enhanced sequential awareness. - **Parameters:** - `task` (`str`): The task for the agents to execute. @@ -42796,10 +46926,28 @@ Runs the specified task through the agents in the dynamically constructed flow. - **Returns:** - `str`: The final result after processing through all agents. -## **Usage Example:** +### **NEW: Sequential Awareness Methods** -```python +#### `get_agent_sequential_awareness(self, agent_name: str) -> str` + +Gets the sequential awareness information for a specific agent, showing which agents come before and after in the sequence. +- **Parameters:** + - `agent_name` (`str`): The name of the agent to get awareness for. + +- **Returns:** + - `str`: A string describing the agents ahead and behind in the sequence. + +#### `get_sequential_flow_structure(self) -> str` + +Gets the overall sequential flow structure information showing the complete workflow with relationships between agents. + +- **Returns:** + - `str`: A string describing the complete sequential flow structure. 
+ +## **Usage Example with Sequential Awareness:** + +```python from swarms import Agent, SequentialWorkflow # Initialize agents for individual tasks @@ -42815,45 +46963,180 @@ agent2 = Agent( model_name="gpt-4o", max_loops=1, ) +agent3 = Agent( + agent_name="ICD-10 Code Validator", + system_prompt="Validate and finalize the ICD-10 code recommendations.", + model_name="gpt-4o", + max_loops=1, +) -# Create the Sequential workflow +# Create the Sequential workflow with enhanced awareness workflow = SequentialWorkflow( - agents=[agent1, agent2], max_loops=1, verbose=False + agents=[agent1, agent2, agent3], + max_loops=1, + verbose=False, + team_awareness=True, # Enable sequential awareness + time_enabled=True, # Enable timestamps + message_id_on=True # Enable message IDs ) +# Get workflow structure information +flow_structure = workflow.get_sequential_flow_structure() +print("Workflow Structure:") +print(flow_structure) + +# Get awareness for specific agents +analyzer_awareness = workflow.get_agent_sequential_awareness("ICD-10 Code Analyzer") +summarizer_awareness = workflow.get_agent_sequential_awareness("ICD-10 Code Summarizer") +validator_awareness = workflow.get_agent_sequential_awareness("ICD-10 Code Validator") + +print(f"\nAnalyzer Awareness: {analyzer_awareness}") +print(f"Summarizer Awareness: {summarizer_awareness}") +print(f"Validator Awareness: {validator_awareness}") + # Run the workflow -workflow.run( +result = workflow.run( "Analyze the medical report and provide the appropriate ICD-10 codes." ) +print(f"\nFinal Result: {result}") +``` + +**Expected Output:** +``` +Workflow Structure: +Sequential Flow Structure: +Step 1: ICD-10 Code Analyzer +Step 2: ICD-10 Code Summarizer (follows: ICD-10 Code Analyzer) (leads to: ICD-10 Code Validator) +Step 3: ICD-10 Code Validator (follows: ICD-10 Code Summarizer) +Analyzer Awareness: +Summarizer Awareness: Sequential awareness: Agent ahead: ICD-10 Code Analyzer | Agent behind: ICD-10 Code Validator +Validator Awareness: Sequential awareness: Agent ahead: ICD-10 Code Summarizer ``` -This example initializes a `SequentialWorkflow` with three agents and executes a task, printing the final result. +## **How Sequential Awareness Works** + +### 1. **Automatic Context Injection** +When `team_awareness=True`, the system automatically adds awareness information to each agent's conversation context before they run: -## **Notes:** +- **First Agent**: No awareness info (starts the workflow) +- **Middle Agents**: Receive info about both the agent ahead and behind +- **Last Agent**: Receives info about the agent ahead only -- Logs the task execution process and handles any exceptions that occur during the task execution. +### 2. **Enhanced Agent Prompts** +Each agent receives context like: +``` +Sequential awareness: Agent ahead: ICD-10 Code Analyzer | Agent behind: ICD-10 Code Validator +``` + +### 3. **Improved Coordination** +Agents can now: +- Reference previous work more effectively +- Prepare output specifically for the next agent +- Understand their role in the larger workflow +- Provide better context for subsequent steps + +## **Advanced Usage Examples** + +### **Example 1: Research → Analysis → Report Workflow** +```python +# Create specialized agents +researcher = Agent( + agent_name="Researcher", + system_prompt="Conduct thorough research on the given topic." +) + +analyzer = Agent( + agent_name="Data Analyzer", + system_prompt="Analyze research data and identify key insights." 
+) + +reporter = Agent( + agent_name="Report Writer", + system_prompt="Write comprehensive reports based on analysis." +) + +# Create workflow with awareness +workflow = SequentialWorkflow( + agents=[researcher, analyzer, reporter], + team_awareness=True, + time_enabled=True +) + +# Run with enhanced coordination +result = workflow.run("Research and analyze the impact of AI on healthcare") +``` + +### **Example 2: Code Review Workflow** +```python +# Create code review agents +linter = Agent( + agent_name="Code Linter", + system_prompt="Check code for syntax errors and style violations." +) + +reviewer = Agent( + agent_name="Code Reviewer", + system_prompt="Review code quality and suggest improvements." +) + +tester = Agent( + agent_name="Code Tester", + system_prompt="Write and run tests for the reviewed code." +) + +# Create workflow +workflow = SequentialWorkflow( + agents=[linter, reviewer, tester], + team_awareness=True +) + +# Run code review process +result = workflow.run("Review and test the authentication module") +``` + +## **Notes:** + +- **Enhanced Logging**: The workflow now logs sequential awareness information for better debugging and monitoring. +- **Automatic Context**: No manual configuration needed - awareness is automatically provided when `team_awareness=True`. +- **Backward Compatibility**: Existing workflows continue to work without changes. +- **Performance**: Sequential awareness adds minimal overhead while significantly improving coordination. ### Logging and Error Handling -The `run` method includes logging to track the execution flow and captures errors to provide detailed information in case of failures. This is crucial for debugging and ensuring smooth operation of the workflow. +The `run` method now includes enhanced logging to track the sequential awareness flow and captures detailed information about agent interactions: + +```bash +2023-05-08 10:30:15.456 | INFO | SequentialWorkflow:run:45 - Starting sequential workflow execution +2023-05-08 10:30:15.457 | INFO | SequentialWorkflow:run:52 - Added sequential awareness for ICD-10 Code Summarizer: Sequential awareness: Agent ahead: ICD-10 Code Analyzer | Agent behind: ICD-10 Code Validator +2023-05-08 10:30:15.458 | INFO | SequentialWorkflow:run:52 - Added sequential awareness for ICD-10 Code Validator: Sequential awareness: Agent ahead: ICD-10 Code Summarizer +``` ## Additional Tips -- Ensure that the agents provided to the `SequentialWorkflow` are properly initialized and configured to handle the tasks they will receive. +- **Enable Team Awareness**: Set `team_awareness=True` to unlock the full potential of sequential coordination. +- **Use Descriptive Agent Names**: Clear agent names make the awareness information more useful. +- **Monitor Logs**: Enhanced logging provides insights into how agents are coordinating. +- **Iterative Improvement**: Use the awareness features to refine agent prompts and improve workflow quality. + +## **Benefits of Sequential Awareness** -- The `max_loops` parameter can be used to control how many times the workflow should be executed, which is useful for iterative processes. +1. **Improved Quality**: Agents produce better output when they understand their context +2. **Better Coordination**: Reduced redundancy and improved handoffs between agents +3. **Enhanced Debugging**: Clear visibility into agent interactions and workflow progress +4. **Scalable Workflows**: Easy to add new agents while maintaining coordination +5. 
**Professional Workflows**: Mimics real-world team collaboration patterns -- Utilize the logging information to monitor and debug the task execution process. +The SequentialWorkflow with sequential awareness represents a significant advancement in multi-agent coordination, enabling more sophisticated and professional workflows that closely mirror human team collaboration patterns. -------------------------------------------------- -# File: swarms\structs\spreadsheet_swarm.md +# File: swarms/structs/spreadsheet_swarm.md # SpreadSheetSwarm Documentation ---- + ## Class Definition @@ -43042,20 +47325,10 @@ swarm._save_to_csv() ```python import os -from swarms import Agent -from swarm_models import OpenAIChat +from swarms import Agent, SpreadSheetSwarm from swarms.prompts.finance_agent_sys_prompt import ( FINANCIAL_AGENT_SYS_PROMPT, ) -from swarms.structs.spreadsheet_swarm import SpreadSheetSwarm - -# Example usage: -api_key = os.getenv("OPENAI_API_KEY") - -# Model -model = OpenAIChat( - openai_api_key=api_key, model_name="gpt-4o-mini", temperature=0.1 -) # Initialize your agents (assuming the Agent class and model are already defined) @@ -43063,7 +47336,7 @@ agents = [ Agent( agent_name=f"Financial-Analysis-Agent-spreesheet-swarm:{i}", system_prompt=FINANCIAL_AGENT_SYS_PROMPT, - llm=model, + model_name="gpt-4.1", max_loops=1, dynamic_temperature_enabled=True, saved_state_path="finance_agent.json", @@ -43095,9 +47368,7 @@ swarm.run( ```python import os -from swarms import Agent -from swarm_models import OpenAIChat -from swarms.structs.spreadsheet_swarm import SpreadSheetSwarm +from swarms import Agent, SpreadSheetSwarm # Define custom system prompts for QR code generation QR_CODE_AGENT_1_SYS_PROMPT = """ @@ -43108,20 +47379,13 @@ QR_CODE_AGENT_2_SYS_PROMPT = """ You are a Python coding expert. Your task is to write a Python script to generate a QR code for the link: https://github.com/The-Swarm-Corporation/Cookbook. The code should save the QR code as an image file. 
""" -# Example usage: -api_key = os.getenv("OPENAI_API_KEY") - -# Model -model = OpenAIChat( - openai_api_key=api_key, model_name="gpt-4o-mini", temperature=0.1 -) # Initialize your agents for QR code generation agents = [ Agent( agent_name="QR-Code-Generator-Agent-Luma", system_prompt=QR_CODE_AGENT_1_SYS_PROMPT, - llm=model, + model_name="gpt-4.1", max_loops=1, dynamic_temperature_enabled=True, saved_state_path="qr_code_agent_luma.json", @@ -43131,7 +47395,7 @@ agents = [ Agent( agent_name="QR-Code-Generator-Agent-Cookbook", system_prompt=QR_CODE_AGENT_2_SYS_PROMPT, - llm=model, + model_name="gpt-4.1", max_loops=1, dynamic_temperature_enabled=True, saved_state_path="qr_code_agent_cookbook.json", @@ -43163,9 +47427,7 @@ swarm.run( ```python import os -from swarms import Agent -from swarm_models import OpenAIChat -from swarms.structs.spreadsheet_swarm import SpreadSheetSwarm +from swarms import Agent, SpreadSheetSwarm # Define custom system prompts for each social media platform TWITTER_AGENT_SYS_PROMPT = """ @@ -43197,7 +47459,7 @@ agents = [ Agent( agent_name="Twitter-Marketing-Agent", system_prompt=TWITTER_AGENT_SYS_PROMPT, - llm=model, + model_name="gpt-4.1", max_loops=1, dynamic_temperature_enabled=True, saved_state_path="twitter_agent.json", @@ -43207,7 +47469,7 @@ agents = [ Agent( agent_name="Instagram-Marketing-Agent", system_prompt=INSTAGRAM_AGENT_SYS_PROMPT, - llm=model, + model_name="gpt-4.1", max_loops=1, dynamic_temperature_enabled=True, saved_state_path="instagram_agent.json", @@ -43217,7 +47479,7 @@ agents = [ Agent( agent_name="Facebook-Marketing-Agent", system_prompt=FACEBOOK_AGENT_SYS_PROMPT, - llm=model, + model_name="gpt-4.1", max_loops=1, dynamic_temperature_enabled=True, saved_state_path="facebook_agent.json", @@ -43227,7 +47489,7 @@ agents = [ Agent( agent_name="Email-Marketing-Agent", system_prompt=EMAIL_AGENT_SYS_PROMPT, - llm=model, + model_name="gpt-4.1", max_loops=1, dynamic_temperature_enabled=True, saved_state_path="email_agent.json", @@ -43257,30 +47519,19 @@ swarm.run( ## Additional Information and Tips -- **Thread Synchronization**: When working with multiple agents in a concurrent environment, it's crucial to ensure that access to shared resources is properly synchronized using locks to avoid race conditions. - -- **Autosave Feature**: If you enable the `autosave_on` flag, ensure that the file path provided is correct and writable. This feature is handy for long-running tasks where you want to periodically save the state. - -- **Error Handling** - -: Implementing proper error handling within your agents can prevent the swarm from crashing during execution. Consider catching exceptions in the `run` method and logging errors appropriately. - -- **Custom Agents**: You can extend the `Agent` class to create custom agents that perform specific tasks tailored to your application's needs. +| Tip/Feature | Description | +|------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| **Thread Synchronization** | When working with multiple agents in a concurrent environment, it's crucial to ensure that access to shared resources is properly synchronized using locks to avoid race conditions. | +| **Autosave Feature** | If you enable the `autosave_on` flag, ensure that the file path provided is correct and writable. 
This feature is handy for long-running tasks where you want to periodically save the state. | +| **Error Handling** | Implementing proper error handling within your agents can prevent the swarm from crashing during execution. Consider catching exceptions in the `run` method and logging errors appropriately. | +| **Custom Agents** | You can extend the `Agent` class to create custom agents that perform specific tasks tailored to your application's needs. | --- -## References and Resources - -- [Python's `queue` module](https://docs.python.org/3/library/queue.html) -- [Python's `threading` module](https://docs.python.org/3/library/threading.html) -- [CSV File Handling in Python](https://docs.python.org/3/library/csv.html) -- [JSON Handling in Python](https://docs.python.org/3/library/json.html) - - -------------------------------------------------- -# File: swarms\structs\swarm_matcher.md +# File: swarms/structs/swarm_matcher.md # SwarmMatcher @@ -43566,7 +47817,7 @@ This approach ensures that the matcher can understand the semantic meaning of ta -------------------------------------------------- -# File: swarms\structs\swarm_network.md +# File: swarms/structs/swarm_network.md # SwarmNetwork [WIP] @@ -44277,12 +48528,14 @@ By following this documentation, users can effectively manage and utilize the `S -------------------------------------------------- -# File: swarms\structs\swarm_rearrange.md +# File: swarms/structs/swarm_rearrange.md # SwarmRearrange Documentation SwarmRearrange is a class for orchestrating multiple swarms in a sequential or parallel flow pattern. It provides thread-safe operations for managing swarm execution, history tracking, and flow validation. +Full Path: `from swarms.structs.swarm_rearrange import SwarmRearrange` + ## Constructor Arguments | Parameter | Type | Default | Description | @@ -44319,7 +48572,9 @@ Executes the swarm arrangement according to the flow pattern. The flow pattern uses arrow notation (`->`) to define execution order: - Sequential: `"SwarmA -> SwarmB -> SwarmC"` + - Parallel: `"SwarmA, SwarmB -> SwarmC"` + - Human intervention: Use `"H"` in the flow ## Examples @@ -44327,25 +48582,12 @@ The flow pattern uses arrow notation (`->`) to define execution order: ### Basic Sequential Flow ```python -from swarms.structs.swarm_rearrange import SwarmRearrange import os -from swarms import Agent, AgentRearrange -from swarm_models import OpenAIChat +from swarms import Agent, AgentRearrange, SwarmRearrange # model = Anthropic(anthropic_api_key=os.getenv("ANTHROPIC_API_KEY")) company = "TGSC" -# Get the OpenAI API key from the environment variable -api_key = os.getenv("GROQ_API_KEY") - -# Model -model = OpenAIChat( - openai_api_base="https://api.groq.com/openai/v1", - openai_api_key=api_key, - model_name="llama-3.1-70b-versatile", - temperature=0.1, -) - # Initialize the Managing Director agent managing_director = Agent( @@ -44360,14 +48602,8 @@ managing_director = Agent( For the current potential acquisition of {company}, direct the tasks for the team to thoroughly analyze all aspects of the company, including its financials, industry position, technology, market potential, and regulatory compliance. Provide guidance and feedback as needed to ensure a rigorous and unbiased assessment. 
""", - llm=model, + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="managing-director.json", ) # Initialize the Vice President of Finance @@ -44384,14 +48620,8 @@ vp_finance = Agent( Be sure to consider factors such as the sustainability of {company}' business model, the strength of its customer base, and its ability to generate consistent cash flows. Your analysis should be data-driven, objective, and aligned with Blackstone's investment criteria. """, - llm=model, + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="vp-finance.json", ) # Initialize the Industry Analyst @@ -44408,14 +48638,8 @@ industry_analyst = Agent( Your analysis should provide a clear and objective assessment of the attractiveness and future potential of the industrial robotics industry, as well as {company}' positioning within it. Consider both short-term and long-term factors, and provide evidence-based insights to inform the investment decision. """, - llm=model, + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="industry-analyst.json", ) # Initialize the Technology Expert @@ -44432,14 +48656,8 @@ tech_expert = Agent( Your analysis should provide a comprehensive assessment of {company}' technological strengths and weaknesses, as well as the sustainability of its competitive advantages. Consider both the current state of its technology and its future potential in light of industry trends and advancements. """, - llm=model, + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="tech-expert.json", ) # Initialize the Market Researcher @@ -44456,14 +48674,9 @@ market_researcher = Agent( Your analysis should provide a data-driven assessment of the market opportunity for {company} and the feasibility of achieving our investment return targets. Consider both bottom-up and top-down market perspectives, and identify any key sensitivities or assumptions in your projections. """, - llm=model, + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="market-researcher.json", + ) # Initialize the Regulatory Specialist @@ -44480,14 +48693,8 @@ regulatory_specialist = Agent( Your analysis should provide a comprehensive assessment of the regulatory and legal landscape surrounding {company}, and identify any material risks or potential deal-breakers. Consider both the current state and future outlook, and provide practical recommendations to mitigate identified risks. 
""", - llm=model, + model_name="gpt-4.1", max_loops=1, - dashboard=False, - streaming_on=True, - verbose=True, - stopping_token="", - state_save_file_type="json", - saved_state_path="regulatory-specialist.json", ) # Create a list of agents @@ -44565,122 +48772,30 @@ arrangement = SwarmRearrange( result = arrangement.run("Initial task") ``` -### Complex Multi-Stage Pipeline - -```python -# Define multiple flow patterns -flows = [ - "Collector -> Processor -> Analyzer", - "Analyzer -> ML -> Validator", - "Validator -> Reporter" -] - -# Create arrangements for each flow -pipelines = [ - SwarmRearrange(name=f"Pipeline{i}", swarms=swarms, flow=flow) - for i, flow in enumerate(flows) -] - -# Create master arrangement -master = SwarmRearrange( - name="MasterPipeline", - swarms=pipelines, - flow="Pipeline0 -> Pipeline1 -> Pipeline2" -) - -# Execute complete pipeline -result = master.run("Start analysis") -``` ## Best Practices -1. **Flow Validation**: Always validate flows before execution -2. **Error Handling**: Implement try-catch blocks around run() calls -3. **History Tracking**: Use track_history() for monitoring swarm execution -4. **Resource Management**: Set appropriate max_loops to prevent infinite execution -5. **Logging**: Enable verbose mode during development for detailed logging - -## Error Handling - -The class implements comprehensive error handling: +| Best Practice | Description | +|------------------------|-----------------------------------------------------------------------------------------------| +| **Flow Validation** | Always validate flows before execution | +| **Error Handling** | Implement try-catch blocks around `run()` calls | +| **History Tracking** | Use `track_history()` for monitoring swarm execution | +| **Resource Management**| Set appropriate `max_loops` to prevent infinite execution | +| **Logging** | Enable verbose mode during development for detailed logging | -```python -try: - arrangement = SwarmRearrange(swarms=swarms, flow=flow) - result = arrangement.run(task) -except ValueError as e: - logger.error(f"Flow validation error: {e}") -except Exception as e: - logger.error(f"Execution error: {e}") -``` -------------------------------------------------- -# File: swarms\structs\swarm_router.md +# File: swarms/structs/swarm_router.md # SwarmRouter Documentation -The `SwarmRouter` class is a flexible routing system designed to manage different types of swarms for task execution. 
It provides a unified interface to interact with various swarm types, including: - -| Swarm Type | Description | -|------------|-------------| -| `AgentRearrange` | Optimizes agent arrangement for task execution | -| `MixtureOfAgents` | Combines multiple agent types for diverse tasks | -| `SpreadSheetSwarm` | Uses spreadsheet-like operations for task management | -| `SequentialWorkflow` | Executes tasks sequentially | -| `ConcurrentWorkflow` | Executes tasks in parallel | -| `GroupChat` | Facilitates communication among agents in a group chat format | -| `MultiAgentRouter` | Routes tasks between multiple agents | -| `AutoSwarmBuilder` | Automatically builds swarm structure | -| `HiearchicalSwarm` | Hierarchical organization of agents | -| `MajorityVoting` | Uses majority voting for decision making | -| `MALT` | Multi-Agent Language Tasks | -| `DeepResearchSwarm` | Specialized for deep research tasks | -| `CouncilAsAJudge` | Council-based judgment system | -| `InteractiveGroupChat` | Interactive group chat with user participation | -| `auto` | Automatically selects best swarm type via embedding search | - -## Classes - -### Document - -A Pydantic model for representing document data. - -| Attribute | Type | Description | -| --- | --- | --- | -| `file_path` | str | Path to the document file. | -| `data` | str | Content of the document. | - -### SwarmLog - -A Pydantic model for capturing log entries. - -| Attribute | Type | Description | -| --- | --- | --- | -| `id` | str | Unique identifier for the log entry. | -| `timestamp` | datetime | Time of log creation. | -| `level` | str | Log level (e.g., "info", "error"). | -| `message` | str | Log message content. | -| `swarm_type` | SwarmType | Type of swarm associated with the log. | -| `task` | str | Task being performed (optional). | -| `metadata` | Dict[str, Any] | Additional metadata (optional). | -| `documents` | List[Document] | List of documents associated with the log. | +The `SwarmRouter` class is a flexible routing system designed to manage different types of swarms for task execution. It provides a unified interface to interact with various swarm types. -### SwarmRouterConfig +Full Path: `from swarms.structs.swarm_router` -Configuration model for SwarmRouter. - -| Attribute | Type | Description | -| --- | --- | --- | -| `name` | str | Name identifier for the SwarmRouter instance | -| `description` | str | Description of the SwarmRouter's purpose | -| `swarm_type` | SwarmType | Type of swarm to use | -| `rearrange_flow` | Optional[str] | Flow configuration string | -| `rules` | Optional[str] | Rules to inject into every agent | -| `multi_agent_collab_prompt` | bool | Whether to enable multi-agent collaboration prompts | -| `task` | str | The task to be executed by the swarm | -### SwarmRouter +## Initialization Parameters Main class for routing tasks to different swarm types. @@ -44729,13 +48844,28 @@ Main class for routing tasks to different swarm types. | `concurrent_batch_run` | `tasks: List[str], *args, **kwargs` | Execute multiple tasks concurrently | -## Installation +## Available Swarm Types + +The `SwarmRouter` supports many various multi-agent architectures for various applications. 
+ +| Swarm Type | Description | +|------------|-------------| +| `AgentRearrange` | Optimizes agent arrangement for task execution | +| `MixtureOfAgents` | Combines multiple agent types for diverse tasks | +| `SpreadSheetSwarm` | Uses spreadsheet-like operations for task management | +| `SequentialWorkflow` | Executes tasks sequentially | +| `ConcurrentWorkflow` | Executes tasks in parallel | +| `GroupChat` | Facilitates communication among agents in a group chat format | +| `MultiAgentRouter` | Routes tasks between multiple agents | +| `AutoSwarmBuilder` | Automatically builds swarm structure | +| `HiearchicalSwarm` | Hierarchical organization of agents | +| `MajorityVoting` | Uses majority voting for decision making | +| `MALT` | Multi-Agent Language Tasks | +| `CouncilAsAJudge` | Council-based judgment system | +| `InteractiveGroupChat` | Interactive group chat with user participation | +| `auto` | Automatically selects best swarm type via embedding search | -To use the SwarmRouter, first install the required dependencies: -```bash -pip install swarms swarm_models -``` ## Basic Usage @@ -44743,20 +48873,6 @@ pip install swarms swarm_models import os from dotenv import load_dotenv from swarms import Agent, SwarmRouter, SwarmType -from swarm_models import OpenAIChat - -load_dotenv() - -# Get the OpenAI API key from the environment variable -api_key = os.getenv("GROQ_API_KEY") - -# Model -model = OpenAIChat( - openai_api_base="https://api.groq.com/openai/v1", - openai_api_key=api_key, - model_name="llama-3.1-70b-versatile", - temperature=0.1, -) # Define specialized system prompts for each agent DATA_EXTRACTOR_PROMPT = """You are a highly specialized private equity agent focused on data extraction from various documents. Your expertise includes: @@ -44779,31 +48895,15 @@ Deliver clear, concise summaries that capture the essence of various documents w data_extractor_agent = Agent( agent_name="Data-Extractor", system_prompt=DATA_EXTRACTOR_PROMPT, - llm=model, + model_name="gpt-4.1", max_loops=1, - autosave=True, - verbose=True, - dynamic_temperature_enabled=True, - saved_state_path="data_extractor_agent.json", - user_name="pe_firm", - retry_attempts=1, - context_length=200000, - output_type="string", ) summarizer_agent = Agent( agent_name="Document-Summarizer", system_prompt=SUMMARIZER_PROMPT, - llm=model, + model_name="gpt-4.1", max_loops=1, - autosave=True, - verbose=True, - dynamic_temperature_enabled=True, - saved_state_path="summarizer_agent.json", - user_name="pe_firm", - retry_attempts=1, - context_length=200000, - output_type="string", ) # Initialize the SwarmRouter @@ -44813,8 +48913,6 @@ router = SwarmRouter( max_loops=1, agents=[data_extractor_agent, summarizer_agent], swarm_type="ConcurrentWorkflow", - autosave=True, - return_json=True, ) # Example usage @@ -44824,10 +48922,6 @@ if __name__ == "__main__": "Where is the best place to find template term sheets for series A startups? 
Provide links and references" ) print(result) - - # Retrieve and print logs - for log in router.get_logs(): - print(f"{log.timestamp} - {log.level}: {log.message}") ``` ## Advanced Usage @@ -44864,40 +48958,6 @@ auto_router = SwarmRouter( result = auto_router.run("Analyze and summarize the quarterly financial report") ``` -### Loading Agents from CSV - -To load agents from a CSV file: - -```python -csv_router = SwarmRouter( - name="CSVAgentRouter", - load_agents_from_csv=True, - csv_file_path="agents.csv", - swarm_type="SequentialWorkflow" -) - -result = csv_router.run("Process the client data") -``` - -### Using Shared Memory System - -To enable shared memory across agents: - -```python -from swarms.memory import SemanticMemory - -memory_system = SemanticMemory() - -memory_router = SwarmRouter( - name="MemoryRouter", - agents=[agent1, agent2], - shared_memory_system=memory_system, - swarm_type="SequentialWorkflow" -) - -result = memory_router.run("Analyze historical data and make predictions") -``` - ### Injecting Rules to All Agents To inject common rules into all agents: @@ -45075,6 +49135,7 @@ result = voting_router.run("Should we invest in Company X based on the available ``` ### Auto Select (Experimental) + Autonomously selects the right swarm by conducting vector search on your input task or name or description or all 3. ```python @@ -45173,25 +49234,10 @@ router = SwarmRouter( result = router("Analyze the market data") # Equivalent to router.run("Analyze the market data") ``` -### Using the swarm_router Function - -For quick one-off tasks, you can use the swarm_router function: - -```python -from swarms import swarm_router - -result = swarm_router( - name="QuickRouter", - agents=[agent1, agent2], - swarm_type="ConcurrentWorkflow", - task="Analyze the quarterly report" -) -``` - -------------------------------------------------- -# File: swarms\structs\task.md +# File: swarms/structs/task.md # Task Class Documentation @@ -45536,7 +49582,7 @@ print(f"Task 2 context: {task2.history}") -------------------------------------------------- -# File: swarms\structs\taskqueue_swarm.md +# File: swarms/structs/taskqueue_swarm.md # TaskQueueSwarm Documentation @@ -45632,7 +49678,7 @@ Exports the swarm run metadata as a JSON string. -------------------------------------------------- -# File: swarms\structs\various_execution_methods.md +# File: swarms/structs/various_execution_methods.md # Concurrent Agents API Reference @@ -45808,7 +49854,7 @@ Runs agents with system resource monitoring and adaptive batch sizing. 
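The helpers in this reference share a common calling pattern: pass a list of `Agent` objects together with a task string and collect the returned outputs. A rough sketch is shown below; the helper name, import path, and signature (`run_agents_concurrently` from `swarms.structs.multi_agent_exec`) are assumptions and should be checked against the tables above.

```python
from swarms import Agent
# Helper name and import path assumed; verify against the API reference above.
from swarms.structs.multi_agent_exec import run_agents_concurrently

# A small pool of independent agents that will each receive the same task
agents = [
    Agent(
        agent_name=f"Market-Analyst-{i}",
        system_prompt="Analyze the given market question and answer concisely.",
        model_name="gpt-4.1",
        max_loops=1,
    )
    for i in range(3)
]

# Run every agent on the task concurrently and collect their outputs
results = run_agents_concurrently(agents, "Assess the near-term outlook for gold ETFs.")

for agent, result in zip(agents, results):
    print(f"{agent.agent_name}: {result}")
```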
-------------------------------------------------- -# File: swarms\structs\yaml_model.md +# File: swarms/structs/yaml_model.md # YamlModel: A Pydantic Model for YAML Data @@ -46062,7 +50108,7 @@ The `YamlModel` class in Pydantic offers a streamlined approach to working with -------------------------------------------------- -# File: swarms\support.md +# File: swarms/support.md # Technical Support @@ -46444,7 +50490,7 @@ Help improve support for everyone: -------------------------------------------------- -# File: swarms\tools\base_tool.md +# File: swarms/tools/base_tool.md # BaseTool Class Documentation @@ -47269,7 +51315,7 @@ The BaseTool class defines several custom exception classes for better error han -------------------------------------------------- -# File: swarms\tools\build_tool.md +# File: swarms/tools/build_tool.md # Swarms Tool Documentation @@ -47814,7 +51860,7 @@ print(response) -------------------------------------------------- -# File: swarms\tools\main.md +# File: swarms/tools/main.md # The Swarms Tool System: Functions, Pydantic BaseModels as Tools, and Radical Customization @@ -48206,7 +52252,7 @@ This guide has covered the fundamental concepts and provided detailed examples t -------------------------------------------------- -# File: swarms\tools\mcp_client_call.md +# File: swarms/tools/mcp_client_call.md # MCP Client Call Reference Documentation @@ -48456,7 +52502,7 @@ The MCP client functions use a retry mechanism with exponential backoff for fail -------------------------------------------------- -# File: swarms\tools\tool_storage.md +# File: swarms/tools/tool_storage.md # ToolStorage @@ -48665,7 +52711,7 @@ The `ToolStorage` module provides a robust solution for managing tool functions -------------------------------------------------- -# File: swarms\tools\tools_examples.md +# File: swarms/tools/tools_examples.md # Swarms Tools Documentation @@ -49274,7 +53320,7 @@ return json.dumps({"result": data}, indent=2) -------------------------------------------------- -# File: swarms\ui\main.md +# File: swarms/ui/main.md # Swarms Chat UI Documentation @@ -49556,7 +53602,796 @@ This documentation is designed to provide clarity, reliability, and comprehensiv -------------------------------------------------- -# File: swarms_cloud\add_agent.md +# File: swarms/utils/agent_loader.md + +# AgentLoader Documentation + +The `AgentLoader` is a comprehensive utility for creating Swarms agents from various file formats including Markdown, YAML, and CSV files. It provides a unified interface for loading agents with support for concurrent processing, configuration overrides, and automatic file type detection. + +## Overview + +The AgentLoader enables you to: + +- Load agents from Markdown files +- Load agents from YAML configuration files +- Load agents from CSV files +- Automatically detect file types and use appropriate loaders +- Process multiple files concurrently for improved performance +- Override default configurations with custom parameters +- Handle various agent configurations and settings + +## Installation + +The AgentLoader is included with the Swarms framework: + +```python +from swarms.structs import AgentLoader +from swarms.utils import load_agent_from_markdown, load_agents_from_markdown +``` + +## Supported File Formats + +### 1. 
Markdown Files (Claude Code Format) + +The primary format uses YAML frontmatter with markdown content: + +```markdown +--- +name: FinanceAdvisor +description: Expert financial advisor for investment and budgeting guidance +model_name: claude-sonnet-4-20250514 +temperature: 0.7 +max_loops: 1 +mcp_url: http://example.com/mcp # optional +--- + +You are an expert financial advisor with deep knowledge in: +- Investment strategies and portfolio management +- Personal budgeting and financial planning +- Risk assessment and diversification +- Tax optimization strategies +- Retirement planning + +Your approach: +- Provide clear, actionable financial advice +- Consider individual risk tolerance and goals +- Explain complex concepts in simple terms +- Always emphasize the importance of diversification +- Include relevant disclaimers about financial advice + +When analyzing financial situations: +1. Assess current financial position +2. Identify short-term and long-term goals +3. Evaluate risk tolerance +4. Recommend appropriate strategies +5. Suggest specific action steps +``` + +**Schema Fields:** + +| Field | Type | Required | Default | Description | +|-------|------|----------|---------|-------------| +| `name` | string | ✅ Yes | - | Your agent name | +| `description` | string | ✅ Yes | - | Description of the agent's role and capabilities | +| `model_name` | string | ❌ No | "gpt-4.1" | Name of the model to use | +| `temperature` | float | ❌ No | 0.1 | Model temperature (0.0-2.0) | +| `max_loops` | integer | ❌ No | 1 | Maximum reasoning loops | +| `mcp_url` | string | ❌ No | None | MCP server URL if needed | +| `streaming_on` | boolean | ❌ No | False | Enable streaming output | + +### 2. YAML Files + +YAML configuration files for agent definitions: + +```yaml +agents: + - name: "ResearchAgent" + description: "Research and analysis specialist" + model_name: "gpt-4" + temperature: 0.3 + max_loops: 2 + system_prompt: "You are a research specialist..." +``` + +### 3. CSV Files + +CSV files with agent configurations: + +```csv +name,description,model_name,temperature,max_loops +ResearchAgent,Research specialist,gpt-4,0.3,2 +AnalysisAgent,Data analyst,claude-3,0.1,1 +``` + +## Quick Start + +### Loading a Single Agent + +```python +from swarms.structs import AgentLoader + +# Initialize the loader +loader = AgentLoader() + +# Load agent from markdown file +agent = loader.load_agent_from_markdown("finance_advisor.md") + +# Use the agent +response = agent.run( + "I have $10,000 to invest. What's a good strategy for a beginner?" +) +``` + +### Loading Multiple Agents + +```python +from swarms.structs import AgentLoader + +loader = AgentLoader() + +# Load agents from list of files with concurrent processing +agents = loader.load_agents_from_markdown([ + "market_researcher.md", + "financial_analyst.md", + "risk_analyst.md" +], concurrent=True) # Uses all CPU cores for faster loading + +# Use agents in a workflow +from swarms.structs import SequentialWorkflow + +workflow = SequentialWorkflow( + agents=agents, + max_loops=1 +) + +task = "Analyze the AI healthcare market for a $50M investment." 
+result = workflow.run(task) +``` + +### Automatic File Type Detection + +```python +from swarms.structs import AgentLoader + +loader = AgentLoader() + +# Automatically detect file type and load appropriately +agents = loader.auto("agents.yaml") # YAML file +agents = loader.auto("agents.csv") # CSV file +agents = loader.auto("agents.md") # Markdown file +``` + +## Class-Based Usage + +### AgentLoader Class + +For more advanced usage, use the `AgentLoader` class directly: + +```python +from swarms.structs import AgentLoader + +# Initialize loader +loader = AgentLoader() + +# Load single agent +agent = loader.load_single_agent("path/to/agent.md") + +# Load multiple agents with concurrent processing +agents = loader.load_multiple_agents( + "./agents_directory/", + concurrent=True, # Enable concurrent processing + max_file_size_mb=10.0 # Limit file size for memory safety +) + +# Parse markdown file without creating agent +config = loader.parse_markdown_file("path/to/agent.md") +print(config.name, config.description) +``` + +## Configuration Options + +You can override default configuration when loading agents: + +```python +agent = loader.load_agent_from_markdown( + file_path="agent.md", + max_loops=5, + verbose=True, + dashboard=True, + autosave=False, + context_length=200000, + temperature=0.5 +) +``` + +### Available Configuration Parameters + +| Parameter | Type | Default | Description | +|-----------|------|---------|-------------| +| `max_loops` | int | 1 | Maximum number of reasoning loops | +| `autosave` | bool | False | Enable automatic state saving | +| `dashboard` | bool | False | Enable dashboard monitoring | +| `verbose` | bool | False | Enable verbose logging | +| `dynamic_temperature_enabled` | bool | False | Enable dynamic temperature | +| `saved_state_path` | str | None | Path for saving agent state | +| `user_name` | str | "default_user" | User identifier | +| `retry_attempts` | int | 3 | Number of retry attempts | +| `context_length` | int | 100000 | Maximum context length | +| `return_step_meta` | bool | False | Return step metadata | +| `output_type` | str | "str" | Output format type | +| `auto_generate_prompt` | bool | False | Auto-generate prompts | +| `streaming_on` | bool | False | Enable streaming output | +| `mcp_url` | str | None | MCP server URL if needed | + +## Advanced Features + +### Concurrent Processing + +The AgentLoader utilizes multiple CPU cores for concurrent agent loading: + +```python +from swarms.structs import AgentLoader + +loader = AgentLoader() + +# Automatic concurrent processing for multiple files +agents = loader.load_agents_from_markdown([ + "agent1.md", "agent2.md", "agent3.md", "agent4.md" +]) # concurrent=True by default + +# Manual control over concurrency +agents = loader.load_agents_from_markdown( + "./agents_directory/", + concurrent=True, # Enable concurrent processing + max_file_size_mb=5.0 # Limit file size for memory safety +) + +# Disable concurrency for debugging or single files +agents = loader.load_agents_from_markdown( + ["single_agent.md"], + concurrent=False # Sequential processing +) +``` + +### File Size Validation + +```python +# Set maximum file size to prevent memory issues +agents = loader.load_agents_from_markdown( + "./agents_directory/", + max_file_size_mb=5.0 # Skip files larger than 5MB +) +``` + +### Multiple File Type Support + +```python +from swarms.structs import AgentLoader + +loader = AgentLoader() + +# Load from different file types +yaml_agents = loader.load_agents_from_yaml("agents.yaml") +csv_agents = 
loader.load_agents_from_csv("agents.csv") +md_agents = loader.load_agents_from_markdown("agents.md") + +# Load from multiple YAML files with different return types +yaml_files = ["agents1.yaml", "agents2.yaml"] +return_types = ["auto", "list"] +agents = loader.load_many_agents_from_yaml(yaml_files, return_types) +``` + +## Complete Examples + +### Example 1: Finance Advisor Agent + +Create a file `finance_advisor.md`: + +```markdown +--- +name: FinanceAdvisor +description: Expert financial advisor for investment and budgeting guidance +model_name: claude-sonnet-4-20250514 +temperature: 0.7 +max_loops: 1 +--- + +You are an expert financial advisor with deep knowledge in: + +- Investment strategies and portfolio management +- Personal budgeting and financial planning +- Risk assessment and diversification +- Tax optimization strategies +- Retirement planning + +Your approach: + +- Provide clear, actionable financial advice +- Consider individual risk tolerance and goals +- Explain complex concepts in simple terms +- Always emphasize the importance of diversification +- Include relevant disclaimers about financial advice + +When analyzing financial situations: + +1. Assess current financial position +2. Identify short-term and long-term goals +3. Evaluate risk tolerance +4. Recommend appropriate strategies +5. Suggest specific action steps +``` + +### Loading and Using the Agent + +```python +from swarms.structs import AgentLoader + +# Load the Finance Advisor agent +loader = AgentLoader() +agent = loader.load_agent_from_markdown("finance_advisor.md") + +# Use the agent for financial advice +response = agent.run( + "I have $10,000 to invest. What's a good strategy for a beginner?" +) +``` + +### Example 2: Multi-Agent Workflow + +```python +from swarms.structs import AgentLoader, SequentialWorkflow + +# Load multiple specialized agents +loader = AgentLoader() +agents = loader.load_agents_from_markdown([ + "market_researcher.md", + "financial_analyst.md", + "risk_analyst.md" +], concurrent=True) + +# Create a sequential workflow +workflow = SequentialWorkflow( + agents=agents, + max_loops=1 +) + +# Execute complex task across multiple agents +task = """ +Analyze the AI healthcare market for a $50M investment opportunity. +Focus on market size, competition, financials, and risks. +""" + +result = workflow.run(task) +``` + +### Example 3: Mixed File Types + +```python +from swarms.structs import AgentLoader + +loader = AgentLoader() + +# Load agents from different file types +markdown_agents = loader.load_agents_from_markdown("./md_agents/") +yaml_agents = loader.load_agents_from_yaml("config.yaml") +csv_agents = loader.load_agents_from_csv("data.csv") + +# Combine all agents +all_agents = markdown_agents + yaml_agents + csv_agents + +print(f"Loaded {len(all_agents)} agents from various sources") +``` + +## Error Handling + +The AgentLoader provides comprehensive error handling: + +```python +from swarms.structs import AgentLoader + +loader = AgentLoader() + +try: + # This will raise FileNotFoundError + agent = loader.load_agent_from_markdown("nonexistent.md") +except FileNotFoundError as e: + print(f"File not found: {e}") + +try: + # This will handle parsing errors gracefully + agents = loader.load_multiple_agents("./invalid_directory/") + print(f"Successfully loaded {len(agents)} agents") +except Exception as e: + print(f"Error loading agents: {e}") +``` + +## Best Practices + +1. **Consistent Naming**: Use clear, descriptive agent names +2. 
**Detailed Descriptions**: Provide comprehensive role descriptions +3. **Structured Content**: Use clear sections to define agent behavior +4. **Error Handling**: Always wrap agent loading in try-catch blocks +5. **Model Selection**: Choose appropriate models based on agent complexity +6. **Configuration**: Override defaults when specific behavior is needed +7. **File Organization**: Organize agents by domain or function +8. **Memory Management**: Use `max_file_size_mb` for large agent collections + +## API Reference + +### AgentLoader Class + +```python +class AgentLoader: + """ + Loader class for creating Agent objects from various file formats. + + This class provides methods to load agents from Markdown, YAML, and CSV files. + """ + + def __init__(self): + """Initialize the AgentLoader instance.""" + pass + + def load_agents_from_markdown( + self, + file_paths: Union[str, List[str]], + concurrent: bool = True, + max_file_size_mb: float = 10.0, + **kwargs + ) -> List[Agent]: + """ + Load multiple agents from one or more Markdown files. + + Args: + file_paths: Path or list of paths to Markdown file(s) + concurrent: Whether to load files concurrently + max_file_size_mb: Maximum file size in MB to process + **kwargs: Additional keyword arguments passed to the underlying loader + + Returns: + A list of loaded Agent objects + """ + + def load_agent_from_markdown( + self, + file_path: str, + **kwargs + ) -> Agent: + """ + Load a single agent from a Markdown file. + + Args: + file_path: Path to the Markdown file containing the agent definition + **kwargs: Additional keyword arguments passed to the underlying loader + + Returns: + The loaded Agent object + """ + + def load_agents_from_yaml( + self, + yaml_file: str, + return_type: ReturnTypes = "auto", + **kwargs + ) -> List[Agent]: + """ + Load agents from a YAML file. + + Args: + yaml_file: Path to the YAML file containing agent definitions + return_type: The return type for the loader + **kwargs: Additional keyword arguments passed to the underlying loader + + Returns: + A list of loaded Agent objects + """ + + def load_agents_from_csv( + self, + csv_file: str, + **kwargs + ) -> List[Agent]: + """ + Load agents from a CSV file. + + Args: + csv_file: Path to the CSV file containing agent definitions + **kwargs: Additional keyword arguments passed to the underlying loader + + Returns: + A list of loaded Agent objects + """ + + def auto( + self, + file_path: str, + *args, + **kwargs + ): + """ + Automatically load agents from a file based on its extension. 
+ + Args: + file_path: Path to the agent file (Markdown, YAML, or CSV) + *args: Additional positional arguments passed to the underlying loader + **kwargs: Additional keyword arguments passed to the underlying loader + + Returns: + A list of loaded Agent objects + + Raises: + ValueError: If the file type is not supported + """ +``` + +**Method Parameters and Return Types:** + +| Method | Parameters | Type | Required | Default | Return Type | Description | +|--------|------------|------|----------|---------|-------------|-------------| +| `load_agents_from_markdown` | `file_paths` | Union[str, List[str]] | ✅ Yes | - | List[Agent] | File path(s) or directory | +| `load_agents_from_markdown` | `concurrent` | bool | ❌ No | True | List[Agent] | Enable concurrent processing | +| `load_agents_from_markdown` | `max_file_size_mb` | float | ❌ No | 10.0 | List[Agent] | Max file size in MB | +| `load_agents_from_markdown` | `**kwargs` | dict | ❌ No | {} | List[Agent] | Configuration overrides | +| `load_agent_from_markdown` | `file_path` | str | ✅ Yes | - | Agent | Path to markdown file | +| `load_agent_from_markdown` | `**kwargs` | dict | ❌ No | {} | Agent | Configuration overrides | +| `load_agents_from_yaml` | `yaml_file` | str | ✅ Yes | - | List[Agent] | Path to YAML file | +| `load_agents_from_yaml` | `return_type` | ReturnTypes | ❌ No | "auto" | List[Agent] | Return type for loader | +| `load_agents_from_yaml` | `**kwargs` | dict | ❌ No | {} | List[Agent] | Configuration overrides | +| `load_agents_from_csv` | `csv_file` | str | ✅ Yes | - | List[Agent] | Path to CSV file | +| `load_agents_from_csv` | `**kwargs` | dict | ❌ No | {} | List[Agent] | Configuration overrides | +| `auto` | `file_path` | str | ✅ Yes | - | List[Agent] | Path to agent file | +| `auto` | `*args` | tuple | ❌ No | () | List[Agent] | Positional arguments | +| `auto` | `**kwargs` | dict | ❌ No | {} | List[Agent] | Keyword arguments | + +### Convenience Functions + +```python +def load_agent_from_markdown( + file_path: str, + **kwargs +) -> Agent: + """ + Load a single agent from a markdown file using the Claude Code YAML frontmatter format. + + Args: + file_path: Path to the markdown file containing YAML frontmatter + **kwargs: Optional keyword arguments to override agent configuration + + Returns: + Configured Agent instance loaded from the markdown file + """ + +def load_agents_from_markdown( + file_paths: Union[str, List[str]], + concurrent: bool = True, + max_file_size_mb: float = 10.0, + **kwargs +) -> List[Agent]: + """ + Load multiple agents from markdown files using the Claude Code YAML frontmatter format. 
+ + Args: + file_paths: Either a directory path containing markdown files or a list of markdown file paths + concurrent: If True, enables concurrent processing for faster loading + max_file_size_mb: Maximum file size (in MB) for each markdown file + **kwargs: Optional keyword arguments to override agent configuration + + Returns: + List of configured Agent instances loaded from the markdown files + """ +``` + +**Function Parameters:** + +| Function | Parameter | Type | Required | Default | Description | +|----------|-----------|------|----------|---------|-------------| +| `load_agent_from_markdown` | `file_path` | str | ✅ Yes | - | Path to markdown file | +| `load_agent_from_markdown` | `**kwargs` | dict | ❌ No | {} | Configuration overrides | +| `load_agents_from_markdown` | `file_paths` | Union[str, List[str]] | ✅ Yes | - | File path(s) or directory | +| `load_agents_from_markdown` | `concurrent` | bool | ❌ No | True | Enable concurrent processing | +| `load_agents_from_markdown` | `max_file_size_mb` | float | ❌ No | 10.0 | Max file size in MB | +| `load_agents_from_markdown` | `**kwargs` | dict | ❌ No | {} | Configuration overrides | + +### Configuration Model + +```python +class MarkdownAgentConfig(BaseModel): + """Configuration model for agents loaded from Claude Code markdown files.""" + + name: Optional[str] = None + description: Optional[str] = None + model_name: Optional[str] = "gpt-4.1" + temperature: Optional[float] = Field(default=0.1, ge=0.0, le=2.0) + mcp_url: Optional[int] = None + system_prompt: Optional[str] = None + max_loops: Optional[int] = Field(default=1, ge=1) + autosave: Optional[bool] = False + dashboard: Optional[bool] = False + verbose: Optional[bool] = False + dynamic_temperature_enabled: Optional[bool] = False + saved_state_path: Optional[str] = None + user_name: Optional[str] = "default_user" + retry_attempts: Optional[int] = Field(default=3, ge=1) + context_length: Optional[int] = Field(default=100000, ge=1000) + return_step_meta: Optional[bool] = False + output_type: Optional[str] = "str" + auto_generate_prompt: Optional[bool] = False + streaming_on: Optional[bool] = False +``` + +**MarkdownAgentConfig Schema:** + +| Field | Type | Required | Default | Validation | Description | +|-------|------|----------|---------|------------|-------------| +| `name` | Optional[str] | ❌ No | None | - | Agent name | +| `description` | Optional[str] | ❌ No | None | - | Agent description | +| `model_name` | Optional[str] | ❌ No | "gpt-4.1" | - | Model to use | +| `temperature` | Optional[float] | ❌ No | 0.1 | 0.0 ≤ x ≤ 2.0 | Model temperature | +| `mcp_url` | Optional[int] | ❌ No | None | - | MCP server URL | +| `system_prompt` | Optional[str] | ❌ No | None | Non-empty string | System prompt | +| `max_loops` | Optional[int] | ❌ No | 1 | ≥ 1 | Maximum reasoning loops | +| `autosave` | Optional[bool] | ❌ No | False | - | Enable auto-save | +| `dashboard` | Optional[bool] | ❌ No | False | - | Enable dashboard | +| `verbose` | Optional[bool] | ❌ No | False | - | Enable verbose logging | +| `dynamic_temperature_enabled` | Optional[bool] | ❌ No | False | - | Enable dynamic temperature | +| `saved_state_path` | Optional[str] | ❌ No | None | - | State save path | +| `user_name` | Optional[str] | ❌ No | "default_user" | - | User identifier | +| `retry_attempts` | Optional[int] | ❌ No | 3 | ≥ 1 | Retry attempts | +| `context_length` | Optional[int] | ❌ No | 100000 | ≥ 1000 | Context length | +| `return_step_meta` | Optional[bool] | ❌ No | False | - | Return step metadata | +| 
`output_type` | Optional[str] | ❌ No | "str" | - | Output format | +| `auto_generate_prompt` | Optional[bool] | ❌ No | False | - | Auto-generate prompts | +| `streaming_on` | Optional[bool] | ❌ No | False | - | Enable streaming | + +## Examples Repository + +Find complete working examples in the `examples/utils/agent_loader/` directory: + +### Single Agent Example (`agent_loader_demo.py`) + +```python +from swarms.utils import load_agent_from_markdown + +agent = load_agent_from_markdown("finance_advisor.md") + +agent.run(task="What were the best performing etfs in 2023") +``` + +### Multi-Agent Workflow Example (`multi_agents_loader_demo.py`) + +```python +from swarms.utils import load_agents_from_markdown + +agents = load_agents_from_markdown([ + "market_researcher.md", + "financial_analyst.md", + "risk_analyst.md" +]) + +# Use agents in a workflow +from swarms.structs.sequential_workflow import SequentialWorkflow + +workflow = SequentialWorkflow( + agents=agents, + max_loops=1 +) + +task = """ +Analyze the AI healthcare market for a $50M investment opportunity. +Focus on market size, competition, financials, and risks. +""" + +result = workflow.run(task) +``` + +### Sample Agent Definition (`finance_advisor.md`) + +```markdown +--- +name: FinanceAdvisor +description: Expert financial advisor for investment and budgeting guidance +model_name: claude-sonnet-4-20250514 +temperature: 0.7 +max_loops: 1 +--- + +You are an expert financial advisor with deep knowledge in: + +- Investment strategies and portfolio management +- Personal budgeting and financial planning +- Risk assessment and diversification +- Tax optimization strategies +- Retirement planning + +Your approach: + +- Provide clear, actionable financial advice +- Consider individual risk tolerance and goals +- Explain complex concepts in simple terms +- Always emphasize the importance of diversification +- Include relevant disclaimers about financial advice + +When analyzing financial situations: + +1. Assess current financial position +2. Identify short-term and long-term goals +3. Evaluate risk tolerance +4. Recommend appropriate strategies +5. Suggest specific action steps +``` + +## Performance Considerations + +### Concurrent Processing + +- **Default Behavior**: Uses `os.cpu_count() * 2` worker threads +- **Memory Management**: Automatically validates file sizes before processing +- **Timeout Handling**: 5-minute total timeout, 1-minute per agent timeout +- **Error Recovery**: Continues processing other files if individual files fail + +### File Size Limits + +- **Default Limit**: 10MB maximum file size +- **Configurable**: Adjustable via `max_file_size_mb` parameter +- **Memory Safety**: Prevents memory issues with large agent definitions + +### Resource Optimization + +```python +# For large numbers of agents, consider batch processing +loader = AgentLoader() + +# Process in smaller batches +batch1 = loader.load_agents_from_markdown("./batch1/", concurrent=True) +batch2 = loader.load_agents_from_markdown("./batch2/", concurrent=True) + +# Or limit concurrent workers for resource-constrained environments +agents = loader.load_agents_from_markdown( + "./agents/", + concurrent=True, + max_file_size_mb=5.0 # Smaller files for faster processing +) +``` + +## Troubleshooting + +### Common Issues + +1. **File Not Found**: Ensure file paths are correct and files exist +2. **YAML Parsing Errors**: Check YAML frontmatter syntax in markdown files +3. **Memory Issues**: Reduce `max_file_size_mb` or process files in smaller batches +4. 
**Timeout Errors**: Check file sizes and network connectivity for remote files +5. **Configuration Errors**: Verify all required fields are present in agent definitions + +### Debug Mode + +```python +import logging +from swarms.structs import AgentLoader + +# Enable debug logging +logging.basicConfig(level=logging.DEBUG) + +loader = AgentLoader() + +# Load with verbose output +agent = loader.load_agent_from_markdown( + "agent.md", + verbose=True +) +``` + +## Support + +For questions and support: + +- GitHub Issues: [https://github.com/kyegomez/swarms/issues](https://github.com/kyegomez/swarms/issues) +- Documentation: [https://docs.swarms.world](https://docs.swarms.world) +- Community: Join our Discord for real-time support + +-------------------------------------------------- + +# File: swarms_cloud/add_agent.md # Publishing an Agent to Agent Marketplace @@ -49617,7 +54452,7 @@ deployment_config: -------------------------------------------------- -# File: swarms_cloud\agent_api.md +# File: swarms_cloud/agent_api.md # Agent API @@ -50230,7 +55065,216 @@ agent_config = { -------------------------------------------------- -# File: swarms_cloud\api_clients.md +# File: swarms_cloud/agent_rearrange.md + +# AgentRearrange + +*Dynamically reorganizes agents to optimize task performance and efficiency* + +**Swarm Type**: `AgentRearrange` + +## Overview + +The AgentRearrange swarm type dynamically reorganizes the workflow between agents based on task requirements and performance metrics. This architecture is particularly useful when the effectiveness of agents depends on their sequence or arrangement, allowing for optimal task distribution and execution flow. + +Key features: +- **Dynamic Reorganization**: Automatically adjusts agent order based on task needs +- **Performance Optimization**: Optimizes workflow for maximum efficiency +- **Adaptive Sequencing**: Learns from execution patterns to improve arrangement +- **Flexible Task Distribution**: Distributes work based on agent capabilities + +## Use Cases + +- Complex workflows where task order matters +- Multi-step processes requiring optimization +- Tasks where agent performance varies by sequence +- Adaptive workflow management systems + +## API Usage + +### Basic AgentRearrange Example + +=== "Shell (curl)" + ```bash + curl -X POST "https://api.swarms.world/v1/swarm/completions" \ + -H "x-api-key: $SWARMS_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "Document Processing Rearrange", + "description": "Process documents with dynamic agent reorganization", + "swarm_type": "AgentRearrange", + "task": "Analyze this legal document and extract key insights, then summarize findings and identify action items", + "agents": [ + { + "agent_name": "Document Analyzer", + "description": "Analyzes document content and structure", + "system_prompt": "You are an expert document analyst. Extract key information, themes, and insights from documents.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Legal Expert", + "description": "Provides legal context and interpretation", + "system_prompt": "You are a legal expert. 
Analyze documents for legal implications, risks, and compliance issues.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + }, + { + "agent_name": "Summarizer", + "description": "Creates concise summaries and action items", + "system_prompt": "You are an expert at creating clear, actionable summaries from complex information.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.4 + } + ], + "rearrange_flow": "Summarizer -> Legal Expert -> Document Analyzer", + "max_loops": 1 + }' + ``` + +=== "Python (requests)" + ```python + import requests + import json + + API_BASE_URL = "https://api.swarms.world" + API_KEY = "your_api_key_here" + + headers = { + "x-api-key": API_KEY, + "Content-Type": "application/json" + } + + swarm_config = { + "name": "Document Processing Rearrange", + "description": "Process documents with dynamic agent reorganization", + "swarm_type": "AgentRearrange", + "task": "Analyze this legal document and extract key insights, then summarize findings and identify action items", + "agents": [ + { + "agent_name": "Document Analyzer", + "description": "Analyzes document content and structure", + "system_prompt": "You are an expert document analyst. Extract key information, themes, and insights from documents.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Legal Expert", + "description": "Provides legal context and interpretation", + "system_prompt": "You are a legal expert. Analyze documents for legal implications, risks, and compliance issues.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + }, + { + "agent_name": "Summarizer", + "description": "Creates concise summaries and action items", + "system_prompt": "You are an expert at creating clear, actionable summaries from complex information.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.4 + } + ], + "rearrange_flow": "Summarizer -> Legal Expert -> Document Analyzer", + "max_loops": 1 + } + + response = requests.post( + f"{API_BASE_URL}/v1/swarm/completions", + headers=headers, + json=swarm_config + ) + + if response.status_code == 200: + result = response.json() + print("AgentRearrange swarm completed successfully!") + print(f"Cost: ${result['metadata']['billing_info']['total_cost']}") + print(f"Execution time: {result['metadata']['execution_time_seconds']} seconds") + print(f"Output: {result['output']}") + else: + print(f"Error: {response.status_code} - {response.text}") + ``` + +**Example Response**: +```json +{ + "job_id": "swarms-Uc8R7UcepLmNNPwcU7JC6YPy5wiI", + "status": "success", + "swarm_name": "Document Processing Rearrange", + "description": "Process documents with dynamic agent reorganization", + "swarm_type": "AgentRearrange", + "output": [ + { + "role": "Summarizer", + "content": "\"Of course! Please provide the legal document you would like me to analyze, and I'll help extract key insights, summarize findings, and identify any action items.\"" + }, + { + "role": "Legal Expert", + "content": "\"\"Absolutely! Please upload or describe the legal document you need assistance with, and I'll provide an analysis that highlights key insights, summarizes the findings, and identifies any action items that may be necessary.\"\"" + }, + { + "role": "Document Analyzer", + "content": "\"Of course! 
Please provide the legal document you would like me to analyze, and I'll help extract key insights, summarize findings, and identify any action items.\"" + } + ], + "number_of_agents": 3, + "service_tier": "standard", + "execution_time": 7.898931264877319, + "usage": { + "input_tokens": 22, + "output_tokens": 144, + "total_tokens": 166, + "billing_info": { + "cost_breakdown": { + "agent_cost": 0.03, + "input_token_cost": 0.000066, + "output_token_cost": 0.00216, + "token_counts": { + "total_input_tokens": 22, + "total_output_tokens": 144, + "total_tokens": 166 + }, + "num_agents": 3, + "service_tier": "standard", + "night_time_discount_applied": true + }, + "total_cost": 0.032226, + "discount_active": true, + "discount_type": "night_time", + "discount_percentage": 75 + } + } +} +``` + +## Configuration Options + +| Parameter | Type | Description | Default | +|-----------|------|-------------|---------| +| `rearrange_flow` | string | Instructions for how agents should be rearranged | None | +| `agents` | Array | List of agents to be dynamically arranged | Required | +| `max_loops` | integer | Maximum rearrangement iterations | 1 | + +## Best Practices + +- Provide clear `rearrange_flow` instructions for optimal reorganization +- Design agents with complementary but flexible roles +- Use when task complexity requires adaptive sequencing +- Monitor execution patterns to understand rearrangement decisions + +## Related Swarm Types + +- [SequentialWorkflow](sequential_workflow.md) - For fixed sequential processing +- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic swarm construction +- [HierarchicalSwarm](hierarchical_swarm.md) - For structured agent hierarchies + +-------------------------------------------------- + +# File: swarms_cloud/api_clients.md # Swarms API Clients @@ -50477,7 +55521,7 @@ For enterprise customers, we offer additional features and support: -------------------------------------------------- -# File: swarms_cloud\api_pricing.md +# File: swarms_cloud/api_pricing.md # Swarm Agent API Pricing @@ -50680,7 +55724,118 @@ Track your credit usage through our comprehensive logging and reporting features -------------------------------------------------- -# File: swarms_cloud\best_practices.md +# File: swarms_cloud/auto.md + +# Auto + +*Intelligently selects the most effective swarm architecture for a given task* + +**Swarm Type**: `auto` (or `Auto`) + +## Overview + +The Auto swarm type intelligently selects the most effective swarm architecture for a given task based on context analysis and task requirements. This intelligent system evaluates the task description and automatically chooses the optimal swarm type from all available architectures, ensuring maximum efficiency and effectiveness. 
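Requests for the Auto swarm type use the same completions endpoint and payload shape as every other swarm type in this guide; only the `swarm_type` value changes. The snippet below is an illustrative sketch, not a verified example: the name, task, and rules are placeholder values, and whether you also supply an `agents` list depends on your use case.

```python
import requests

API_BASE_URL = "https://api.swarms.world"
API_KEY = "your_api_key_here"

headers = {
    "x-api-key": API_KEY,
    "Content-Type": "application/json",
}

# Only swarm_type differs from the other examples in this guide;
# the API decides which architecture to run for this task.
swarm_config = {
    "name": "Auto Architecture Selection",
    "description": "Let the API choose the best swarm type for the task",
    "swarm_type": "auto",
    "task": "Produce a competitive analysis of three electric-vehicle manufacturers",
    "rules": "Prefer the simplest architecture that can complete the task",
    "max_loops": 1,
}

response = requests.post(
    f"{API_BASE_URL}/v1/swarm/completions",
    headers=headers,
    json=swarm_config,
)

if response.status_code == 200:
    result = response.json()
    # The selected architecture is reported back in the response metadata.
    print(result.get("swarm_type"), result.get("output"))
else:
    print(f"Error: {response.status_code} - {response.text}")
```
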
+ +Key features: +- **Intelligent Selection**: Automatically chooses the best swarm type for each task +- **Context Analysis**: Analyzes task requirements to make optimal decisions +- **Adaptive Architecture**: Adapts to different types of problems automatically +- **Zero Configuration**: No manual architecture selection required + +## Use Cases + +- When unsure about which swarm type to use +- General-purpose task automation +- Rapid prototyping and experimentation +- Simplified API usage for non-experts + +## API Usage + + + +## Selection Logic + +The Auto swarm type analyzes various factors to make its selection: + +| Factor | Consideration | +|--------|---------------| +| **Task Complexity** | Simple → Single agent, Complex → Multi-agent | +| **Sequential Dependencies** | Dependencies → SequentialWorkflow | +| **Parallel Opportunities** | Independent subtasks → ConcurrentWorkflow | +| **Collaboration Needs** | Discussion required → GroupChat | +| **Expertise Diversity** | Multiple domains → MixtureOfAgents | +| **Management Needs** | Oversight required → HierarchicalSwarm | +| **Routing Requirements** | Task distribution → MultiAgentRouter | + +## Best Practices + +- Provide detailed task descriptions for better selection +- Use `rules` parameter to guide selection criteria +- Review the selected architecture in response metadata +- Ideal for users new to swarm architectures + +## Related Swarm Types + +Since Auto can select any swarm type, it's related to all architectures: +- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic agent generation +- [SequentialWorkflow](sequential_workflow.md) - Often selected for linear tasks +- [ConcurrentWorkflow](concurrent_workflow.md) - For parallel processing needs +- [MixtureOfAgents](mixture_of_agents.md) - For diverse expertise requirements + +-------------------------------------------------- + +# File: swarms_cloud/auto_swarm_builder.md + +# AutoSwarmBuilder [ Needs an Fix ] + +*Automatically configures optimal swarm architectures based on task requirements* + +**Swarm Type**: `AutoSwarmBuilder` + +## Overview + +The AutoSwarmBuilder automatically configures optimal agent architectures based on task requirements and performance metrics, simplifying swarm creation. This intelligent system analyzes the given task and automatically generates the most suitable agent configuration, eliminating the need for manual swarm design. 
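A request body for this swarm type follows the same shape as the other examples in this guide, with the builder generating the agent team from the task description. The payload below is an illustrative sketch based on the configuration options listed later in this section; the name, task, and rules are placeholders.

```python
# Illustrative payload only -- POST it to /v1/swarm/completions exactly as in
# the other examples; the builder derives the agent team from the task itself.
swarm_config = {
    "name": "Auto-Built Research Swarm",
    "description": "Let the builder design the agent team",
    "swarm_type": "AutoSwarmBuilder",
    "task": "Draft a due-diligence checklist for acquiring a mid-size SaaS company",
    "rules": "Keep the team small and the outputs concise",
    "max_loops": 1,
}
```
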
+ +Key features: +- **Intelligent Configuration**: Automatically designs optimal swarm structures +- **Task-Adaptive**: Adapts architecture based on specific task requirements +- **Performance Optimization**: Selects configurations for maximum efficiency +- **Simplified Setup**: Eliminates manual agent configuration complexity + +## Use Cases + +- Quick prototyping and experimentation +- Unknown or complex task requirements +- Automated swarm optimization +- Simplified swarm creation for non-experts + +## API Usage + + +## Configuration Options + +| Parameter | Type | Description | Default | +|-----------|------|-------------|---------| +| `task` | string | Task description for automatic optimization | Required | +| `rules` | string | Additional constraints and guidelines | None | +| `max_loops` | integer | Maximum execution rounds | 1 | + +## Best Practices + +- Provide detailed, specific task descriptions for better optimization +- Use `rules` parameter to guide the automatic configuration +- Ideal for rapid prototyping and experimentation +- Review generated architecture in response metadata + +## Related Swarm Types + +- [Auto](auto.md) - For automatic swarm type selection +- [MixtureOfAgents](mixture_of_agents.md) - Often selected by AutoSwarmBuilder +- [HierarchicalSwarm](hierarchical_swarm.md) - For complex structured tasks + +-------------------------------------------------- + +# File: swarms_cloud/best_practices.md # Swarms API Best Practices Guide @@ -50855,7 +56010,7 @@ Use this framework to select the optimal swarm architecture for your use case: -------------------------------------------------- -# File: swarms_cloud\chinese_api_pricing.md +# File: swarms_cloud/chinese_api_pricing.md # Swarm Agent API 定价文档 @@ -51055,7 +56210,7 @@ Swarm API 采用基于积分的系统: -------------------------------------------------- -# File: swarms_cloud\cloud_run.md +# File: swarms_cloud/cloud_run.md # Hosting Agents on Google Cloud Run @@ -51315,7 +56470,968 @@ By following this comprehensive guide, you can deploy your agents on Google Clou -------------------------------------------------- -# File: swarms_cloud\launch.md +# File: swarms_cloud/cloudflare_workers.md + +# Deploy AI Agents with Swarms API on Cloudflare Workers + +Deploy intelligent AI agents powered by Swarms API on Cloudflare Workers edge network. Build production-ready cron agents that run automatically, fetch real-time data, perform AI analysis, and execute actions across 330+ cities worldwide. 
+ + + +## Overview + +This integration demonstrates how to combine **Swarms API multi-agent intelligence** with **Cloudflare Workers edge computing** to create autonomous AI systems that: + +- ⚡ **Execute automatically** on predefined schedules (cron jobs) +- 📊 **Fetch real-time data** from external APIs (Yahoo Finance, news feeds) +- 🤖 **Perform intelligent analysis** using specialized Swarms AI agents +- 📧 **Take automated actions** (email alerts, reports, notifications) +- 🌍 **Scale globally** on Cloudflare's edge network with sub-100ms latency + +## Repository & Complete Implementation + +For the **complete working implementation** with full source code, detailed setup instructions, and ready-to-deploy examples, visit: + +**🔗 [Swarms-CloudFlare-Deployment Repository](https://github.com/The-Swarm-Corporation/Swarms-CloudFlare-Deployment)** + +This repository provides: +- **Two complete implementations**: JavaScript and Python +- **Production-ready code** with error handling and monitoring +- **Step-by-step deployment guides** for both local and production environments +- **Real-world examples** including stock analysis agents +- **Configuration templates** and environment setup + +## Available Implementations + +The repository provides **two complete implementations** of stock analysis agents: + +### 📂 `stock-agent/` - JavaScript Implementation +The original implementation using **JavaScript/TypeScript** on Cloudflare Workers. + +### 📂 `python-stock-agent/` - Python Implementation +A **Python Workers** implementation using Cloudflare's beta Python runtime with Pyodide. + +## Stock Analysis Agent Features + +Both implementations demonstrate a complete system that: + +1. **Automated Analysis**: Runs stock analysis every 3 hours using Cloudflare Workers cron +2. **Real-time Data**: Fetches market data from Yahoo Finance API (no API key needed) +3. **News Integration**: Collects market news from Financial Modeling Prep API (optional) +4. **Multi-Agent Analysis**: Deploys multiple Swarms AI agents for technical and fundamental analysis +5. **Email Reports**: Sends comprehensive reports via Mailgun +6. 
**Web Interface**: Provides monitoring dashboard for manual triggers and status tracking + +## Implementation Comparison + +| Feature | JavaScript (`stock-agent/`) | Python (`python-stock-agent/`) | +|---------|----------------------------|--------------------------------| +| **Runtime** | V8 JavaScript Engine | Pyodide Python Runtime | +| **Language** | JavaScript/TypeScript | Python 3.x | +| **Status** | Production Ready | Beta (Python Workers) | +| **Performance** | Optimized V8 execution | Good, with Python stdlib support | +| **Syntax** | `fetch()`, `JSON.stringify()` | `await fetch()`, `json.dumps()` | +| **Error Handling** | `try/catch` | `try/except` | +| **Libraries** | Built-in Web APIs | Python stdlib + select packages | +| **Development** | Mature tooling | Growing ecosystem | + +## Architecture + +``` +┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ +│ Cloudflare │ │ Data Sources │ │ Swarms API │ +│ Workers Runtime │ │ │ │ │ +│ "0 */3 * * *" │───▶│ Yahoo Finance │───▶│ Technical Agent │ +│ JS | Python │ │ News APIs │ │ Fundamental │ +│ scheduled() │ │ Market Data │ │ Agent Analysis │ +│ Global Edge │ │ │ │ │ +└─────────────────┘ └─────────────────┘ └─────────────────┘ +``` + +## Quick Start Guide + +Choose your preferred implementation: + +### Option A: JavaScript Implementation + +```bash +# Clone the repository +git clone https://github.com/The-Swarm-Corporation/Swarms-CloudFlare-Deployment.git +cd Swarms-CloudFlare-Deployment/stock-agent + +# Install dependencies +npm install +``` + +### Option B: Python Implementation + +```bash +# Clone the repository +git clone https://github.com/The-Swarm-Corporation/Swarms-CloudFlare-Deployment.git +cd Swarms-CloudFlare-Deployment/python-stock-agent + +# Install dependencies (Wrangler CLI) +npm install +``` + +### 2. Environment Configuration + +Create a `.dev.vars` file in your chosen directory: + +```env +# Required: Swarms API key +SWARMS_API_KEY=your-swarms-api-key-here + +# Optional: Market news (free tier available) +FMP_API_KEY=your-fmp-api-key + +# Optional: Email notifications +MAILGUN_API_KEY=your-mailgun-api-key +MAILGUN_DOMAIN=your-domain.com +RECIPIENT_EMAIL=your-email@example.com +``` + +### 3. Cron Schedule Configuration + +The cron schedule is configured in `wrangler.jsonc`: + +```jsonc +{ + "triggers": { + "crons": [ + "0 */3 * * *" // Every 3 hours + ] + } +} +``` + +Common cron patterns: +- `"0 9 * * 1-5"` - 9 AM weekdays only +- `"0 */6 * * *"` - Every 6 hours +- `"0 0 * * *"` - Daily at midnight + +### 4. Local Development + +```bash +# Start local development server +npm run dev + +# Visit http://localhost:8787 to test +``` + +### 5. Deploy to Cloudflare Workers + +```bash +# Deploy to production +npm run deploy + +# Your agent will be live at: https://stock-agent.your-subdomain.workers.dev +``` + +## API Integration Details + +### Swarms API Agents + +The stock agent uses two specialized AI agents: + +1. **Technical Analyst Agent**: + - Calculates technical indicators (RSI, MACD, Moving Averages) + - Identifies support/resistance levels + - Provides trading signals and price targets + +2. 
**Fundamental Analyst Agent**: + - Analyzes market conditions and sentiment + - Evaluates news and economic indicators + - Provides investment recommendations + +### Data Sources + +- **Yahoo Finance API**: Free real-time stock data (no API key required) +- **Financial Modeling Prep**: Market news and additional data (free tier: 250 requests/day) +- **Mailgun**: Email delivery service (free tier: 5,000 emails/month) + +## Features + +### Web Interface +- Real-time status monitoring +- Manual analysis triggers +- Progress tracking with visual feedback +- Analysis results display + +### Automated Execution +- Scheduled cron job execution +- Error handling and recovery +- Cost tracking and monitoring +- Email report generation + +### Production Ready +- Comprehensive error handling +- Timeout protection +- Rate limiting compliance +- Security best practices + +## Configuration Examples + +### Custom Stock Symbols + +Edit the symbols array in `src/index.js`: + +```javascript +const symbols = ['SPY', 'QQQ', 'AAPL', 'MSFT', 'TSLA', 'NVDA', 'AMZN', 'GOOGL']; +``` + +### Custom Swarms Agents + +Modify the agent configuration: + +```javascript +const swarmConfig = { + agents: [ + { + agent_name: "Risk Assessment Agent", + system_prompt: "Analyze portfolio risk and provide recommendations...", + model_name: "gpt-4o-mini", + max_tokens: 2000, + temperature: 0.1 + } + ] +}; +``` + +## Cost Optimization + +- **Cloudflare Workers**: Free tier includes 100,000 requests/day +- **Swarms API**: Monitor usage in dashboard, use gpt-4o-mini for cost efficiency +- **External APIs**: Leverage free tiers and implement intelligent caching + +## Security & Best Practices + +- Store API keys as Cloudflare Workers secrets +- Implement request validation and rate limiting +- Audit AI decisions and maintain compliance logs +- Use HTTPS for all external API calls + +## Monitoring & Observability + +- Cloudflare Workers analytics dashboard +- Real-time performance metrics +- Error tracking and alerting +- Cost monitoring and optimization + +## Troubleshooting + +### Common Issues + +1. **API Key Errors**: Verify environment variables are set correctly +2. **Cron Not Triggering**: Check cron syntax and Cloudflare Workers limits +3. **Email Not Sending**: Verify Mailgun configuration and domain setup +4. **Data Fetch Failures**: Check external API status and rate limits + +### Debug Mode + +Enable detailed logging by setting: +```javascript +console.log('Debug mode enabled'); +``` + +## Additional Resources + +- [Cloudflare Workers Documentation](https://developers.cloudflare.com/workers/) +- [Swarms API Documentation](https://docs.swarms.world/) +- [Cron Expression Generator](https://crontab.guru/) +- [Financial Modeling Prep API](https://financialmodelingprep.com/developer/docs) + + + +-------------------------------------------------- + +# File: swarms_cloud/concurrent_workflow.md + +# ConcurrentWorkflow + +*Runs independent tasks in parallel for faster processing* + +**Swarm Type**: `ConcurrentWorkflow` + +## Overview + +The ConcurrentWorkflow swarm type runs independent tasks in parallel, significantly reducing processing time for complex operations. This architecture is ideal for tasks that can be processed simultaneously without dependencies, allowing multiple agents to work on different aspects of a problem at the same time. 
+ +Key features: +- **Parallel Execution**: Multiple agents work simultaneously +- **Reduced Processing Time**: Faster completion through parallelization +- **Independent Tasks**: Agents work on separate, non-dependent subtasks +- **Scalable Performance**: Performance scales with the number of agents + +## Use Cases + +- Independent data analysis tasks +- Parallel content generation +- Multi-source research projects +- Distributed problem solving + +## API Usage + +### Basic ConcurrentWorkflow Example + +=== "Shell (curl)" + ```bash + curl -X POST "https://api.swarms.world/v1/swarm/completions" \ + -H "x-api-key: $SWARMS_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "Market Research Concurrent", + "description": "Parallel market research across different sectors", + "swarm_type": "ConcurrentWorkflow", + "task": "Research and analyze market opportunities in AI, healthcare, fintech, and e-commerce sectors", + "agents": [ + { + "agent_name": "AI Market Analyst", + "description": "Analyzes AI market trends and opportunities", + "system_prompt": "You are an AI market analyst. Focus on artificial intelligence market trends, opportunities, key players, and growth projections.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Healthcare Market Analyst", + "description": "Analyzes healthcare market trends", + "system_prompt": "You are a healthcare market analyst. Focus on healthcare market trends, digital health opportunities, regulatory landscape, and growth areas.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Fintech Market Analyst", + "description": "Analyzes fintech market opportunities", + "system_prompt": "You are a fintech market analyst. Focus on financial technology trends, digital payment systems, blockchain opportunities, and regulatory developments.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "E-commerce Market Analyst", + "description": "Analyzes e-commerce market trends", + "system_prompt": "You are an e-commerce market analyst. Focus on online retail trends, marketplace opportunities, consumer behavior, and emerging platforms.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + } + ], + "max_loops": 1 + }' + ``` + +=== "Python (requests)" + ```python + import requests + import json + + API_BASE_URL = "https://api.swarms.world" + API_KEY = "your_api_key_here" + + headers = { + "x-api-key": API_KEY, + "Content-Type": "application/json" + } + + swarm_config = { + "name": "Market Research Concurrent", + "description": "Parallel market research across different sectors", + "swarm_type": "ConcurrentWorkflow", + "task": "Research and analyze market opportunities in AI, healthcare, fintech, and e-commerce sectors", + "agents": [ + { + "agent_name": "AI Market Analyst", + "description": "Analyzes AI market trends and opportunities", + "system_prompt": "You are an AI market analyst. Focus on artificial intelligence market trends, opportunities, key players, and growth projections.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Healthcare Market Analyst", + "description": "Analyzes healthcare market trends", + "system_prompt": "You are a healthcare market analyst. 
Focus on healthcare market trends, digital health opportunities, regulatory landscape, and growth areas.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Fintech Market Analyst", + "description": "Analyzes fintech market opportunities", + "system_prompt": "You are a fintech market analyst. Focus on financial technology trends, digital payment systems, blockchain opportunities, and regulatory developments.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "E-commerce Market Analyst", + "description": "Analyzes e-commerce market trends", + "system_prompt": "You are an e-commerce market analyst. Focus on online retail trends, marketplace opportunities, consumer behavior, and emerging platforms.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + } + ], + "max_loops": 1 + } + + response = requests.post( + f"{API_BASE_URL}/v1/swarm/completions", + headers=headers, + json=swarm_config + ) + + if response.status_code == 200: + result = response.json() + print("ConcurrentWorkflow swarm completed successfully!") + print(f"Cost: ${result['metadata']['billing_info']['total_cost']}") + print(f"Execution time: {result['metadata']['execution_time_seconds']} seconds") + print(f"Parallel results: {result['output']}") + else: + print(f"Error: {response.status_code} - {response.text}") + ``` + +**Example Response**: +```json +{ + "job_id": "swarms-S17nZFDesmLHxCRoeyF3NVYvPaXk", + "status": "success", + "swarm_name": "Market Research Concurrent", + "description": "Parallel market research across different sectors", + "swarm_type": "ConcurrentWorkflow", + "output": [ + { + "role": "E-commerce Market Analyst", + "content": "To analyze market opportunities in the AI, healthcare, fintech, and e-commerce sectors, we can break down each sector's current trends, consumer behavior, and emerging platforms. Here's an overview of each sector with a focus on e-commerce....." + }, + { + "role": "AI Market Analyst", + "content": "The artificial intelligence (AI) landscape presents numerous opportunities across various sectors, particularly in healthcare, fintech, and e-commerce. Here's a detailed analysis of each sector:\n\n### Healthcare....." + }, + { + "role": "Healthcare Market Analyst", + "content": "As a Healthcare Market Analyst, I will focus on analyzing market opportunities within the healthcare sector, particularly in the realm of AI and digital health. The intersection of healthcare with fintech and e-commerce also presents unique opportunities. Here's an overview of key trends and growth areas:...." + }, + { + "role": "Fintech Market Analyst", + "content": "Certainly! Let's break down the market opportunities in the fintech sector, focusing on financial technology trends, digital payment systems, blockchain opportunities, and regulatory developments:\n\n### 1. Financial Technology Trends....." 
+ } + ], + "number_of_agents": 4, + "service_tier": "standard", + "execution_time": 23.360230922698975, + "usage": { + "input_tokens": 35, + "output_tokens": 2787, + "total_tokens": 2822, + "billing_info": { + "cost_breakdown": { + "agent_cost": 0.04, + "input_token_cost": 0.000105, + "output_token_cost": 0.041805, + "token_counts": { + "total_input_tokens": 35, + "total_output_tokens": 2787, + "total_tokens": 2822 + }, + "num_agents": 4, + "service_tier": "standard", + "night_time_discount_applied": true + }, + "total_cost": 0.08191, + "discount_active": true, + "discount_type": "night_time", + "discount_percentage": 75 + } + } +} +``` + +## Best Practices + +- Design independent tasks that don't require sequential dependencies +- Use for tasks that can be parallelized effectively +- Ensure agents have distinct, non-overlapping responsibilities +- Ideal for time-sensitive analysis requiring multiple perspectives + +## Related Swarm Types + +- [SequentialWorkflow](sequential_workflow.md) - For ordered execution +- [MixtureOfAgents](mixture_of_agents.md) - For collaborative analysis +- [MultiAgentRouter](multi_agent_router.md) - For intelligent task distribution + +-------------------------------------------------- + +# File: swarms_cloud/group_chat.md + +# GroupChat + +*Enables dynamic collaboration through chat-based interaction* + +**Swarm Type**: `GroupChat` + +## Overview + +The GroupChat swarm type enables dynamic collaboration between agents through a chat-based interface, facilitating real-time information sharing and decision-making. Agents participate in a conversational workflow where they can build upon each other's contributions, debate ideas, and reach consensus through natural dialogue. + +Key features: +- **Interactive Dialogue**: Agents communicate through natural conversation +- **Dynamic Collaboration**: Real-time information sharing and building upon ideas +- **Consensus Building**: Agents can debate and reach decisions collectively +- **Flexible Participation**: Agents can contribute when relevant to the discussion + +## Use Cases + +- Brainstorming and ideation sessions +- Multi-perspective problem analysis +- Collaborative decision-making processes +- Creative content development + +## API Usage + +### Basic GroupChat Example + +=== "Shell (curl)" + ```bash + curl -X POST "https://api.swarms.world/v1/swarm/completions" \ + -H "x-api-key: $SWARMS_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "Product Strategy Discussion", + "description": "Collaborative chat to develop product strategy", + "swarm_type": "GroupChat", + "task": "Discuss and develop a go-to-market strategy for a new AI-powered productivity tool targeting small businesses", + "agents": [ + { + "agent_name": "Product Manager", + "description": "Leads product strategy and development", + "system_prompt": "You are a senior product manager. Focus on product positioning, features, user needs, and market fit. Ask probing questions and build on others ideas.", + "model_name": "gpt-4o", + "max_loops": 3, + }, + { + "agent_name": "Marketing Strategist", + "description": "Develops marketing and positioning strategy", + "system_prompt": "You are a marketing strategist. Focus on target audience, messaging, channels, and competitive positioning. 
Contribute marketing insights to the discussion.", + "model_name": "gpt-4o", + "max_loops": 3, + }, + { + "agent_name": "Sales Director", + "description": "Provides sales and customer perspective", + "system_prompt": "You are a sales director with small business experience. Focus on pricing, sales process, customer objections, and market adoption. Share practical sales insights.", + "model_name": "gpt-4o", + "max_loops": 3, + }, + { + "agent_name": "UX Researcher", + "description": "Represents user experience and research insights", + "system_prompt": "You are a UX researcher specializing in small business tools. Focus on user behavior, usability, adoption barriers, and design considerations.", + "model_name": "gpt-4o", + "max_loops": 3, + } + ], + "max_loops": 3 + }' + ``` + +=== "Python (requests)" + ```python + import requests + import json + + API_BASE_URL = "https://api.swarms.world" + API_KEY = "your_api_key_here" + + headers = { + "x-api-key": API_KEY, + "Content-Type": "application/json" + } + + swarm_config = { + "name": "Product Strategy Discussion", + "description": "Collaborative chat to develop product strategy", + "swarm_type": "GroupChat", + "task": "Discuss and develop a go-to-market strategy for a new AI-powered productivity tool targeting small businesses", + "agents": [ + { + "agent_name": "Product Manager", + "description": "Leads product strategy and development", + "system_prompt": "You are a senior product manager. Focus on product positioning, features, user needs, and market fit. Ask probing questions and build on others ideas.", + "model_name": "gpt-4o", + "max_loops": 3, + }, + { + "agent_name": "Marketing Strategist", + "description": "Develops marketing and positioning strategy", + "system_prompt": "You are a marketing strategist. Focus on target audience, messaging, channels, and competitive positioning. Contribute marketing insights to the discussion.", + "model_name": "gpt-4o", + "max_loops": 3, + }, + { + "agent_name": "Sales Director", + "description": "Provides sales and customer perspective", + "system_prompt": "You are a sales director with small business experience. Focus on pricing, sales process, customer objections, and market adoption. Share practical sales insights.", + "model_name": "gpt-4o", + "max_loops": 3, + }, + { + "agent_name": "UX Researcher", + "description": "Represents user experience and research insights", + "system_prompt": "You are a UX researcher specializing in small business tools. 
Focus on user behavior, usability, adoption barriers, and design considerations.", + "model_name": "gpt-4o", + "max_loops": 3, + } + ], + "max_loops": 3 + } + + response = requests.post( + f"{API_BASE_URL}/v1/swarm/completions", + headers=headers, + json=swarm_config + ) + + if response.status_code == 200: + result = response.json() + print("GroupChat swarm completed successfully!") + print(f"Cost: ${result['metadata']['billing_info']['total_cost']}") + print(f"Execution time: {result['metadata']['execution_time_seconds']} seconds") + print(f"Chat discussion: {result['output']}") + else: + print(f"Error: {response.status_code} - {response.text}") + ``` + +**Example Response**: +```json +{ + "job_id": "swarms-2COVtf3k0Fz7jU1BOOHF3b5nuL2x", + "status": "success", + "swarm_name": "Product Strategy Discussion", + "description": "Collaborative chat to develop product strategy", + "swarm_type": "GroupChat", + "output": "User: \n\nSystem: \n Group Chat Name: Product Strategy Discussion\nGroup Chat Description: Collaborative chat to develop product strategy\n Agents in your Group Chat: Available Agents for Team: None\n\n\n\n[Agent 1]\nName: Product Manager\nDescription: Leads product strategy and development\nRole.....", + "number_of_agents": 4, + "service_tier": "standard", + "execution_time": 47.36732482910156, + "usage": { + "input_tokens": 30, + "output_tokens": 1633, + "total_tokens": 1663, + "billing_info": { + "cost_breakdown": { + "agent_cost": 0.04, + "input_token_cost": 0.00009, + "output_token_cost": 0.024495, + "token_counts": { + "total_input_tokens": 30, + "total_output_tokens": 1633, + "total_tokens": 1663 + }, + "num_agents": 4, + "service_tier": "standard", + "night_time_discount_applied": false + }, + "total_cost": 0.064585, + "discount_active": false, + "discount_type": "none", + "discount_percentage": 0 + } + } +} +``` + +## Best Practices + +- Set clear discussion goals and objectives +- Use diverse agent personalities for richer dialogue +- Allow multiple conversation rounds for idea development +- Encourage agents to build upon each other's contributions + +## Related Swarm Types + +- [MixtureOfAgents](mixture_of_agents.md) - For complementary expertise +- [MajorityVoting](majority_voting.md) - For consensus decision-making +- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic discussion setup + +-------------------------------------------------- + +# File: swarms_cloud/hierarchical_swarm.md + +# HiearchicalSwarm + +*Implements structured, multi-level task management with clear authority* + +**Swarm Type**: `HiearchicalSwarm` + +## Overview + +The HiearchicalSwarm implements a structured, multi-level approach to task management with clear lines of authority and delegation. This architecture organizes agents in a hierarchical structure where manager agents coordinate and oversee worker agents, enabling efficient task distribution and quality control. 
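The configuration table at the end of this section adds a per-agent `role` field ("manager" or "worker"). A minimal sketch of how a manager/worker team might be declared in the `agents` array is shown below; the agent names, prompts, and loop counts are placeholders, not a verified example.

```python
# Illustrative agent list only -- drop it into the "agents" field of a
# HiearchicalSwarm request; names, prompts, and loop counts are placeholders.
agents = [
    {
        "agent_name": "Research Director",
        "description": "Coordinates the analysts and consolidates their findings",
        "system_prompt": "You are the research director. Assign work to the analysts and review their output.",
        "model_name": "gpt-4o",
        "role": "manager",
        "max_loops": 2,  # managers get extra loops for coordination
    },
    {
        "agent_name": "Sector Analyst",
        "description": "Performs focused research on an assigned sector",
        "system_prompt": "You are a sector analyst. Complete the task assigned by the director.",
        "model_name": "gpt-4o",
        "role": "worker",
        "max_loops": 1,
    },
]
```
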
+ +Key features: +- **Structured Hierarchy**: Clear organizational structure with managers and workers +- **Delegated Authority**: Manager agents distribute tasks to specialized workers +- **Quality Oversight**: Multi-level review and validation processes +- **Scalable Organization**: Efficient coordination of large agent teams + +## Use Cases + +- Complex projects requiring management oversight +- Large-scale content production workflows +- Multi-stage validation and review processes +- Enterprise-level task coordination + +## API Usage + +### Basic HiearchicalSwarm Example + +=== "Shell (curl)" + ```bash + curl -X POST "https://api.swarms.world/v1/swarm/completions" \ + -H "x-api-key: $SWARMS_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "Market Research ", + "description": "Parallel market research across different sectors", + "swarm_type": "HiearchicalSwarm", + "task": "Research and analyze market opportunities in AI, healthcare, fintech, and e-commerce sectors", + "agents": [ + { + "agent_name": "AI Market Analyst", + "description": "Analyzes AI market trends and opportunities", + "system_prompt": "You are an AI market analyst. Focus on artificial intelligence market trends, opportunities, key players, and growth projections.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Healthcare Market Analyst", + "description": "Analyzes healthcare market trends", + "system_prompt": "You are a healthcare market analyst. Focus on healthcare market trends, digital health opportunities, regulatory landscape, and growth areas.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Fintech Market Analyst", + "description": "Analyzes fintech market opportunities", + "system_prompt": "You are a fintech market analyst. Focus on financial technology trends, digital payment systems, blockchain opportunities, and regulatory developments.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "E-commerce Market Analyst", + "description": "Analyzes e-commerce market trends", + "system_prompt": "You are an e-commerce market analyst. Focus on online retail trends, marketplace opportunities, consumer behavior, and emerging platforms.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + } + ], + "max_loops": 1 + }' + ``` + +=== "Python (requests)" + ```python + import requests + import json + + API_BASE_URL = "https://api.swarms.world" + API_KEY = "your_api_key_here" + + headers = { + "x-api-key": API_KEY, + "Content-Type": "application/json" + } + + swarm_config = { + "name": "Market Research ", + "description": "Parallel market research across different sectors", + "swarm_type": "HiearchicalSwarm", + "task": "Research and analyze market opportunities in AI, healthcare, fintech, and e-commerce sectors", + "agents": [ + { + "agent_name": "AI Market Analyst", + "description": "Analyzes AI market trends and opportunities", + "system_prompt": "You are an AI market analyst. Focus on artificial intelligence market trends, opportunities, key players, and growth projections.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Healthcare Market Analyst", + "description": "Analyzes healthcare market trends", + "system_prompt": "You are a healthcare market analyst. 
Focus on healthcare market trends, digital health opportunities, regulatory landscape, and growth areas.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Fintech Market Analyst", + "description": "Analyzes fintech market opportunities", + "system_prompt": "You are a fintech market analyst. Focus on financial technology trends, digital payment systems, blockchain opportunities, and regulatory developments.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "E-commerce Market Analyst", + "description": "Analyzes e-commerce market trends", + "system_prompt": "You are an e-commerce market analyst. Focus on online retail trends, marketplace opportunities, consumer behavior, and emerging platforms.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + } + ], + "max_loops": 1 + } + + response = requests.post( + f"{API_BASE_URL}/v1/swarm/completions", + headers=headers, + json=swarm_config + ) + + if response.status_code == 200: + result = response.json() + print("HiearchicalSwarm completed successfully!") + print(f"Cost: ${result['metadata']['billing_info']['total_cost']}") + print(f"Execution time: {result['metadata']['execution_time_seconds']} seconds") + print(f"Project plan: {result['output']}") + else: + print(f"Error: {response.status_code} - {response.text}") + ``` + +**Example Response**: +```json +{ + "job_id": "swarms-JIrcIAfs2d75xrXGaAL94uWyYJ8V", + "status": "success", + "swarm_name": "Market Research Auto", + "description": "Parallel market research across different sectors", + "swarm_type": "HiearchicalSwarm", + "output": [ + { + "role": "System", + "content": "These are the agents in your team. Each agent has a specific role and expertise to contribute to the team's objectives.\nTotal Agents: 4\n\nBelow is a summary of your team members and their primary responsibilities:\n| Agent Name | Description |\n|------------|-------------|\n| AI Market Analyst | Analyzes AI market trends and opportunities |\n| Healthcare Market Analyst | Analyzes healthcare market trends |\n| Fintech Market Analyst | Analyzes fintech market opportunities |\n| E-commerce Market Analyst | Analyzes e-commerce market trends |\n\nEach agent is designed to handle tasks within their area of expertise. Collaborate effectively by assigning tasks according to these roles." + }, + { + "role": "Director", + "content": [ + { + "role": "Director", + "content": [ + { + "function": { + "arguments": "{\"plan\":\"Conduct a comprehensive analysis of market opportunities in the AI, healthcare, fintech, and e-commerce sectors. Each market analyst will focus on their respective sector, gathering data on current trends, growth opportunities, and potential challenges. 
The findings will be compiled into a report for strategic decision-making.\",\"orders\":[{\"agent_name\":\"AI Market Analyst\",\"task\":\"Research current trends in the AI market, identify growth opportunities, and analyze potential challenges.\"},{\"agent_name\":\"Healthcare Market Analyst\",\"task\":\"Analyze the healthcare market for emerging trends, growth opportunities, and possible challenges.\"},{\"agent_name\":\"Fintech Market Analyst\",\"task\":\"Investigate the fintech sector for current trends, identify opportunities for growth, and assess challenges.\"},{\"agent_name\":\"E-commerce Market Analyst\",\"task\":\"Examine e-commerce market trends, identify growth opportunities, and analyze potential challenges.\"}]}", + "name": "ModelMetaclass" + }, + "id": "call_GxiyzIRb2oGQXokbbkeaeVry", + "type": "function" + } + ] + } + ] + }, + { + "role": "AI Market Analyst", + "content": "### AI Market Analysis: Trends, Opportunities, and Challenges\n\n#### Current Trends in the AI Market:\n\n1. **Increased Adoption Across Industries**..." + }, + { + "role": "Healthcare Market Analyst", + "content": "### Healthcare Market Analysis: Trends, Opportunities, and Challenges\n\n#### Current Trends in the Healthcare Market:\n\n1. **Telehealth Expansion**..." + }, + { + "role": "Fintech Market Analyst", + "content": "### Fintech Market Analysis: Trends, Opportunities, and Challenges\n\n#### Current Trends in the Fintech Market:\n\n1. **Digital Payments Proliferation**...." + }, + { + "role": "E-commerce Market Analyst", + "content": "### E-commerce Market Analysis: Trends, Opportunities, and Challenges\n\n#### Current Trends in the E-commerce Market:\n\n1. **Omnichannel Retailing**...." + }, + { + "role": "Director", + "content": "### Feedback for Worker Agents\n\n#### AI Market Analyst\n\n**Strengths:**\n- Comprehensive coverage of current trends, growth opportunities, and challenges in the AI market.\n- Clear categorization of insights, making it easy to follow and understand.\n\n**Weaknesses....." 
+ }, + { + "role": "System", + "content": "--- Loop 1/1 completed ---" + } + ], + "number_of_agents": 4, + "service_tier": "standard", + "execution_time": 94.07934331893921, + "usage": { + "input_tokens": 35, + "output_tokens": 3827, + "total_tokens": 3862, + "billing_info": { + "cost_breakdown": { + "agent_cost": 0.04, + "input_token_cost": 0.000105, + "output_token_cost": 0.057405, + "token_counts": { + "total_input_tokens": 35, + "total_output_tokens": 3827, + "total_tokens": 3862 + }, + "num_agents": 4, + "service_tier": "standard", + "night_time_discount_applied": false + }, + "total_cost": 0.09751, + "discount_active": false, + "discount_type": "none", + "discount_percentage": 0 + } + } +} +``` + +## Configuration Options + +| Parameter | Type | Description | Default | +|-----------|------|-------------|---------| +| `role` | string | Agent role: "manager" or "worker" | "worker" | +| `agents` | Array | Mix of manager and worker agents | Required | +| `max_loops` | integer | Coordination rounds for managers | 1 | + +## Best Practices + +- Clearly define manager and worker roles using the `role` parameter +- Give managers higher `max_loops` for coordination activities +- Design worker agents with specialized, focused responsibilities +- Use for complex projects requiring oversight and coordination + +## Related Swarm Types + +- [SequentialWorkflow](sequential_workflow.md) - For linear task progression +- [MultiAgentRouter](multi_agent_router.md) - For intelligent task routing +- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic hierarchy creation + +-------------------------------------------------- + +# File: swarms_cloud/index.md + + + +-------------------------------------------------- + +# File: swarms_cloud/launch.md # Swarms Cloud API Client Documentation @@ -51689,7 +57805,261 @@ The client is not thread-safe by default. For concurrent usage, create separate -------------------------------------------------- -# File: swarms_cloud\mcp.md +# File: swarms_cloud/majority_voting.md + +# MajorityVoting + +*Implements robust decision-making through consensus and voting* + +**Swarm Type**: `MajorityVoting` + +## Overview + +The MajorityVoting swarm type implements robust decision-making through consensus mechanisms, ideal for tasks requiring collective intelligence or verification. Multiple agents independently analyze the same problem and vote on the best solution, ensuring high-quality, well-validated outcomes through democratic consensus. + +Key features: +- **Consensus-Based Decisions**: Multiple agents vote on the best solution +- **Quality Assurance**: Reduces individual agent bias through collective input +- **Democratic Process**: Fair and transparent decision-making mechanism +- **Robust Validation**: Multiple perspectives ensure thorough analysis + +## Use Cases + +- Critical decision-making requiring validation +- Quality assurance and verification tasks +- Complex problem solving with multiple viable solutions +- Risk assessment and evaluation scenarios + +## API Usage + +### Basic MajorityVoting Example + +=== "Shell (curl)" + ```bash + curl -X POST "https://api.swarms.world/v1/swarm/completions" \ + -H "x-api-key: $SWARMS_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "Investment Decision Voting", + "description": "Multiple financial experts vote on investment recommendations", + "swarm_type": "MajorityVoting", + "task": "Evaluate whether to invest $1M in a renewable energy startup. 
Consider market potential, financial projections, team strength, and competitive landscape.", + "agents": [ + { + "agent_name": "Growth Investor", + "description": "Focuses on growth potential and market opportunity", + "system_prompt": "You are a growth-focused venture capitalist. Evaluate investments based on market size, scalability, and growth potential. Provide a recommendation with confidence score.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Financial Analyst", + "description": "Analyzes financial metrics and projections", + "system_prompt": "You are a financial analyst specializing in startups. Evaluate financial projections, revenue models, and unit economics. Provide a recommendation with confidence score.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + }, + { + "agent_name": "Technical Due Diligence", + "description": "Evaluates technology and product viability", + "system_prompt": "You are a technical due diligence expert. Assess technology viability, intellectual property, product-market fit, and technical risks. Provide a recommendation with confidence score.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Market Analyst", + "description": "Analyzes market conditions and competition", + "system_prompt": "You are a market research analyst. Evaluate market dynamics, competitive landscape, regulatory environment, and market timing. Provide a recommendation with confidence score.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Risk Assessor", + "description": "Identifies and evaluates investment risks", + "system_prompt": "You are a risk assessment specialist. Identify potential risks, evaluate mitigation strategies, and assess overall risk profile. Provide a recommendation with confidence score.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + } + ], + "max_loops": 1 + }' + ``` + +=== "Python (requests)" + ```python + import requests + import json + + API_BASE_URL = "https://api.swarms.world" + API_KEY = "your_api_key_here" + + headers = { + "x-api-key": API_KEY, + "Content-Type": "application/json" + } + + swarm_config = { + "name": "Investment Decision Voting", + "description": "Multiple financial experts vote on investment recommendations", + "swarm_type": "MajorityVoting", + "task": "Evaluate whether to invest $1M in a renewable energy startup. Consider market potential, financial projections, team strength, and competitive landscape.", + "agents": [ + { + "agent_name": "Growth Investor", + "description": "Focuses on growth potential and market opportunity", + "system_prompt": "You are a growth-focused venture capitalist. Evaluate investments based on market size, scalability, and growth potential. Provide a recommendation with confidence score.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Financial Analyst", + "description": "Analyzes financial metrics and projections", + "system_prompt": "You are a financial analyst specializing in startups. Evaluate financial projections, revenue models, and unit economics. Provide a recommendation with confidence score.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + }, + { + "agent_name": "Technical Due Diligence", + "description": "Evaluates technology and product viability", + "system_prompt": "You are a technical due diligence expert. 
Assess technology viability, intellectual property, product-market fit, and technical risks. Provide a recommendation with confidence score.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Market Analyst", + "description": "Analyzes market conditions and competition", + "system_prompt": "You are a market research analyst. Evaluate market dynamics, competitive landscape, regulatory environment, and market timing. Provide a recommendation with confidence score.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Risk Assessor", + "description": "Identifies and evaluates investment risks", + "system_prompt": "You are a risk assessment specialist. Identify potential risks, evaluate mitigation strategies, and assess overall risk profile. Provide a recommendation with confidence score.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + } + ], + "max_loops": 1 + } + + response = requests.post( + f"{API_BASE_URL}/v1/swarm/completions", + headers=headers, + json=swarm_config + ) + + if response.status_code == 200: + result = response.json() + print("MajorityVoting completed successfully!") + print(f"Final decision: {result['output']['consensus_decision']}") + print(f"Vote breakdown: {result['metadata']['vote_breakdown']}") + print(f"Cost: ${result['metadata']['billing_info']['total_cost']}") + print(f"Execution time: {result['metadata']['execution_time_seconds']} seconds") + else: + print(f"Error: {response.status_code} - {response.text}") + ``` + +**Example Response**: +```json +{ + "job_id": "swarms-1WFsSJU2KcvY11lxRMjdQNWFHArI", + "status": "success", + "swarm_name": "Investment Decision Voting", + "description": "Multiple financial experts vote on investment recommendations", + "swarm_type": "MajorityVoting", + "output": [ + { + "role": "Financial Analyst", + "content": [ + "To evaluate the potential investment in a renewable energy startup, we will assess the technology viability, intellectual property, product-market fit, and technical risks, along with the additional factors of market ....." + ] + }, + { + "role": "Technical Due Diligence", + "content": [ + "To evaluate the potential investment in a renewable energy startup, we will analyze the relevant market dynamics, competitive landscape, regulatory environment, and market timing. Here's the breakdown of the assessment......." + ] + }, + { + "role": "Market Analyst", + "content": [ + "To evaluate the potential investment in a renewable energy startup, let's break down the key factors:\n\n1. **Market Potential........" + ] + }, + { + "role": "Growth Investor", + "content": [ + "To evaluate the potential investment in a renewable energy startup, we need to assess various risk factors and mitigation strategies across several key areas: market potential, financial projections, team strength, and competitive landscape.\n\n### 1. Market Potential\n**Risks:**\n- **Regulatory Changes................" + ] + }, + { + "role": "Risk Assessor", + "content": [ + "To provide a comprehensive evaluation of whether to invest $1M in the renewable energy startup, let's break down the key areas.........." + ] + }, + { + "role": "Risk Assessor", + "content": "To evaluate the potential investment in a renewable energy startup, we need to assess various risk factors and mitigation strategies across several key areas....." 
+ } + ], + "number_of_agents": 5, + "service_tier": "standard", + "execution_time": 61.74853563308716, + "usage": { + "input_tokens": 39, + "output_tokens": 8468, + "total_tokens": 8507, + "billing_info": { + "cost_breakdown": { + "agent_cost": 0.05, + "input_token_cost": 0.000117, + "output_token_cost": 0.12702, + "token_counts": { + "total_input_tokens": 39, + "total_output_tokens": 8468, + "total_tokens": 8507 + }, + "num_agents": 5, + "service_tier": "standard", + "night_time_discount_applied": false + }, + "total_cost": 0.177137, + "discount_active": false, + "discount_type": "none", + "discount_percentage": 0 + } + } +} +``` + +## Best Practices + +- Use odd numbers of agents to avoid tie votes +- Design agents with different perspectives for robust evaluation +- Include confidence scores in agent prompts for weighted decisions +- Ideal for high-stakes decisions requiring validation + +## Related Swarm Types + +- [GroupChat](group_chat.md) - For discussion-based consensus +- [MixtureOfAgents](mixture_of_agents.md) - For diverse expertise collaboration +- [HierarchicalSwarm](hierarchical_swarm.md) - For structured decision-making + +-------------------------------------------------- + +# File: swarms_cloud/mcp.md # Swarms API as MCP @@ -52036,7 +58406,7 @@ if __name__ == "__main__": -------------------------------------------------- -# File: swarms_cloud\mcs_api.md +# File: swarms_cloud/mcs_api.md # Medical Coder Swarm API Documentation @@ -52768,7 +59138,478 @@ with MCSClient() as client: -------------------------------------------------- -# File: swarms_cloud\phala_deploy.md +# File: swarms_cloud/migration.md + +# Swarms API Documentation Has Moved 🚀 + +We are excited to announce that the documentation for the Swarms API has been migrated to a brand new platform: [docs.swarms.ai](https://docs.swarms.ai). + +Our new documentation site offers a more beautiful, user-friendly, and streamlined experience for developers and users alike. You’ll find improved navigation, clearer guides, and up-to-date references for all Swarms Cloud API features. + +**What’s new at [docs.swarms.ai](https://docs.swarms.ai)?** + +- Modern, easy-to-navigate interface + +- Comprehensive API reference and usage examples + +- Quickstart guides and best practices + +- Regular updates and new content + +- Enhanced search and accessibility + +If you have previously bookmarked or referenced the old documentation, please update your links to point to the new site. All future updates, new features, and support resources will be available exclusively at [docs.swarms.ai](https://docs.swarms.ai). + +Thank you for being part of the Swarms community! If you have any questions or feedback about the new documentation, feel free to reach out via our [Discord](https://discord.gg/EamjgSaEQf) or [GitHub](https://github.com/kyegomez/swarms). + +Happy building with Swarms! + +-------------------------------------------------- + +# File: swarms_cloud/mixture_of_agents.md + +# MixtureOfAgents + +*Builds diverse teams of specialized agents for complex problem solving* + +**Swarm Type**: `MixtureOfAgents` + +## Overview + +The MixtureOfAgents swarm type combines multiple agent types with different specializations to tackle diverse aspects of complex problems. Each agent contributes unique skills and perspectives, making this architecture ideal for tasks requiring multiple types of expertise working in harmony. 
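As a quick orientation before the full API examples below: a completed MixtureOfAgents run returns its result as a list of role/content messages in the `output` field, with token usage and billing details under `usage.billing_info` (see the example response later on this page). The sketch below shows one way to walk that structure; the field names are taken from that example and are an assumption for other API versions.

```python
# Minimal sketch: walk a parsed MixtureOfAgents response.
# `result` is the JSON returned by POST /v1/swarm/completions (full request
# examples appear in the API Usage section below). Field names follow the
# example response on this page and may differ in other API versions.

def summarize_run(result: dict) -> None:
    # Each output entry is a {"role": ..., "content": ...} message, one per
    # agent plus any system/aggregator messages.
    for message in result.get("output", []):
        print(f"--- {message['role']} ---")
        print(message["content"])

    usage = result.get("usage", {})
    billing = usage.get("billing_info", {})
    print("Tokens used:", usage.get("total_tokens"))
    print("Total cost ($):", billing.get("total_cost"))
```
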
+ +Key features: +- **Diverse Expertise**: Combines agents with different specializations +- **Collaborative Problem Solving**: Agents work together leveraging their unique strengths +- **Comprehensive Coverage**: Ensures all aspects of complex tasks are addressed +- **Balanced Perspectives**: Multiple viewpoints for robust decision-making + +## Use Cases + +- Complex research projects requiring multiple disciplines +- Business analysis needing various functional perspectives +- Content creation requiring different expertise areas +- Strategic planning with multiple considerations + +## API Usage + +### Basic MixtureOfAgents Example + +=== "Shell (curl)" + ```bash + curl -X POST "https://api.swarms.world/v1/swarm/completions" \ + -H "x-api-key: $SWARMS_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "Business Strategy Mixture", + "description": "Diverse team analyzing business strategy from multiple perspectives", + "swarm_type": "MixtureOfAgents", + "task": "Develop a comprehensive market entry strategy for a new AI product in the healthcare sector", + "agents": [ + { + "agent_name": "Market Research Analyst", + "description": "Analyzes market trends and opportunities", + "system_prompt": "You are a market research expert specializing in healthcare technology. Analyze market size, trends, and competitive landscape.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Financial Analyst", + "description": "Evaluates financial viability and projections", + "system_prompt": "You are a financial analyst expert. Assess financial implications, ROI, and cost structures for business strategies.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + }, + { + "agent_name": "Regulatory Expert", + "description": "Analyzes compliance and regulatory requirements", + "system_prompt": "You are a healthcare regulatory expert. Analyze compliance requirements, regulatory pathways, and potential barriers.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.1 + }, + { + "agent_name": "Technology Strategist", + "description": "Evaluates technical feasibility and strategy", + "system_prompt": "You are a technology strategy expert. Assess technical requirements, implementation challenges, and scalability.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + } + ], + "max_loops": 1 + }' + ``` + +=== "Python (requests)" + ```python + import requests + import json + + API_BASE_URL = "https://api.swarms.world" + API_KEY = "your_api_key_here" + + headers = { + "x-api-key": API_KEY, + "Content-Type": "application/json" + } + + swarm_config = { + "name": "Business Strategy Mixture", + "description": "Diverse team analyzing business strategy from multiple perspectives", + "swarm_type": "MixtureOfAgents", + "task": "Develop a comprehensive market entry strategy for a new AI product in the healthcare sector", + "agents": [ + { + "agent_name": "Market Research Analyst", + "description": "Analyzes market trends and opportunities", + "system_prompt": "You are a market research expert specializing in healthcare technology. Analyze market size, trends, and competitive landscape.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Financial Analyst", + "description": "Evaluates financial viability and projections", + "system_prompt": "You are a financial analyst expert. 
Assess financial implications, ROI, and cost structures for business strategies.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + }, + { + "agent_name": "Regulatory Expert", + "description": "Analyzes compliance and regulatory requirements", + "system_prompt": "You are a healthcare regulatory expert. Analyze compliance requirements, regulatory pathways, and potential barriers.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.1 + }, + { + "agent_name": "Technology Strategist", + "description": "Evaluates technical feasibility and strategy", + "system_prompt": "You are a technology strategy expert. Assess technical requirements, implementation challenges, and scalability.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + } + ], + "max_loops": 1 + } + + response = requests.post( + f"{API_BASE_URL}/v1/swarm/completions", + headers=headers, + json=swarm_config + ) + + if response.status_code == 200: + result = response.json() + print("MixtureOfAgents swarm completed successfully!") + print(f"Cost: ${result['metadata']['billing_info']['total_cost']}") + print(f"Execution time: {result['metadata']['execution_time_seconds']} seconds") + print(f"Output: {result['output']}") + else: + print(f"Error: {response.status_code} - {response.text}") + ``` + +**Example Response**: +```json +{ + "job_id": "swarms-kBZaJg1uGTkRbLCAsGztL2jrp5Mj", + "status": "success", + "swarm_name": "Business Strategy Mixture", + "description": "Diverse team analyzing business strategy from multiple perspectives", + "swarm_type": "MixtureOfAgents", + "output": [ + { + "role": "System", + "content": "Team Name: Business Strategy Mixture\nTeam Description: Diverse team analyzing business strategy from multiple perspectives\nThese are the agents in your team. Each agent has a specific role and expertise to contribute to the team's objectives.\nTotal Agents: 4\n\nBelow is a summary of your team members and their primary responsibilities:\n| Agent Name | Description |\n|------------|-------------|\n| Market Research Analyst | Analyzes market trends and opportunities |\n| Financial Analyst | Evaluates financial viability and projections |\n| Regulatory Expert | Analyzes compliance and regulatory requirements |\n| Technology Strategist | Evaluates technical feasibility and strategy |\n\nEach agent is designed to handle tasks within their area of expertise. Collaborate effectively by assigning tasks according to these roles." + }, + { + "role": "Market Research Analyst", + "content": "To develop a comprehensive market entry strategy for a new AI product in the healthcare sector, we will leverage the expertise of each team member to cover all critical aspects of the strategy. Here's how each agent will contribute......." + }, + { + "role": "Technology Strategist", + "content": "To develop a comprehensive market entry strategy for a new AI product in the healthcare sector, we'll need to collaborate effectively with the team, leveraging each member's expertise. Here's how each agent can contribute to the strategy, along with a focus on the technical requirements, implementation challenges, and scalability from the technology strategist's perspective....." + }, + { + "role": "Financial Analyst", + "content": "Developing a comprehensive market entry strategy for a new AI product in the healthcare sector involves a multidisciplinary approach. Each agent in the Business Strategy Mixture team will play a crucial role in ensuring a successful market entry. 
Here's how the team can collaborate........" + }, + { + "role": "Regulatory Expert", + "content": "To develop a comprehensive market entry strategy for a new AI product in the healthcare sector, we need to leverage the expertise of each agent in the Business Strategy Mixture team. Below is an outline of how each team member can contribute to this strategy......" + }, + { + "role": "Aggregator Agent", + "content": "As the Aggregator Agent, I've observed and analyzed the responses from the Business Strategy Mixture team regarding the development of a comprehensive market entry strategy for a new AI product in the healthcare sector. Here's a summary of the key points ......" + } + ], + "number_of_agents": 4, + "service_tier": "standard", + "execution_time": 30.230480670928955, + "usage": { + "input_tokens": 30, + "output_tokens": 3401, + "total_tokens": 3431, + "billing_info": { + "cost_breakdown": { + "agent_cost": 0.04, + "input_token_cost": 0.00009, + "output_token_cost": 0.051015, + "token_counts": { + "total_input_tokens": 30, + "total_output_tokens": 3401, + "total_tokens": 3431 + }, + "num_agents": 4, + "service_tier": "standard", + "night_time_discount_applied": true + }, + "total_cost": 0.091105, + "discount_active": true, + "discount_type": "night_time", + "discount_percentage": 75 + } + } +} +``` + +## Best Practices + +- Select agents with complementary and diverse expertise +- Ensure each agent has a clear, specialized role +- Use for complex problems requiring multiple perspectives +- Design tasks that benefit from collaborative analysis + +## Related Swarm Types + +- [ConcurrentWorkflow](concurrent_workflow.md) - For parallel task execution +- [GroupChat](group_chat.md) - For collaborative discussion +- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic team assembly + +-------------------------------------------------- + +# File: swarms_cloud/multi_agent_router.md + +# MultiAgentRouter + +*Intelligent task dispatcher distributing work based on agent capabilities* + +**Swarm Type**: `MultiAgentRouter` + +## Overview + +The MultiAgentRouter acts as an intelligent task dispatcher, distributing work across agents based on their capabilities and current workload. This architecture analyzes incoming tasks and automatically routes them to the most suitable agents, optimizing both efficiency and quality of outcomes. 
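For reference, the router reports which agent it selected, and why, as an "Agent Router" message inside the response `output`, as shown in the example response later on this page. The sketch below pulls that routing decision out of a completed response; the role name and field layout are taken from that example and should be treated as an assumption rather than a guaranteed API contract.

```python
# Minimal sketch: extract the router's selections from a MultiAgentRouter response.
# Assumes `result` is the parsed JSON from POST /v1/swarm/completions (see the
# full request examples below); the "Agent Router" role follows the example
# response on this page.

def routing_decisions(result: dict) -> list:
    """Return the raw content of every 'Agent Router' message in the output."""
    return [
        message["content"]
        for message in result.get("output", [])
        if message.get("role") == "Agent Router"
    ]

# Example usage once you have a response:
# for decision in routing_decisions(result):
#     print(decision)  # e.g. "selected_agent='Billing Specialist' reasoning='...'"
```
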
+ +Key features: +- **Intelligent Routing**: Automatically assigns tasks to best-suited agents +- **Capability Matching**: Matches task requirements with agent specializations +- **Load Balancing**: Distributes workload efficiently across available agents +- **Dynamic Assignment**: Adapts routing based on agent performance and availability + +## Use Cases + +- Customer service request routing +- Content categorization and processing +- Technical support ticket assignment +- Multi-domain question answering + +## API Usage + +### Basic MultiAgentRouter Example + +=== "Shell (curl)" + ```bash + curl -X POST "https://api.swarms.world/v1/swarm/completions" \ + -H "x-api-key: $SWARMS_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "Customer Support Router", + "description": "Route customer inquiries to specialized support agents", + "swarm_type": "MultiAgentRouter", + "task": "Handle multiple customer inquiries: 1) Billing question about overcharge, 2) Technical issue with mobile app login, 3) Product recommendation for enterprise client, 4) Return policy question", + "agents": [ + { + "agent_name": "Billing Specialist", + "description": "Handles billing, payments, and account issues", + "system_prompt": "You are a billing specialist. Handle all billing inquiries, payment issues, refunds, and account-related questions with empathy and accuracy.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Technical Support", + "description": "Resolves technical issues and troubleshooting", + "system_prompt": "You are a technical support specialist. Diagnose and resolve technical issues, provide step-by-step troubleshooting, and escalate complex problems.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + }, + { + "agent_name": "Sales Consultant", + "description": "Provides product recommendations and sales support", + "system_prompt": "You are a sales consultant. Provide product recommendations, explain features and benefits, and help customers find the right solutions.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.4 + }, + { + "agent_name": "Policy Advisor", + "description": "Explains company policies and procedures", + "system_prompt": "You are a policy advisor. Explain company policies, terms of service, return procedures, and compliance requirements clearly.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.1 + } + ], + "max_loops": 1 + }' + ``` + +=== "Python (requests)" + ```python + import requests + import json + + API_BASE_URL = "https://api.swarms.world" + API_KEY = "your_api_key_here" + + headers = { + "x-api-key": API_KEY, + "Content-Type": "application/json" + } + + swarm_config = { + "name": "Customer Support Router", + "description": "Route customer inquiries to specialized support agents", + "swarm_type": "MultiAgentRouter", + "task": "Handle multiple customer inquiries: 1) Billing question about overcharge, 2) Technical issue with mobile app login, 3) Product recommendation for enterprise client, 4) Return policy question", + "agents": [ + { + "agent_name": "Billing Specialist", + "description": "Handles billing, payments, and account issues", + "system_prompt": "You are a billing specialist. 
Handle all billing inquiries, payment issues, refunds, and account-related questions with empathy and accuracy.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Technical Support", + "description": "Resolves technical issues and troubleshooting", + "system_prompt": "You are a technical support specialist. Diagnose and resolve technical issues, provide step-by-step troubleshooting, and escalate complex problems.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + }, + { + "agent_name": "Sales Consultant", + "description": "Provides product recommendations and sales support", + "system_prompt": "You are a sales consultant. Provide product recommendations, explain features and benefits, and help customers find the right solutions.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.4 + }, + { + "agent_name": "Policy Advisor", + "description": "Explains company policies and procedures", + "system_prompt": "You are a policy advisor. Explain company policies, terms of service, return procedures, and compliance requirements clearly.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.1 + } + ], + "max_loops": 1 + } + + response = requests.post( + f"{API_BASE_URL}/v1/swarm/completions", + headers=headers, + json=swarm_config + ) + + if response.status_code == 200: + result = response.json() + print("MultiAgentRouter completed successfully!") + print(f"Routing decisions: {result['metadata']['routing_decisions']}") + print(f"Cost: ${result['metadata']['billing_info']['total_cost']}") + print(f"Execution time: {result['metadata']['execution_time_seconds']} seconds") + print(f"Customer responses: {result['output']}") + else: + print(f"Error: {response.status_code} - {response.text}") + ``` + +**Example Response**: +```json +{ + "job_id": "swarms-OvOZHubprE3thzLmRdNBZAxA6om4", + "status": "success", + "swarm_name": "Customer Support Router", + "description": "Route customer inquiries to specialized support agents", + "swarm_type": "MultiAgentRouter", + "output": [ + { + "role": "user", + "content": "Handle multiple customer inquiries: 1) Billing question about overcharge, 2) Technical issue with mobile app login, 3) Product recommendation for enterprise client, 4) Return policy question" + }, + { + "role": "Agent Router", + "content": "selected_agent='Billing Specialist' reasoning='The task involves multiple inquiries, but the first one is about a billing question regarding an overcharge. Billing issues often require immediate attention to ensure customer satisfaction and prevent further complications. Therefore, the Billing Specialist is the most appropriate agent to handle this task. They can address the billing question directly and potentially coordinate with other agents for the remaining inquiries.' modified_task='Billing question about overcharge'" + }, + { + "role": "Billing Specialist", + "content": "Of course, I'd be happy to help you with your billing question regarding an overcharge. Could you please provide me with more details about the charge in question, such as the date it occurred and the amount? This information will help me look into your account and resolve the issue as quickly as possible." 
+ } + ], + "number_of_agents": 4, + "service_tier": "standard", + "execution_time": 7.800086975097656, + "usage": { + "input_tokens": 28, + "output_tokens": 221, + "total_tokens": 249, + "billing_info": { + "cost_breakdown": { + "agent_cost": 0.04, + "input_token_cost": 0.000084, + "output_token_cost": 0.003315, + "token_counts": { + "total_input_tokens": 28, + "total_output_tokens": 221, + "total_tokens": 249 + }, + "num_agents": 4, + "service_tier": "standard", + "night_time_discount_applied": true + }, + "total_cost": 0.043399, + "discount_active": true, + "discount_type": "night_time", + "discount_percentage": 75 + } + } +} +``` + +## Best Practices + +- Define agents with clear, distinct specializations +- Use descriptive agent names and descriptions for better routing +- Ideal for handling diverse task types that require different expertise +- Monitor routing decisions to optimize agent configurations + +## Related Swarm Types + +- [HierarchicalSwarm](hierarchical_swarm.md) - For structured task management +- [ConcurrentWorkflow](concurrent_workflow.md) - For parallel task processing +- [AutoSwarmBuilder](auto_swarm_builder.md) - For automatic routing setup + +-------------------------------------------------- + +# File: swarms_cloud/phala_deploy.md # 🔐 Swarms x Phala Deployment Guide @@ -52841,7 +59682,7 @@ For more comprehensive documentation and examples, visit our [Official Documenta -------------------------------------------------- -# File: swarms_cloud\production_deployment.md +# File: swarms_cloud/production_deployment.md # Enterprise Guide to High-Performance Multi-Agent LLM Deployments ------- @@ -53165,7 +60006,7 @@ In the rapidly evolving landscape of artificial intelligence and natural languag -------------------------------------------------- -# File: swarms_cloud\python_client.md +# File: swarms_cloud/python_client.md # Swarms Cloud API Client Documentation @@ -53950,7 +60791,7 @@ This example creates a sequential workflow swarm with three agents to research q -------------------------------------------------- -# File: swarms_cloud\quickstart.md +# File: swarms_cloud/quickstart.md # Swarms Quickstart Guide @@ -55120,7 +61961,7 @@ Join our community of agent engineers and researchers for technical support, cut -------------------------------------------------- -# File: swarms_cloud\rate_limits.md +# File: swarms_cloud/rate_limits.md # Swarms API Rate Limits @@ -55321,7 +62162,7 @@ Visit [Swarms Platform Account](https://swarms.world/platform/account) to upgrad -------------------------------------------------- -# File: swarms_cloud\rust_client.md +# File: swarms_cloud/rust_client.md # Swarms Client - Production Grade Rust SDK @@ -55558,7 +62399,6 @@ Available swarm types for different execution patterns. | `Auto` | Automatically selects the best swarm type | | `MajorityVoting` | Agents vote on decisions | | `Malt` | Multi-Agent Language Tasks | -| `DeepResearchSwarm` | Specialized for deep research tasks | ## Detailed Examples @@ -56059,7 +62899,226 @@ This project is licensed under the MIT License - see the LICENSE file for detail -------------------------------------------------- -# File: swarms_cloud\subscription_tiers.md +# File: swarms_cloud/sequential_workflow.md + +# SequentialWorkflow + +*Executes tasks in a strict, predefined order for step-by-step processing* + +**Swarm Type**: `SequentialWorkflow` + +## Overview + +The SequentialWorkflow swarm type executes tasks in a strict, predefined order where each step depends on the completion of the previous one. 
This architecture is perfect for workflows that require a linear progression of tasks, ensuring that each agent builds upon the work of the previous agent. + +Key features: +- **Ordered Execution**: Agents execute in a specific, predefined sequence +- **Step Dependencies**: Each step builds upon previous results +- **Predictable Flow**: Clear, linear progression through the workflow +- **Quality Control**: Each agent can validate and enhance previous work + +## Use Cases + +- Document processing pipelines +- Multi-stage analysis workflows +- Content creation and editing processes +- Data transformation and validation pipelines + +## API Usage + +### Basic SequentialWorkflow Example + +=== "Shell (curl)" + ```bash + curl -X POST "https://api.swarms.world/v1/swarm/completions" \ + -H "x-api-key: $SWARMS_API_KEY" \ + -H "Content-Type: application/json" \ + -d '{ + "name": "Content Creation Pipeline", + "description": "Sequential content creation from research to final output", + "swarm_type": "SequentialWorkflow", + "task": "Create a comprehensive blog post about the future of renewable energy", + "agents": [ + { + "agent_name": "Research Specialist", + "description": "Conducts thorough research on the topic", + "system_prompt": "You are a research specialist. Gather comprehensive, accurate information on the given topic from reliable sources.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Content Writer", + "description": "Creates engaging written content", + "system_prompt": "You are a skilled content writer. Transform research into engaging, well-structured articles that are informative and readable.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.6 + }, + { + "agent_name": "Editor", + "description": "Reviews and polishes the content", + "system_prompt": "You are a professional editor. Review content for clarity, grammar, flow, and overall quality. Make improvements while maintaining the author's voice.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.4 + }, + { + "agent_name": "SEO Optimizer", + "description": "Optimizes content for search engines", + "system_prompt": "You are an SEO expert. Optimize content for search engines while maintaining readability and quality.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + } + ], + "max_loops": 1 + }' + ``` + +=== "Python (requests)" + ```python + import requests + import json + + API_BASE_URL = "https://api.swarms.world" + API_KEY = "your_api_key_here" + + headers = { + "x-api-key": API_KEY, + "Content-Type": "application/json" + } + + swarm_config = { + "name": "Content Creation Pipeline", + "description": "Sequential content creation from research to final output", + "swarm_type": "SequentialWorkflow", + "task": "Create a comprehensive blog post about the future of renewable energy", + "agents": [ + { + "agent_name": "Research Specialist", + "description": "Conducts thorough research on the topic", + "system_prompt": "You are a research specialist. Gather comprehensive, accurate information on the given topic from reliable sources.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.3 + }, + { + "agent_name": "Content Writer", + "description": "Creates engaging written content", + "system_prompt": "You are a skilled content writer. 
Transform research into engaging, well-structured articles that are informative and readable.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.6 + }, + { + "agent_name": "Editor", + "description": "Reviews and polishes the content", + "system_prompt": "You are a professional editor. Review content for clarity, grammar, flow, and overall quality. Make improvements while maintaining the author's voice.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.4 + }, + { + "agent_name": "SEO Optimizer", + "description": "Optimizes content for search engines", + "system_prompt": "You are an SEO expert. Optimize content for search engines while maintaining readability and quality.", + "model_name": "gpt-4o", + "max_loops": 1, + "temperature": 0.2 + } + ], + "max_loops": 1 + } + + response = requests.post( + f"{API_BASE_URL}/v1/swarm/completions", + headers=headers, + json=swarm_config + ) + + if response.status_code == 200: + result = response.json() + print("SequentialWorkflow swarm completed successfully!") + print(f"Cost: ${result['metadata']['billing_info']['total_cost']}") + print(f"Execution time: {result['metadata']['execution_time_seconds']} seconds") + print(f"Final output: {result['output']}") + else: + print(f"Error: {response.status_code} - {response.text}") + ``` + +**Example Response**: +```json +{ + "job_id": "swarms-pbM8wqUwxq8afGeROV2A4xAcncd1", + "status": "success", + "swarm_name": "Content Creation Pipeline", + "description": "Sequential content creation from research to final output", + "swarm_type": "SequentialWorkflow", + "output": [ + { + "role": "Research Specialist", + "content": "\"**Title: The Future of Renewable Energy: Charting a Sustainable Path Forward**\n\nAs we navigate the complexities of the 21st century, the transition to renewable energy stands out as a critical endeavor to ensure a sustainable future......" + }, + { + "role": "SEO Optimizer", + "content": "\"**Title: The Future of Renewable Energy: Charting a Sustainable Path Forward**\n\nThe transition to renewable energy is crucial as we face the challenges of the 21st century, including climate change and dwindling fossil fuel resources......." + }, + { + "role": "Editor", + "content": "\"**Title: The Future of Renewable Energy: Charting a Sustainable Path Forward**\n\nAs we confront the challenges of the 21st century, transitioning to renewable energy is essential for a sustainable future. With climate change concerns escalating and fossil fuel reserves depleting, renewable energy is not just an option but a necessity...." + }, + { + "role": "Content Writer", + "content": "\"**Title: The Future of Renewable Energy: Charting a Sustainable Path Forward**\n\nAs we face the multifaceted challenges of the 21st century, transitioning to renewable energy emerges as not just an option but an essential step toward a sustainable future...." 
+ } + ], + "number_of_agents": 4, + "service_tier": "standard", + "execution_time": 72.23084282875061, + "usage": { + "input_tokens": 28, + "output_tokens": 3012, + "total_tokens": 3040, + "billing_info": { + "cost_breakdown": { + "agent_cost": 0.04, + "input_token_cost": 0.000084, + "output_token_cost": 0.04518, + "token_counts": { + "total_input_tokens": 28, + "total_output_tokens": 3012, + "total_tokens": 3040 + }, + "num_agents": 4, + "service_tier": "standard", + "night_time_discount_applied": true + }, + "total_cost": 0.085264, + "discount_active": true, + "discount_type": "night_time", + "discount_percentage": 75 + } + } +} +``` + +## Best Practices + +- Design agents with clear, sequential dependencies +- Ensure each agent builds meaningfully on the previous work +- Use for linear workflows where order matters +- Validate outputs at each step before proceeding + +## Related Swarm Types + +- [ConcurrentWorkflow](concurrent_workflow.md) - For parallel execution +- [AgentRearrange](agent_rearrange.md) - For dynamic sequencing +- [HierarchicalSwarm](hierarchical_swarm.md) - For structured workflows + +-------------------------------------------------- + +# File: swarms_cloud/subscription_tiers.md # Swarms Cloud Subscription Tiers @@ -56193,43 +63252,41 @@ This project is licensed under the MIT License - see the LICENSE file for detail -------------------------------------------------- -# File: swarms_cloud\swarm_types.md +# File: swarms_cloud/swarm_types.md # Multi-Agent Architectures -Each multi-agent architecture type is designed for specific use cases and can be combined to create powerful multi-agent systems. Here's a comprehensive overview of each available swarm: +Each multi-agent architecture type is designed for specific use cases and can be combined to create powerful multi-agent systems. Below is an overview of each available swarm type: | Swarm Type | Description | Learn More | -|---------------------|------------------------------------------------------------------------------|------------| -| AgentRearrange | Dynamically reorganizes agents to optimize task performance and efficiency. Optimizes agent performance by dynamically adjusting their roles and positions within the workflow. This architecture is particularly useful when the effectiveness of agents depends on their sequence or arrangement. | [Learn More](/swarms/structs/agent_rearrange) | -| MixtureOfAgents | Creates diverse teams of specialized agents, each bringing unique capabilities to solve complex problems. Each agent contributes unique skills to achieve the overall goal, making it excel at tasks requiring multiple types of expertise or processing. | [Learn More](/swarms/structs/moa) | -| SpreadSheetSwarm | Provides a structured approach to data management and operations, making it ideal for tasks involving data analysis, transformation, and systematic processing in a spreadsheet-like structure. | [Learn More](/swarms/structs/spreadsheet_swarm) | -| SequentialWorkflow | Ensures strict process control by executing tasks in a predefined order. Perfect for workflows where each step depends on the completion of previous steps. | [Learn More](/swarms/structs/sequential_workflow) | -| ConcurrentWorkflow | Maximizes efficiency by running independent tasks in parallel, significantly reducing overall processing time for complex operations. Ideal for independent tasks that can be processed simultaneously. 
| [Learn More](/swarms/structs/concurrentworkflow) | -| GroupChat | Enables dynamic collaboration between agents through a chat-based interface, facilitating real-time information sharing and decision-making. | [Learn More](/swarms/structs/group_chat) | -| MultiAgentRouter | Acts as an intelligent task dispatcher, ensuring optimal distribution of work across available agents based on their capabilities and current workload. | [Learn More](/swarms/structs/multi_agent_router) | -| AutoSwarmBuilder | Simplifies swarm creation by automatically configuring agent architectures based on task requirements and performance metrics. | [Learn More](/swarms/structs/auto_swarm_builder) | -| HiearchicalSwarm | Implements a structured approach to task management, with clear lines of authority and delegation across multiple agent levels. | [Learn More](/swarms/structs/multi_swarm_orchestration) | -| auto | Provides intelligent swarm selection based on context, automatically choosing the most effective architecture for given tasks. | [Learn More](/swarms/concept/how_to_choose_swarms) | -| MajorityVoting | Implements robust decision-making through consensus, particularly useful for tasks requiring collective intelligence or verification. | [Learn More](/swarms/structs/majorityvoting) | -| MALT | Specialized framework for language-based tasks, optimizing agent collaboration for complex language processing operations. | [Learn More](/swarms/structs/malt) | +|----------------------|------------------------------------------------------------------------------|------------| +| AgentRearrange | Dynamically reorganizes agents to optimize task performance and efficiency. Useful when agent effectiveness depends on their sequence or arrangement. | [Learn More](agent_rearrange.md) | +| MixtureOfAgents | Builds diverse teams of specialized agents, each contributing unique skills to solve complex problems. Excels at tasks requiring multiple types of expertise. | [Learn More](mixture_of_agents.md) | +| SequentialWorkflow | Executes tasks in a strict, predefined order. Perfect for workflows where each step depends on the completion of the previous one. | [Learn More](sequential_workflow.md) | +| ConcurrentWorkflow | Runs independent tasks in parallel, significantly reducing processing time for complex operations. Ideal for tasks that can be processed simultaneously. | [Learn More](concurrent_workflow.md) | +| GroupChat | Enables dynamic collaboration between agents through a chat-based interface, facilitating real-time information sharing and decision-making. | [Learn More](group_chat.md) | +| HierarchicalSwarm | Implements a structured, multi-level approach to task management, with clear lines of authority and delegation. | [Learn More](hierarchical_swarm.md) | +| MultiAgentRouter | Acts as an intelligent task dispatcher, distributing work across agents based on their capabilities and current workload. | [Learn More](multi_agent_router.md) | +| MajorityVoting | Implements robust decision-making through consensus, ideal for tasks requiring collective intelligence or verification. 
| [Learn More](majority_voting.md) | + + + + # Learn More -To learn more about Swarms architecture and how different swarm types work together, visit our comprehensive guides: +To explore Swarms architecture and how different swarm types work together, check out our comprehensive guides: - [Introduction to Multi-Agent Architectures](/swarms/concept/swarm_architectures) - - [How to Choose the Right Multi-Agent Architecture](/swarms/concept/how_to_choose_swarms) - - [Framework Architecture Overview](/swarms/concept/framework_architecture) - - [Building Custom Swarms](/swarms/structs/custom_swarm) -------------------------------------------------- -# File: swarms_cloud\swarms_api.md +# File: swarms_cloud/swarms_api.md # Swarms API Documentation @@ -57505,7 +64562,7 @@ Error responses include a detailed message explaining the issue: -------------------------------------------------- -# File: swarms_cloud\swarms_api_tools.md +# File: swarms_cloud/swarms_api_tools.md # Swarms API with Tools Guide @@ -57892,7 +64949,7 @@ if __name__ == "__main__": -------------------------------------------------- -# File: swarms_memory\chromadb.md +# File: swarms_memory/chromadb.md # ChromaDB Documentation @@ -58038,7 +65095,7 @@ By following this documentation, users can effectively utilize the ChromaDB modu -------------------------------------------------- -# File: swarms_memory\faiss.md +# File: swarms_memory/faiss.md # FAISSDB: Documentation @@ -58275,7 +65332,7 @@ By following this documentation, users can effectively utilize the `FAISSDB` cla -------------------------------------------------- -# File: swarms_memory\index.md +# File: swarms_memory/index.md # Announcing the Release of Swarms-Memory Package: Your Gateway to Efficient RAG Systems @@ -58453,7 +65510,7 @@ For more detailed usage examples and documentation, visit our [GitHub repository -------------------------------------------------- -# File: swarms_memory\pinecone.md +# File: swarms_memory/pinecone.md # PineconeMemory Documentation @@ -58637,7 +65694,7 @@ This concludes the detailed documentation for the `PineconeMemory` class. The cl -------------------------------------------------- -# File: swarms_platform\account_management.md +# File: swarms_platform/account_management.md # Swarms Platform Account Management Documentation @@ -58832,7 +65889,7 @@ For further assistance or to learn more about managing your account on the Swarm -------------------------------------------------- -# File: swarms_platform\agents\agents_api.md +# File: swarms_platform/agents/agents_api.md # Agents API Documentation @@ -59054,7 +66111,7 @@ The response will be a JSON object containing the result of the operation. 
Examp -------------------------------------------------- -# File: swarms_platform\agents\edit_agent.md +# File: swarms_platform/agents/edit_agent.md # Endpoint: Edit Agent @@ -59311,7 +66368,7 @@ This comprehensive documentation provides all the necessary information to effec -------------------------------------------------- -# File: swarms_platform\agents\fetch_agents.md +# File: swarms_platform/agents/fetch_agents.md # Documentation for `getAllAgents` API Endpoint @@ -59731,7 +66788,7 @@ This documentation provides a comprehensive guide to the `getAllAgents` API endp -------------------------------------------------- -# File: swarms_platform\apikeys.md +# File: swarms_platform/apikeys.md # Swarms Platform API Keys Documentation @@ -59823,7 +66880,7 @@ For any further questions or issues regarding API key management, please refer t -------------------------------------------------- -# File: swarms_platform\apps_page.md +# File: swarms_platform/apps_page.md # Swarms Marketplace Apps Documentation @@ -60012,7 +67069,7 @@ The Apps page puts you in complete control of your Swarms experience, ensuring y -------------------------------------------------- -# File: swarms_platform\index.md +# File: swarms_platform/index.md # Swarms Platform Documentation @@ -60139,7 +67196,7 @@ The Swarms Platform is a versatile and powerful ecosystem for managing intellige -------------------------------------------------- -# File: swarms_platform\monetize.md +# File: swarms_platform/monetize.md # Swarms.World Monetization Guide @@ -60271,7 +67328,7 @@ packages -------------------------------------------------- -# File: swarms_platform\playground_page.md +# File: swarms_platform/playground_page.md # Swarms API Playground Documentation @@ -60613,7 +67670,7 @@ The Swarms Playground is your gateway to understanding and implementing the Swar -------------------------------------------------- -# File: swarms_platform\prompts\add_prompt.md +# File: swarms_platform/prompts/add_prompt.md # Prompts API Documentation @@ -60796,7 +67853,7 @@ The response will be a JSON object containing the result of the operation. Examp -------------------------------------------------- -# File: swarms_platform\prompts\edit_prompt.md +# File: swarms_platform/prompts/edit_prompt.md # Endpoint: Edit Prompt @@ -61016,7 +68073,7 @@ This comprehensive documentation provides all the necessary information to effec -------------------------------------------------- -# File: swarms_platform\prompts\fetch_prompts.md +# File: swarms_platform/prompts/fetch_prompts.md # Documentation for `getAllPrompts` API Endpoint @@ -61347,7 +68404,7 @@ This documentation provides a comprehensive guide to the `getAllPrompts` API end -------------------------------------------------- -# File: swarms_platform\share_and_discover.md +# File: swarms_platform/share_and_discover.md # Swarms Marketplace Documentation @@ -61746,7 +68803,7 @@ Together, we're building the future of agent collaboration, one contribution at -------------------------------------------------- -# File: swarms_rs\agents.md +# File: swarms_rs/agents.md # swarms-rs @@ -62104,7 +69161,7 @@ Contributions to swarms-rs are welcome! Check out our [GitHub repository](https: -------------------------------------------------- -# File: swarms_rs\overview.md +# File: swarms_rs/overview.md # swarms-rs 🚀 @@ -62164,7 +69221,7 @@ Contributions to swarms-rs are welcome! 
Check out our [GitHub repository](https: -------------------------------------------------- -# File: swarms_tools\finance.md +# File: swarms_tools/finance.md # Swarms Finance Tools Documentation @@ -62494,7 +69551,7 @@ The package automatically handles most dependencies, but you may need to install -------------------------------------------------- -# File: swarms_tools\overview.md +# File: swarms_tools/overview.md # Swarms Tools @@ -62739,7 +69796,7 @@ Explore the limitless possibilities of agent-based systems. Together, we can bui -------------------------------------------------- -# File: swarms_tools\search.md +# File: swarms_tools/search.md # Search Tools Documentation @@ -62916,7 +69973,7 @@ For issues and feature requests, please visit the [GitHub repository](https://gi -------------------------------------------------- -# File: swarms_tools\twitter.md +# File: swarms_tools/twitter.md # Twitter Tool Documentation diff --git a/tests/agent/benchmark_agent/test_benchmark_init.py b/tests/agent/benchmark_agent/test_agent_benchmark_init.py similarity index 97% rename from tests/agent/benchmark_agent/test_benchmark_init.py rename to tests/agent/benchmark_agent/test_agent_benchmark_init.py index 1a700e22..5f852576 100644 --- a/tests/agent/benchmark_agent/test_benchmark_init.py +++ b/tests/agent/benchmark_agent/test_agent_benchmark_init.py @@ -118,6 +118,10 @@ def benchmark_multiple_agents(num_agents=100): "Throughput", f"{(num_agents/total_elapsed_time) * 1000:.2f} agents/second", ) + time_table.add_row( + "Agents per Minute", + f"{(num_agents/total_elapsed_time) * 60000:.0f} agents/minute", + ) # Add memory measurements memory_table.add_row(