# Swarms API: Orchestrating the Future of AI Agent Collaboration
In today's rapidly evolving AI landscape, we're witnessing a fundamental shift from single-agent AI systems to complex, collaborative multi-agent architectures. While individual AI models like GPT-4 and Claude have demonstrated remarkable capabilities, they often struggle with complex tasks requiring diverse expertise, nuanced decision-making, and specialized domain knowledge. Enter the Swarms API, an enterprise-grade solution designed to orchestrate collaborative intelligence through coordinated AI agent swarms.
## The Problem: The Limitations of Single-Agent AI
Despite significant advances in large language models and AI systems, single-agent architectures face inherent limitations when tackling complex real-world problems:
### Expertise Boundaries
Even the most advanced AI models have knowledge boundaries. No single model can possess expert-level knowledge across all domains simultaneously. When a task requires deep expertise in multiple areas (finance, law, medicine, and technical analysis, for example), a single agent quickly reaches its limits.
### Complex Reasoning Chains
Many real-world problems demand multistep reasoning with multiple feedback loops and verification processes. Single agents often struggle to maintain reasoning coherence through extended problem-solving journeys, leading to errors that compound over time.
### Workflow Orchestration
Enterprise applications frequently require sophisticated workflows with multiple handoffs, approvals, and specialized processing steps. Managing this orchestration with individual AI instances is inefficient and error-prone.
### Resource Optimization
Deploying high-powered AI models for every task is expensive and inefficient. Organizations need right-sized solutions that match computing resources to task requirements.
### Collaboration Mechanisms
The most sophisticated human problem-solving happens in teams, where specialists collaborate, debate, and refine solutions together. This collaborative intelligence is difficult to replicate with isolated AI agents.
## The Solution: Swarms API
The Swarms API addresses these challenges through a revolutionary approach to AI orchestration. By enabling multiple specialized agents to collaborate in coordinated swarms, it unlocks new capabilities previously unattainable with single-agent architectures.
### What is the Swarms API?
The Swarms API is an enterprise-grade platform that enables organizations to deploy and manage intelligent agent swarms in the cloud. Rather than relying on a single AI agent to handle complex tasks, the Swarms API orchestrates teams of specialized AI agents that work together, each handling specific aspects of a larger problem.
The platform provides a robust infrastructure for creating, executing, and managing sophisticated AI agent workflows without the burden of maintaining the underlying infrastructure. With its cloud-native architecture, the Swarms API offers scalability, reliability, and security essential for enterprise deployments.
## Core Capabilities
The Swarms API delivers a comprehensive suite of capabilities designed for production-grade AI orchestration:
### Intelligent Swarm Management
At its core, the Swarms API enables the creation and execution of collaborative agent swarms. These swarms consist of specialized AI agents designed to work together on complex tasks. Unlike traditional AI approaches where a single model handles the entire workload, swarms distribute tasks among specialized agents, each contributing its expertise to the collective solution.
For example, a financial analysis swarm might include:
- A data preprocessing agent that cleans and normalizes financial data
- A market analyst agent that identifies trends and patterns
- An economic forecasting agent that predicts future market conditions
- A report generation agent that compiles insights into a comprehensive analysis
By coordinating these specialized agents, the swarm can deliver more accurate, nuanced, and valuable results than any single agent could produce alone.
### Automatic Agent Generation
One of the most powerful features of the Swarms API is its ability to dynamically create optimized agents based on task requirements. Rather than manually configuring each agent in a swarm, users can specify the overall task and let the platform automatically generate appropriate agents with optimized prompts and configurations.
This automatic agent generation significantly reduces the expertise and effort required to deploy effective AI solutions. The system analyzes the task requirements and creates a set of agents specifically designed to address different aspects of the problem. This approach not only saves time but also improves the quality of results by ensuring each agent is properly configured for its specific role.
### Multiple Swarm Architectures
Different problems require different collaboration patterns. The Swarms API supports various swarm architectures to match specific workflow needs:
- **SequentialWorkflow**: Agents work in a predefined sequence, with each agent handling specific subtasks in order
- **ConcurrentWorkflow**: Multiple agents work simultaneously on different aspects of a task
- **GroupChat**: Agents collaborate in a discussion format to solve problems collectively
- **HierarchicalSwarm**: Organizes agents in a structured hierarchy with managers and workers
- **MajorityVoting**: Uses a consensus mechanism where multiple agents vote on the best solution
- **AutoSwarmBuilder**: Automatically designs and builds an optimal swarm architecture based on the task
- **MixtureOfAgents**: Combines multiple agent types to tackle diverse aspects of a problem
- **MultiAgentRouter**: Routes subtasks to specialized agents based on their capabilities
- **AgentRearrange**: Dynamically reorganizes the workflow between agents based on evolving task requirements
This flexibility allows organizations to select the most appropriate collaboration pattern for each specific use case, optimizing the balance between efficiency, thoroughness, and creativity.
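To make the choice concrete, here is a minimal sketch of how an architecture is selected in a request body. The helper is illustrative; the field names (`swarm_type`, `agents`, `max_loops`, `task`) follow the configuration format used in the examples later in this article.

```python
def build_swarm_config(name, task, agents, swarm_type="SequentialWorkflow"):
    """Assemble a swarm configuration with a chosen collaboration pattern."""
    return {
        "name": name,
        "description": f"{name} ({swarm_type})",
        "agents": agents,
        "max_loops": 1,
        "swarm_type": swarm_type,
        "task": task,
    }

# Switching architectures is a one-field change in the configuration.
config = build_swarm_config(
    name="Market Brief",
    task="Summarize overnight market movements.",
    agents=[{"agent_name": "Analyst", "model_name": "gpt-4o", "role": "worker"}],
    swarm_type="ConcurrentWorkflow",
)
```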
### Scheduled Execution
The Swarms API enables automated, scheduled swarm executions, allowing organizations to set up recurring tasks that run automatically at specified times. This feature is particularly valuable for regular reporting, monitoring, and analysis tasks that need to be performed on a consistent schedule.
For example, a financial services company could schedule a daily market analysis swarm to run before trading hours, providing updated insights based on overnight market movements. Similarly, a cybersecurity team might schedule hourly security assessment swarms to continuously monitor potential threats.
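Scheduling itself is handled by the platform, but computing the timestamp to submit is plain client-side logic. A small sketch of the "daily pre-market run" idea (the helper is illustrative, not part of the API):

```python
from datetime import datetime, timedelta

def next_daily_run(now, hour=8):
    """Return the next occurrence of hour:00 at or after `now`,
    e.g. the pre-market slot for a daily market-analysis swarm."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate

# A run requested at 9:30 AM is scheduled for 8:00 AM the next day.
print(next_daily_run(datetime(2025, 3, 10, 9, 30)).isoformat())
```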
### Comprehensive Logging
Transparency and auditability are essential for enterprise AI applications. The Swarms API provides comprehensive logging capabilities that track all API interactions, agent communications, and decision processes. This detailed logging enables:
- Debugging and troubleshooting swarm behaviors
- Auditing decision trails for compliance and quality assurance
- Analyzing performance patterns to identify optimization opportunities
- Documenting the rationale behind AI-generated recommendations
These logs provide valuable insights into how swarms operate and make decisions, increasing trust and enabling continuous improvement of AI workflows.
### Cost Management
AI deployment costs can quickly escalate without proper oversight. The Swarms API addresses this challenge through:
- **Predictable, transparent pricing**: Clear cost structures that make budgeting straightforward
- **Optimized resource utilization**: Intelligent allocation of computing resources based on task requirements
- **Detailed cost breakdowns**: Comprehensive reporting on token usage, agent costs, and total expenditures
- **Model flexibility**: Freedom to choose the most cost-effective models for each agent based on task complexity
This approach ensures organizations get maximum value from their AI investments without unexpected cost overruns.
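As an illustration of working with such breakdowns, here is a sketch that aggregates per-agent cost entries. The field names (`input_tokens`, `output_tokens`, `cost_usd`) are assumptions for the example; the actual schema is defined by the API's billing and log responses.

```python
def summarize_costs(breakdown):
    """Aggregate per-agent cost entries into a spend summary.
    Field names are illustrative, not the documented API schema."""
    total_tokens = sum(
        entry.get("input_tokens", 0) + entry.get("output_tokens", 0)
        for entry in breakdown
    )
    total_cost = sum(entry.get("cost_usd", 0.0) for entry in breakdown)
    return {"total_tokens": total_tokens, "total_cost_usd": round(total_cost, 6)}

# Hypothetical per-agent entries from a cost breakdown report.
usage = [
    {"agent": "Market Analyst", "input_tokens": 1200, "output_tokens": 400, "cost_usd": 0.018},
    {"agent": "Report Writer", "input_tokens": 900, "output_tokens": 1100, "cost_usd": 0.024},
]
summary = summarize_costs(usage)
```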
### Enterprise Security
Security is paramount for enterprise AI deployments. The Swarms API implements robust security measures including:
- **Full API key authentication**: Secure access control for all API interactions
- **Comprehensive key management**: Tools for creating, rotating, and revoking API keys
- **Usage monitoring**: Tracking and alerting for suspicious activity patterns
- **Secure data handling**: Appropriate data protection throughout the swarm execution lifecycle
These security features ensure that sensitive data and AI workflows remain protected in accordance with enterprise security requirements.
## How It Works: Behind the Scenes
The Swarms API operates on a sophisticated architecture designed for reliability, scalability, and performance. Here's a look at what happens when you submit a task to the Swarms API:
1. **Task Submission**: You send a request to the API with your task description and desired swarm configuration.
2. **Swarm Configuration**: The system either uses your specified agent configuration or automatically generates an optimal swarm structure based on the task requirements.
3. **Agent Initialization**: Each agent in the swarm is initialized with its specific instructions, model parameters, and role definitions.
4. **Orchestration Setup**: The system establishes the communication and workflow patterns between agents based on the selected swarm architecture.
5. **Execution**: The swarm begins working on the task, with agents collaborating according to their defined roles and relationships.
6. **Monitoring and Adjustment**: Throughout execution, the system monitors agent performance and makes adjustments as needed.
7. **Result Compilation**: Once the task is complete, the system compiles the results into the requested format.
8. **Response Delivery**: The final output is returned to you, along with metadata about the execution process.
This entire process happens seamlessly in the cloud, with the Swarms API handling all the complexities of agent coordination, resource allocation, and workflow management.
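The response in the final step bundles the task output with execution metadata. A small, schema-agnostic sketch of separating the two (the key names here are assumptions, not the documented response format):

```python
def unpack_result(response):
    """Separate the task output from execution metadata in a swarm response.
    Key names are illustrative; consult the API reference for the exact schema."""
    output = response.get("output")
    metadata = {k: v for k, v in response.items() if k != "output"}
    return output, metadata

# Hypothetical response payload for demonstration purposes.
output, meta = unpack_result(
    {"output": "Final report...", "swarm_id": "abc123", "execution_time": 42.5}
)
```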
## Real-World Applications
The Swarms API enables a wide range of applications across industries. Here are some compelling use cases that demonstrate its versatility:
### Financial Services
#### Investment Research
Financial institutions can deploy research swarms that combine market analysis, economic forecasting, company evaluation, and risk assessment. These swarms can evaluate investment opportunities much more comprehensively than single-agent systems, considering multiple factors simultaneously:
- Macroeconomic indicators
- Company fundamentals
- Market sentiment
- Technical analysis patterns
- Regulatory considerations
For example, an investment research swarm analyzing a potential stock purchase might include specialists in the company's industry, financial statement analysis, market trend identification, and risk assessment. This collaborative approach delivers more nuanced insights than any single analyst or model could produce independently.
#### Regulatory Compliance
Financial regulations are complex and constantly evolving. Compliance swarms can monitor regulatory changes, assess their impact on existing policies, and recommend appropriate adjustments. These swarms might include:
- Regulatory monitoring agents that track new rules and guidelines
- Policy analysis agents that evaluate existing compliance frameworks
- Gap assessment agents that identify discrepancies
- Documentation agents that update compliance materials
This approach ensures comprehensive coverage of regulatory requirements while minimizing compliance risks.
### Healthcare
#### Medical Research Analysis
The medical literature grows at an overwhelming pace, making it difficult for researchers and clinicians to stay current. Research analysis swarms can continuously scan new publications, identify relevant findings, and synthesize insights for specific research questions or clinical scenarios.
A medical research swarm might include:
- Literature scanning agents that identify relevant publications
- Methodology assessment agents that evaluate research quality
- Clinical relevance agents that determine practical applications
- Summary agents that compile key findings into accessible reports
This collaborative approach enables more thorough literature reviews and helps bridge the gap between research and clinical practice.
#### Treatment Planning
Complex medical cases often benefit from multidisciplinary input. Treatment planning swarms can integrate perspectives from different medical specialties, consider patient-specific factors, and recommend comprehensive care approaches.
For example, an oncology treatment planning swarm might include specialists in:
- Diagnostic interpretation
- Treatment protocol evaluation
- Drug interaction assessment
- Patient history analysis
- Evidence-based outcome prediction
By combining these specialized perspectives, the swarm can develop more personalized and effective treatment recommendations.
### Legal Services
#### Contract Analysis
Legal contracts contain numerous interconnected provisions that must be evaluated holistically. Contract analysis swarms can review complex agreements more thoroughly by assigning different sections to specialized agents:
- Definition analysis agents that ensure consistent terminology
- Risk assessment agents that identify potential liabilities
- Compliance agents that check regulatory requirements
- Precedent comparison agents that evaluate terms against standards
- Conflict detection agents that identify internal inconsistencies
This distributed approach enables more comprehensive contract reviews while reducing the risk of overlooking critical details.
#### Legal Research
Legal research requires examining statutes, case law, regulations, and scholarly commentary. Research swarms can conduct multi-faceted legal research by coordinating specialized agents focusing on different aspects of the legal landscape.
A legal research swarm might include:
- Statutory analysis agents that examine relevant laws
- Case law agents that review judicial precedents
- Regulatory agents that assess administrative rules
- Scholarly analysis agents that evaluate academic perspectives
- Synthesis agents that integrate findings into cohesive arguments
This collaborative approach produces more comprehensive legal analyses that consider multiple sources of authority.
### Research and Development
#### Scientific Literature Review
Scientific research increasingly spans multiple disciplines, making comprehensive literature reviews challenging. Literature review swarms can analyze publications across relevant fields, identify methodological approaches, and synthesize findings from diverse sources.
For example, a biomedical engineering literature review swarm might include specialists in:
- Materials science
- Cellular biology
- Clinical applications
- Regulatory requirements
- Statistical methods
By integrating insights from these different perspectives, the swarm can produce more comprehensive and valuable literature reviews.
#### Experimental Design
Designing robust experiments requires considering multiple factors simultaneously. Experimental design swarms can develop sophisticated research protocols by integrating methodological expertise, statistical considerations, practical constraints, and ethical requirements.
An experimental design swarm might coordinate:
- Methodology agents that design experimental procedures
- Statistical agents that determine appropriate sample sizes and analyses
- Logistics agents that assess practical feasibility
- Ethics agents that evaluate potential concerns
- Documentation agents that prepare formal protocols
This collaborative approach leads to more rigorous experimental designs while addressing potential issues preemptively.
### Software Development
#### Code Review and Optimization
Code review requires evaluating multiple aspects simultaneously: functionality, security, performance, maintainability, and adherence to standards. Code review swarms can distribute these concerns among specialized agents:
- Functionality agents that evaluate whether code meets requirements
- Security agents that identify potential vulnerabilities
- Performance agents that assess computational efficiency
- Style agents that check adherence to coding standards
- Documentation agents that review comments and documentation
By addressing these different aspects in parallel, code review swarms can provide more comprehensive feedback to development teams.
#### System Architecture Design
Designing complex software systems requires balancing numerous considerations. Architecture design swarms can develop more robust system designs by coordinating specialists in different architectural concerns:
- Scalability agents that evaluate growth potential
- Security agents that assess protective measures
- Performance agents that analyze efficiency
- Maintainability agents that consider long-term management
- Integration agents that evaluate external system connections
This collaborative approach leads to more balanced architectural decisions that address multiple requirements simultaneously.
## Getting Started with the Swarms API
The Swarms API is designed for straightforward integration into existing workflows. Let's walk through the setup process and explore some practical code examples for different industries.
### 1. Setting Up Your Environment
First, create an account on [swarms.world](https://swarms.world). After registration, navigate to the API key management interface at [https://swarms.world/platform/api-keys](https://swarms.world/platform/api-keys) to generate your API key.
Once you have your API key, set up your Python environment:
```bash
# Install required packages
pip install requests python-dotenv
```
Create a basic project structure:
```
swarms-project/
├── .env # Store your API key securely
├── swarms_client.py # Helper functions for API interaction
└── examples/ # Industry-specific examples
```
In your `.env` file, add your API key:
```
SWARMS_API_KEY=your_api_key_here
```
### 2. Creating a Basic Swarms Client
Let's create a simple client to interact with the Swarms API:
```python
# swarms_client.py
import os

import requests
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Configuration
API_KEY = os.getenv("SWARMS_API_KEY")
BASE_URL = "https://api.swarms.world"

# Standard headers for all requests
headers = {
    "x-api-key": API_KEY,
    "Content-Type": "application/json"
}


def check_api_health():
    """Simple health check to verify API connectivity."""
    response = requests.get(f"{BASE_URL}/health", headers=headers)
    response.raise_for_status()
    return response.json()


def run_swarm(swarm_config):
    """Execute a swarm with the provided configuration."""
    response = requests.post(
        f"{BASE_URL}/v1/swarm/completions",
        headers=headers,
        json=swarm_config
    )
    response.raise_for_status()
    return response.json()


def get_available_swarms():
    """Retrieve the list of available swarm types."""
    response = requests.get(f"{BASE_URL}/v1/swarms/available", headers=headers)
    response.raise_for_status()
    return response.json()


def get_available_models():
    """Retrieve the list of available AI models."""
    response = requests.get(f"{BASE_URL}/v1/models/available", headers=headers)
    response.raise_for_status()
    return response.json()


def get_swarm_logs():
    """Retrieve logs of previous swarm executions."""
    response = requests.get(f"{BASE_URL}/v1/swarm/logs", headers=headers)
    response.raise_for_status()
    return response.json()
```
### 3. Industry-Specific Examples
Let's explore practical applications of the Swarms API across different industries.
#### Healthcare: Clinical Research Assistant
This example creates a swarm that analyzes clinical trial data and summarizes findings:
```python
# healthcare_example.py
from swarms_client import run_swarm
import json


def clinical_research_assistant():
    """
    Create a swarm that analyzes clinical trial data, identifies patterns,
    and generates comprehensive research summaries.
    """
    swarm_config = {
        "name": "Clinical Research Assistant",
        "description": "Analyzes medical research data and synthesizes findings",
        "agents": [
            {
                "agent_name": "Data Preprocessor",
                "description": "Cleans and organizes clinical trial data",
                "system_prompt": "You are a data preprocessing specialist focused on clinical trials. "
                                 "Your task is to organize, clean, and structure raw clinical data for analysis. "
                                 "Identify and handle missing values, outliers, and inconsistencies in the data.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Clinical Analyst",
                "description": "Analyzes preprocessed data to identify patterns and insights",
                "system_prompt": "You are a clinical research analyst with expertise in interpreting medical data. "
                                 "Your job is to examine preprocessed clinical trial data, identify significant patterns, "
                                 "and determine the clinical relevance of these findings. Consider factors such as "
                                 "efficacy, safety profiles, and patient subgroups.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Medical Writer",
                "description": "Synthesizes analysis into comprehensive reports",
                "system_prompt": "You are a medical writer specializing in clinical research. "
                                 "Your task is to take the analyses provided and create comprehensive, "
                                 "well-structured reports that effectively communicate findings to both "
                                 "medical professionals and regulatory authorities. Follow standard "
                                 "medical publication guidelines.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            }
        ],
        "max_loops": 1,
        "swarm_type": "SequentialWorkflow",
        "task": "Analyze the provided Phase III clinical trial data for Drug XYZ, "
                "a novel treatment for type 2 diabetes. Identify efficacy patterns across "
                "different patient demographics, note any safety concerns, and prepare "
                "a comprehensive summary suitable for submission to regulatory authorities."
    }

    # Execute the swarm
    result = run_swarm(swarm_config)

    # Print formatted results
    print(json.dumps(result, indent=4))
    return result


if __name__ == "__main__":
    clinical_research_assistant()
```
#### Legal: Contract Analysis System
This example demonstrates a swarm designed to analyze complex legal contracts:
```python
# legal_example.py
from swarms_client import run_swarm
import json


def contract_analysis_system():
    """
    Create a swarm that thoroughly analyzes legal contracts,
    identifies potential risks, and suggests improvements.
    """
    swarm_config = {
        "name": "Contract Analysis System",
        "description": "Analyzes legal contracts for risks and improvement opportunities",
        "agents": [
            {
                "agent_name": "Clause Extractor",
                "description": "Identifies and categorizes key clauses in contracts",
                "system_prompt": "You are a legal document specialist. Your task is to "
                                 "carefully review legal contracts and identify all key clauses, "
                                 "categorizing them by type (liability, indemnification, termination, etc.). "
                                 "Extract each clause with its context and prepare them for detailed analysis.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Risk Assessor",
                "description": "Evaluates clauses for potential legal risks",
                "system_prompt": "You are a legal risk assessment expert. Your job is to "
                                 "analyze contract clauses and identify potential legal risks, "
                                 "exposure points, and unfavorable terms. Rate each risk on a "
                                 "scale of 1-5 and provide justification for your assessment.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Improvement Recommender",
                "description": "Suggests alternative language to mitigate risks",
                "system_prompt": "You are a contract drafting expert. Based on the risk "
                                 "assessment provided, suggest alternative language for "
                                 "problematic clauses to better protect the client's interests. "
                                 "Ensure suggestions are legally sound and professionally worded.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Summary Creator",
                "description": "Creates executive summary of findings and recommendations",
                "system_prompt": "You are a legal communication specialist. Create a clear, "
                                 "concise executive summary of the contract analysis, highlighting "
                                 "key risks and recommendations. Your summary should be understandable "
                                 "to non-legal executives while maintaining accuracy.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            }
        ],
        "max_loops": 1,
        "swarm_type": "SequentialWorkflow",
        "task": "Analyze the attached software licensing agreement between TechCorp and ClientInc. "
                "Identify all key clauses, assess potential risks to ClientInc, suggest improvements "
                "to better protect ClientInc's interests, and create an executive summary of findings."
    }

    # Execute the swarm
    result = run_swarm(swarm_config)

    # Print formatted results
    print(json.dumps(result, indent=4))
    return result


if __name__ == "__main__":
    contract_analysis_system()
```
#### Private Equity: Investment Opportunity Analysis
This example shows a swarm that performs comprehensive due diligence on potential investments:
```python
# private_equity_example.py
from swarms_client import run_swarm
import json


def investment_opportunity_analysis():
    """
    Create a swarm that performs comprehensive due diligence
    on potential private equity investment opportunities.
    """
    swarm_config = {
        "name": "PE Investment Analyzer",
        "description": "Performs comprehensive analysis of private equity investment opportunities",
        "agents": [
            {
                "agent_name": "Financial Analyst",
                "description": "Analyzes financial statements and projections",
                "system_prompt": "You are a private equity financial analyst with expertise in "
                                 "evaluating company financials. Review the target company's financial "
                                 "statements, analyze growth trajectories, profit margins, cash flow patterns, "
                                 "and debt structure. Identify financial red flags and growth opportunities.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Market Researcher",
                "description": "Assesses market conditions and competitive landscape",
                "system_prompt": "You are a market research specialist in the private equity sector. "
                                 "Analyze the target company's market position, industry trends, competitive "
                                 "landscape, and growth potential. Identify market-related risks and opportunities "
                                 "that could impact investment returns.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Operational Due Diligence",
                "description": "Evaluates operational efficiency and improvement opportunities",
                "system_prompt": "You are an operational due diligence expert. Analyze the target "
                                 "company's operational structure, efficiency metrics, supply chain, "
                                 "technology infrastructure, and management capabilities. Identify "
                                 "operational improvement opportunities that could increase company value.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Risk Assessor",
                "description": "Identifies regulatory, legal, and business risks",
                "system_prompt": "You are a risk assessment specialist in private equity. "
                                 "Evaluate potential regulatory challenges, legal liabilities, "
                                 "compliance issues, and business model vulnerabilities. Rate "
                                 "each risk based on likelihood and potential impact.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Investment Thesis Creator",
                "description": "Synthesizes analysis into comprehensive investment thesis",
                "system_prompt": "You are a private equity investment strategist. Based on the "
                                 "analyses provided, develop a comprehensive investment thesis "
                                 "that includes valuation assessment, potential returns, value "
                                 "creation opportunities, exit strategies, and investment recommendations.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            }
        ],
        "max_loops": 1,
        "swarm_type": "SequentialWorkflow",
        "task": "Perform comprehensive due diligence on HealthTech Inc., a potential acquisition "
                "target in the healthcare technology sector. The company develops remote patient "
                "monitoring solutions and has shown 35% year-over-year growth for the past three years. "
                "Analyze financials, market position, operational structure, potential risks, and "
                "develop an investment thesis with a recommended valuation range."
    }

    # Execute the swarm immediately. The API also supports scheduled
    # execution (for example, running this analysis each morning before
    # trading hours); see the platform documentation for the scheduling
    # endpoint and request format.
    result = run_swarm(swarm_config)

    # Print formatted results
    print(json.dumps(result, indent=4))
    return result


if __name__ == "__main__":
    investment_opportunity_analysis()
```
#### Education: Curriculum Development Assistant
This example demonstrates the `ConcurrentWorkflow` swarm type:
```python
# education_example.py
from swarms_client import run_swarm
import json


def curriculum_development_assistant():
    """
    Create a swarm that assists in developing educational curriculum
    with concurrent subject matter experts.
    """
    swarm_config = {
        "name": "Curriculum Development Assistant",
        "description": "Develops comprehensive educational curriculum",
        "agents": [
            {
                "agent_name": "Subject Matter Expert",
                "description": "Provides domain expertise on the subject",
                "system_prompt": "You are a subject matter expert in data science. "
                                 "Your role is to identify the essential concepts, skills, "
                                 "and knowledge that students need to master in a comprehensive "
                                 "data science curriculum. Focus on both theoretical foundations "
                                 "and practical applications, ensuring the content reflects current "
                                 "industry standards and practices.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Instructional Designer",
                "description": "Structures learning objectives and activities",
                "system_prompt": "You are an instructional designer specializing in technical education. "
                                 "Your task is to transform subject matter content into structured learning "
                                 "modules with clear objectives, engaging activities, and appropriate assessments. "
                                 "Design the learning experience to accommodate different learning styles and "
                                 "knowledge levels.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Assessment Specialist",
                "description": "Develops evaluation methods and assessments",
                "system_prompt": "You are an educational assessment specialist. "
                                 "Design comprehensive assessment strategies to evaluate student "
                                 "learning throughout the curriculum. Create formative and summative "
                                 "assessments, rubrics, and feedback mechanisms that align with learning "
                                 "objectives and provide meaningful insights into student progress.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            },
            {
                "agent_name": "Curriculum Integrator",
                "description": "Synthesizes input from all specialists into a cohesive curriculum",
                "system_prompt": "You are a curriculum development coordinator. "
                                 "Your role is to synthesize the input from subject matter experts, "
                                 "instructional designers, and assessment specialists into a cohesive, "
                                 "comprehensive curriculum. Ensure logical progression of topics, "
                                 "integration of theory and practice, and alignment between content, "
                                 "activities, and assessments.",
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1
            }
        ],
        "max_loops": 1,
        "swarm_type": "ConcurrentWorkflow",  # Experts work simultaneously before integration
        "task": "Develop a comprehensive 12-week data science curriculum for advanced undergraduate "
                "students with programming experience. The curriculum should cover data analysis, "
                "machine learning, data visualization, and ethics in AI. Include weekly learning "
                "objectives, teaching materials, hands-on activities, and assessment methods. "
                "The curriculum should prepare students for entry-level data science positions."
    }

    # Execute the swarm
    result = run_swarm(swarm_config)

    # Print formatted results
    print(json.dumps(result, indent=4))
    return result


if __name__ == "__main__":
    curriculum_development_assistant()
```
### 5. Monitoring and Optimization
To optimize your swarm configurations and track usage patterns, you can retrieve and analyze logs:
```python
# analytics_example.py
from swarms_client import get_swarm_logs
import json
def analyze_swarm_usage():
    """
    Analyze swarm usage patterns to optimize configurations and costs.
    """
    # Retrieve logs
    logs = get_swarm_logs()

    # Print the raw entries for inspection
    print(json.dumps(logs, indent=4))

    return logs


if __name__ == "__main__":
    analyze_swarm_usage()
```
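Once you have the raw logs, a small aggregation helper can surface usage patterns. The sketch below is illustrative: the `swarm_type` field name is an assumption about the log payload, so adjust the keys to match the fields your entries actually contain.

```python
from collections import Counter


def summarize_logs(entries):
    """Count executions per swarm type.

    The 'swarm_type' key is an assumed field name; adapt it to the
    actual structure of your log entries.
    """
    counts = Counter(entry.get("swarm_type", "unknown") for entry in entries)
    return dict(counts)


# Example with hand-written entries:
sample = [
    {"swarm_type": "ConcurrentWorkflow"},
    {"swarm_type": "SequentialWorkflow"},
    {"swarm_type": "ConcurrentWorkflow"},
]
print(summarize_logs(sample))  # -> {'ConcurrentWorkflow': 2, 'SequentialWorkflow': 1}
```

The same pattern extends to cost fields or timestamps once you know which keys the API returns.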
### 6. Next Steps
Once you've implemented and tested these examples, you can further optimize your swarm configurations by:
1. Experimenting with different swarm architectures for the same task to compare results
2. Adjusting agent prompts to improve specialization and collaboration
3. Fine-tuning model parameters like temperature and max_tokens
4. Combining swarms into larger workflows through scheduled execution
The Swarms API's flexibility allows for continuous refinement of your AI orchestration strategies, enabling increasingly sophisticated solutions to complex problems.
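The first suggestion, trying different architectures on the same task, is easy to script. The sketch below builds one configuration per `swarm_type`; `build_configs` is an illustrative helper rather than part of the client, while `run_swarm` is the call used in the earlier examples.

```python
def build_configs(base_config, swarm_types):
    """Return copies of base_config that differ only in swarm_type."""
    return [
        {**base_config, "swarm_type": st, "name": f"{base_config['name']} [{st}]"}
        for st in swarm_types
    ]


# Usage (requires the swarms_client package and an API key):
# from swarms_client import run_swarm
# for cfg in build_configs(my_config, ["SequentialWorkflow", "ConcurrentWorkflow"]):
#     print(cfg["swarm_type"], run_swarm(cfg))
```

Comparing the outputs side by side makes it much clearer which architecture suits a given task before you commit to one in production.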
## The Future of AI Agent Orchestration
The Swarms API represents a significant evolution in how we deploy AI for complex tasks. As we look to the future, several trends are emerging in the field of agent orchestration:
### Specialized Agent Ecosystems
We're moving toward rich ecosystems of highly specialized agents designed for specific tasks and domains. These specialized agents will have deep expertise in narrow areas, enabling more sophisticated collaboration when combined in swarms.
### Dynamic Swarm Formation
Future swarm platforms will likely feature even more advanced capabilities for dynamic swarm formation, where the system automatically determines not only which agents to include but also how they should collaborate based on real-time task analysis.
### Cross-Modal Collaboration
As AI capabilities expand across modalities (text, image, audio, video), we'll see increasing collaboration between agents specialized in different data types. This cross-modal collaboration will enable more comprehensive analysis and content creation spanning multiple formats.
### Human-Swarm Collaboration
The next frontier in agent orchestration will be seamless collaboration between human teams and AI swarms, where human specialists and AI agents work together, each contributing their unique strengths to complex problems.
### Continuous Learning Swarms
Future swarms will likely incorporate more sophisticated mechanisms for continuous improvement, with agent capabilities evolving based on past performance and feedback.
## Conclusion
The Swarms API represents a significant leap forward in AI orchestration, moving beyond the limitations of single-agent systems to unlock the power of collaborative intelligence. By enabling specialized agents to work together in coordinated swarms, this enterprise-grade platform opens new possibilities for solving complex problems across industries.
From financial analysis to healthcare research, legal services to software development, the applications for agent swarms are as diverse as they are powerful. The Swarms API provides the infrastructure, tools, and flexibility needed to deploy these collaborative AI systems at scale, with the security, reliability, and cost management features essential for enterprise adoption.
As we continue to push the boundaries of what AI can accomplish, the ability to orchestrate collaborative intelligence will become increasingly crucial. The Swarms API is at the forefront of this evolution, providing a glimpse into the future of AI—a future where the most powerful AI systems aren't individual models but coordinated teams of specialized agents working together to solve our most challenging problems.
For organizations looking to harness the full potential of AI, the Swarms API offers a compelling path forward—one that leverages the power of collaboration to achieve results beyond what any single AI agent could accomplish alone.
To explore the Swarms API and begin building your own intelligent agent swarms, visit [swarms.world](https://swarms.world) today.
---
## Resources
* Website: [swarms.ai](https://swarms.ai)
* Marketplace: [swarms.world](https://swarms.world)
* Cloud Platform: [cloud.swarms.ai](https://cloud.swarms.ai)
* Documentation: [docs.swarms.world](https://docs.swarms.world/en/latest/swarms_cloud/swarms_api/)

@ -7,7 +7,7 @@ Before we dive into the code, let's briefly introduce the Swarms framework. Swar
For more information and to contribute to the project, visit the [Swarms GitHub repository](https://github.com/kyegomez/swarms). We highly recommend exploring the documentation for a deeper understanding of Swarms' capabilities.
Additional resources:
- [Swarms Discord](https://discord.gg/swarms) for community discussions
- [Swarms Discord](https://discord.gg/jM3Z6M9uMq) for community discussions
- [Swarms Twitter](https://x.com/swarms_corp) for updates
- [Swarms Spotify](https://open.spotify.com/show/2HLiswhmUaMdjHC8AUHcCF?si=c831ef10c5ef4994) for podcasts
- [Swarms Blog](https://medium.com/@kyeg) for in-depth articles
@ -460,7 +460,7 @@ This system provides a powerful foundation for financial analysis, but there's a
Remember, the Swarms framework is a powerful and flexible tool that can be adapted to a wide range of complex tasks beyond just financial analysis. We encourage you to explore the [Swarms GitHub repository](https://github.com/kyegomez/swarms) for more examples and inspiration.
For more in-depth discussions and community support, consider joining the [Swarms Discord](https://discord.gg/swarms). You can also stay updated with the latest developments by following [Swarms on Twitter](https://x.com/swarms_corp).
For more in-depth discussions and community support, consider joining the [Swarms Discord](https://discord.gg/jM3Z6M9uMq). You can also stay updated with the latest developments by following [Swarms on Twitter](https://x.com/swarms_corp).
If you're interested in learning more about AI and its applications in various fields, check out the [Swarms Spotify podcast](https://open.spotify.com/show/2HLiswhmUaMdjHC8AUHcCF?si=c831ef10c5ef4994) and the [Swarms Blog](https://medium.com/@kyeg) for insightful articles and discussions.
@ -474,7 +474,7 @@ By leveraging the power of multi-agent AI systems, you're well-equipped to navig
* [Swarms Github](https://github.com/kyegomez/swarms)
* [Swarms Discord](https://discord.gg/swarms)
* [Swarms Discord](https://discord.gg/jM3Z6M9uMq)
* [Swarms Twitter](https://x.com/swarms_corp)
* [Swarms Spotify](https://open.spotify.com/show/2HLiswhmUaMdjHC8AUHcCF?si=c831ef10c5ef4994)
* [Swarms Blog](https://medium.com/@kyeg)

@ -261,7 +261,7 @@ The table below summarizes the estimated savings for each use case:
- [book a call](https://cal.com/swarms)
- Swarms Discord: https://discord.gg/swarms
- Swarms Discord: https://discord.gg/jM3Z6M9uMq
- Swarms Twitter: https://x.com/swarms_corp

@ -57,7 +57,7 @@ Here you'll find references about the Swarms framework, marketplace, community,
## Community
| Section | Links |
|----------------------|--------------------------------------------------------------------------------------------|
| Community | [Discord](https://discord.gg/swarms) |
| Community | [Discord](https://discord.gg/jM3Z6M9uMq) |
| Blog | [Blog](https://medium.com/@kyeg) |
| Event Calendar | [LUMA](https://lu.ma/swarms_calendar) |
| Twitter | [Twitter](https://x.com/swarms_corp) |

@ -6501,7 +6501,7 @@ Before we dive into the code, let's briefly introduce the Swarms framework. Swar
For more information and to contribute to the project, visit the [Swarms GitHub repository](https://github.com/kyegomez/swarms). We highly recommend exploring the documentation for a deeper understanding of Swarms' capabilities.
Additional resources:
- [Swarms Discord](https://discord.gg/swarms) for community discussions
- [Swarms Discord](https://discord.gg/jM3Z6M9uMq) for community discussions
- [Swarms Twitter](https://x.com/swarms_corp) for updates
- [Swarms Spotify](https://open.spotify.com/show/2HLiswhmUaMdjHC8AUHcCF?si=c831ef10c5ef4994) for podcasts
- [Swarms Blog](https://medium.com/@kyeg) for in-depth articles
@ -6954,7 +6954,7 @@ This system provides a powerful foundation for financial analysis, but there's a
Remember, the Swarms framework is a powerful and flexible tool that can be adapted to a wide range of complex tasks beyond just financial analysis. We encourage you to explore the [Swarms GitHub repository](https://github.com/kyegomez/swarms) for more examples and inspiration.
For more in-depth discussions and community support, consider joining the [Swarms Discord](https://discord.gg/swarms). You can also stay updated with the latest developments by following [Swarms on Twitter](https://x.com/swarms_corp).
For more in-depth discussions and community support, consider joining the [Swarms Discord](https://discord.gg/jM3Z6M9uMq). You can also stay updated with the latest developments by following [Swarms on Twitter](https://x.com/swarms_corp).
If you're interested in learning more about AI and its applications in various fields, check out the [Swarms Spotify podcast](https://open.spotify.com/show/2HLiswhmUaMdjHC8AUHcCF?si=c831ef10c5ef4994) and the [Swarms Blog](https://medium.com/@kyeg) for insightful articles and discussions.
@ -6968,7 +6968,7 @@ By leveraging the power of multi-agent AI systems, you're well-equipped to navig
* [Swarms Github](https://github.com/kyegomez/swarms)
* [Swarms Discord](https://discord.gg/swarms)
* [Swarms Discord](https://discord.gg/jM3Z6M9uMq)
* [Swarms Twitter](https://x.com/swarms_corp)
* [Swarms Spotify](https://open.spotify.com/show/2HLiswhmUaMdjHC8AUHcCF?si=c831ef10c5ef4994)
* [Swarms Blog](https://medium.com/@kyeg)
@ -7997,7 +7997,7 @@ The table below summarizes the estimated savings for each use case:
- [book a call](https://cal.com/swarms)
- Swarms Discord: https://discord.gg/swarms
- Swarms Discord: https://discord.gg/jM3Z6M9uMq
- Swarms Twitter: https://x.com/swarms_corp
@ -8946,7 +8946,7 @@ Here you'll find references about the Swarms framework, marketplace, community,
## Community
| Section | Links |
|----------------------|--------------------------------------------------------------------------------------------|
| Community | [Discord](https://discord.gg/swarms) |
| Community | [Discord](https://discord.gg/jM3Z6M9uMq) |
| Blog | [Blog](https://medium.com/@kyeg) |
| Event Calendar | [LUMA](https://lu.ma/swarms_calendar) |
| Twitter | [Twitter](https://x.com/swarms_corp) |
@ -28554,7 +28554,7 @@ Stay tuned for updates on the Swarm Exchange launch.
- **Documentation:** [Swarms Documentation](https://docs.swarms.world)
- **Support:** Contact us via our [Discord Community](https://discord.gg/swarms).
- **Support:** Contact us via our [Discord Community](https://discord.gg/jM3Z6M9uMq).
---
@ -47878,7 +47878,7 @@ For technical assistance with the Swarms API, please contact:
- Documentation: [https://docs.swarms.world](https://docs.swarms.world)
- Email: kye@swarms.world
- Community Discord: [https://discord.gg/swarms](https://discord.gg/swarms)
- Community Discord: [https://discord.gg/jM3Z6M9uMq](https://discord.gg/jM3Z6M9uMq)
- Swarms Marketplace: [https://swarms.world](https://swarms.world)
- Swarms AI Website: [https://swarms.ai](https://swarms.ai)
@ -50414,9 +50414,9 @@ To further enhance your understanding and usage of the Swarms Platform, explore
### Links
- [API Documentation](https://docs.swarms.world)
- [Community Forums](https://discord.gg/swarms)
- [Community Forums](https://discord.gg/jM3Z6M9uMq)
- [Tutorials and Guides](https://docs.swarms.world)
- [Support](https://discord.gg/swarms)
- [Support](https://discord.gg/jM3Z6M9uMq)
## Conclusion
@ -53007,7 +53007,7 @@ Your contributions fund:
[dao]: https://dao.swarms.world/
[investors]: https://investors.swarms.world/
[site]: https://swarms.world/
[discord]: https://discord.gg/swarms
[discord]: https://discord.gg/jM3Z6M9uMq
```

@ -55,7 +55,7 @@ extra:
- icon: fontawesome/brands/twitter
link: https://x.com/swarms_corp
- icon: fontawesome/brands/discord
link: https://discord.gg/swarms
link: https://discord.gg/jM3Z6M9uMq
analytics:
provider: google
@ -195,11 +195,11 @@ nav:
- Create and Run Agents from YAML: "swarms/agents/create_agents_yaml.md"
- Integrating Various Models into Your Agents: "swarms/models/agent_and_models.md"
- Tools:
- Structured Outputs: "swarms/agents/structured_outputs.md"
- Overview: "swarms/tools/main.md"
- What are tools?: "swarms/tools/build_tool.md"
- ToolAgent: "swarms/agents/tool_agent.md"
- Tool Storage: "swarms/tools/tool_storage.md"
- Structured Outputs: "swarms/agents/structured_outputs.md"
- Agent MCP Integration: "swarms/structs/agent_mcp.md"
- Comprehensive Tool Guide with MCP, Callables, and more: "swarms/tools/tools_examples.md"
- RAG || Long Term Memory:
- Integrating RAG with Agents: "swarms/memory/diy_memory.md"
- Third-Party Agent Integrations:
@ -225,12 +225,12 @@ nav:
- How to Create New Swarm Architectures: "swarms/structs/create_new_swarm.md"
- Introduction to Hierarchical Swarm Architectures: "swarms/structs/multi_swarm_orchestration.md"
- Swarm Architecture Documentation:
- Swarm Architectures Documentation:
- Overview: "swarms/structs/overview.md"
- MajorityVoting: "swarms/structs/majorityvoting.md"
- AgentRearrange: "swarms/structs/agent_rearrange.md"
- RoundRobin: "swarms/structs/round_robin_swarm.md"
- Mixture of Agents: "swarms/structs/moa.md"
- GraphWorkflow: "swarms/structs/graph_workflow.md"
- GroupChat: "swarms/structs/group_chat.md"
- AgentRegistry: "swarms/structs/agent_registry.md"
- SpreadSheetSwarm: "swarms/structs/spreadsheet_swarm.md"
@ -242,20 +242,27 @@ nav:
- MatrixSwarm: "swarms/structs/matrix_swarm.md"
- ModelRouter: "swarms/structs/model_router.md"
- MALT: "swarms/structs/malt.md"
- Auto Agent Builder: "swarms/structs/auto_agent_builder.md"
- Various Execution Methods: "swarms/structs/various_execution_methods.md"
- Hybrid Hierarchical-Cluster Swarm: "swarms/structs/hhcs.md"
- Deep Research Swarm: "swarms/structs/deep_research_swarm.md"
- Auto Swarm Builder: "swarms/structs/auto_swarm_builder.md"
- Swarm Matcher: "swarms/structs/swarm_matcher.md"
- Council of Judges: "swarms/structs/council_of_judges.md"
- Hierarchical Architectures:
- Auto Agent Builder: "swarms/structs/auto_agent_builder.md"
- Hybrid Hierarchical-Cluster Swarm: "swarms/structs/hhcs.md"
- Auto Swarm Builder: "swarms/structs/auto_swarm_builder.md"
- Workflows:
- ConcurrentWorkflow: "swarms/structs/concurrentworkflow.md"
- SequentialWorkflow: "swarms/structs/sequential_workflow.md"
- Structs:
- Conversation: "swarms/structs/conversation.md"
- ConcurrentWorkflow: "swarms/structs/concurrentworkflow.md"
- SequentialWorkflow: "swarms/structs/sequential_workflow.md"
- GraphWorkflow: "swarms/structs/graph_workflow.md"
- Communication Structure: "swarms/structs/conversation.md"
- Swarms Tools:
- Overview: "swarms_tools/overview.md"
- BaseTool Reference: "swarms/tools/base_tool.md"
- MCP Client Utils: "swarms/tools/mcp_client_call.md"
- Vertical Tools:
- Finance: "swarms_tools/finance.md"
@ -271,8 +278,8 @@ nav:
- Faiss: "swarms_memory/faiss.md"
- Deployment Solutions:
- Deploying Swarms on Google Cloud Run: "swarms_cloud/cloud_run.md"
- Phala Deployment: "swarms_cloud/phala_deploy.md"
- Deploy your Swarms on Google Cloud Run: "swarms_cloud/cloud_run.md"
- Deploy your Swarms on Phala: "swarms_cloud/phala_deploy.md"
- About Us:
- Swarms Vision: "swarms/concept/vision.md"
@ -295,11 +302,6 @@ nav:
- Swarms 5.9.2: "swarms/changelog/changelog_new.md"
- Examples:
- Overview: "swarms/examples/unique_swarms.md"
- Swarms API Examples:
- Medical Swarm: "swarms/examples/swarms_api_medical.md"
- Finance Swarm: "swarms/examples/swarms_api_finance.md"
- ML Model Code Generation Swarm: "swarms/examples/swarms_api_ml_model.md"
- Individual LLM Examples:
- OpenAI: "swarms/examples/openai_example.md"
- Anthropic: "swarms/examples/claude.md"
@ -311,17 +313,17 @@ nav:
- XAI: "swarms/examples/xai.md"
- VLLM: "swarms/examples/vllm_integration.md"
- Llama4: "swarms/examples/llama4.md"
- Swarms Tools:
- Agent with Yahoo Finance: "swarms/examples/yahoo_finance.md"
- Twitter Agents: "swarms_tools/twitter.md"
- Blockchain Agents:
- Agent with HTX + CoinGecko: "swarms/examples/swarms_tools_htx.md"
- Agent with HTX + CoinGecko Function Calling: "swarms/examples/swarms_tools_htx_gecko.md"
- Lumo: "swarms/examples/lumo.md"
- Quant Crypto Agent: "swarms/examples/quant_crypto_agent.md"
- Meme Agents:
- Bob The Builder: "swarms/examples/bob_the_builder.md"
- Swarms Tools:
- Agent with Yahoo Finance: "swarms/examples/yahoo_finance.md"
- Twitter Agents: "swarms_tools/twitter.md"
- Blockchain Agents:
- Agent with HTX + CoinGecko: "swarms/examples/swarms_tools_htx.md"
- Agent with HTX + CoinGecko Function Calling: "swarms/examples/swarms_tools_htx_gecko.md"
- Lumo: "swarms/examples/lumo.md"
- Quant Crypto Agent: "swarms/examples/quant_crypto_agent.md"
- Multi-Agent Collaboration:
- Unique Swarms: "swarms/examples/unique_swarms.md"
- Swarms DAO: "swarms/examples/swarms_dao.md"
- Hybrid Hierarchical-Cluster Swarm Example: "swarms/examples/hhcs_examples.md"
- Group Chat Example: "swarms/examples/groupchat_example.md"
@ -330,6 +332,11 @@ nav:
- ConcurrentWorkflow with VLLM Agents: "swarms/examples/vllm.md"
- External Agents:
- Swarms of Browser Agents: "swarms/examples/swarms_of_browser_agents.md"
- Swarms API Examples:
- Medical Swarm: "swarms/examples/swarms_api_medical.md"
- Finance Swarm: "swarms/examples/swarms_api_finance.md"
- ML Model Code Generation Swarm: "swarms/examples/swarms_api_ml_model.md"
- Swarms UI:
- Overview: "swarms/ui/main.md"
@ -358,6 +365,11 @@ nav:
- Swarms API Tools: "swarms_cloud/swarms_api_tools.md"
- Individual Agent Completions: "swarms_cloud/agent_api.md"
- Clients:
- Swarms API Python Client: "swarms_cloud/python_client.md"
- Swarms API Rust Client: "swarms_cloud/rust_client.md"
- Pricing:
- Swarms API Pricing: "swarms_cloud/api_pricing.md"
- Swarms API Pricing in Chinese: "swarms_cloud/chinese_api_pricing.md"

@ -1,112 +1,99 @@
# Agentic Structured Outputs
# :material-code-json: Agentic Structured Outputs
Structured outputs help ensure that your agents return data in a consistent, predictable format that can be easily parsed and processed by your application. This is particularly useful when building complex applications that require standardized data handling.
!!! abstract "Overview"
Structured outputs help ensure that your agents return data in a consistent, predictable format that can be easily parsed and processed by your application. This is particularly useful when building complex applications that require standardized data handling.
## Schema Definition
## :material-file-document-outline: Schema Definition
Structured outputs are defined using JSON Schema format. Here's the basic structure:
```python
tools = [
{
"type": "function",
"function": {
"name": "function_name",
"description": "Description of what the function does",
"parameters": {
"type": "object",
"properties": {
# Define your parameters here
},
"required": [
# List required parameters
]
=== "Basic Schema"
```python title="Basic Tool Schema"
tools = [
{
"type": "function",
"function": {
"name": "function_name",
"description": "Description of what the function does",
"parameters": {
"type": "object",
"properties": {
# Define your parameters here
},
"required": [
# List required parameters
]
}
}
}
}
]
```
]
```
### Parameter Types
=== "Advanced Schema"
```python title="Advanced Tool Schema with Multiple Parameters"
tools = [
{
"type": "function",
"function": {
"name": "advanced_function",
"description": "Advanced function with multiple parameter types",
"parameters": {
"type": "object",
"properties": {
"text_param": {
"type": "string",
"description": "A text parameter"
},
"number_param": {
"type": "number",
"description": "A numeric parameter"
},
"boolean_param": {
"type": "boolean",
"description": "A boolean parameter"
},
"array_param": {
"type": "array",
"items": {"type": "string"},
"description": "An array of strings"
}
},
"required": ["text_param", "number_param"]
}
}
}
]
```
### :material-format-list-bulleted-type: Parameter Types
The following parameter types are supported:
- `string`: Text values
- `number`: Numeric values
- `boolean`: True/False values
- `object`: Nested objects
- `array`: Lists or arrays
- `null`: Null values
## Implementation Steps
1. **Define Your Schema**
```python
tools = [
{
"type": "function",
"function": {
"name": "get_stock_price",
"description": "Retrieve stock price information",
"parameters": {
"type": "object",
"properties": {
"ticker": {
"type": "string",
"description": "Stock ticker symbol"
},
# Add more parameters as needed
},
"required": ["ticker"]
}
}
}
]
```
2. **Initialize the Agent**
```python
from swarms import Agent
agent = Agent(
agent_name="Your-Agent-Name",
agent_description="Agent description",
system_prompt="Your system prompt",
tools_list_dictionary=tools
)
```
3. **Run the Agent**
```python
response = agent.run("Your query here")
```
4. **Parse the Output**
```python
from swarms.utils.str_to_dict import str_to_dict
parsed_output = str_to_dict(response)
```
## Example Usage
Here's a complete example using a financial agent:
| Type | Description | Example |
|------|-------------|---------|
| `string` | Text values | `"Hello World"` |
| `number` | Numeric values | `42`, `3.14` |
| `boolean` | True/False values | `true`, `false` |
| `object` | Nested objects | `{"key": "value"}` |
| `array` | Lists or arrays | `[1, 2, 3]` |
| `null` | Null values | `null` |
```python
from dotenv import load_dotenv
from swarms import Agent
from swarms.utils.str_to_dict import str_to_dict
## :material-cog: Implementation Steps
!!! tip "Quick Start Guide"
Follow these steps to implement structured outputs in your agent:
# Load environment variables
load_dotenv()
### Step 1: Define Your Schema
# Define tools with structured output schema
```python
tools = [
{
"type": "function",
"function": {
"name": "get_stock_price",
"description": "Retrieve the current stock price and related information",
"description": "Retrieve stock price information",
"parameters": {
"type": "object",
"properties": {
@ -114,72 +101,233 @@ tools = [
"type": "string",
"description": "Stock ticker symbol"
},
"include_history": {
"include_volume": {
"type": "boolean",
"description": "Include historical data"
},
"time": {
"type": "string",
"format": "date-time",
"description": "Time for stock data"
"description": "Include trading volume data"
}
},
"required": ["ticker", "include_history", "time"]
"required": ["ticker"]
}
}
}
]
```
### Step 2: Initialize the Agent
```python
from swarms import Agent
# Initialize agent
agent = Agent(
agent_name="Financial-Analysis-Agent",
agent_description="Personal finance advisor agent",
system_prompt="Your system prompt here",
max_loops=1,
agent_name="Your-Agent-Name",
agent_description="Agent description",
system_prompt="Your system prompt",
tools_list_dictionary=tools
)
```
### Step 3: Run the Agent
```python
response = agent.run("Your query here")
```
### Step 4: Parse the Output
# Run agent
response = agent.run("What is the current stock price for AAPL?")
```python
from swarms.utils.str_to_dict import str_to_dict
# Parse structured output
parsed_data = str_to_dict(response)
parsed_output = str_to_dict(response)
```
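Once parsed, it can help to verify the result against your schema before acting on it. The helper below is an illustrative sketch: `validate_call` and the assumed `arguments` layout are not part of the Swarms API, so adapt them to the shape `str_to_dict` actually returns.

```python
def validate_call(parsed, schema):
    """Return the names of required parameters missing from a parsed tool call.

    Assumes the parsed dict exposes the call arguments under an
    'arguments' key; adjust if your responses are shaped differently.
    """
    required = schema["function"]["parameters"]["required"]
    args = parsed.get("arguments", {})
    return [name for name in required if name not in args]


# Example against the get_stock_price schema above:
# missing = validate_call(parsed_output, tools[0])
# if missing:
#     raise ValueError(f"Agent response is missing required fields: {missing}")
```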
## Best Practices
## :material-code-braces: Example Usage
!!! example "Complete Financial Agent Example"
Here's a comprehensive example using a financial analysis agent:
=== "Python Implementation"
```python
from dotenv import load_dotenv
from swarms import Agent
from swarms.utils.str_to_dict import str_to_dict
# Load environment variables
load_dotenv()
# Define tools with structured output schema
tools = [
{
"type": "function",
"function": {
"name": "get_stock_price",
"description": "Retrieve the current stock price and related information",
"parameters": {
"type": "object",
"properties": {
"ticker": {
"type": "string",
"description": "Stock ticker symbol (e.g., AAPL, GOOGL)"
},
"include_history": {
"type": "boolean",
"description": "Include historical data in the response"
},
"time": {
"type": "string",
"format": "date-time",
"description": "Specific time for stock data (ISO format)"
}
},
"required": ["ticker", "include_history", "time"]
}
}
}
]
# Initialize agent
agent = Agent(
agent_name="Financial-Analysis-Agent",
agent_description="Personal finance advisor agent",
system_prompt="You are a helpful financial analysis assistant.",
max_loops=1,
tools_list_dictionary=tools
)
# Run agent
response = agent.run("What is the current stock price for AAPL?")
# Parse structured output
parsed_data = str_to_dict(response)
print(f"Parsed response: {parsed_data}")
```
=== "Expected Output"
```json
{
"function_calls": [
{
"name": "get_stock_price",
"arguments": {
"ticker": "AAPL",
"include_history": true,
"time": "2024-01-15T10:30:00Z"
}
}
]
}
```
## :material-check-circle: Best Practices
1. **Schema Design**
- Keep schemas as simple as possible while meeting your needs
- Use clear, descriptive parameter names
- Include detailed descriptions for each parameter
- Specify all required parameters explicitly
!!! success "Schema Design"
- **Keep it simple**: Design schemas that are as simple as possible while meeting your needs
- **Clear naming**: Use descriptive parameter names that clearly indicate their purpose
- **Detailed descriptions**: Include comprehensive descriptions for each parameter
- **Required fields**: Explicitly specify all required parameters
2. **Error Handling**
- Always validate the output format
- Implement proper error handling for parsing failures
- Use try-except blocks when converting strings to dictionaries
!!! info "Error Handling"
- **Validate output**: Always validate the output format before processing
- **Exception handling**: Implement proper error handling for parsing failures
- **Safety first**: Use try-except blocks when converting strings to dictionaries
3. **Performance**
- Minimize the number of required parameters
- Use appropriate data types for each parameter
- Consider caching parsed results if used frequently
!!! performance "Performance Tips"
- **Minimize requirements**: Keep the number of required parameters to a minimum
- **Appropriate types**: Use the most appropriate data types for each parameter
- **Caching**: Consider caching parsed results if they're used frequently
## Troubleshooting
## :material-alert-circle: Troubleshooting
Common issues and solutions:
!!! warning "Common Issues"
1. **Invalid Output Format**
- Ensure your schema matches the expected output
- Verify all required fields are present
- Check for proper JSON formatting
### Invalid Output Format
2. **Parsing Errors**
- Use `str_to_dict()` for reliable string-to-dictionary conversion
- Validate input strings before parsing
- Handle potential parsing exceptions
!!! failure "Problem"
The agent returns data in an unexpected format
!!! success "Solution"
- Ensure your schema matches the expected output structure
- Verify all required fields are present in the response
- Check for proper JSON formatting in the output
### Parsing Errors
!!! failure "Problem"
Errors occur when trying to parse the agent's response
!!! success "Solution"
```python
from swarms.utils.str_to_dict import str_to_dict
try:
parsed_data = str_to_dict(response)
except Exception as e:
print(f"Parsing error: {e}")
# Handle the error appropriately
```
### Missing Fields
!!! failure "Problem"
Required fields are missing from the output
!!! success "Solution"
- Verify all required fields are defined in the schema
- Check if the agent is properly configured with the tools
- Review the system prompt for clarity and completeness
## :material-lightbulb: Advanced Features
!!! note "Pro Tips"
=== "Nested Objects"
```python title="nested_schema.py"
"properties": {
"user_info": {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "number"},
"preferences": {
"type": "array",
"items": {"type": "string"}
}
}
}
}
```
=== "Conditional Fields"
```python title="conditional_schema.py"
"properties": {
"data_type": {
"type": "string",
"enum": ["stock", "crypto", "forex"]
},
"symbol": {"type": "string"},
"exchange": {
"type": "string",
"description": "Required for crypto and forex"
}
}
```
3. **Missing Fields**
- Verify all required fields are defined in the schema
- Check if the agent is properly configured
- Review the system prompt for clarity
---

@ -152,7 +152,7 @@ Stay tuned for updates on the Swarm Exchange launch.
- **Documentation:** [Swarms Documentation](https://docs.swarms.world)
- **Support:** Contact us via our [Discord Community](https://discord.gg/swarms).
- **Support:** Contact us via our [Discord Community](https://discord.gg/jM3Z6M9uMq).
---

@ -0,0 +1,792 @@
# Agent MCP Integration Guide
<div class="grid cards" markdown>
- :material-connection: **Direct MCP Server Connection**
---
Connect agents to MCP servers via URL for seamless integration
[:octicons-arrow-right-24: Quick Start](#quick-start)
- :material-tools: **Dynamic Tool Discovery**
---
Automatically fetch and utilize tools from MCP servers
[:octicons-arrow-right-24: Tool Discovery](#integration-flow)
- :material-chart-line: **Real-time Communication**
---
Server-sent Events (SSE) for live data streaming
[:octicons-arrow-right-24: Configuration](#configuration-options)
- :material-code-json: **Structured Output**
---
Process and format responses with multiple output types
[:octicons-arrow-right-24: Examples](#example-implementations)
</div>
## Overview
The **Model Context Protocol (MCP)** integration enables Swarms agents to dynamically connect to external tools and services through a standardized protocol. This powerful feature expands agent capabilities by providing access to APIs, databases, and specialized services.
!!! info "What is MCP?"
The Model Context Protocol is a standardized way for AI agents to interact with external tools and services, providing a consistent interface for tool discovery and execution.
---
## :material-check-circle: Features Matrix
=== "✅ Current Capabilities"
| Feature | Status | Description |
|---------|--------|-------------|
| **Direct MCP Connection** | ✅ Ready | Connect via URL to MCP servers |
| **Tool Discovery** | ✅ Ready | Auto-fetch available tools |
| **SSE Communication** | ✅ Ready | Real-time server communication |
| **Multiple Tool Execution** | ✅ Ready | Execute multiple tools per session |
| **Structured Output** | ✅ Ready | Format responses in multiple types |
=== "🚧 In Development"
| Feature | Status | Expected |
|---------|--------|----------|
| **MCPConnection Model** | 🚧 Development | Q1 2024 |
| **Multiple Server Support** | 🚧 Planned | Q2 2024 |
| **Parallel Function Calling** | 🚧 Research | Q2 2024 |
| **Auto-discovery** | 🚧 Planned | Q3 2024 |
---
## :material-rocket: Quick Start
!!! tip "Prerequisites"
=== "System Requirements"
- Python 3.8+
- Swarms framework
- Running MCP server
=== "Installation"
```bash
pip install swarms
```
### Step 1: Basic Agent Setup
!!! example "Simple MCP Agent"
```python
from swarms import Agent

# Initialize agent with MCP integration
agent = Agent(
    agent_name="Financial-Analysis-Agent",
    agent_description="AI-powered financial advisor",
    max_loops=1,
    mcp_url="http://localhost:8000/sse",  # Your MCP server
    output_type="all",
)

# Execute task using MCP tools
result = agent.run(
    "Get current Bitcoin price and analyze market trends"
)
print(result)
```
### Step 2: Advanced Configuration
!!! example "Production-Ready Setup"
```python
from swarms import Agent
from swarms.prompts.finance_agent_sys_prompt import FINANCIAL_AGENT_SYS_PROMPT
agent = Agent(
    agent_name="Advanced-Financial-Agent",
    agent_description="Comprehensive market analysis agent",
    system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
    max_loops=3,
    mcp_url="http://production-server:8000/sse",
    output_type="json",
    # Additional parameters for production
    temperature=0.1,
    verbose=True,
)
```
---
## Integration Flow
The following diagram illustrates the complete MCP integration workflow:
```mermaid
graph TD
A[🚀 Agent Receives Task] --> B[🔗 Connect to MCP Server]
B --> C[🔍 Discover Available Tools]
C --> D[🧠 Analyze Task Requirements]
D --> E[📝 Generate Tool Request]
E --> F[📤 Send to MCP Server]
F --> G[⚙️ Server Processes Request]
G --> H[📥 Receive Response]
H --> I[🔄 Process & Validate]
I --> J[📊 Summarize Results]
J --> K[✅ Return Final Output]
class A,K startEnd
class D,I,J process
class F,G,H communication
```
### Detailed Process Breakdown
!!! abstract "Process Steps"
=== "1-3: Initialization"
**Task Initiation** - Agent receives user query
**Server Connection** - Establish MCP server link
**Tool Discovery** - Fetch available tool schemas
=== "4-6: Execution"
**Task Analysis** - Determine required tools
**Request Generation** - Create structured API calls
**Server Communication** - Send requests via SSE
=== "7-9: Processing"
**Server Processing** - MCP server executes tools
**Response Handling** - Receive and validate data
**Result Processing** - Parse and structure output
=== "10-11: Completion"
**Summarization** - Generate user-friendly summary
**Final Output** - Return complete response
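The steps above can be condensed into a plain-Python sketch; every callable here is an injected stand-in with hypothetical names, not the real Swarms internals:

```python
def run_mcp_task(task, discover_tools, llm_plan, call_tool, llm_summarize):
    """Sketch of the 11-step flow; dependencies are injected stand-ins."""
    tools = discover_tools()                    # steps 2-3: connect, fetch schemas
    requests = llm_plan(task, tools)            # steps 4-5: pick tools, build calls
    results = [call_tool(r) for r in requests]  # steps 6-9: execute sequentially
    return llm_summarize(task, results)         # steps 10-11: summarize, return

# Stub wiring to show the shape of the flow
summary = run_mcp_task(
    task="get BTC price",
    discover_tools=lambda: ["get_crypto_price"],
    llm_plan=lambda task, tools: [{"tool": tools[0], "args": {"symbol": "BTC"}}],
    call_tool=lambda req: {"price": 100.0},
    llm_summarize=lambda task, results: f"{task}: {results[0]['price']}",
)
```

Note that tool calls run one after another in the current implementation, which is why the sketch uses a plain list comprehension rather than concurrent execution.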
---
## :material-cog: Configuration Options
### Agent Parameters
!!! note "Configuration Reference"
| Parameter | Type | Description | Default | Example |
|-----------|------|-------------|---------|---------|
| `mcp_url` | `str` | MCP server endpoint | `None` | `"http://localhost:8000/sse"` |
| `output_type` | `str` | Response format | `"str"` | `"json"`, `"all"`, `"dict"` |
| `max_loops` | `int` | Execution iterations | `1` | `3` |
| `temperature` | `float` | Response creativity | `0.1` | `0.1-1.0` |
| `verbose` | `bool` | Debug logging | `False` | `True` |
---
## :material-code-tags: Example Implementations
### Cryptocurrency Trading Agent
!!! example "Crypto Price Monitor"
```python
from swarms import Agent
crypto_agent = Agent(
    agent_name="Crypto-Trading-Agent",
    agent_description="Real-time cryptocurrency market analyzer",
    max_loops=2,
    mcp_url="http://crypto-server:8000/sse",
    output_type="json",
    temperature=0.1,
)

# Multi-exchange price comparison
result = crypto_agent.run(
    """
    Compare Bitcoin and Ethereum prices across OKX and HTX exchanges.
    Calculate arbitrage opportunities and provide trading recommendations.
    """
)
```
### Financial Analysis Suite
!!! example "Advanced Financial Agent"
```python
from swarms import Agent
from swarms.prompts.finance_agent_sys_prompt import FINANCIAL_AGENT_SYS_PROMPT
financial_agent = Agent(
    agent_name="Financial-Analysis-Suite",
    agent_description="Comprehensive financial market analyst",
    system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
    max_loops=4,
    mcp_url="http://finance-api:8000/sse",
    output_type="all",
    temperature=0.2,
)

# Complex market analysis
analysis = financial_agent.run(
    """
    Perform a comprehensive analysis of Tesla (TSLA) stock:
    1. Current price and technical indicators
    2. Recent news sentiment analysis
    3. Competitor comparison (GM, Ford)
    4. Investment recommendation with risk assessment
    """
)
```
### Custom Industry Agent
!!! example "Healthcare Data Agent"
```python
from swarms import Agent
healthcare_agent = Agent(
    agent_name="Healthcare-Data-Agent",
    agent_description="Medical data analysis and research assistant",
    max_loops=3,
    mcp_url="http://medical-api:8000/sse",
    output_type="dict",
    system_prompt="""
    You are a healthcare data analyst. Use available medical databases
    and research tools to provide accurate, evidence-based information.
    Always cite sources and include confidence levels.
    """,
)

research = healthcare_agent.run(
    "Research latest treatments for Type 2 diabetes and their efficacy rates"
)
```
---
## :material-server: MCP Server Development
### FastMCP Server Example
!!! example "Building a Custom MCP Server"
```python
from mcp.server.fastmcp import FastMCP
from datetime import datetime, timezone
import requests

# Initialize MCP server
mcp = FastMCP("crypto_analysis_server")

@mcp.tool(
    name="get_crypto_price",
    description="Fetch current cryptocurrency price with market data",
)
def get_crypto_price(
    symbol: str,
    currency: str = "USD",
    include_24h_change: bool = True
) -> dict:
    """
    Get real-time cryptocurrency price and market data.

    Args:
        symbol: Cryptocurrency symbol (e.g., BTC, ETH)
        currency: Target currency for price (default: USD)
        include_24h_change: Include 24-hour price change data
    """
    try:
        # CoinGecko expects coin IDs (e.g., "bitcoin"), not ticker symbols
        symbol_to_id = {"BTC": "bitcoin", "ETH": "ethereum"}
        coin_id = symbol_to_id.get(symbol.upper(), symbol.lower())
        url = "https://api.coingecko.com/api/v3/simple/price"
        params = {
            "ids": coin_id,
            "vs_currencies": currency.lower(),
            "include_24hr_change": include_24h_change
        }
        response = requests.get(url, params=params, timeout=10)
        response.raise_for_status()
        data = response.json()
        return {
            "symbol": symbol.upper(),
            "price": data[coin_id][currency.lower()],
            "currency": currency.upper(),
            # CoinGecko keys the change field by currency, e.g. "usd_24h_change"
            "change_24h": data[coin_id].get(f"{currency.lower()}_24h_change", 0),
            "timestamp": datetime.now(timezone.utc).isoformat()
        }
    except Exception as e:
        return {"error": f"Failed to fetch price: {str(e)}"}
@mcp.tool(
    name="analyze_market_sentiment",
    description="Analyze cryptocurrency market sentiment from social media",
)
def analyze_market_sentiment(symbol: str, timeframe: str = "24h") -> dict:
    """Analyze market sentiment for a cryptocurrency."""
    # Implement sentiment analysis logic here (placeholder values below)
    return {
        "symbol": symbol,
        "sentiment_score": 0.75,
        "sentiment": "Bullish",
        "confidence": 0.85,
        "timeframe": timeframe
    }

if __name__ == "__main__":
    mcp.run(transport="sse")
```
### Server Best Practices
!!! tip "Server Development Guidelines"
=== "🏗️ Architecture"
- **Modular Design**: Separate tools into logical modules
- **Error Handling**: Implement comprehensive error responses
- **Async Support**: Use async/await for better performance
- **Type Hints**: Include proper type annotations
=== "🔒 Security"
- **Input Validation**: Sanitize all user inputs
- **Rate Limiting**: Implement request throttling
- **Authentication**: Add API key validation
- **Logging**: Log all requests and responses
=== "⚡ Performance"
- **Caching**: Cache frequently requested data
- **Connection Pooling**: Reuse database connections
- **Timeouts**: Set appropriate request timeouts
- **Load Testing**: Test under realistic load
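Several of these guidelines (input validation, caching) can be combined in one decorator applied to a tool function before it is registered with `@mcp.tool`. This is a framework-agnostic sketch with illustrative names, not part of FastMCP:

```python
import re
import time
from functools import wraps

def validated_and_cached(ttl_seconds: float = 60.0):
    """Reject malformed symbols and cache results for a short TTL."""
    cache = {}

    def decorator(func):
        @wraps(func)
        def wrapper(symbol: str):
            # Input validation: accept only short alphabetic tickers
            if not re.fullmatch(r"[A-Za-z]{2,10}", symbol):
                return {"error": f"Invalid symbol: {symbol!r}"}
            key = symbol.upper()
            now = time.monotonic()
            # Serve the cached value while it is still fresh
            if key in cache and now - cache[key][0] < ttl_seconds:
                return cache[key][1]
            result = func(key)
            cache[key] = (now, result)
            return result
        return wrapper
    return decorator

@validated_and_cached(ttl_seconds=30.0)
def get_price(symbol: str) -> dict:
    # Stand-in for a real upstream API call
    return {"symbol": symbol, "price": 100.0}
```

Keeping validation and caching out of the tool body also makes the tool itself easier to unit-test in isolation.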
---
## :material-alert: Current Limitations
!!! warning "Important Limitations"
### 🚧 MCPConnection Model
The enhanced connection model is under development:
```python
# ❌ Not available yet
from swarms.schemas.mcp_schemas import MCPConnection

mcp_config = MCPConnection(
    url="http://server:8000/sse",
    headers={"Authorization": "Bearer token"},
    timeout=30,
    retry_attempts=3
)

# ✅ Use direct URL instead
mcp_url = "http://server:8000/sse"
```
### 🚧 Single Server Limitation
Currently supports one server per agent:
```python
# ❌ Multiple servers not supported
mcp_servers = [
    "http://server1:8000/sse",
    "http://server2:8000/sse"
]

# ✅ Single server only
mcp_url = "http://primary-server:8000/sse"
```
### 🚧 Sequential Execution
Tools execute sequentially, not in parallel:
```python
# Current: tool1() → tool2() → tool3()
# Future: tool1() | tool2() | tool3() (parallel)
```
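Until parallel calling ships, multi-tool tasks pay the latency of each call in turn. A speculative sketch of what concurrent execution could look like with `asyncio.gather` (not the current Swarms API):

```python
import asyncio

async def call_tool(name: str) -> str:
    await asyncio.sleep(0)  # stand-in for a network round-trip
    return f"{name}:done"

async def run_parallel(tool_names):
    # gather() preserves input order while overlapping the waits
    return await asyncio.gather(*(call_tool(n) for n in tool_names))

results = asyncio.run(run_parallel(["tool1", "tool2", "tool3"]))
```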
---
## :material-wrench: Troubleshooting
### Common Issues & Solutions
!!! bug "Connection Problems"
=== "Server Unreachable"
**Symptoms**: Connection timeout or refused
**Solutions**:
```bash
# Check server status
curl -I http://localhost:8000/sse
# Verify port is open
netstat -tulpn | grep :8000
# Test network connectivity
ping your-server-host
```
=== "Authentication Errors"
**Symptoms**: 401/403 HTTP errors
**Solutions**:
```python
# Verify API credentials
headers = {"Authorization": "Bearer your-token"}
# Check token expiration
# Validate permissions
```
=== "SSL/TLS Issues"
**Symptoms**: Certificate errors
**Solutions**:
```python
# For development only
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
```
!!! bug "Tool Discovery Failures"
=== "Empty Tool List"
**Symptoms**: No tools found from server
**Debugging**:
```python
# Check server tool registration
@mcp.tool(name="tool_name", description="...")
def your_tool():
pass
# Verify server startup logs
# Check tool endpoint responses
```
=== "Schema Validation Errors"
**Symptoms**: Invalid tool parameters
**Solutions**:
```python
# Ensure proper type hints
def tool(param: str, optional: int = 0) -> dict:
    return {"result": "success"}

# Validate parameter types
# Check required vs optional parameters
```
!!! bug "Performance Issues"
=== "Slow Response Times"
**Symptoms**: Long wait times for responses
**Optimization**:
```python
# Increase timeout
agent = Agent(
    mcp_url="http://server:8000/sse",
    timeout=60,  # seconds
)
# Optimize server performance
# Use connection pooling
# Implement caching
```
=== "Memory Usage"
**Symptoms**: High memory consumption
**Solutions**:
```python
# Limit max_loops
agent = Agent(max_loops=2)
# Use streaming for large responses
# Implement garbage collection
```
### Debugging Tools
!!! tip "Debug Configuration"
```python
import logging
# Enable debug logging
logging.basicConfig(level=logging.DEBUG)
agent = Agent(
    agent_name="Debug-Agent",
    mcp_url="http://localhost:8000/sse",
    verbose=True,       # Enable verbose output
    output_type="all",  # Get full execution trace
)
# Monitor network traffic
# Check server logs
# Use profiling tools
```
---
## :material-security: Security Best Practices
### Authentication & Authorization
!!! shield "Security Checklist"
=== "🔑 Authentication"
- **API Keys**: Use strong, unique API keys
- **Token Rotation**: Implement automatic token refresh
- **Encryption**: Use HTTPS for all communications
- **Storage**: Secure credential storage (environment variables)
=== "🛡️ Authorization"
- **Role-Based Access**: Implement user role restrictions
- **Tool Permissions**: Limit tool access per user/agent
- **Rate Limiting**: Prevent abuse with request limits
- **Audit Logging**: Log all tool executions
=== "🔒 Data Protection"
- **Input Sanitization**: Validate all user inputs
- **Output Filtering**: Sanitize sensitive data in responses
- **Encryption**: Encrypt sensitive data in transit/rest
- **Compliance**: Follow industry standards (GDPR, HIPAA)
### Secure Configuration
!!! example "Production Security Setup"
```python
import os
from swarms import Agent
# Secure configuration
agent = Agent(
    agent_name="Production-Agent",
    mcp_url=os.getenv("MCP_SERVER_URL"),  # From environment
    # Additional security headers would go here when MCPConnection is available
    verbose=False,       # Disable verbose logging in production
    output_type="json",  # Structured output only
)
# Environment variables (.env file)
"""
MCP_SERVER_URL=https://secure-server.company.com/sse
MCP_API_KEY=your-secure-api-key
MCP_TIMEOUT=30
"""
```
---
## :material-chart-line: Performance Optimization
### Agent Optimization
!!! rocket "Performance Tips"
=== "⚡ Configuration"
```python
# Optimized agent settings
agent = Agent(
    max_loops=2,         # Limit iterations
    temperature=0.1,     # Reduce randomness
    output_type="json",  # Structured output
    # Future: connection_pool_size=10
)
```
=== "🔄 Caching"
```python
# Implement response caching
from functools import lru_cache

@lru_cache(maxsize=100)
def cached_mcp_call(query):
    return agent.run(query)
```
=== "📊 Monitoring"
```python
import time
start_time = time.time()
result = agent.run("query")
execution_time = time.time() - start_time
print(f"Execution time: {execution_time:.2f}s")
```
### Server Optimization
!!! rocket "Server Performance"
```python
from mcp.server.fastmcp import FastMCP
import asyncio
from concurrent.futures import ThreadPoolExecutor
mcp = FastMCP("optimized_server")
# Async tool with thread pool
@mcp.tool(name="async_heavy_task")
async def heavy_computation(data: str) -> dict:
    loop = asyncio.get_event_loop()
    with ThreadPoolExecutor() as executor:
        result = await loop.run_in_executor(
            executor, process_heavy_task, data
        )
    return result

def process_heavy_task(data):
    # CPU-intensive processing
    return {"processed": data}
```
---
## :material-timeline: Future Roadmap
### Upcoming Features
!!! rocket "Development Timeline"
=== "Week 1"
- **MCPConnection Model** - Enhanced configuration
- **Authentication Support** - Built-in auth mechanisms
- **Error Recovery** - Automatic retry logic
- **Connection Pooling** - Improved performance
=== "Week 2"
- **Multiple Server Support** - Connect to multiple MCPs
- **Parallel Execution** - Concurrent tool calling
- **Load Balancing** - Distribute requests across servers
- **Advanced Monitoring** - Real-time metrics
=== "Week 3"
- **Auto-discovery** - Automatic server detection
- **Workflow Engine** - Complex task orchestration
- **Plugin System** - Custom MCP extensions
- **Cloud Integration** - Native cloud provider support
### Contributing
!!! heart "Get Involved"
We welcome contributions to improve MCP integration:
- **Bug Reports**: [GitHub Issues](https://github.com/kyegomez/swarms/issues)
- **Feature Requests**: [Discussions](https://github.com/kyegomez/swarms/discussions)
- **Code Contributions**: [Pull Requests](https://github.com/kyegomez/swarms/pulls)
- **Documentation**: Help improve these docs
---
## :material-help-circle: Support & Resources
### Getting Help
!!! question "Need Assistance?"
=== "📚 Documentation"
- [Official Docs](https://docs.swarms.world)
- [Tutorials](https://docs.swarms.world/tutorials)
=== "💬 Community"
- [Discord Server](https://discord.gg/jM3Z6M9uMq)
- [GitHub Discussions](https://github.com/kyegomez/swarms/discussions)
=== "🔧 Development"
- [GitHub Repository](https://github.com/kyegomez/swarms)
- [Example Projects](https://github.com/kyegomez/swarms/tree/main/examples)
- [Contributing Guide](https://github.com/kyegomez/swarms/blob/main/CONTRIBUTING.md)
### Quick Reference
!!! abstract "Cheat Sheet"
```python
# Basic setup
from swarms import Agent
agent = Agent(
    agent_name="Your-Agent",
    mcp_url="http://localhost:8000/sse",
    output_type="json",
    max_loops=2
)
# Execute task
result = agent.run("Your query here")
# Common patterns
crypto_query = "Get Bitcoin price"
analysis_query = "Analyze Tesla stock performance"
research_query = "Research recent AI developments"
```
---
## :material-check-all: Conclusion
The MCP integration brings powerful external tool connectivity to Swarms agents, enabling them to access real-world data and services through a standardized protocol. While some advanced features are still in development, the current implementation provides robust functionality for most use cases.
!!! success "Ready to Start?"
Begin with the [Quick Start](#quick-start) section and explore the [examples](#example-implementations) to see MCP integration in action. As new features become available, this documentation will be updated with the latest capabilities and best practices.
!!! tip "Stay Updated"
Join our [Discord community](https://discord.gg/jM3Z6M9uMq) to stay informed about new MCP features and connect with other developers building amazing agent applications.
---
<div class="grid cards" markdown>
- :material-rocket: **[Quick Start](#quick-start)**
Get up and running with MCP integration in minutes
- :material-book-open: **[Examples](#example-implementations)**
Explore real-world implementations and use cases
- :material-cog: **[Configuration](#configuration-options)**
Learn about all available configuration options
- :material-help: **[Troubleshooting](#troubleshooting)**
Solve common issues and optimize performance
</div>

## Introduction
The `Conversation` class is a powerful tool for managing and structuring conversation data in a Python program. It enables you to create, manipulate, and analyze conversations easily. This documentation provides a comprehensive understanding of the `Conversation` class, its attributes, methods, and how to effectively use it.
## Table of Contents
1. [Class Definition](#1-class-definition)
2. [Initialization Parameters](#2-initialization-parameters)
3. [Methods](#3-methods)
4. [Examples](#4-examples)
## 1. Class Definition

### Overview

The `Conversation` class is designed to manage conversations by keeping track of messages and their attributes. It offers methods for adding, deleting, updating, querying, and displaying messages within the conversation. Additionally, it supports exporting and importing conversations, searching for specific keywords, and more.
### Attributes
| Attribute | Type | Description |
|-----------|------|-------------|
| id | str | Unique identifier for the conversation |
| name | str | Name of the conversation |
| system_prompt | Optional[str] | System prompt for the conversation |
| time_enabled | bool | Flag to enable time tracking for messages |
| autosave | bool | Flag to enable automatic saving |
| save_filepath | str | File path for saving conversation history |
| conversation_history | list | List storing conversation messages |
| tokenizer | Any | Tokenizer for counting tokens |
| context_length | int | Maximum tokens allowed in conversation |
| rules | str | Rules for the conversation |
| custom_rules_prompt | str | Custom prompt for rules |
| user | str | User identifier for messages |
| auto_save | bool | Flag to enable auto-saving |
| save_as_yaml | bool | Flag to save as YAML |
| save_as_json_bool | bool | Flag to save as JSON |
| token_count | bool | Flag to enable token counting |
| cache_enabled | bool | Flag to enable prompt caching |
| cache_stats | dict | Statistics about cache usage |
| cache_lock | threading.Lock | Lock for thread-safe cache operations |
| conversations_dir | str | Directory to store cached conversations |
## 2. Initialization Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| id | str | generated | Unique conversation ID |
| name | str | None | Name of the conversation |
| system_prompt | Optional[str] | None | System prompt for the conversation |
| time_enabled | bool | False | Enable time tracking |
| autosave | bool | False | Enable automatic saving |
| save_filepath | str | None | File path for saving |
| tokenizer | Any | None | Tokenizer for counting tokens |
| context_length | int | 8192 | Maximum tokens allowed |
| rules | str | None | Conversation rules |
| custom_rules_prompt | str | None | Custom rules prompt |
| user | str | "User:" | User identifier |
| auto_save | bool | True | Enable auto-saving |
| save_as_yaml | bool | True | Save as YAML |
| save_as_json_bool | bool | False | Save as JSON |
| token_count | bool | True | Enable token counting |
| cache_enabled | bool | True | Enable prompt caching |
| conversations_dir | Optional[str] | None | Directory for cached conversations |
| provider | Literal["mem0", "in-memory"] | "in-memory" | Storage provider |
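To make the parameter semantics concrete, here is a toy in-memory analogue of what `time_enabled` and the history list do; it is a sketch for intuition, not the Swarms implementation:

```python
from datetime import datetime, timezone

class MiniConversation:
    """Toy analogue of the documented behavior, not the real class."""

    def __init__(self, time_enabled: bool = False):
        self.time_enabled = time_enabled
        self.conversation_history = []

    def add(self, role: str, content: str) -> None:
        message = {"role": role, "content": content}
        if self.time_enabled:
            # Mirrors time tracking when time_enabled=True
            message["timestamp"] = datetime.now(timezone.utc).isoformat()
        self.conversation_history.append(message)

    def count_messages_by_role(self) -> dict:
        counts = {}
        for message in self.conversation_history:
            counts[message["role"]] = counts.get(message["role"], 0) + 1
        return counts

conv = MiniConversation()
conv.add("user", "Hello")
conv.add("assistant", "Hi")
```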
## 3. Methods
### `add(role: str, content: Union[str, dict, list], metadata: Optional[dict] = None)`
Adds a message to the conversation history.
| Parameter | Type | Description |
|-----------|------|-------------|
| role | str | Role of the speaker |
| content | Union[str, dict, list] | Message content |
| metadata | Optional[dict] | Additional metadata |
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello, how are you?")
conversation.add("assistant", "I'm doing well, thank you!")
```
### `add_multiple_messages(roles: List[str], contents: List[Union[str, dict, list]])`
Adds multiple messages to the conversation history.
| Parameter | Type | Description |
|-----------|------|-------------|
| roles | List[str] | List of speaker roles |
| contents | List[Union[str, dict, list]] | List of message contents |
Example:
```python
conversation = Conversation()
conversation.add_multiple_messages(
["user", "assistant"],
["Hello!", "Hi there!"]
)
```
### `delete(index: str)`
Deletes a message from the conversation history.
| Parameter | Type | Description |
|-----------|------|-------------|
| index | str | Index of message to delete |
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
conversation.delete(0) # Deletes the first message
```
### `update(index: str, role: str, content: Union[str, dict])`
Updates a message in the conversation history.
| Parameter | Type | Description |
|-----------|------|-------------|
| index | str | Index of message to update |
| role | str | New role of speaker |
| content | Union[str, dict] | New message content |
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
conversation.update(0, "user", "Hi there!")
```
### `query(index: str)`
Retrieves a message from the conversation history.
| Parameter | Type | Description |
|-----------|------|-------------|
| index | str | Index of message to query |
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
message = conversation.query(0)
```
### `search(keyword: str)`
Searches for messages containing a keyword.
| Parameter | Type | Description |
|-----------|------|-------------|
| keyword | str | Keyword to search for |
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello world")
results = conversation.search("world")
```
### `display_conversation(detailed: bool = False)`
Displays the conversation history.
| Parameter | Type | Description |
|-----------|------|-------------|
| detailed | bool | Show detailed information |
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
conversation.display_conversation(detailed=True)
```
### `export_conversation(filename: str)`
Exports conversation history to a file.
| Parameter | Type | Description |
|-----------|------|-------------|
| filename | str | Output file path |
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
conversation.export_conversation("chat.txt")
```
### `import_conversation(filename: str)`
Imports conversation history from a file.
| Parameter | Type | Description |
|-----------|------|-------------|
| filename | str | Input file path |
Example:
```python
conversation = Conversation()
conversation.import_conversation("chat.txt")
```
### `count_messages_by_role()`
Counts messages by role.
Returns: Dict[str, int]
Example:
```python
from swarms.structs import Conversation
conversation = Conversation()
conversation.add("user", "Hello")
conversation.add("assistant", "Hi")
counts = conversation.count_messages_by_role()
```
### `return_history_as_string()`
Returns conversation history as a string.
Returns: str
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
history = conversation.return_history_as_string()
```
### `save_as_json(filename: str)`
Saves conversation history as JSON.
| Parameter | Type | Description |
|-----------|------|-------------|
| filename | str | Output JSON file path |
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
conversation.save_as_json("chat.json")
```
### `load_from_json(filename: str)`
Loads conversation history from JSON.
| Parameter | Type | Description |
|-----------|------|-------------|
| filename | str | Input JSON file path |
Example:
```python
conversation = Conversation()
conversation.load_from_json("chat.json")
```
### `truncate_memory_with_tokenizer()`
Truncates conversation history based on token limit.
Example:
```python
conversation = Conversation(tokenizer=some_tokenizer)
conversation.truncate_memory_with_tokenizer()
```
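The eviction logic behind this method can be sketched in a few lines, assuming a tokenizer that exposes an `encode` method (illustrative names, not the real implementation; the oldest messages are dropped first):

```python
def truncate_history(history, tokenizer, context_length):
    """Drop the oldest messages until the total token count fits."""
    def tokens(message):
        return len(tokenizer.encode(message["content"]))

    truncated = list(history)
    total = sum(tokens(m) for m in truncated)
    while truncated and total > context_length:
        total -= tokens(truncated.pop(0))  # evict the oldest message first
    return truncated

class ToyTokenizer:
    def encode(self, text):
        return text.split()  # crude word-level stand-in

history = [
    {"role": "user", "content": "one two three"},
    {"role": "assistant", "content": "four five"},
    {"role": "user", "content": "six"},
]
kept = truncate_history(history, ToyTokenizer(), context_length=3)
```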
### `clear()`
Clears the conversation history.
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
conversation.clear()
```
### `to_json()`
Converts conversation history to JSON string.
Returns: str
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
json_str = conversation.to_json()
```
### `to_dict()`
Converts conversation history to dictionary.
Returns: list
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
dict_data = conversation.to_dict()
```
### `to_yaml()`
Converts conversation history to YAML string.
Returns: str
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
yaml_str = conversation.to_yaml()
```
### `get_visible_messages(agent: "Agent", turn: int)`
Gets visible messages for an agent at a specific turn.
| Parameter | Type | Description |
|-----------|------|-------------|
| agent | Agent | The agent |
| turn | int | Turn number |
Returns: List[Dict]
Example:
```python
conversation = Conversation()
visible_msgs = conversation.get_visible_messages(agent, 1)
```
### `get_last_message_as_string()`
Gets the last message as a string.
Returns: str
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
last_msg = conversation.get_last_message_as_string()
```
### `return_messages_as_list()`
Returns messages as a list of strings.
Returns: List[str]
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
messages = conversation.return_messages_as_list()
```
### `return_messages_as_dictionary()`
Returns messages as a list of dictionaries.
Returns: List[Dict]
Example:
```python
counts = conv.count_messages_by_role()
# Example result: {'user': 2, 'assistant': 2}
conversation = Conversation()
conversation.add("user", "Hello")
messages = conversation.return_messages_as_dictionary()
```
### Exporting and Importing as Text

You can export the conversation to a text file and later import it:

```python
conv.export_conversation("conversation.txt")  # Export
conv.import_conversation("conversation.txt")  # Import
```

### `add_tool_output_to_agent(role: str, tool_output: dict)`

Adds tool output to the conversation.

| Parameter | Type | Description |
|-----------|------|-------------|
| role | str | Role of the tool |
| tool_output | dict | Tool output to add |

Example:

```python
conversation = Conversation()
conversation.add_tool_output_to_agent("tool", {"result": "success"})
```
### Exporting and Importing as JSON

Conversations can also be saved and loaded as JSON files:

```python
conv.save_as_json("conversation.json")  # Save as JSON
conv.load_from_json("conversation.json")  # Load from JSON
```

### `return_json()`

Returns the conversation as a JSON string.

Returns: str

Example:

```python
conversation = Conversation()
conversation.add("user", "Hello")
json_str = conversation.return_json()
```
### Searching for a Keyword

You can search for messages containing a specific keyword within the conversation:

```python
results = conv.search_keyword_in_conversation("Hello")
```

### `get_final_message()`

Gets the final message.

Returns: str

Example:

```python
conversation = Conversation()
conversation.add("user", "Hello")
final_msg = conversation.get_final_message()
```
### `get_final_message_content()`

Gets the content of the final message.

Returns: str

Example:

```python
conversation = Conversation()
conversation.add("user", "Hello")
content = conversation.get_final_message_content()
```

These examples demonstrate the versatility of the `Conversation` class in managing and interacting with conversation data. Whether you're building a chatbot, conducting analysis, or simply organizing dialogues, this class offers a robust set of tools to help you accomplish your goals.
### `return_all_except_first()`

Returns all messages except the first.

Returns: List[Dict]

Example:

```python
conversation = Conversation()
conversation.add("system", "Start")
conversation.add("user", "Hello")
messages = conversation.return_all_except_first()
```

### `return_all_except_first_string()`

Returns all messages except the first as a single string.

Returns: str

Example:

```python
conversation = Conversation()
conversation.add("system", "Start")
conversation.add("user", "Hello")
messages = conversation.return_all_except_first_string()
```

The `Conversation` class is a valuable utility for handling conversation data in Python. With its ability to add, update, delete, search, export, and import messages, you have the flexibility to work with conversations in various ways. Feel free to explore its features and adapt them to your specific projects and applications.
### `batch_add(messages: List[dict])`
Adds multiple messages in batch.
| Parameter | Type | Description |
|-----------|------|-------------|
| messages | List[dict] | List of messages to add |
Example:
```python
conversation = Conversation()
conversation.batch_add([
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hi"}
])
```
### `get_cache_stats()`
Gets cache usage statistics.
Returns: Dict[str, int]
Example:
```python
conversation = Conversation()
stats = conversation.get_cache_stats()
```
### `load_conversation(name: str, conversations_dir: Optional[str] = None)`
Loads a conversation from cache.
| Parameter | Type | Description |
|-----------|------|-------------|
| name | str | Name of conversation |
| conversations_dir | Optional[str] | Directory containing conversations |
Returns: Conversation
Example:
```python
conversation = Conversation.load_conversation("my_chat")
```
### `list_cached_conversations(conversations_dir: Optional[str] = None)`
Lists all cached conversations.
| Parameter | Type | Description |
|-----------|------|-------------|
| conversations_dir | Optional[str] | Directory containing conversations |
Returns: List[str]
Example:
```python
conversations = Conversation.list_cached_conversations()
```
### `clear_memory()`
Clears the conversation memory.
Example:
```python
conversation = Conversation()
conversation.add("user", "Hello")
conversation.clear_memory()
```
## 4. Examples
### Basic Usage
```python
from swarms.structs import Conversation
# Create a new conversation
conversation = Conversation(
name="my_chat",
system_prompt="You are a helpful assistant",
time_enabled=True
)
# Add messages
conversation.add("user", "Hello!")
conversation.add("assistant", "Hi there!")
# Display conversation
conversation.display_conversation()
# Save conversation
conversation.save_as_json("my_chat.json")
```
### Advanced Usage with Token Counting
```python
from swarms.structs import Conversation
from some_tokenizer import Tokenizer
# Create conversation with token counting
conversation = Conversation(
tokenizer=Tokenizer(),
context_length=4096,
token_count=True
)
# Add messages
conversation.add("user", "Hello, how are you?")
conversation.add("assistant", "I'm doing well, thank you!")
# Get token statistics
stats = conversation.get_cache_stats()
print(f"Total tokens: {stats['total_tokens']}")
```
### Using Different Storage Providers
```python
# In-memory storage
conversation = Conversation(provider="in-memory")
conversation.add("user", "Hello")
# Mem0 storage
conversation = Conversation(provider="mem0")
conversation.add("user", "Hello")
```
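Behind the `provider` switch, each backend exposes the same add/retrieve interface. A minimal sketch of that pattern (class and function names here are illustrative stand-ins, not the actual Swarms classes):

```python
# Illustrative sketch of the storage-provider pattern: each backend
# implements the same interface, and a factory selects one by name.
# These names are hypothetical, not Swarms internals.
class InMemoryBackend:
    def __init__(self):
        self._messages = []

    def add(self, role: str, content: str) -> None:
        # Append a message dict, mirroring Conversation.add(role, content)
        self._messages.append({"role": role, "content": content})

    def all(self) -> list:
        return list(self._messages)


def make_backend(provider: str):
    # Map provider names to backend classes; real code would add
    # entries for external stores such as "mem0".
    backends = {"in-memory": InMemoryBackend}
    try:
        return backends[provider]()
    except KeyError:
        raise ValueError(f"Unknown provider: {provider}")


store = make_backend("in-memory")
store.add("user", "Hello")
print(store.all())  # [{'role': 'user', 'content': 'Hello'}]
```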
## Conclusion

The `Conversation` class provides a comprehensive set of tools for managing conversations in Python applications. It supports various storage backends, token counting, caching, and multiple export/import formats. The class is designed to be flexible and extensible, making it suitable for a wide range of use cases, from simple chat applications to complex conversational AI systems.

@ -0,0 +1,284 @@
# CouncilAsAJudge
The `CouncilAsAJudge` is a sophisticated evaluation system that employs multiple AI agents to assess model responses across various dimensions. It provides comprehensive, multi-dimensional analysis of AI model outputs through parallel evaluation and aggregation.
## Overview
The `CouncilAsAJudge` implements a council of specialized AI agents that evaluate different aspects of a model's response. Each agent focuses on a specific dimension of evaluation, and their findings are aggregated into a comprehensive report.
```mermaid
graph TD
A[User Query] --> B[Base Agent]
B --> C[Model Response]
C --> D[CouncilAsAJudge]
subgraph "Evaluation Dimensions"
D --> E1[Accuracy Agent]
D --> E2[Helpfulness Agent]
D --> E3[Harmlessness Agent]
D --> E4[Coherence Agent]
D --> E5[Conciseness Agent]
D --> E6[Instruction Adherence Agent]
end
E1 --> F[Evaluation Aggregation]
E2 --> F
E3 --> F
E4 --> F
E5 --> F
E6 --> F
F --> G[Comprehensive Report]
style D fill:#f9f,stroke:#333,stroke-width:2px
style F fill:#bbf,stroke:#333,stroke-width:2px
```
## Key Features
- Parallel evaluation across multiple dimensions
- Caching system for improved performance
- Dynamic model selection
- Comprehensive evaluation metrics
- Thread-safe execution
- Detailed technical analysis
## Installation
```bash
pip install swarms
```
## Basic Usage
```python
from swarms import Agent, CouncilAsAJudge
# Create a base agent
base_agent = Agent(
agent_name="Financial-Analysis-Agent",
system_prompt="You are a financial expert helping users understand and establish ROTH IRAs.",
model_name="claude-opus-4-20250514",
max_loops=1,
)
# Run the base agent
user_query = "How can I establish a ROTH IRA to buy stocks and get a tax break?"
model_output = base_agent.run(user_query)
# Create and run the council
panel = CouncilAsAJudge()
results = panel.run(user_query, model_output)
print(results)
```
## Advanced Usage
### Custom Model Configuration
```python
from swarms import CouncilAsAJudge
# Initialize with custom model
council = CouncilAsAJudge(
model_name="anthropic/claude-3-sonnet-20240229",
output_type="all",
cache_size=256,
max_workers=4,
random_model_name=False
)
```
### Parallel Processing Configuration
```python
from swarms import CouncilAsAJudge
# Configure parallel processing
council = CouncilAsAJudge(
max_workers=8, # Custom number of worker threads
random_model_name=True # Enable dynamic model selection
)
```
## Evaluation Dimensions
The council evaluates responses across six key dimensions:
| Dimension | Evaluation Criteria |
|-----------|-------------------|
| **Accuracy** | • Factual correctness<br>• Source credibility<br>• Temporal consistency<br>• Technical accuracy |
| **Helpfulness** | • Problem-solving efficacy<br>• Solution feasibility<br>• Context inclusion<br>• Proactive addressing of follow-ups |
| **Harmlessness** | • Safety assessment<br>• Ethical considerations<br>• Age-appropriateness<br>• Content sensitivity |
| **Coherence** | • Structural integrity<br>• Logical flow<br>• Information hierarchy<br>• Transition effectiveness |
| **Conciseness** | • Communication efficiency<br>• Information density<br>• Redundancy elimination<br>• Focus maintenance |
| **Instruction Adherence** | • Requirement coverage<br>• Constraint compliance<br>• Format matching<br>• Scope appropriateness |
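The fan-out/aggregate pattern behind this table can be sketched in plain Python. The judge function below is a placeholder (a real judge would prompt an LLM with a dimension-specific system prompt); this illustrates the pattern only, not the Swarms internals:

```python
# Conceptual sketch: evaluate one response across several dimensions in
# parallel, then aggregate the findings into a single report.
from concurrent.futures import ThreadPoolExecutor

DIMENSIONS = [
    "accuracy", "helpfulness", "harmlessness",
    "coherence", "conciseness", "instruction_adherence",
]


def evaluate_dimension(dimension: str, task: str, response: str) -> str:
    # Placeholder judge: a real implementation would call an LLM here.
    return f"[{dimension}] evaluated response of {len(response)} chars"


def run_council(task: str, response: str) -> dict:
    # Fan out one worker per dimension, then collect all verdicts.
    with ThreadPoolExecutor(max_workers=len(DIMENSIONS)) as pool:
        futures = {
            dim: pool.submit(evaluate_dimension, dim, task, response)
            for dim in DIMENSIONS
        }
        return {dim: fut.result() for dim, fut in futures.items()}


report = run_council("Explain ROTH IRAs", "A ROTH IRA is a retirement account...")
for finding in report.values():
    print(finding)
```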
## API Reference
### CouncilAsAJudge
```python
class CouncilAsAJudge:
def __init__(
self,
id: str = swarm_id(),
name: str = "CouncilAsAJudge",
description: str = "Evaluates the model's response across multiple dimensions",
model_name: str = "gpt-4o-mini",
output_type: str = "all",
cache_size: int = 128,
max_workers: int = None,
random_model_name: bool = True,
)
```
#### Parameters
- `id` (str): Unique identifier for the council
- `name` (str): Display name of the council
- `description` (str): Description of the council's purpose
- `model_name` (str): Name of the model to use for evaluations
- `output_type` (str): Type of output to return
- `cache_size` (int): Size of the LRU cache for prompts
- `max_workers` (int): Maximum number of worker threads
- `random_model_name` (bool): Whether to use random model selection
### Methods
#### run
```python
def run(self, task: str, model_response: str)
```
Evaluates a model response across all dimensions.
##### Parameters
- `task` (str): Original user prompt
- `model_response` (str): Model's response to evaluate
##### Returns
- Comprehensive evaluation report
## Examples
### Financial Analysis Example
```python
from swarms import Agent, CouncilAsAJudge
# Create financial analysis agent
financial_agent = Agent(
agent_name="Financial-Analysis-Agent",
system_prompt="You are a financial expert helping users understand and establish ROTH IRAs.",
model_name="claude-opus-4-20250514",
max_loops=1,
)
# Run analysis
query = "How can I establish a ROTH IRA to buy stocks and get a tax break?"
response = financial_agent.run(query)
# Evaluate response
council = CouncilAsAJudge()
evaluation = council.run(query, response)
print(evaluation)
```
### Technical Documentation Example
```python
from swarms import Agent, CouncilAsAJudge
# Create documentation agent
doc_agent = Agent(
agent_name="Documentation-Agent",
system_prompt="You are a technical documentation expert.",
model_name="gpt-4",
max_loops=1,
)
# Generate documentation
query = "Explain how to implement a REST API using FastAPI"
response = doc_agent.run(query)
# Evaluate documentation quality
council = CouncilAsAJudge(
model_name="anthropic/claude-3-sonnet-20240229",
output_type="all"
)
evaluation = council.run(query, response)
print(evaluation)
```
## Best Practices
### Model Selection
!!! tip "Model Selection Best Practices"
- Choose appropriate models for your use case
- Consider using random model selection for diverse evaluations
- Match model capabilities to evaluation requirements
### Performance Optimization
!!! note "Performance Tips"
- Adjust cache size based on memory constraints
- Configure worker threads based on CPU cores
- Monitor memory usage with large responses
### Error Handling
!!! warning "Error Handling Guidelines"
- Implement proper exception handling
- Monitor evaluation failures
- Log evaluation results for analysis
### Resource Management
!!! info "Resource Management"
- Clean up resources after evaluation
- Monitor thread pool usage
- Implement proper shutdown procedures
## Troubleshooting
### Memory Issues
!!! danger "Memory Problems"
If you encounter memory-related problems:
- Reduce cache size
- Decrease number of worker threads
- Process smaller chunks of text
### Performance Problems
!!! warning "Performance Issues"
To improve performance:
- Increase cache size
- Adjust worker thread count
- Use more efficient models
### Evaluation Failures
!!! danger "Evaluation Issues"
When evaluations fail:
- Check model availability
- Verify input format
- Monitor error logs
## Contributing
!!! success "Contributing"
Contributions are welcome! Please feel free to submit a Pull Request.
## License
!!! info "License"
This project is licensed under the MIT License - see the LICENSE file for details.

@ -40,28 +40,10 @@ Features:
✅ Long term memory database with RAG (ChromaDB, Pinecone, Qdrant)
```python
from swarms import Agent

# Initialize the agent
agent = Agent(
    temperature=0.5,
    model_name="gpt-4o-mini",
    max_tokens=4000,
    max_loops=1,
    autosave=True,
    dashboard=True,
)

# Run the agent on a task
agent.run("Generate a 10,000 word blog on health and wellness.")
```

@ -0,0 +1,69 @@
# Multi-Agent Architectures Overview
This page provides a comprehensive overview of all available multi-agent architectures in Swarms, their use cases, and functionality.
## Architecture Comparison
=== "Core Architectures"
| Architecture | Use Case | Key Functionality | Documentation |
|-------------|----------|-------------------|---------------|
| MajorityVoting | Decision making through consensus | Combines multiple agent opinions and selects the most common answer | [Docs](majorityvoting.md) |
| AgentRearrange | Optimizing agent order | Dynamically reorders agents based on task requirements | [Docs](agent_rearrange.md) |
| RoundRobin | Equal task distribution | Cycles through agents in a fixed order | [Docs](round_robin_swarm.md) |
| Mixture of Agents | Complex problem solving | Combines diverse expert agents for comprehensive analysis | [Docs](moa.md) |
| GroupChat | Collaborative discussions | Simulates group discussions with multiple agents | [Docs](group_chat.md) |
| AgentRegistry | Agent management | Central registry for managing and accessing agents | [Docs](agent_registry.md) |
| SpreadSheetSwarm | Data processing | Collaborative data processing and analysis | [Docs](spreadsheet_swarm.md) |
| ForestSwarm | Hierarchical decision making | Tree-like structure for complex decision processes | [Docs](forest_swarm.md) |
| SwarmRouter | Task routing | Routes tasks to appropriate agents based on requirements | [Docs](swarm_router.md) |
| TaskQueueSwarm | Task management | Manages and prioritizes tasks in a queue | [Docs](taskqueue_swarm.md) |
| SwarmRearrange | Dynamic swarm optimization | Optimizes swarm configurations for specific tasks | [Docs](swarm_rearrange.md) |
| MultiAgentRouter | Advanced task routing | Routes tasks to specialized agents based on capabilities | [Docs](multi_agent_router.md) |
| MatrixSwarm | Parallel processing | Matrix-based organization for parallel task execution | [Docs](matrix_swarm.md) |
| ModelRouter | Model selection | Routes tasks to appropriate AI models | [Docs](model_router.md) |
| MALT | Multi-agent learning | Enables agents to learn from each other | [Docs](malt.md) |
| Deep Research Swarm | Research automation | Conducts comprehensive research across multiple domains | [Docs](deep_research_swarm.md) |
| Swarm Matcher | Agent matching | Matches tasks with appropriate agent combinations | [Docs](swarm_matcher.md) |
=== "Workflow Architectures"
| Architecture | Use Case | Key Functionality | Documentation |
|-------------|----------|-------------------|---------------|
| ConcurrentWorkflow | Parallel task execution | Executes multiple tasks simultaneously | [Docs](concurrentworkflow.md) |
| SequentialWorkflow | Step-by-step processing | Executes tasks in a specific sequence | [Docs](sequential_workflow.md) |
| GraphWorkflow | Complex task dependencies | Manages tasks with complex dependencies | [Docs](graph_workflow.md) |
=== "Hierarchical Architectures"
| Architecture | Use Case | Key Functionality | Documentation |
|-------------|----------|-------------------|---------------|
| Auto Agent Builder | Automated agent creation | Automatically creates and configures agents | [Docs](auto_agent_builder.md) |
| Hybrid Hierarchical-Cluster Swarm | Complex organization | Combines hierarchical and cluster-based organization | [Docs](hhcs.md) |
| Auto Swarm Builder | Automated swarm creation | Automatically creates and configures swarms | [Docs](auto_swarm_builder.md) |
## Communication Structure
!!! note "Communication Protocols"
The [Conversation](conversation.md) documentation details the communication protocols and structures used between agents in these architectures.
## Choosing the Right Architecture
When selecting a multi-agent architecture, consider the following factors:
!!! tip "Task Complexity"
Simple tasks may only need basic architectures like RoundRobin, while complex tasks might require Hierarchical or Graph-based approaches.
!!! tip "Parallelization Needs"
If tasks can be executed in parallel, consider ConcurrentWorkflow or MatrixSwarm.
!!! tip "Decision Making Requirements"
For consensus-based decisions, MajorityVoting is ideal.
!!! tip "Resource Optimization"
If you need to optimize agent usage, consider SwarmRouter or TaskQueueSwarm.
!!! tip "Learning Requirements"
If agents need to learn from each other, MALT is the appropriate choice.
!!! tip "Dynamic Adaptation"
For tasks requiring dynamic adaptation, consider SwarmRearrange or Auto Swarm Builder.
For more detailed information about each architecture, please refer to their respective documentation pages.

@ -0,0 +1,820 @@
# BaseTool Class Documentation
## Overview
The `BaseTool` class is a comprehensive tool management system for function calling, schema conversion, and execution. It provides a unified interface for converting Python functions to OpenAI function calling schemas, managing Pydantic models, executing tools with proper error handling, and supporting multiple AI provider formats (OpenAI, Anthropic, etc.).
**Key Features:**
- Convert Python functions to OpenAI function calling schemas
- Manage Pydantic models and their schemas
- Execute tools with proper error handling and validation
- Support for parallel and sequential function execution
- Schema validation for multiple AI providers
- Automatic tool execution from API responses
- Caching for improved performance
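To make the schema-conversion feature concrete, here is a simplified sketch of the idea behind `func_to_dict`: deriving an OpenAI function-calling schema from a Python signature via introspection. The helper name and the type mapping are illustrative assumptions, not the actual `BaseTool` implementation:

```python
# Illustrative sketch: build an OpenAI function-calling schema from a
# function's signature and docstring. Simplified stand-in, not BaseTool.
import inspect

PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}


def sketch_func_to_dict(func) -> dict:
    sig = inspect.signature(func)
    # Map each parameter's annotation to a JSON Schema type,
    # defaulting to "string" for unannotated parameters.
    properties = {
        name: {"type": PY_TO_JSON.get(param.annotation, "string")}
        for name, param in sig.parameters.items()
    }
    return {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": (func.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": list(properties),
            },
        },
    }


def add_numbers(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b


schema = sketch_func_to_dict(add_numbers)
print(schema["function"]["name"])  # add_numbers
```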
## Initialization Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `verbose` | `Optional[bool]` | `None` | Enable detailed logging output |
| `base_models` | `Optional[List[type[BaseModel]]]` | `None` | List of Pydantic models to manage |
| `autocheck` | `Optional[bool]` | `None` | Enable automatic validation checks |
| `auto_execute_tool` | `Optional[bool]` | `None` | Enable automatic tool execution |
| `tools` | `Optional[List[Callable[..., Any]]]` | `None` | List of callable functions to manage |
| `tool_system_prompt` | `Optional[str]` | `None` | System prompt for tool operations |
| `function_map` | `Optional[Dict[str, Callable]]` | `None` | Mapping of function names to callables |
| `list_of_dicts` | `Optional[List[Dict[str, Any]]]` | `None` | List of dictionary representations |
## Methods Overview
| Method | Description |
|--------|-------------|
| `func_to_dict` | Convert a callable function to OpenAI function calling schema |
| `load_params_from_func_for_pybasemodel` | Load function parameters for Pydantic BaseModel integration |
| `base_model_to_dict` | Convert Pydantic BaseModel to OpenAI schema dictionary |
| `multi_base_models_to_dict` | Convert multiple Pydantic BaseModels to OpenAI schema |
| `dict_to_openai_schema_str` | Convert dictionary to OpenAI schema string |
| `multi_dict_to_openai_schema_str` | Convert multiple dictionaries to OpenAI schema string |
| `get_docs_from_callable` | Extract documentation from callable items |
| `execute_tool` | Execute a tool based on response string |
| `detect_tool_input_type` | Detect the type of tool input |
| `dynamic_run` | Execute dynamic run with automatic type detection |
| `execute_tool_by_name` | Search for and execute tool by name |
| `execute_tool_from_text` | Execute tool from JSON-formatted string |
| `check_str_for_functions_valid` | Check if output is valid JSON with matching function |
| `convert_funcs_into_tools` | Convert all functions in tools list to OpenAI format |
| `convert_tool_into_openai_schema` | Convert tools into OpenAI function calling schema |
| `check_func_if_have_docs` | Check if function has proper documentation |
| `check_func_if_have_type_hints` | Check if function has proper type hints |
| `find_function_name` | Find function by name in tools list |
| `function_to_dict` | Convert function to dictionary representation |
| `multiple_functions_to_dict` | Convert multiple functions to dictionary representations |
| `execute_function_with_dict` | Execute function using dictionary of parameters |
| `execute_multiple_functions_with_dict` | Execute multiple functions with parameter dictionaries |
| `validate_function_schema` | Validate function schema for different AI providers |
| `get_schema_provider_format` | Get detected provider format of schema |
| `convert_schema_between_providers` | Convert schema between provider formats |
| `execute_function_calls_from_api_response` | Execute function calls from API responses |
| `detect_api_response_format` | Detect the format of API response |
---
## Detailed Method Documentation
### `func_to_dict`
**Description:** Convert a callable function to OpenAI function calling schema dictionary.
**Arguments:**
- `function` (Callable[..., Any], optional): The function to convert
**Returns:** `Dict[str, Any]` - OpenAI function calling schema dictionary
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def add_numbers(a: int, b: int) -> int:
"""Add two numbers together."""
return a + b
# Create BaseTool instance
tool = BaseTool(verbose=True)
# Convert function to OpenAI schema
schema = tool.func_to_dict(add_numbers)
print(schema)
# Output: {'type': 'function', 'function': {'name': 'add_numbers', 'description': 'Add two numbers together.', 'parameters': {...}}}
```
### `load_params_from_func_for_pybasemodel`
**Description:** Load and process function parameters for Pydantic BaseModel integration.
**Arguments:**
- `func` (Callable[..., Any]): The function to process
- `*args`: Additional positional arguments
- `**kwargs`: Additional keyword arguments
**Returns:** `Callable[..., Any]` - Processed function with loaded parameters
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def calculate_area(length: float, width: float) -> float:
"""Calculate area of a rectangle."""
return length * width
tool = BaseTool()
processed_func = tool.load_params_from_func_for_pybasemodel(calculate_area)
```
### `base_model_to_dict`
**Description:** Convert a Pydantic BaseModel to OpenAI function calling schema dictionary.
**Arguments:**
- `pydantic_type` (type[BaseModel]): The Pydantic model class to convert
- `*args`: Additional positional arguments
- `**kwargs`: Additional keyword arguments
**Returns:** `dict[str, Any]` - OpenAI function calling schema dictionary
**Example:**
```python
from pydantic import BaseModel
from swarms.tools.base_tool import BaseTool
class UserInfo(BaseModel):
name: str
age: int
email: str
tool = BaseTool()
schema = tool.base_model_to_dict(UserInfo)
print(schema)
```
### `multi_base_models_to_dict`
**Description:** Convert multiple Pydantic BaseModels to OpenAI function calling schema.
**Arguments:**
- `base_models` (List[BaseModel]): List of Pydantic models to convert
**Returns:** `dict[str, Any]` - Combined OpenAI function calling schema
**Example:**
```python
from pydantic import BaseModel
from swarms.tools.base_tool import BaseTool
class User(BaseModel):
name: str
age: int
class Product(BaseModel):
name: str
price: float
tool = BaseTool()
schemas = tool.multi_base_models_to_dict([User, Product])
print(schemas)
```
### `dict_to_openai_schema_str`
**Description:** Convert a dictionary to OpenAI function calling schema string.
**Arguments:**
- `dict` (dict[str, Any]): Dictionary to convert
**Returns:** `str` - OpenAI schema string representation
**Example:**
```python
from swarms.tools.base_tool import BaseTool
my_dict = {
"type": "function",
"function": {
"name": "get_weather",
"description": "Get weather information",
"parameters": {"type": "object", "properties": {"city": {"type": "string"}}}
}
}
tool = BaseTool()
schema_str = tool.dict_to_openai_schema_str(my_dict)
print(schema_str)
```
### `multi_dict_to_openai_schema_str`
**Description:** Convert multiple dictionaries to OpenAI function calling schema string.
**Arguments:**
- `dicts` (list[dict[str, Any]]): List of dictionaries to convert
**Returns:** `str` - Combined OpenAI schema string representation
**Example:**
```python
from swarms.tools.base_tool import BaseTool
dict1 = {"type": "function", "function": {"name": "func1", "description": "Function 1"}}
dict2 = {"type": "function", "function": {"name": "func2", "description": "Function 2"}}
tool = BaseTool()
schema_str = tool.multi_dict_to_openai_schema_str([dict1, dict2])
print(schema_str)
```
### `get_docs_from_callable`
**Description:** Extract documentation from a callable item.
**Arguments:**
- `item`: The callable item to extract documentation from
**Returns:** Processed documentation
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def example_function():
"""This is an example function with documentation."""
pass
tool = BaseTool()
docs = tool.get_docs_from_callable(example_function)
print(docs)
```
### `execute_tool`
**Description:** Execute a tool based on a response string.
**Arguments:**
- `response` (str): JSON response string containing tool execution details
- `*args`: Additional positional arguments
- `**kwargs`: Additional keyword arguments
**Returns:** `Callable` - Result of the tool execution
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def greet(name: str) -> str:
"""Greet a person by name."""
return f"Hello, {name}!"
tool = BaseTool(tools=[greet])
response = '{"name": "greet", "parameters": {"name": "Alice"}}'
result = tool.execute_tool(response)
print(result) # Output: "Hello, Alice!"
```
### `detect_tool_input_type`
**Description:** Detect the type of tool input for appropriate processing.
**Arguments:**
- `input` (ToolType): The input to analyze
**Returns:** `str` - Type of the input ("Pydantic", "Dictionary", "Function", or "Unknown")
**Example:**
```python
from swarms.tools.base_tool import BaseTool
from pydantic import BaseModel
class MyModel(BaseModel):
value: int
def my_function():
pass
tool = BaseTool()
print(tool.detect_tool_input_type(MyModel)) # "Pydantic"
print(tool.detect_tool_input_type(my_function)) # "Function"
print(tool.detect_tool_input_type({"key": "value"})) # "Dictionary"
```
### `dynamic_run`
**Description:** Execute a dynamic run based on the input type with automatic type detection.
**Arguments:**
- `input` (Any): The input to be processed (Pydantic model, dict, or function)
**Returns:** `str` - The result of the dynamic run (schema string or execution result)
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def multiply(x: int, y: int) -> int:
"""Multiply two numbers."""
return x * y
tool = BaseTool(auto_execute_tool=False)
result = tool.dynamic_run(multiply)
print(result) # Returns OpenAI schema string
```
### `execute_tool_by_name`
**Description:** Search for a tool by name and execute it with the provided response.
**Arguments:**
- `tool_name` (str): The name of the tool to execute
- `response` (str): JSON response string containing execution parameters
**Returns:** `Any` - The result of executing the tool
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def calculate_sum(a: int, b: int) -> int:
"""Calculate sum of two numbers."""
return a + b
tool = BaseTool(function_map={"calculate_sum": calculate_sum})
result = tool.execute_tool_by_name("calculate_sum", '{"a": 5, "b": 3}')
print(result) # Output: 8
```
### `execute_tool_from_text`
**Description:** Convert a JSON-formatted string into a tool dictionary and execute the tool.
**Arguments:**
- `text` (str): A JSON-formatted string representing a tool call with 'name' and 'parameters' keys
**Returns:** `Any` - The result of executing the tool
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def divide(x: float, y: float) -> float:
"""Divide x by y."""
return x / y
tool = BaseTool(function_map={"divide": divide})
text = '{"name": "divide", "parameters": {"x": 10, "y": 2}}'
result = tool.execute_tool_from_text(text)
print(result) # Output: 5.0
```
### `check_str_for_functions_valid`
**Description:** Check if the output is a valid JSON string with a function name that matches the function map.
**Arguments:**
- `output` (str): The output string to validate
**Returns:** `bool` - True if the output is valid and the function name matches, False otherwise
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def test_func():
pass
tool = BaseTool(function_map={"test_func": test_func})
valid_output = '{"type": "function", "function": {"name": "test_func"}}'
is_valid = tool.check_str_for_functions_valid(valid_output)
print(is_valid) # Output: True
```
### `convert_funcs_into_tools`
**Description:** Convert all functions in the tools list into OpenAI function calling format.
**Arguments:** None
**Returns:** None (modifies internal state)
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def func1(x: int) -> int:
"""Function 1."""
return x * 2
def func2(y: str) -> str:
"""Function 2."""
return y.upper()
tool = BaseTool(tools=[func1, func2])
tool.convert_funcs_into_tools()
print(tool.function_map) # {'func1': <function func1>, 'func2': <function func2>}
```
### `convert_tool_into_openai_schema`
**Description:** Convert tools into OpenAI function calling schema format.
**Arguments:** None
**Returns:** `dict[str, Any]` - Combined OpenAI function calling schema
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def add(a: int, b: int) -> int:
"""Add two numbers."""
return a + b
def subtract(a: int, b: int) -> int:
"""Subtract b from a."""
return a - b
tool = BaseTool(tools=[add, subtract])
schema = tool.convert_tool_into_openai_schema()
print(schema)
```
### `check_func_if_have_docs`
**Description:** Check if a function has proper documentation.
**Arguments:**
- `func` (callable): The function to check
**Returns:** `bool` - True if function has documentation
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def documented_func():
"""This function has documentation."""
pass
def undocumented_func():
pass
tool = BaseTool()
print(tool.check_func_if_have_docs(documented_func)) # True
# tool.check_func_if_have_docs(undocumented_func) # Raises ToolDocumentationError
```
### `check_func_if_have_type_hints`
**Description:** Check if a function has proper type hints.
**Arguments:**
- `func` (callable): The function to check
**Returns:** `bool` - True if function has type hints
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def typed_func(x: int) -> str:
"""A typed function."""
return str(x)
def untyped_func(x):
"""An untyped function."""
return str(x)
tool = BaseTool()
print(tool.check_func_if_have_type_hints(typed_func)) # True
# tool.check_func_if_have_type_hints(untyped_func) # Raises ToolTypeHintError
```
### `find_function_name`
**Description:** Find a function by name in the tools list.
**Arguments:**
- `func_name` (str): The name of the function to find
**Returns:** `Optional[callable]` - The function if found, None otherwise
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def my_function():
"""My function."""
pass
tool = BaseTool(tools=[my_function])
found_func = tool.find_function_name("my_function")
print(found_func) # <function my_function at ...>
```
### `function_to_dict`
**Description:** Convert a function to dictionary representation.
**Arguments:**
- `func` (callable): The function to convert
**Returns:** `dict` - Dictionary representation of the function
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def example_func(param: str) -> str:
"""Example function."""
return param
tool = BaseTool()
func_dict = tool.function_to_dict(example_func)
print(func_dict)
```
### `multiple_functions_to_dict`
**Description:** Convert multiple functions to dictionary representations.
**Arguments:**
- `funcs` (list[callable]): List of functions to convert
**Returns:** `list[dict]` - List of dictionary representations
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def func1(x: int) -> int:
"""Function 1."""
return x
def func2(y: str) -> str:
"""Function 2."""
return y
tool = BaseTool()
func_dicts = tool.multiple_functions_to_dict([func1, func2])
print(func_dicts)
```
### `execute_function_with_dict`
**Description:** Execute a function using a dictionary of parameters.
**Arguments:**
- `func_dict` (dict): Dictionary containing function parameters
- `func_name` (Optional[str]): Name of function to execute (if not in dict)
**Returns:** `Any` - Result of function execution
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def power(base: int, exponent: int) -> int:
"""Calculate base to the power of exponent."""
return base ** exponent
tool = BaseTool(tools=[power])
result = tool.execute_function_with_dict({"base": 2, "exponent": 3}, "power")
print(result) # Output: 8
```
### `execute_multiple_functions_with_dict`
**Description:** Execute multiple functions using dictionaries of parameters.
**Arguments:**
- `func_dicts` (list[dict]): List of dictionaries containing function parameters
- `func_names` (Optional[list[str]]): Optional list of function names
**Returns:** `list[Any]` - List of results from function executions
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def add(a: int, b: int) -> int:
"""Add two numbers."""
return a + b
def multiply(a: int, b: int) -> int:
"""Multiply two numbers."""
return a * b
tool = BaseTool(tools=[add, multiply])
results = tool.execute_multiple_functions_with_dict(
[{"a": 1, "b": 2}, {"a": 3, "b": 4}],
["add", "multiply"]
)
print(results) # [3, 12]
```
### `validate_function_schema`
**Description:** Validate the schema of a function for different AI providers.
**Arguments:**
- `schema` (Optional[Union[List[Dict[str, Any]], Dict[str, Any]]]): Function schema(s) to validate
- `provider` (str): Target provider format ("openai", "anthropic", "generic", "auto")
**Returns:** `bool` - True if schema(s) are valid, False otherwise
**Example:**
```python
from swarms.tools.base_tool import BaseTool
openai_schema = {
"type": "function",
"function": {
"name": "add_numbers",
"description": "Add two numbers",
"parameters": {
"type": "object",
"properties": {
"a": {"type": "integer"},
"b": {"type": "integer"}
},
"required": ["a", "b"]
}
}
}
tool = BaseTool()
is_valid = tool.validate_function_schema(openai_schema, "openai")
print(is_valid) # True
```
### `get_schema_provider_format`
**Description:** Get the detected provider format of a schema.
**Arguments:**
- `schema` (Dict[str, Any]): Function schema dictionary
**Returns:** `str` - Provider format ("openai", "anthropic", "generic", "unknown")
**Example:**
```python
from swarms.tools.base_tool import BaseTool
openai_schema = {
"type": "function",
"function": {"name": "test", "description": "Test function"}
}
tool = BaseTool()
provider = tool.get_schema_provider_format(openai_schema)
print(provider) # "openai"
```
### `convert_schema_between_providers`
**Description:** Convert a function schema between different provider formats.
**Arguments:**
- `schema` (Dict[str, Any]): Source function schema
- `target_provider` (str): Target provider format ("openai", "anthropic", "generic")
**Returns:** `Dict[str, Any]` - Converted schema
**Example:**
```python
from swarms.tools.base_tool import BaseTool
openai_schema = {
"type": "function",
"function": {
"name": "test_func",
"description": "Test function",
"parameters": {"type": "object", "properties": {}}
}
}
tool = BaseTool()
anthropic_schema = tool.convert_schema_between_providers(openai_schema, "anthropic")
print(anthropic_schema)
# Output: {"name": "test_func", "description": "Test function", "input_schema": {...}}
```
### `execute_function_calls_from_api_response`
**Description:** Automatically detect and execute function calls from OpenAI or Anthropic API responses.
**Arguments:**
- `api_response` (Union[Dict[str, Any], str, List[Any]]): The API response containing function calls
- `sequential` (bool): If True, execute functions sequentially. If False, execute in parallel
- `max_workers` (int): Maximum number of worker threads for parallel execution
- `return_as_string` (bool): If True, return results as formatted strings
**Returns:** `Union[List[Any], List[str]]` - List of results from executed functions
**Example:**
```python
from swarms.tools.base_tool import BaseTool
def get_weather(city: str) -> str:
"""Get weather for a city."""
return f"Weather in {city}: Sunny, 25°C"
# Simulated OpenAI API response
openai_response = {
"choices": [{
"message": {
"tool_calls": [{
"type": "function",
"function": {
"name": "get_weather",
"arguments": '{"city": "New York"}'
},
"id": "call_123"
}]
}
}]
}
tool = BaseTool(tools=[get_weather])
results = tool.execute_function_calls_from_api_response(openai_response)
print(results) # ["Function 'get_weather' result:\nWeather in New York: Sunny, 25°C"]
```
### `detect_api_response_format`
**Description:** Detect the format of an API response.
**Arguments:**
- `response` (Union[Dict[str, Any], str, BaseModel]): API response to analyze
**Returns:** `str` - Detected format ("openai", "anthropic", "generic", "unknown")
**Example:**
```python
from swarms.tools.base_tool import BaseTool
openai_response = {
"choices": [{"message": {"tool_calls": []}}]
}
anthropic_response = {
"content": [{"type": "tool_use", "name": "test", "input": {}}]
}
tool = BaseTool()
print(tool.detect_api_response_format(openai_response)) # "openai"
print(tool.detect_api_response_format(anthropic_response)) # "anthropic"
```
---
## Exception Classes
The BaseTool class defines several custom exception classes for better error handling:
- `BaseToolError`: Base exception class for all BaseTool related errors
- `ToolValidationError`: Raised when tool validation fails
- `ToolExecutionError`: Raised when tool execution fails
- `ToolNotFoundError`: Raised when a requested tool is not found
- `FunctionSchemaError`: Raised when function schema conversion fails
- `ToolDocumentationError`: Raised when tool documentation is missing or invalid
- `ToolTypeHintError`: Raised when tool type hints are missing or invalid
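Because every error type subclasses `BaseToolError`, a single `except BaseToolError` clause can act as a catch-all after more specific handlers. A minimal sketch of the pattern (the classes and dispatcher below are illustrative stand-ins defined locally, not the library's actual implementations; in practice you would import the exceptions from `swarms.tools.base_tool`):

```python
# Illustrative stand-ins mirroring the swarms exception hierarchy
class BaseToolError(Exception):
    """Base class for all BaseTool-related errors."""

class ToolNotFoundError(BaseToolError):
    """Raised when a requested tool is not found."""

def run_tool(name: str) -> str:
    # A toy dispatcher that only knows one tool
    if name != "add":
        raise ToolNotFoundError(f"No tool named {name!r}")
    return "ok"

def safe_run(name: str) -> str:
    try:
        return run_tool(name)
    except ToolNotFoundError as e:
        # Handle the specific failure first
        return f"not found: {e}"
    except BaseToolError as e:
        # Generic fallback for any other tool error
        return f"tool error: {e}"

print(safe_run("add"))      # ok
print(safe_run("missing"))  # not found: No tool named 'missing'
```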
## Usage Tips
1. **Always provide documentation and type hints** for your functions when using BaseTool
2. **Use verbose=True** during development for detailed logging
3. **Set up function_map** for efficient tool execution by name
4. **Validate schemas** before using them with different AI providers
5. **Use parallel execution** for better performance when executing multiple functions
6. **Handle exceptions** appropriately using the custom exception classes
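Tip 5 can be sketched with a `ThreadPoolExecutor` — a simplified illustration of parallel function execution, not the library's actual implementation of `execute_multiple_functions_with_dict`:

```python
from concurrent.futures import ThreadPoolExecutor

def add(a: int, b: int) -> int:
    return a + b

def multiply(a: int, b: int) -> int:
    return a * b

def execute_parallel(funcs, param_dicts, max_workers: int = 4):
    # Submit each (function, kwargs) pair and collect results in input order
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(f, **p) for f, p in zip(funcs, param_dicts)]
        return [f.result() for f in futures]

results = execute_parallel([add, multiply], [{"a": 1, "b": 2}, {"a": 3, "b": 4}])
print(results)  # [3, 12]
```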

@ -0,0 +1,244 @@
# MCP Client Call Reference Documentation
This document provides a comprehensive reference for the MCP (Model Context Protocol) client call functions, including detailed parameter descriptions, return types, and usage examples.
## Table of Contents
- [aget_mcp_tools](#aget_mcp_tools)
- [get_mcp_tools_sync](#get_mcp_tools_sync)
- [get_tools_for_multiple_mcp_servers](#get_tools_for_multiple_mcp_servers)
- [execute_tool_call_simple](#execute_tool_call_simple)
## Function Reference
### aget_mcp_tools
Asynchronously fetches available MCP tools from the server with retry logic.
#### Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| server_path | Optional[str] | No | Path to the MCP server script |
| format | str | No | Format of the returned tools (default: "openai") |
| connection | Optional[MCPConnection] | No | MCP connection object |
| *args | Any | No | Additional positional arguments |
| **kwargs | Any | No | Additional keyword arguments |
#### Returns
- `List[Dict[str, Any]]`: List of available MCP tools in OpenAI format
#### Raises
- `MCPValidationError`: If server_path is invalid
- `MCPConnectionError`: If connection to server fails
#### Example
```python
import asyncio
from swarms.tools.mcp_client_call import aget_mcp_tools
from swarms.tools.mcp_connection import MCPConnection
async def main():
# Using server path
tools = await aget_mcp_tools(server_path="http://localhost:8000")
# Using connection object
connection = MCPConnection(
host="localhost",
port=8000,
headers={"Authorization": "Bearer token"}
)
tools = await aget_mcp_tools(connection=connection)
print(f"Found {len(tools)} tools")
if __name__ == "__main__":
asyncio.run(main())
```
### get_mcp_tools_sync
Synchronous version of `aget_mcp_tools` that handles event loop management.
#### Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| server_path | Optional[str] | No | Path to the MCP server script |
| format | str | No | Format of the returned tools (default: "openai") |
| connection | Optional[MCPConnection] | No | MCP connection object |
| *args | Any | No | Additional positional arguments |
| **kwargs | Any | No | Additional keyword arguments |
#### Returns
- `List[Dict[str, Any]]`: List of available MCP tools in OpenAI format
#### Raises
- `MCPValidationError`: If server_path is invalid
- `MCPConnectionError`: If connection to server fails
- `MCPExecutionError`: If event loop management fails
#### Example
```python
from swarms.tools.mcp_client_call import get_mcp_tools_sync
from swarms.tools.mcp_connection import MCPConnection
# Using server path
tools = get_mcp_tools_sync(server_path="http://localhost:8000")
# Using connection object
connection = MCPConnection(
host="localhost",
port=8000,
headers={"Authorization": "Bearer token"}
)
tools = get_mcp_tools_sync(connection=connection)
print(f"Found {len(tools)} tools")
```
### get_tools_for_multiple_mcp_servers
Get tools for multiple MCP servers concurrently using ThreadPoolExecutor.
#### Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| urls | List[str] | Yes | List of server URLs to fetch tools from |
| connections | List[MCPConnection] | No | Optional list of MCPConnection objects |
| format | str | No | Format to return tools in (default: "openai") |
| output_type | Literal["json", "dict", "str"] | No | Type of output format (default: "str") |
| max_workers | Optional[int] | No | Maximum number of worker threads |
#### Returns
- `List[Dict[str, Any]]`: Combined list of tools from all servers
#### Raises
- `MCPExecutionError`: If fetching tools from any server fails
#### Example
```python
from swarms.tools.mcp_client_call import get_tools_for_multiple_mcp_servers
from swarms.tools.mcp_connection import MCPConnection
# Define server URLs
urls = [
"http://server1:8000",
"http://server2:8000"
]
# Optional: Define connections
connections = [
MCPConnection(host="server1", port=8000),
MCPConnection(host="server2", port=8000)
]
# Get tools from all servers
tools = get_tools_for_multiple_mcp_servers(
urls=urls,
connections=connections,
format="openai",
output_type="dict",
max_workers=4
)
print(f"Found {len(tools)} tools across all servers")
```
### execute_tool_call_simple
Execute a tool call using the MCP client.
#### Parameters
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| response | Any | No | Tool call response object |
| server_path | str | No | Path to the MCP server |
| connection | Optional[MCPConnection] | No | MCP connection object |
| output_type | Literal["json", "dict", "str", "formatted"] | No | Type of output format (default: "str") |
| *args | Any | No | Additional positional arguments |
| **kwargs | Any | No | Additional keyword arguments |
#### Returns
- `List[Dict[str, Any]]`: Result of the tool execution
#### Raises
- `MCPConnectionError`: If connection to server fails
- `MCPExecutionError`: If tool execution fails
#### Example
```python
import asyncio
from swarms.tools.mcp_client_call import execute_tool_call_simple
from swarms.tools.mcp_connection import MCPConnection
async def main():
# Example tool call response
response = {
"name": "example_tool",
"parameters": {"param1": "value1"}
}
# Using server path
result = await execute_tool_call_simple(
response=response,
server_path="http://localhost:8000",
output_type="json"
)
# Using connection object
connection = MCPConnection(
host="localhost",
port=8000,
headers={"Authorization": "Bearer token"}
)
result = await execute_tool_call_simple(
response=response,
connection=connection,
output_type="dict"
)
print(f"Tool execution result: {result}")
if __name__ == "__main__":
asyncio.run(main())
```
## Error Handling
The MCP client functions use a retry mechanism with exponential backoff for failed requests. The following error types may be raised:
- `MCPValidationError`: Raised when input validation fails
- `MCPConnectionError`: Raised when connection to the MCP server fails
- `MCPExecutionError`: Raised when tool execution fails
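The retry behavior described above can be sketched in plain Python — a simplified illustration of exponential backoff with an injectable `sleep` (for testability), not the library's actual implementation:

```python
import time
from typing import Callable, Tuple, Type, TypeVar

T = TypeVar("T")

def retry_with_backoff(
    call: Callable[[], T],
    retries: int = 3,
    base_delay: float = 0.5,
    retryable: Tuple[Type[Exception], ...] = (ConnectionError, TimeoutError),
    sleep: Callable[[float], None] = time.sleep,
) -> T:
    """Invoke `call`, doubling the delay after each retryable failure."""
    for attempt in range(retries + 1):
        try:
            return call()
        except retryable:
            if attempt == retries:
                raise  # Out of retries: surface the last error
            sleep(base_delay * (2 ** attempt))
    raise AssertionError("unreachable")

attempts = []

def flaky() -> str:
    # Fails twice, then succeeds -- stands in for a transient server error
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("transient failure")
    return "tools fetched"

delays = []
result = retry_with_backoff(flaky, base_delay=0.5, sleep=delays.append)
print(result, delays)  # tools fetched [0.5, 1.0]
```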
## Best Practices
1. Always handle potential exceptions when using these functions
2. Use connection objects for authenticated requests
3. Consider using the async versions for better performance in async applications
4. Use appropriate output types based on your needs
5. When working with multiple servers, adjust max_workers based on your system's capabilities

@ -0,0 +1,604 @@
# Swarms Tools Documentation
Swarms provides a comprehensive toolkit for integrating various types of tools into your AI agents. This guide covers all available tool options including callable functions, MCP servers, schemas, and more.
## Installation
```bash
pip install swarms
```
## Overview
Swarms provides a comprehensive suite of tool integration methods to enhance your AI agents' capabilities:
| Tool Type | Description |
|-----------|-------------|
| **Callable Functions** | Direct integration of Python functions with proper type hints and comprehensive docstrings for immediate tool functionality |
| **MCP Servers** | Model Context Protocol servers enabling distributed tool functionality across multiple services and environments |
| **Tool Schemas** | Structured tool definitions that provide standardized interfaces and validation for tool integration |
| **Tool Collections** | Pre-built tool packages offering ready-to-use functionality for common use cases |
---
## Method 1: Callable Functions
Callable functions are the simplest way to add tools to your Swarms agents. They are regular Python functions with type hints and comprehensive docstrings.
### Step 1: Define Your Tool Functions
Create functions with the following requirements:
- **Type hints** for all parameters and return values
- **Comprehensive docstrings** with Args, Returns, Raises, and Examples sections
- **Error handling** for robust operation
#### Example: Cryptocurrency Price Tools
```python
import json
import requests
from swarms import Agent
def get_coin_price(coin_id: str, vs_currency: str = "usd") -> str:
"""
Get the current price of a specific cryptocurrency.
Args:
coin_id (str): The CoinGecko ID of the cryptocurrency
Examples: 'bitcoin', 'ethereum', 'cardano'
vs_currency (str, optional): The target currency for price conversion.
Supported: 'usd', 'eur', 'gbp', 'jpy', etc.
Defaults to "usd".
Returns:
str: JSON formatted string containing the coin's current price and market data
including market cap, 24h volume, and price changes
Raises:
requests.RequestException: If the API request fails due to network issues
ValueError: If coin_id is empty or invalid
TimeoutError: If the request takes longer than 10 seconds
Example:
>>> result = get_coin_price("bitcoin", "usd")
>>> print(result)
{"bitcoin": {"usd": 45000, "usd_market_cap": 850000000000, ...}}
>>> result = get_coin_price("ethereum", "eur")
>>> print(result)
{"ethereum": {"eur": 3200, "eur_market_cap": 384000000000, ...}}
"""
try:
# Validate input parameters
if not coin_id or not coin_id.strip():
raise ValueError("coin_id cannot be empty")
url = "https://api.coingecko.com/api/v3/simple/price"
params = {
"ids": coin_id.lower().strip(),
"vs_currencies": vs_currency.lower(),
"include_market_cap": True,
"include_24hr_vol": True,
"include_24hr_change": True,
"include_last_updated_at": True,
}
response = requests.get(url, params=params, timeout=10)
response.raise_for_status()
data = response.json()
# Check if the coin was found
if not data:
return json.dumps({
"error": f"Cryptocurrency '{coin_id}' not found. Please check the coin ID."
})
return json.dumps(data, indent=2)
except requests.RequestException as e:
return json.dumps({
"error": f"Failed to fetch price for {coin_id}: {str(e)}",
"suggestion": "Check your internet connection and try again"
})
except ValueError as e:
return json.dumps({"error": str(e)})
except Exception as e:
return json.dumps({"error": f"Unexpected error: {str(e)}"})
def get_top_cryptocurrencies(limit: int = 10, vs_currency: str = "usd") -> str:
"""
Fetch the top cryptocurrencies by market capitalization.
Args:
limit (int, optional): Number of coins to retrieve.
Range: 1-250 coins
Defaults to 10.
vs_currency (str, optional): The target currency for price conversion.
Supported: 'usd', 'eur', 'gbp', 'jpy', etc.
Defaults to "usd".
Returns:
str: JSON formatted string containing top cryptocurrencies with detailed market data
including: id, symbol, name, current_price, market_cap, market_cap_rank,
total_volume, price_change_24h, price_change_7d, last_updated
Raises:
requests.RequestException: If the API request fails
ValueError: If limit is not between 1 and 250
Example:
>>> result = get_top_cryptocurrencies(5, "usd")
>>> print(result)
[{"id": "bitcoin", "name": "Bitcoin", "current_price": 45000, ...}]
>>> result = get_top_cryptocurrencies(limit=3, vs_currency="eur")
>>> print(result)
[{"id": "bitcoin", "name": "Bitcoin", "current_price": 38000, ...}]
"""
try:
# Validate parameters
if not isinstance(limit, int) or not 1 <= limit <= 250:
raise ValueError("Limit must be an integer between 1 and 250")
url = "https://api.coingecko.com/api/v3/coins/markets"
params = {
"vs_currency": vs_currency.lower(),
"order": "market_cap_desc",
"per_page": limit,
"page": 1,
"sparkline": False,
"price_change_percentage": "24h,7d",
}
response = requests.get(url, params=params, timeout=10)
response.raise_for_status()
data = response.json()
# Simplify and structure the data for better readability
simplified_data = []
for coin in data:
simplified_data.append({
"id": coin.get("id"),
"symbol": coin.get("symbol", "").upper(),
"name": coin.get("name"),
"current_price": coin.get("current_price"),
"market_cap": coin.get("market_cap"),
"market_cap_rank": coin.get("market_cap_rank"),
"total_volume": coin.get("total_volume"),
"price_change_24h": round(coin.get("price_change_percentage_24h", 0), 2),
"price_change_7d": round(coin.get("price_change_percentage_7d_in_currency", 0), 2),
"last_updated": coin.get("last_updated"),
})
return json.dumps(simplified_data, indent=2)
except (requests.RequestException, ValueError) as e:
return json.dumps({
"error": f"Failed to fetch top cryptocurrencies: {str(e)}"
})
except Exception as e:
return json.dumps({"error": f"Unexpected error: {str(e)}"})
def search_cryptocurrencies(query: str) -> str:
"""
Search for cryptocurrencies by name or symbol.
Args:
query (str): The search term (coin name or symbol)
Examples: 'bitcoin', 'btc', 'ethereum', 'eth'
Case-insensitive search
Returns:
str: JSON formatted string containing search results with coin details
including: id, name, symbol, market_cap_rank, thumb (icon URL)
Limited to top 10 results for performance
Raises:
requests.RequestException: If the API request fails
ValueError: If query is empty
Example:
>>> result = search_cryptocurrencies("ethereum")
>>> print(result)
{"coins": [{"id": "ethereum", "name": "Ethereum", "symbol": "eth", ...}]}
>>> result = search_cryptocurrencies("btc")
>>> print(result)
{"coins": [{"id": "bitcoin", "name": "Bitcoin", "symbol": "btc", ...}]}
"""
try:
# Validate input
if not query or not query.strip():
raise ValueError("Search query cannot be empty")
url = "https://api.coingecko.com/api/v3/search"
params = {"query": query.strip()}
response = requests.get(url, params=params, timeout=10)
response.raise_for_status()
data = response.json()
# Extract and format the results
coins = data.get("coins", [])[:10] # Limit to top 10 results
result = {
"coins": coins,
"query": query,
"total_results": len(data.get("coins", [])),
"showing": min(len(coins), 10)
}
return json.dumps(result, indent=2)
except requests.RequestException as e:
return json.dumps({
"error": f'Failed to search for "{query}": {str(e)}'
})
except ValueError as e:
return json.dumps({"error": str(e)})
except Exception as e:
return json.dumps({"error": f"Unexpected error: {str(e)}"})
```
### Step 2: Configure Your Agent
Create an agent with the following key parameters:
```python
# Initialize the agent with cryptocurrency tools
agent = Agent(
agent_name="Financial-Analysis-Agent", # Unique identifier for your agent
agent_description="Personal finance advisor agent with cryptocurrency market analysis capabilities",
system_prompt="""You are a personal finance advisor agent with access to real-time
cryptocurrency data from CoinGecko. You can help users analyze market trends, check
coin prices, find trending cryptocurrencies, and search for specific coins. Always
provide accurate, up-to-date information and explain market data in an easy-to-understand way.""",
max_loops=1, # Number of reasoning loops
max_tokens=4096, # Maximum response length
model_name="anthropic/claude-3-opus-20240229", # LLM model to use
dynamic_temperature_enabled=True, # Enable adaptive creativity
output_type="all", # Return complete response
tools=[ # List of callable functions
get_coin_price,
get_top_cryptocurrencies,
search_cryptocurrencies,
],
)
```
### Step 3: Use Your Agent
```python
# Example usage with different queries
response = agent.run("What are the top 5 cryptocurrencies by market cap?")
print(response)
# Query with specific parameters
response = agent.run("Get the current price of Bitcoin and Ethereum in EUR")
print(response)
# Search functionality
response = agent.run("Search for cryptocurrencies related to 'cardano'")
print(response)
```
---
## Method 2: MCP (Model Context Protocol) Servers
MCP servers provide a standardized way to create distributed tool functionality. They're ideal for:
- **Reusable tools** across multiple agents
- **Complex tool logic** that needs isolation
- **Third-party tool integration**
- **Scalable architectures**
### Step 1: Create Your MCP Server
```python
from mcp.server.fastmcp import FastMCP
import requests
# Initialize the MCP server with configuration
mcp = FastMCP("OKXCryptoPrice") # Server name for identification
mcp.settings.port = 8001 # Port for server communication
```
### Step 2: Define MCP Tools
Each MCP tool requires the `@mcp.tool` decorator with specific parameters:
```python
@mcp.tool(
name="get_okx_crypto_price", # Tool identifier (must be unique)
description="Get the current price and basic information for a given cryptocurrency from OKX exchange.",
)
def get_okx_crypto_price(symbol: str) -> str:
"""
Get the current price and basic information for a given cryptocurrency using OKX API.
Args:
symbol (str): The cryptocurrency trading pair
Format: 'BASE-QUOTE' (e.g., 'BTC-USDT', 'ETH-USDT')
If only base currency provided, '-USDT' will be appended
Case-insensitive input
Returns:
str: A formatted string containing:
- Current price in USDT
- 24-hour price change percentage
- Formatted for human readability
Raises:
requests.RequestException: If the OKX API request fails
ValueError: If symbol format is invalid
ConnectionError: If unable to connect to OKX servers
Example:
>>> get_okx_crypto_price('BTC-USDT')
'Current price of BTC/USDT: $45,000.00\n24h Change: +2.34%'
>>> get_okx_crypto_price('eth') # Automatically converts to ETH-USDT
'Current price of ETH/USDT: $3,200.50\n24h Change: -1.23%'
"""
try:
# Input validation and formatting
if not symbol or not symbol.strip():
return "Error: Please provide a valid trading pair (e.g., 'BTC-USDT')"
# Normalize symbol format
symbol = symbol.upper().strip()
if not symbol.endswith("-USDT"):
symbol = f"{symbol}-USDT"
# OKX API endpoint for ticker information
url = f"https://www.okx.com/api/v5/market/ticker?instId={symbol}"
# Make the API request with timeout
response = requests.get(url, timeout=10)
response.raise_for_status()
data = response.json()
# Check API response status
if data.get("code") != "0":
return f"Error: {data.get('msg', 'Unknown error from OKX API')}"
# Extract ticker data
ticker_data = data.get("data", [{}])[0]
if not ticker_data:
return f"Error: Could not find data for {symbol}. Please verify the trading pair exists."
        # Parse numerical data (OKX returns these fields as strings)
        price = float(ticker_data.get("last", 0))
        # The v5 ticker has no change field, so derive the 24h change from open24h
        open_24h = float(ticker_data.get("open24h", 0))
        change_percent = ((price - open_24h) / open_24h * 100) if open_24h else 0.0
# Format response
base_currency = symbol.split("-")[0]
change_symbol = "+" if change_percent >= 0 else ""
return (f"Current price of {base_currency}/USDT: ${price:,.2f}\n"
f"24h Change: {change_symbol}{change_percent:.2f}%")
except requests.exceptions.Timeout:
return "Error: Request timed out. OKX servers may be slow."
except requests.exceptions.RequestException as e:
return f"Error fetching OKX data: {str(e)}"
except (ValueError, KeyError) as e:
return f"Error parsing OKX response: {str(e)}"
except Exception as e:
return f"Unexpected error: {str(e)}"
@mcp.tool(
name="get_okx_crypto_volume", # Second tool with different functionality
description="Get the 24-hour trading volume for a given cryptocurrency from OKX exchange.",
)
def get_okx_crypto_volume(symbol: str) -> str:
"""
Get the 24-hour trading volume for a given cryptocurrency using OKX API.
Args:
symbol (str): The cryptocurrency trading pair
Format: 'BASE-QUOTE' (e.g., 'BTC-USDT', 'ETH-USDT')
If only base currency provided, '-USDT' will be appended
Case-insensitive input
Returns:
str: A formatted string containing:
- 24-hour trading volume in the base currency
- Volume formatted with thousand separators
- Currency symbol for clarity
Raises:
requests.RequestException: If the OKX API request fails
ValueError: If symbol format is invalid
Example:
>>> get_okx_crypto_volume('BTC-USDT')
'24h Trading Volume for BTC/USDT: 12,345.67 BTC'
        >>> get_okx_crypto_volume('eth')  # Converts to ETH-USDT
        '24h Trading Volume for ETH/USDT: 98,765.43 ETH'
"""
try:
# Input validation and formatting
if not symbol or not symbol.strip():
return "Error: Please provide a valid trading pair (e.g., 'BTC-USDT')"
# Normalize symbol format
symbol = symbol.upper().strip()
if not symbol.endswith("-USDT"):
symbol = f"{symbol}-USDT"
# OKX API endpoint
url = f"https://www.okx.com/api/v5/market/ticker?instId={symbol}"
# Make API request
response = requests.get(url, timeout=10)
response.raise_for_status()
data = response.json()
# Validate API response
if data.get("code") != "0":
return f"Error: {data.get('msg', 'Unknown error from OKX API')}"
ticker_data = data.get("data", [{}])[0]
if not ticker_data:
return f"Error: Could not find data for {symbol}. Please verify the trading pair."
# Extract volume data
volume_24h = float(ticker_data.get("vol24h", 0))
base_currency = symbol.split("-")[0]
return f"24h Trading Volume for {base_currency}/USDT: {volume_24h:,.2f} {base_currency}"
except requests.exceptions.RequestException as e:
return f"Error fetching OKX data: {str(e)}"
except Exception as e:
return f"Error: {str(e)}"
```
### Step 3: Start Your MCP Server
```python
if __name__ == "__main__":
# Run the MCP server with SSE (Server-Sent Events) transport
# Server will be available at http://localhost:8001/sse
mcp.run(transport="sse")
```
### Step 4: Connect Agent to MCP Server
```python
from swarms import Agent
# Connect via the MCP server's SSE URL (an MCPConnection object also works)
mcp_url = "http://0.0.0.0:8001/sse"
# Initialize agent with MCP tools
agent = Agent(
agent_name="Financial-Analysis-Agent", # Agent identifier
agent_description="Personal finance advisor with OKX exchange data access",
system_prompt="""You are a financial analysis agent with access to real-time
cryptocurrency data from OKX exchange. You can check prices, analyze trading volumes,
and provide market insights. Always format numerical data clearly and explain
market movements in context.""",
max_loops=1, # Processing loops
mcp_url=mcp_url, # MCP server connection
output_type="all", # Complete response format
# Note: tools are automatically loaded from MCP server
)
```
### Step 5: Use Your MCP-Enabled Agent
```python
# The agent automatically discovers and uses tools from the MCP server
response = agent.run(
"Fetch the price for Bitcoin using the OKX exchange and also get its trading volume"
)
print(response)
# Multiple tool usage
response = agent.run(
"Compare the prices of BTC, ETH, and ADA on OKX, and show their trading volumes"
)
print(response)
```
---
## Best Practices
### Function Design
| Practice | Description |
|----------|-------------|
| Type Hints | Always use type hints for all parameters and return values |
| Docstrings | Write comprehensive docstrings with Args, Returns, Raises, and Examples |
| Error Handling | Implement proper error handling with specific exception types |
| Input Validation | Validate input parameters before processing |
| Data Structure | Return structured data (preferably JSON) for consistency |
### MCP Server Development
| Practice | Description |
|----------|-------------|
| Tool Naming | Use descriptive tool names that clearly indicate functionality |
| Timeouts | Set appropriate timeouts for external API calls |
| Error Handling | Implement graceful error handling for network issues |
| Configuration | Use environment variables for sensitive configuration |
| Testing | Test tools independently before integration |
### Agent Configuration
| Practice | Description |
|----------|-------------|
| Loop Control | Choose appropriate max_loops based on task complexity |
| Token Management | Set reasonable token limits to control response length |
| System Prompts | Write clear system prompts that explain tool capabilities |
| Agent Naming | Use meaningful agent names for debugging and logging |
| Tool Integration | Consider tool combinations for comprehensive functionality |
### Performance Optimization
| Practice | Description |
|----------|-------------|
| Data Caching | Cache frequently requested data when possible |
| Connection Management | Use connection pooling for multiple API calls |
| Rate Control | Implement rate limiting to respect API constraints |
| Performance Monitoring | Monitor tool execution times and optimize slow operations |
| Async Operations | Use async operations for concurrent tool execution when supported |
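As a concrete illustration of the caching row, a small TTL decorator can wrap any tool function so repeated identical calls within the window skip the expensive API round-trip (this is a generic sketch, not a Swarms utility):

```python
import time
from functools import wraps


def ttl_cache(ttl_seconds: float):
    """Cache a function's results for ttl_seconds, keyed by positional args."""
    def decorator(func):
        cache: dict = {}

        @wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            if args in cache:
                value, stored_at = cache[args]
                if now - stored_at < ttl_seconds:
                    return value  # Serve the cached value while it is fresh
            value = func(*args)
            cache[args] = (value, now)
            return value

        return wrapper
    return decorator


@ttl_cache(ttl_seconds=30.0)
def fetch_volume(symbol: str) -> float:
    # Stand-in for an expensive exchange API call
    return 123.45
```

Pick the TTL to match how quickly the underlying data goes stale; price data may warrant a few seconds, reference data minutes.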
---
## Troubleshooting
### Common Issues
#### Tool Not Found
```python
# Ensure function is in tools list
agent = Agent(
# ... other config ...
tools=[your_function_name], # Function object, not string
)
```
#### MCP Connection Failed
```python
# Check server status and URL
import requests
response = requests.get("http://localhost:8001/health") # Health check endpoint
```
#### Type Hint Errors
```python
# Always specify return types
def my_tool(param: str) -> str: # Not just -> None
return "result"
```
#### JSON Parsing Issues
```python
# Always return valid JSON strings from inside the tool function
import json

def my_tool(data: dict) -> str:
    return json.dumps({"result": data}, indent=2)
```

[:material-file-document: Swarms.ai Documentation](https://docs.swarms.world){ .md-button }
[:material-application: Swarms.ai Platform](https://swarms.world/platform){ .md-button }
[:material-key: API Key Management](https://swarms.world/platform/api-keys){ .md-button }
[:material-forum: Swarms.ai Community](https://discord.gg/jM3Z6M9uMq){ .md-button }

# Available Models
| Model Name | Description | Input Price | Output Price | Use Cases |
|-----------------------|---------------------------------------------------------------------------------------------------------|--------------|--------------|------------------------------------------------------------------------|
| **internlm-xcomposer2-4khd** | One of the highest performing VLMs (Vision Language Models). | $4/1M Tokens | $8/1M Tokens | High-resolution video processing and understanding. |
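For budgeting, the listed per-1M-token rates translate into a per-request cost as follows (a simple arithmetic sketch, not an official pricing API):

```python
def request_cost(
    prompt_tokens: int,
    completion_tokens: int,
    input_price_per_m: float = 4.0,
    output_price_per_m: float = 8.0,
) -> float:
    """Dollar cost of one request at the listed per-1M-token rates."""
    return (
        prompt_tokens * input_price_per_m
        + completion_tokens * output_price_per_m
    ) / 1_000_000


# A request with 10,000 input tokens and 2,000 output tokens:
print(request_cost(10_000, 2_000))  # → 0.056
```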
## What models should we add?
[Book a call with us to learn more about your needs:](https://calendly.com/swarm-corp/30min)

# CreateNow API Documentation
Welcome to the CreateNow API documentation! This API enables developers to generate AI-powered content, including images, music, videos, and speech, using natural language prompts. Use the endpoints below to start generating content.
---
## **1. Claim Your API Key**
To use the API, you must first claim your API key. Visit the following link to create an account and get your API key:
### **Claim Your Key**
```
https://createnow.xyz/account
```
After signing up, your API key will be available in your account dashboard. Keep it secure and include it in your API requests as a Bearer token.
---
## **2. Generation Endpoint**
The generation endpoint allows you to create AI-generated content using natural language prompts.
### **Endpoint**
```
POST https://createnow.xyz/api/v1/generate
```
### **Authentication**
Include a Bearer token in the `Authorization` header for all requests:
```
Authorization: Bearer YOUR_API_KEY
```
### **Basic Usage**
The simplest way to use the API is to send a prompt. The system will automatically detect the appropriate media type.
#### **Example Request (Basic)**
```json
{
"prompt": "a beautiful sunset over the ocean"
}
```
### **Advanced Options**
You can specify additional parameters for finer control over the output.
#### **Parameters**
| Parameter | Type | Description | Default |
|----------------|-----------|---------------------------------------------------------------------------------------------------|--------------|
| `prompt` | `string` | The natural language description of the content to generate. | Required |
| `type` | `string` | The type of content to generate (`image`, `music`, `video`, `speech`). | Auto-detect |
| `count` | `integer` | The number of outputs to generate (1-4). | 1 |
| `duration` | `integer` | Duration of audio or video content in seconds (applicable to `music` and `speech`). | N/A |
#### **Example Request (Advanced)**
```json
{
"prompt": "create an upbeat jazz melody",
"type": "music",
"count": 2,
"duration": 30
}
```
### **Response Format**
#### **Success Response**
```json
{
"success": true,
"outputs": [
{
"url": "https://createnow.xyz/storage/image1.png",
"creation_id": "12345",
"share_url": "https://createnow.xyz/share/12345"
}
],
"mediaType": "image",
"confidence": 0.95,
"detected": true
}
```
#### **Error Response**
```json
{
"error": "Invalid API Key",
"status": 401
}
```
---
## **3. Examples in Multiple Languages**
### **Python**
```python
import requests
url = "https://createnow.xyz/api/v1/generate"
headers = {
"Authorization": "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
}
payload = {
"prompt": "a futuristic cityscape at night",
"type": "image",
"count": 2
}
response = requests.post(url, json=payload, headers=headers)
print(response.json())
```
### **Node.js**
```javascript
const axios = require('axios');
const url = "https://createnow.xyz/api/v1/generate";
const headers = {
Authorization: "Bearer YOUR_API_KEY",
"Content-Type": "application/json"
};
const payload = {
prompt: "a futuristic cityscape at night",
type: "image",
count: 2
};
axios.post(url, payload, { headers })
.then(response => {
console.log(response.data);
})
.catch(error => {
console.error(error.response.data);
});
```
### **cURL**
```bash
curl -X POST https://createnow.xyz/api/v1/generate \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"prompt": "a futuristic cityscape at night",
"type": "image",
"count": 2
}'
```
### **Java**
```java
import java.net.HttpURLConnection;
import java.net.URL;
import java.io.OutputStream;
public class CreateNowAPI {
public static void main(String[] args) throws Exception {
URL url = new URL("https://createnow.xyz/api/v1/generate");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.setRequestMethod("POST");
conn.setRequestProperty("Authorization", "Bearer YOUR_API_KEY");
conn.setRequestProperty("Content-Type", "application/json");
conn.setDoOutput(true);
String jsonPayload = "{" +
"\"prompt\": \"a futuristic cityscape at night\", " +
"\"type\": \"image\", " +
"\"count\": 2}";
OutputStream os = conn.getOutputStream();
os.write(jsonPayload.getBytes());
os.flush();
int responseCode = conn.getResponseCode();
System.out.println("Response Code: " + responseCode);
}
}
```
---
## **4. Error Codes**
| Status Code | Meaning | Possible Causes |
|-------------|----------------------------------|----------------------------------------|
| 400 | Bad Request | Invalid parameters or payload. |
| 401 | Unauthorized | Invalid or missing API key. |
| 402 | Payment Required | Insufficient credits for the request. |
| 500 | Internal Server Error | Issue on the server side. |
---
## **5. Notes and Limitations**
- **Maximum Prompt Length:** 1000 characters.
- **Maximum Outputs per Request:** 4.
- **Supported Media Types:** `image`, `music`, `video`, `speech`.
- **Content Shareability:** Every output includes a unique creation ID and shareable URL.
- **Auto-Detection:** Uses advanced natural language processing to determine the most appropriate media type.
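These limits can be checked client-side before spending a request; a minimal validation sketch (the constants mirror the limits listed above):

```python
from typing import Optional

MAX_PROMPT_LENGTH = 1000
MAX_OUTPUTS = 4
SUPPORTED_TYPES = {"image", "music", "video", "speech"}


def validate_request(prompt: str, media_type: Optional[str] = None, count: int = 1) -> None:
    """Raise ValueError if a generation request violates the documented limits."""
    if not prompt or len(prompt) > MAX_PROMPT_LENGTH:
        raise ValueError(f"prompt must be 1-{MAX_PROMPT_LENGTH} characters")
    if media_type is not None and media_type not in SUPPORTED_TYPES:
        raise ValueError(f"type must be one of {sorted(SUPPORTED_TYPES)}")
    if not 1 <= count <= MAX_OUTPUTS:
        raise ValueError(f"count must be between 1 and {MAX_OUTPUTS}")
```

Failing fast locally avoids a round-trip that would come back as a `400 Bad Request`.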
---
For further support or questions, please contact our support team at [support@createnow.xyz](mailto:support@createnow.xyz).

# Getting Started with State-of-the-Art Vision Language Models (VLMs) Using the Swarms API
The intersection of vision and language tasks within the field of artificial intelligence has led to the emergence of highly sophisticated models known as Vision Language Models (VLMs). These models leverage the capabilities of both computer vision and natural language processing to provide a more nuanced understanding of multimodal inputs. In this blog post, we will guide you through the process of integrating state-of-the-art VLMs available through the Swarms API, focusing particularly on models like "internlm-xcomposer2-4khd", which represents a blend of high-performance language and visual understanding.
#### What Are Vision Language Models?
Vision Language Models are at the frontier of integrating visual data processing with text analysis. These models are trained on large datasets that include both images and their textual descriptions, learning to correlate visual elements with linguistic context. The result is a model that can not only recognize objects in an image but also generate descriptive, context-aware text, answer questions about the image, and even engage in a dialogue about its content.
#### Why Use Swarms API for VLMs?
Swarms API provides access to several cutting-edge VLMs including the "internlm-xcomposer2-4khd" model. This API is designed for developers looking to seamlessly integrate advanced multimodal capabilities into their applications without the need for extensive machine learning expertise or infrastructure. Swarms API is robust, scalable, and offers state-of-the-art models that are continuously updated to leverage the latest advancements in AI research.
#### Prerequisites
Before diving into the technical setup, ensure you have the following:
- An active account with Swarms API to obtain an API key.
- Python installed on your machine (Python 3.6 or later is recommended).
- An environment where you can install packages and run Python scripts (like Visual Studio Code, Jupyter Notebook, or simply your terminal).
#### Setting Up Your Environment
First, you'll need to install the `OpenAI` Python library if it's not already installed:
```bash
pip install openai
```
#### Integrating the Swarms API
Here's a basic guide on how to set up the Swarms API in your Python environment:
1. **API Key Configuration**:
Start by setting up your API key and base URL. Replace `"your_swarms_key"` with the actual API key you obtained from Swarms.
```python
from openai import OpenAI
openai_api_key = "your_swarms_key"
openai_api_base = "https://api.swarms.world/v1"
```
2. **Initialize Client**:
Initialize your OpenAI client with the provided API key and base URL.
```python
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
```
3. **Creating a Chat Completion**:
To use the VLM, you'll send a request to the API with a multimodal input consisting of both an image and a text query. The following example shows how to structure this request:
```python
chat_response = client.chat.completions.create(
model="internlm-xcomposer2-4khd",
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
},
},
{"type": "text", "text": "What's in this image?"},
]
}
],
)
print("Chat response:", chat_response)
```
This code sends a multimodal query to the model, which includes an image URL followed by a text question regarding the image.
#### Understanding the Response
The response from the API will include details generated by the model about the image based on the textual query. This could range from simple descriptions to complex narratives, depending on the model's capabilities and the nature of the question.
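With the OpenAI SDK object above, the generated text is at `chat_response.choices[0].message.content`. Working against the raw JSON shape instead, extraction looks like this (field names assumed to follow the OpenAI chat-completions format):

```python
def extract_reply(response: dict) -> str:
    """Pull the assistant's text out of a chat-completions-style response dict."""
    choices = response.get("choices", [])
    if not choices:
        raise ValueError("response contained no choices")
    return choices[0]["message"]["content"]


# Minimal example payload:
sample = {
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "A wooden boardwalk through a meadow.",
            },
        }
    ]
}
print(extract_reply(sample))  # → A wooden boardwalk through a meadow.
```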
#### Best Practices
- **Data Privacy**: Always ensure that the images and data you use comply with privacy laws and regulations.
- **Error Handling**: Implement robust error handling to manage potential issues during API calls.
- **Model Updates**: Keep track of updates to the Swarms API and model improvements to leverage new features and improved accuracies.
#### Conclusion
Integrating VLMs via the Swarms API opens up a plethora of opportunities for developers to create rich, interactive, and intelligent applications that understand and interpret the world not just through text but through visuals as well. Whether you're building an educational tool, a content management system, or an interactive chatbot, these models can significantly enhance the way users interact with your application.
As you embark on your journey to integrate these powerful models into your projects, remember that the key to successful implementation lies in understanding the capabilities and limitations of the technology, continually testing with diverse data, and iterating based on user feedback and technological advances.
Happy coding, and here's to building more intelligent, multimodal applications!

# Swarm Cloud API Reference
## Overview
The AI Chat Completion API processes text and image inputs to generate conversational responses. It supports various configurations to customize response behavior and manage input content.
## API Endpoints
### Chat Completion URL
`https://api.swarms.world`
- **Endpoint:** `/v1/chat/completions`
- **Full URL:** `https://api.swarms.world/v1/chat/completions`
- **Method:** POST
- **Description:** Generates a response based on the provided conversation history and parameters.
#### Request Parameters
| Parameter | Type | Description | Required |
|---------------|--------------------|-----------------------------------------------------------|----------|
| `model` | string | The AI model identifier. | Yes |
| `messages` | array of objects | A list of chat messages, including the sender's role and content. | Yes |
| `temperature` | float | Controls randomness. Lower values make responses more deterministic. | No |
| `top_p` | float | Controls diversity. Lower values lead to less random completions. | No |
| `max_tokens` | integer | The maximum number of tokens to generate. | No |
| `stream` | boolean | If set to true, responses are streamed back as they're generated. | No |
#### Response Structure
- **Success Response Code:** `200 OK`
```markdown
{
"model": string,
"object": string,
"choices": array of objects,
"usage": object
}
```
### List Models
- **Endpoint:** `/v1/models`
- **Method:** GET
- **Description:** Retrieves a list of available models.
#### Response Structure
- **Success Response Code:** `200 OK`
```markdown
{
"data": array of objects
}
```
## Objects
### Request
| Field | Type | Description | Required |
|-----------|---------------------|-----------------------------------------------|----------|
| `role` | string | The role of the message sender. | Yes |
| `content` | string or array | The content of the message. | Yes |
| `name` | string | An optional name identifier for the sender. | No |
### Response
| Field | Type | Description |
|-----------|--------|------------------------------------|
| `index` | integer| The index of the choice. |
| `message` | object | A `ChatMessageResponse` object. |
#### UsageInfo
| Field | Type | Description |
|-------------------|---------|-----------------------------------------------|
| `prompt_tokens` | integer | The number of tokens used in the prompt. |
| `total_tokens` | integer | The total number of tokens used. |
| `completion_tokens` | integer | The number of tokens used for the completion. |
## Example Requests
### Text Chat Completion
```json
POST /v1/chat/completions
{
"model": "cogvlm-chat-17b",
"messages": [
{
"role": "user",
"content": "Hello, world!"
}
],
"temperature": 0.8
}
```
### Image and Text Chat Completion
```json
POST /v1/chat/completions
{
"model": "cogvlm-chat-17b",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this image"
},
{
"type": "image_url",
"image_url": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD..."
}
]
}
],
"temperature": 0.8,
"top_p": 0.9,
"max_tokens": 1024
}
```
## Error Codes
The API uses standard HTTP status codes to indicate the success or failure of an API call.
| Status Code | Description |
|-------------|-----------------------------------|
| 200 | OK - The request has succeeded. |
| 400 | Bad Request - Invalid request format. |
| 500 | Internal Server Error - An error occurred on the server. |
## Examples in Various Languages
### Python
```python
import requests
import base64
from PIL import Image
from io import BytesIO
# Convert image to Base64
def image_to_base64(image_path):
with Image.open(image_path) as image:
buffered = BytesIO()
image.save(buffered, format="JPEG")
img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
return img_str
# Replace 'image.jpg' with the path to your image
base64_image = image_to_base64("your_image.jpg")
text_data = {"type": "text", "text": "Describe what is in the image"}
image_data = {
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
}
# Construct the request data
request_data = {
"model": "cogvlm-chat-17b",
"messages": [{"role": "user", "content": [text_data, image_data]}],
"temperature": 0.8,
"top_p": 0.9,
"max_tokens": 1024,
}
# Specify the URL of your FastAPI application
url = "https://api.swarms.world/v1/chat/completions"
# Send the request
response = requests.post(url, json=request_data)
# Print the response from the server
print(response.text)
```
### Example API Request in Node
```js
const fs = require('fs');
const https = require('https');
const sharp = require('sharp');
// Convert image to Base64
async function imageToBase64(imagePath) {
try {
const imageBuffer = await sharp(imagePath).jpeg().toBuffer();
return imageBuffer.toString('base64');
} catch (error) {
console.error('Error converting image to Base64:', error);
}
}
// Main function to execute the workflow
async function main() {
const base64Image = await imageToBase64("your_image.jpg");
const textData = { type: "text", text: "Describe what is in the image" };
const imageData = {
type: "image_url",
image_url: { url: `data:image/jpeg;base64,${base64Image}` },
};
// Construct the request data
const requestData = JSON.stringify({
model: "cogvlm-chat-17b",
messages: [{ role: "user", content: [textData, imageData] }],
temperature: 0.8,
top_p: 0.9,
max_tokens: 1024,
});
const options = {
hostname: 'api.swarms.world',
path: '/v1/chat/completions',
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': requestData.length,
},
};
const req = https.request(options, (res) => {
let responseBody = '';
res.on('data', (chunk) => {
responseBody += chunk;
});
res.on('end', () => {
console.log('Response:', responseBody);
});
});
req.on('error', (error) => {
console.error(error);
});
req.write(requestData);
req.end();
}
main();
```
### Example API Request in Go
```go
package main
import (
"bytes"
"encoding/base64"
"encoding/json"
"fmt"
"image"
"image/jpeg"
_ "image/png" // Register PNG format
"io"
"net/http"
"os"
)
// imageToBase64 converts an image to a Base64-encoded string.
func imageToBase64(imagePath string) (string, error) {
file, err := os.Open(imagePath)
if err != nil {
return "", err
}
defer file.Close()
img, _, err := image.Decode(file)
if err != nil {
return "", err
}
buf := new(bytes.Buffer)
err = jpeg.Encode(buf, img, nil)
if err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
}
// main is the entry point of the program.
func main() {
base64Image, err := imageToBase64("your_image.jpg")
if err != nil {
fmt.Println("Error converting image to Base64:", err)
return
}
requestData := map[string]interface{}{
"model": "cogvlm-chat-17b",
"messages": []map[string]interface{}{
{
"role": "user",
"content": []map[string]string{{"type": "text", "text": "Describe what is in the image"}, {"type": "image_url", "image_url": {"url": fmt.Sprintf("data:image/jpeg;base64,%s", base64Image)}}},
},
},
"temperature": 0.8,
"top_p": 0.9,
"max_tokens": 1024,
}
requestBody, err := json.Marshal(requestData)
if err != nil {
fmt.Println("Error marshaling request data:", err)
return
}
url := "https://api.swarms.world/v1/chat/completions"
request, err := http.NewRequest("POST", url, bytes.NewBuffer(requestBody))
if err != nil {
fmt.Println("Error creating request:", err)
return
}
request.Header.Set("Content-Type", "application/json")
client := &http.Client{}
response, err := client.Do(request)
if err != nil {
fmt.Println("Error sending request:", err)
return
}
defer response.Body.Close()
responseBody, err := io.ReadAll(response.Body)
if err != nil {
fmt.Println("Error reading response body:", err)
return
}
fmt.Println("Response:", string(responseBody))
}
```
## Conclusion
This API reference provides the necessary details to understand and interact with the AI Chat Completion API. By following the outlined request and response formats, users can integrate this API into their applications to generate dynamic and contextually relevant conversational responses.

## Migrate from OpenAI to Swarms in 3 lines of code
If you've been using GPT-3.5 or GPT-4, switching to Swarms is easy!
Swarms VLMs are available through our OpenAI-compatible API. Additionally, if you have been building or prototyping with OpenAI's Python SDK, you can keep your code as-is and use Swarms' VLM models.
In this example, we will show you how to change just three lines of code to make your Python application use Swarms' open-source models through OpenAI's Python SDK.
## Getting Started
Migrate OpenAI's Python SDK example script to use Swarms' LLM endpoints.
These are the three modifications necessary to achieve our goal:
Redefine the `OPENAI_API_KEY` environment variable to use your Swarms key.
Redefine `OPENAI_BASE_URL` to point to `https://api.swarms.world/v1` (the SDK appends the `/chat/completions` path itself).
Change the model name to an open-source model, for example: `cogvlm-chat-17b`.
## Requirements
We will be using Python and OpenAI's Python SDK.
## Instructions
Set up a Python virtual environment. Read Creating Virtual Environments here.
```sh
python3 -m venv .venv
source .venv/bin/activate
```
Install the pip requirements in your local python virtual environment
`python3 -m pip install openai`
## Environment setup
To run this example, there are simple steps to take:
Get a Swarms API token by following these instructions.
Expose the token in a new SWARMS_API_TOKEN environment variable:
`export SWARMS_API_TOKEN=<your-token>`
Switch the OpenAI token and base URL environment variable
`export OPENAI_API_KEY=$SWARMS_API_TOKEN`
`export OPENAI_BASE_URL="https://api.swarms.world/v1"`
If you prefer, you can also directly paste your token into the client initialization.
## Example code
Once you've completed the steps above, the code below will call Swarms LLMs:
```python
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

# Read the key exported above (or paste your token directly, if you prefer)
openai_api_key = os.getenv("OPENAI_API_KEY", "")
openai_api_base = "https://api.swarms.world/v1"
model = "internlm-xcomposer2-4khd"

client = OpenAI(api_key=openai_api_key, base_url=openai_api_base)
# Note that this model expects the image to come before the main text
chat_response = client.chat.completions.create(
model=model,
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://home-cdn.reolink.us/wp-content/uploads/2022/04/010345091648784709.4253.jpg",
},
},
{
"type": "text",
"text": "What is the most dangerous object in the image?",
},
],
}
],
temperature=0.1,
max_tokens=5000,
)
print("Chat response:", chat_response)
```
Note that you need to supply one of Swarms' supported LLMs as an argument, as in the example above. For a complete list of our supported LLMs, check out our REST API page.
## Example output
The code above produces the following object:
```python
ChatCompletionMessage(content=" Hello! How can I assist you today? Do you have any questions or tasks you'd like help with? Please let me know and I'll do my best to assist you.", role='assistant', function_call=None, tool_calls=None)
```


# Swarms Client - Production Grade Rust SDK
A high-performance, production-ready Rust client for the Swarms API with comprehensive features for building multi-agent AI systems.
## Features
- **🚀 High Performance**: Built with `reqwest` and `tokio` for maximum throughput
- **🔄 Connection Pooling**: Automatic HTTP connection reuse and pooling
- **⚡ Circuit Breaker**: Automatic failure detection and recovery
- **💾 Intelligent Caching**: TTL-based in-memory caching with concurrent access
- **📊 Rate Limiting**: Configurable concurrent request limits
- **🔄 Retry Logic**: Exponential backoff with jitter
- **📝 Comprehensive Logging**: Structured logging with `tracing`
- **✅ Type Safety**: Full compile-time type checking with `serde`
## Installation
Add `swarms-rs` to your project as a dependency with cargo:
```bash
cargo add swarms-rs
```
## Quick Start
```rust
use swarms_client::{SwarmsClient, SwarmType};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize the client with API key from environment
let client = SwarmsClient::builder()
.unwrap()
.from_env()? // Loads API key from SWARMS_API_KEY environment variable
.timeout(std::time::Duration::from_secs(60))
.max_retries(3)
.build()?;
// Make a simple swarm completion request
let response = client.swarm()
.completion()
.name("My First Swarm")
.swarm_type(SwarmType::Auto)
.task("Analyze the pros and cons of quantum computing")
.agent(|agent| {
agent
.name("Researcher")
.description("Conducts in-depth research")
.model("gpt-4o")
})
.send()
.await?;
println!("Swarm output: {}", response.output);
Ok(())
}
```
## API Reference
### SwarmsClient
The main client for interacting with the Swarms API.
#### Constructor Methods
##### `SwarmsClient::builder()`
Creates a new client builder for configuring the client.
**Returns**: `Result<ClientBuilder, SwarmsError>`
**Example**:
```rust
let client = SwarmsClient::builder()
.unwrap()
.api_key("your-api-key")
.timeout(Duration::from_secs(60))
.build()?;
```
##### `SwarmsClient::with_config(config: ClientConfig)`
Creates a client with custom configuration.
| Parameter | Type | Description |
|-----------|------|-------------|
| `config` | `ClientConfig` | Client configuration settings |
**Returns**: `Result<SwarmsClient, SwarmsError>`
**Example**:
```rust
let config = ClientConfig {
api_key: "your-api-key".to_string(),
base_url: "https://api.swarms.com/".parse().unwrap(),
timeout: Duration::from_secs(120),
max_retries: 5,
..Default::default()
};
let client = SwarmsClient::with_config(config)?;
```
#### Resource Access Methods
| Method | Returns | Description |
|--------|---------|-------------|
| `agent()` | `AgentResource` | Access agent-related operations |
| `swarm()` | `SwarmResource` | Access swarm-related operations |
| `models()` | `ModelsResource` | Access model listing operations |
| `logs()` | `LogsResource` | Access logging operations |
#### Cache Management Methods
| Method | Parameters | Returns | Description |
|--------|------------|---------|-------------|
| `clear_cache()` | None | `()` | Clears all cached responses |
| `cache_stats()` | None | `Option<(usize, usize)>` | Returns (valid_entries, total_entries) |
### ClientBuilder
Builder for configuring the Swarms client.
#### Configuration Methods
| Method | Parameters | Returns | Description |
|--------|------------|---------|-------------|
| `new()` | None | `ClientBuilder` | Creates a new builder with defaults |
| `from_env()` | None | `Result<ClientBuilder, SwarmsError>` | Loads API key from environment |
| `api_key(key)` | `String` | `ClientBuilder` | Sets the API key |
| `base_url(url)` | `&str` | `Result<ClientBuilder, SwarmsError>` | Sets the base URL |
| `timeout(duration)` | `Duration` | `ClientBuilder` | Sets request timeout |
| `max_retries(count)` | `usize` | `ClientBuilder` | Sets maximum retry attempts |
| `retry_delay(duration)` | `Duration` | `ClientBuilder` | Sets retry delay duration |
| `max_concurrent_requests(count)` | `usize` | `ClientBuilder` | Sets concurrent request limit |
| `enable_cache(enabled)` | `bool` | `ClientBuilder` | Enables/disables caching |
| `cache_ttl(duration)` | `Duration` | `ClientBuilder` | Sets cache TTL |
| `build()` | None | `Result<SwarmsClient, SwarmsError>` | Builds the client |
**Example**:
```rust
let client = SwarmsClient::builder()
.unwrap()
.from_env()?
.timeout(Duration::from_secs(120))
.max_retries(5)
.max_concurrent_requests(50)
.enable_cache(true)
.cache_ttl(Duration::from_secs(600))
.build()?;
```
### SwarmResource
Resource for swarm-related operations.
#### Methods
| Method | Parameters | Returns | Description |
|--------|------------|---------|-------------|
| `completion()` | None | `SwarmCompletionBuilder` | Creates a new swarm completion builder |
| `create(request)` | `SwarmSpec` | `Result<SwarmCompletionResponse, SwarmsError>` | Creates a swarm completion directly |
| `create_batch(requests)` | `Vec<SwarmSpec>` | `Result<Vec<SwarmCompletionResponse>, SwarmsError>` | Creates multiple swarm completions |
| `list_types()` | None | `Result<SwarmTypesResponse, SwarmsError>` | Lists available swarm types |
### SwarmCompletionBuilder
Builder for creating swarm completion requests.
#### Configuration Methods
| Method | Parameters | Returns | Description |
|--------|------------|---------|-------------|
| `name(name)` | `String` | `SwarmCompletionBuilder` | Sets the swarm name |
| `description(desc)` | `String` | `SwarmCompletionBuilder` | Sets the swarm description |
| `swarm_type(type)` | `SwarmType` | `SwarmCompletionBuilder` | Sets the swarm type |
| `task(task)` | `String` | `SwarmCompletionBuilder` | Sets the main task |
| `agent(builder_fn)` | `Fn(AgentSpecBuilder) -> AgentSpecBuilder` | `SwarmCompletionBuilder` | Adds an agent using a builder function |
| `max_loops(count)` | `u32` | `SwarmCompletionBuilder` | Sets maximum execution loops |
| `service_tier(tier)` | `String` | `SwarmCompletionBuilder` | Sets the service tier |
| `send()` | None | `Result<SwarmCompletionResponse, SwarmsError>` | Sends the request |
### AgentResource
Resource for agent-related operations.
#### Methods
| Method | Parameters | Returns | Description |
|--------|------------|---------|-------------|
| `completion()` | None | `AgentCompletionBuilder` | Creates a new agent completion builder |
| `create(request)` | `AgentCompletion` | `Result<AgentCompletionResponse, SwarmsError>` | Creates an agent completion directly |
| `create_batch(requests)` | `Vec<AgentCompletion>` | `Result<Vec<AgentCompletionResponse>, SwarmsError>` | Creates multiple agent completions |
### AgentCompletionBuilder
Builder for creating agent completion requests.
#### Configuration Methods
| Method | Parameters | Returns | Description |
|--------|------------|---------|-------------|
| `agent_name(name)` | `String` | `AgentCompletionBuilder` | Sets the agent name |
| `task(task)` | `String` | `AgentCompletionBuilder` | Sets the task |
| `model(model)` | `String` | `AgentCompletionBuilder` | Sets the AI model |
| `description(desc)` | `String` | `AgentCompletionBuilder` | Sets the agent description |
| `system_prompt(prompt)` | `String` | `AgentCompletionBuilder` | Sets the system prompt |
| `temperature(temp)` | `f32` | `AgentCompletionBuilder` | Sets the temperature (0.0-1.0) |
| `max_tokens(tokens)` | `u32` | `AgentCompletionBuilder` | Sets maximum tokens |
| `max_loops(loops)` | `u32` | `AgentCompletionBuilder` | Sets maximum loops |
| `send()` | None | `Result<AgentCompletionResponse, SwarmsError>` | Sends the request |
### SwarmType Enum
Available swarm types for different execution patterns.
| Variant | Description |
|---------|-------------|
| `AgentRearrange` | Agents can be rearranged based on task requirements |
| `MixtureOfAgents` | Combines multiple agents with different specializations |
| `SpreadSheetSwarm` | Organized like a spreadsheet with structured data flow |
| `SequentialWorkflow` | Agents execute in a sequential order |
| `ConcurrentWorkflow` | Agents execute concurrently |
| `GroupChat` | Agents interact in a group chat format |
| `MultiAgentRouter` | Routes tasks between multiple agents |
| `AutoSwarmBuilder` | Automatically builds swarm structure |
| `HiearchicalSwarm` | Hierarchical organization of agents |
| `Auto` | Automatically selects the best swarm type |
| `MajorityVoting` | Agents vote on decisions |
| `Malt` | Multi-Agent Language Tasks |
| `DeepResearchSwarm` | Specialized for deep research tasks |
## Detailed Examples
### 1. Simple Agent Completion
```rust
use swarms_client::{SwarmsClient};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = SwarmsClient::builder()
.unwrap()
.from_env()?
.build()?;
let response = client.agent()
.completion()
.agent_name("Content Writer")
.task("Write a blog post about sustainable technology")
.model("gpt-4o")
.temperature(0.7)
.max_tokens(2000)
.description("An expert content writer specializing in technology topics")
.system_prompt("You are a professional content writer with expertise in technology and sustainability. Write engaging, informative content that is well-structured and SEO-friendly.")
.send()
.await?;
println!("Agent Response: {}", response.outputs);
println!("Tokens Used: {}", response.usage.total_tokens);
Ok(())
}
```
### 2. Multi-Agent Research Swarm
```rust
use swarms_client::{SwarmsClient, SwarmType};
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = SwarmsClient::builder()
.unwrap()
.from_env()?
.timeout(Duration::from_secs(300)) // 5 minutes for complex tasks
.build()?;
let response = client.swarm()
.completion()
.name("AI Research Swarm")
.description("A comprehensive research team analyzing AI trends and developments")
.swarm_type(SwarmType::SequentialWorkflow)
.task("Conduct a comprehensive analysis of the current state of AI in healthcare, including recent developments, challenges, and future prospects")
// Data Collection Agent
.agent(|agent| {
agent
.name("Data Collector")
.description("Gathers comprehensive data and recent developments")
.model("gpt-4o")
.system_prompt("You are a research data collector specializing in AI and healthcare. Your job is to gather the most recent and relevant information about AI applications in healthcare, including clinical trials, FDA approvals, and industry developments.")
.temperature(0.3)
.max_tokens(3000)
})
// Technical Analyst
.agent(|agent| {
agent
.name("Technical Analyst")
.description("Analyzes technical aspects and implementation details")
.model("gpt-4o")
.system_prompt("You are a technical analyst with deep expertise in AI/ML technologies. Analyze the technical feasibility, implementation challenges, and technological requirements of AI solutions in healthcare.")
.temperature(0.4)
.max_tokens(3000)
})
// Market Analyst
.agent(|agent| {
agent
.name("Market Analyst")
.description("Analyzes market trends, adoption rates, and economic factors")
.model("gpt-4o")
.system_prompt("You are a market research analyst specializing in healthcare technology markets. Analyze market size, growth projections, key players, investment trends, and economic factors affecting AI adoption in healthcare.")
.temperature(0.5)
.max_tokens(3000)
})
// Regulatory Expert
.agent(|agent| {
agent
.name("Regulatory Expert")
.description("Analyzes regulatory landscape and compliance requirements")
.model("gpt-4o")
.system_prompt("You are a regulatory affairs expert with deep knowledge of healthcare regulations and AI governance. Analyze regulatory challenges, compliance requirements, ethical considerations, and policy developments affecting AI in healthcare.")
.temperature(0.3)
.max_tokens(3000)
})
// Report Synthesizer
.agent(|agent| {
agent
.name("Report Synthesizer")
.description("Synthesizes all analyses into a comprehensive report")
.model("gpt-4o")
.system_prompt("You are an expert report writer and strategic analyst. Synthesize all the previous analyses into a comprehensive, well-structured executive report with clear insights, recommendations, and future outlook.")
.temperature(0.6)
.max_tokens(4000)
})
.max_loops(1)
.service_tier("premium")
.send()
.await?;
println!("Research Report:");
println!("{}", response.output);
println!("\nSwarm executed with {} agents", response.number_of_agents);
Ok(())
}
```
### 3. Financial Analysis Swarm
```rust
use swarms_client::{SwarmsClient, SwarmType};
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = SwarmsClient::builder()
.unwrap()
.from_env()?
.timeout(Duration::from_secs(120))
.max_retries(3)
.build()?;
let response = client.swarm()
.completion()
.name("Financial Health Analysis Swarm")
.description("A sequential workflow of specialized financial agents analyzing company health")
        .swarm_type(SwarmType::SequentialWorkflow)
.task("Analyze the financial health of Apple Inc. (AAPL) based on their latest quarterly report")
// Financial Data Collector Agent
.agent(|agent| {
agent
.name("Financial Data Collector")
.description("Specializes in gathering and organizing financial data from various sources")
.model("gpt-4o")
.system_prompt("You are a financial data collection specialist. Your role is to gather and organize relevant financial data, including revenue, expenses, profit margins, and key financial ratios. Present the data in a clear, structured format.")
.temperature(0.7)
.max_tokens(2000)
})
// Financial Ratio Analyzer Agent
.agent(|agent| {
agent
.name("Ratio Analyzer")
.description("Analyzes key financial ratios and metrics")
.model("gpt-4o")
.system_prompt("You are a financial ratio analysis expert. Your role is to calculate and interpret key financial ratios such as P/E ratio, debt-to-equity, current ratio, and return on equity. Provide insights on what these ratios indicate about the company's financial health.")
.temperature(0.7)
.max_tokens(2000)
})
// Additional agents...
.agent(|agent| {
agent
.name("Investment Advisor")
.description("Provides investment recommendations based on analysis")
.model("gpt-4o")
.system_prompt("You are an investment advisory specialist. Your role is to synthesize the analysis from previous agents and provide clear, actionable investment recommendations. Consider both short-term and long-term investment perspectives.")
.temperature(0.7)
.max_tokens(2000)
})
.max_loops(1)
.service_tier("standard")
.send()
.await?;
println!("Financial Analysis Results:");
println!("{}", response.output);
Ok(())
}
```
### 4. Batch Processing
```rust
use swarms_client::{SwarmsClient, AgentCompletion, AgentSpec};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
let client = SwarmsClient::builder()
.unwrap()
.from_env()?
.max_concurrent_requests(20) // Allow more concurrent requests for batch
.build()?;
// Create multiple agent completion requests
let requests = vec![
AgentCompletion {
agent_config: AgentSpec {
agent_name: "Content Creator 1".to_string(),
model_name: "gpt-4o-mini".to_string(),
temperature: 0.7,
max_tokens: 1000,
..Default::default()
},
task: "Write a social media post about renewable energy".to_string(),
history: None,
},
AgentCompletion {
agent_config: AgentSpec {
agent_name: "Content Creator 2".to_string(),
model_name: "gpt-4o-mini".to_string(),
temperature: 0.8,
max_tokens: 1000,
..Default::default()
},
task: "Write a social media post about electric vehicles".to_string(),
history: None,
},
// Add more requests...
];
// Process all requests in batch
let responses = client.agent()
.create_batch(requests)
.await?;
for (i, response) in responses.iter().enumerate() {
println!("Response {}: {}", i + 1, response.outputs);
println!("Tokens used: {}\n", response.usage.total_tokens);
}
Ok(())
}
```
### 5. Custom Configuration with Error Handling
```rust
use swarms_client::{SwarmsClient, SwarmsError, ClientConfig, SwarmType};
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Custom configuration for production use
let config = ClientConfig {
api_key: std::env::var("SWARMS_API_KEY")?,
base_url: "https://swarms-api-285321057562.us-east1.run.app/".parse()?,
timeout: Duration::from_secs(180),
max_retries: 5,
retry_delay: Duration::from_secs(2),
max_concurrent_requests: 50,
circuit_breaker_threshold: 10,
circuit_breaker_timeout: Duration::from_secs(120),
enable_cache: true,
cache_ttl: Duration::from_secs(600),
};
let client = SwarmsClient::with_config(config)?;
// Example with comprehensive error handling
match client.swarm()
.completion()
.name("Production Swarm")
.swarm_type(SwarmType::Auto)
.task("Analyze market trends for Q4 2024")
.agent(|agent| {
agent
.name("Market Analyst")
.model("gpt-4o")
.temperature(0.5)
})
.send()
.await
{
Ok(response) => {
println!("Success! Job ID: {}", response.job_id);
println!("Output: {}", response.output);
},
Err(SwarmsError::Authentication { message, .. }) => {
eprintln!("Authentication error: {}", message);
},
Err(SwarmsError::RateLimit { message, .. }) => {
eprintln!("Rate limit exceeded: {}", message);
// Implement backoff strategy
},
Err(SwarmsError::InsufficientCredits { message, .. }) => {
eprintln!("Insufficient credits: {}", message);
},
Err(SwarmsError::CircuitBreakerOpen) => {
eprintln!("Circuit breaker is open - service temporarily unavailable");
},
Err(e) => {
eprintln!("Other error: {}", e);
}
}
Ok(())
}
```
### 6. Monitoring and Observability
```rust
use swarms_client::SwarmsClient;
use std::time::Duration;
use tracing::{info, warn, error};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize tracing for observability
    tracing_subscriber::fmt::init();
let client = SwarmsClient::builder()
.unwrap()
.from_env()?
.enable_cache(true)
.build()?;
// Monitor cache performance
if let Some((valid, total)) = client.cache_stats() {
info!("Cache stats: {}/{} entries valid", valid, total);
}
// Make request with monitoring
let start = std::time::Instant::now();
let response = client.swarm()
.completion()
.name("Monitored Swarm")
.task("Analyze system performance metrics")
.agent(|agent| {
agent
.name("Performance Analyst")
.model("gpt-4o-mini")
})
.send()
.await?;
let duration = start.elapsed();
info!("Request completed in {:?}", duration);
if duration > Duration::from_secs(30) {
warn!("Request took longer than expected: {:?}", duration);
}
// Clear cache periodically in production
client.clear_cache();
Ok(())
}
```
## Error Handling
The client provides comprehensive error handling with specific error types:
### SwarmsError Types
| Error Type | Description | Recommended Action |
|------------|-------------|-------------------|
| `Authentication` | Invalid API key or authentication failure | Check API key and permissions |
| `RateLimit` | Rate limit exceeded | Implement exponential backoff |
| `InvalidRequest` | Malformed request parameters | Validate input parameters |
| `InsufficientCredits` | Not enough credits for operation | Check account balance |
| `Api` | General API error | Check API status and retry |
| `Network` | Network connectivity issues | Check internet connection |
| `Timeout` | Request timeout | Increase timeout or retry |
| `CircuitBreakerOpen` | Circuit breaker preventing requests | Wait for recovery period |
| `Serialization` | JSON serialization/deserialization error | Check data format |
### Error Handling Best Practices
```rust
use swarms_client::{SwarmsClient, SwarmsError};
use std::time::Duration;
async fn handle_swarm_request(client: &SwarmsClient, task: &str) -> Result<String, SwarmsError> {
match client.swarm()
.completion()
.task(task)
.agent(|agent| agent.name("Worker").model("gpt-4o-mini"))
.send()
.await
{
Ok(response) => Ok(response.output.to_string()),
Err(SwarmsError::RateLimit { .. }) => {
// Implement exponential backoff
tokio::time::sleep(Duration::from_secs(5)).await;
Err(SwarmsError::RateLimit {
message: "Rate limited - should retry".to_string(),
status: Some(429),
request_id: None,
})
},
Err(e) => Err(e),
}
}
```
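The backoff stub above only sleeps once before surfacing the error. A general exponential-backoff retry loop can be sketched as a stand-alone helper (illustrative only; the client already performs its own retries internally, so treat this as a pattern sketch rather than part of the `swarms_client` API):

```rust
use std::time::Duration;

// Retry `op` up to `max_retries` additional times, doubling the delay
// between attempts (e.g. 1s, 2s, 4s, ...). Generic over any fallible
// operation, so it is independent of the Swarms client types.
fn retry_with_backoff<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    max_retries: usize,
    base_delay: Duration,
) -> Result<T, E> {
    let mut delay = base_delay;
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                attempt += 1;
                if attempt > max_retries {
                    // Out of retries: surface the last error.
                    return Err(e);
                }
                std::thread::sleep(delay);
                delay *= 2; // exponential growth between attempts
            }
        }
    }
}

fn main() {
    // Simulate an operation that fails twice, then succeeds.
    let mut calls = 0;
    let result: Result<u32, &str> = retry_with_backoff(
        || {
            calls += 1;
            if calls < 3 { Err("transient error") } else { Ok(42) }
        },
        5,
        Duration::from_millis(10),
    );
    assert_eq!(result, Ok(42)); // succeeded on the third attempt
}
```

In async code the same shape applies with `tokio::time::sleep` in place of `std::thread::sleep`.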
## Performance Features
### Connection Pooling
The client automatically manages HTTP connection pooling for optimal performance:
```rust
// Connections are automatically pooled and reused
let client = SwarmsClient::builder()
.unwrap()
.from_env()?
.max_concurrent_requests(100) // Allow up to 100 concurrent requests
.build()?;
```
### Caching
Intelligent caching reduces redundant API calls:
```rust
let client = SwarmsClient::builder()
.unwrap()
.from_env()?
.enable_cache(true)
.cache_ttl(Duration::from_secs(300)) // 5-minute TTL
.build()?;
// GET requests are automatically cached
let models = client.models().list().await?; // First call hits API
let models_cached = client.models().list().await?; // Second call uses cache
```
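The TTL semantics behind this cache can be sketched as a minimal stand-alone structure (an illustration of the expiry rule, not the client's internal cache implementation):

```rust
use std::collections::HashMap;
use std::hash::Hash;
use std::time::{Duration, Instant};

// Minimal TTL cache sketch: an entry counts as a hit only while it is
// younger than `ttl`; after that, lookups behave as misses.
struct TtlCache<K, V> {
    ttl: Duration,
    entries: HashMap<K, (Instant, V)>,
}

impl<K: Eq + Hash, V: Clone> TtlCache<K, V> {
    fn new(ttl: Duration) -> Self {
        Self { ttl, entries: HashMap::new() }
    }

    fn insert(&mut self, key: K, value: V) {
        // Stamp each entry with its insertion time.
        self.entries.insert(key, (Instant::now(), value));
    }

    fn get(&self, key: &K) -> Option<V> {
        self.entries.get(key).and_then(|(inserted, value)| {
            if inserted.elapsed() < self.ttl {
                Some(value.clone())
            } else {
                None // expired: treat as a miss
            }
        })
    }
}

fn main() {
    let mut cache = TtlCache::new(Duration::from_secs(300));
    cache.insert("models", vec!["gpt-4o", "gpt-4o-mini"]);
    assert!(cache.get(&"models").is_some()); // fresh entry: cache hit
    assert!(cache.get(&"missing").is_none()); // unknown key: miss
}
```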
### Circuit Breaker
Automatic failure detection and recovery:
```rust
let client = SwarmsClient::builder()
.unwrap()
.from_env()?
.build()?;
// Circuit breaker automatically opens after 5 failures
// and recovers after 60 seconds
```
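The open/half-open/closed behavior described above can be sketched as a small state machine (a generic illustration of the pattern using the default threshold of 5 failures and 60-second recovery window, not the client's internal implementation):

```rust
use std::time::{Duration, Instant};

// Minimal circuit-breaker sketch: opens after `threshold` consecutive
// failures and permits a probe request once `timeout` has elapsed.
struct CircuitBreaker {
    threshold: usize,
    timeout: Duration,
    failures: usize,
    opened_at: Option<Instant>,
}

impl CircuitBreaker {
    fn new(threshold: usize, timeout: Duration) -> Self {
        Self { threshold, timeout, failures: 0, opened_at: None }
    }

    // A request may proceed if the breaker is closed, or if the
    // recovery timeout has passed since it opened (half-open probe).
    fn allow_request(&mut self) -> bool {
        match self.opened_at {
            Some(opened) if opened.elapsed() < self.timeout => false,
            Some(_) => {
                // Recovery window elapsed: half-open, permit a probe.
                self.opened_at = None;
                self.failures = 0;
                true
            }
            None => true,
        }
    }

    fn record_failure(&mut self) {
        self.failures += 1;
        if self.failures >= self.threshold {
            self.opened_at = Some(Instant::now());
        }
    }

    fn record_success(&mut self) {
        self.failures = 0; // a success resets the consecutive-failure count
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(5, Duration::from_secs(60));
    for _ in 0..5 {
        cb.record_failure();
    }
    // After 5 consecutive failures the breaker is open.
    println!("allowed = {}", cb.allow_request()); // prints "allowed = false"
}
```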
## Configuration Reference
### ClientConfig Structure
| Field | Type | Default | Description |
|-------|------|---------|-------------|
| `api_key` | `String` | `""` | Swarms API key |
| `base_url` | `Url` | `https://swarms-api-285321057562.us-east1.run.app/` | API base URL |
| `timeout` | `Duration` | `60s` | Request timeout |
| `max_retries` | `usize` | `3` | Maximum retry attempts |
| `retry_delay` | `Duration` | `1s` | Base retry delay |
| `max_concurrent_requests` | `usize` | `100` | Concurrent request limit |
| `circuit_breaker_threshold` | `usize` | `5` | Failure threshold for circuit breaker |
| `circuit_breaker_timeout` | `Duration` | `60s` | Circuit breaker recovery time |
| `enable_cache` | `bool` | `true` | Enable response caching |
| `cache_ttl` | `Duration` | `300s` | Cache time-to-live |
## Environment Variables
| Variable | Description | Example |
|----------|-------------|---------|
| `SWARMS_API_KEY` | Your Swarms API key | `sk-xxx...` |
| `SWARMS_BASE_URL` | Custom API base URL (optional) | `https://api.custom.com/` |
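For local development these are typically exported in the shell before running the client (the key below is a placeholder, not a real credential):

```shell
# Export the key so SwarmsClient::builder().from_env() can find it.
export SWARMS_API_KEY="sk-your-key-here"
# Optional: point the client at a self-hosted or regional deployment.
export SWARMS_BASE_URL="https://api.swarms.world/"
```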
## Testing
Run the test suite:
```bash
cargo test
```
Run specific tests:
```bash
cargo test test_cache
cargo test test_circuit_breaker
```
## Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
## License
This project is licensed under the MIT License - see the LICENSE file for details.

@ -1,8 +1,9 @@
# Swarms API Documentation
*Enterprise-Grade Agent Swarm Management API*
**Base URL**: `https://api.swarms.world`
**API Key Management**: [https://swarms.world/platform/api-keys](https://swarms.world/platform/api-keys)
## Overview
@ -950,7 +951,7 @@ For technical assistance with the Swarms API, please contact:
- Documentation: [https://docs.swarms.world](https://docs.swarms.world)
- Email: kye@swarms.world
- Community Discord: [https://discord.gg/jM3Z6M9uMq](https://discord.gg/jM3Z6M9uMq)
- Swarms Marketplace: [https://swarms.world](https://swarms.world)
- Swarms AI Website: [https://swarms.ai](https://swarms.ai)

@ -113,9 +113,9 @@ To further enhance your understanding and usage of the Swarms Platform, explore
### Links
- [API Documentation](https://docs.swarms.world)
- [Community Forums](https://discord.gg/jM3Z6M9uMq)
- [Tutorials and Guides](https://docs.swarms.world)
- [Support](https://discord.gg/jM3Z6M9uMq)
## Conclusion

@ -159,23 +159,11 @@ This is an example of how to use the TwitterTool in a production environment usi
import os
from time import time
from swarm_models import OpenAIChat
from swarms import Agent
from dotenv import load_dotenv
from swarms_tools.social_media.twitter_tool import TwitterTool
load_dotenv()
model_name = "gpt-4o"
model = OpenAIChat(
model_name=model_name,
max_tokens=3000,
openai_api_key=os.getenv("OPENAI_API_KEY"),
)
medical_coder = Agent(
agent_name="Medical Coder",
system_prompt="""
@ -224,7 +212,8 @@ medical_coder = Agent(
- For ambiguous cases, provide a brief note with reasoning and flag for clarification.
- Ensure the output format is clean, consistent, and ready for professional use.
""",
model_name="gpt-4o-mini",
max_tokens=3000,
max_loops=1,
dynamic_temperature_enabled=True,
)

@ -138,6 +138,6 @@ Your contributions fund:
[dao]: https://dao.swarms.world/
[investors]: https://investors.swarms.world/
[site]: https://swarms.world/
[discord]: https://discord.gg/jM3Z6M9uMq
```

@ -1,16 +1,43 @@
from swarms import Agent
# Initialize the agent
agent = Agent(
agent_name="Quantitative-Trading-Agent",
agent_description="Advanced quantitative trading and algorithmic analysis agent",
system_prompt="""You are an expert quantitative trading agent with deep expertise in:
- Algorithmic trading strategies and implementation
- Statistical arbitrage and market making
- Risk management and portfolio optimization
- High-frequency trading systems
- Market microstructure analysis
- Quantitative research methodologies
- Financial mathematics and stochastic processes
- Machine learning applications in trading
Your core responsibilities include:
1. Developing and backtesting trading strategies
2. Analyzing market data and identifying alpha opportunities
3. Implementing risk management frameworks
4. Optimizing portfolio allocations
5. Conducting quantitative research
6. Monitoring market microstructure
7. Evaluating trading system performance
You maintain strict adherence to:
- Mathematical rigor in all analyses
- Statistical significance in strategy development
- Risk-adjusted return optimization
- Market impact minimization
- Regulatory compliance
- Transaction cost analysis
- Performance attribution
You communicate in precise, technical terms while maintaining clarity for stakeholders.""",
max_loops=3,
model_name="gpt-4o-mini",
dynamic_temperature_enabled=True,
interactive=False,
output_type="all",
safety_prompt_on=True,
)
print(agent.run("What are the best top 3 etfs for gold coverage?"))

@ -1,56 +0,0 @@
import os
from dotenv import load_dotenv
from swarm_models import OpenAIChat
from swarms import Agent
from swarms.prompts.finance_agent_sys_prompt import (
FINANCIAL_AGENT_SYS_PROMPT,
)
from new_features_examples.async_executor import HighSpeedExecutor
load_dotenv()
# Get the OpenAI API key from the environment variable
api_key = os.getenv("OPENAI_API_KEY")
# Create an instance of the OpenAIChat class
model = OpenAIChat(
openai_api_key=api_key, model_name="gpt-4o-mini", temperature=0.1
)
# Initialize the agent
agent = Agent(
agent_name="Financial-Analysis-Agent",
system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
llm=model,
max_loops=1,
# autosave=True,
# dashboard=False,
# verbose=True,
# dynamic_temperature_enabled=True,
# saved_state_path="finance_agent.json",
# user_name="swarms_corp",
# retry_attempts=1,
# context_length=200000,
# return_step_meta=True,
# output_type="json", # "json", "dict", "csv" OR "string" soon "yaml" and
# auto_generate_prompt=False, # Auto generate prompt for the agent based on name, description, and system prompt, task
# # artifacts_on=True,
# artifacts_output_path="roth_ira_report",
# artifacts_file_extension=".txt",
# max_tokens=8000,
# return_history=True,
)
def execute_agent(
task: str = "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria. Create a report on this question.",
):
return agent.run(task)
executor = HighSpeedExecutor()
results = executor.run(execute_agent, 2)
print(results)

@ -1,131 +0,0 @@
import asyncio
import multiprocessing as mp
import time
from functools import partial
from typing import Any, Dict, Union
class HighSpeedExecutor:
def __init__(self, num_processes: int = None):
"""
Initialize the executor with configurable number of processes.
If num_processes is None, it uses CPU count.
"""
self.num_processes = num_processes or mp.cpu_count()
async def _worker(
self,
queue: asyncio.Queue,
func: Any,
*args: Any,
**kwargs: Any,
):
"""Async worker that processes tasks from the queue"""
while True:
try:
# Non-blocking get from queue
await queue.get()
await asyncio.get_event_loop().run_in_executor(
None, partial(func, *args, **kwargs)
)
queue.task_done()
except asyncio.CancelledError:
break
async def _distribute_tasks(
self, num_tasks: int, queue: asyncio.Queue
):
"""Distribute tasks across the queue"""
for i in range(num_tasks):
await queue.put(i)
async def execute_batch(
self,
func: Any,
num_executions: int,
*args: Any,
**kwargs: Any,
) -> Dict[str, Union[int, float]]:
"""
Execute the given function multiple times concurrently.
Args:
func: The function to execute
num_executions: Number of times to execute the function
*args, **kwargs: Arguments to pass to the function
Returns:
A dictionary containing the number of executions, duration, and executions per second.
"""
queue = asyncio.Queue()
# Create worker tasks
workers = [
asyncio.create_task(
self._worker(queue, func, *args, **kwargs)
)
for _ in range(self.num_processes)
]
# Start timing
start_time = time.perf_counter()
# Distribute tasks
await self._distribute_tasks(num_executions, queue)
# Wait for all tasks to complete
await queue.join()
# Cancel workers
for worker in workers:
worker.cancel()
# Wait for all workers to finish
await asyncio.gather(*workers, return_exceptions=True)
end_time = time.perf_counter()
duration = end_time - start_time
return {
"executions": num_executions,
"duration": duration,
"executions_per_second": num_executions / duration,
}
def run(
self,
func: Any,
num_executions: int,
*args: Any,
**kwargs: Any,
):
return asyncio.run(
self.execute_batch(func, num_executions, *args, **kwargs)
)
# def example_function(x: int = 0) -> int:
# """Example function to execute"""
# return x * x
# async def main():
# # Create executor with number of CPU cores
# executor = HighSpeedExecutor()
# # Execute the function 1000 times
# result = await executor.execute_batch(
# example_function, num_executions=1000, x=42
# )
# print(
# f"Completed {result['executions']} executions in {result['duration']:.2f} seconds"
# )
# print(
# f"Rate: {result['executions_per_second']:.2f} executions/second"
# )
# if __name__ == "__main__":
# # Run the async main function
# asyncio.run(main())

@ -1,176 +0,0 @@
import asyncio
from typing import List
from swarm_models import OpenAIChat
from swarms.structs.async_workflow import (
SpeakerConfig,
SpeakerRole,
create_default_workflow,
run_workflow_with_retry,
)
from swarms.prompts.finance_agent_sys_prompt import (
FINANCIAL_AGENT_SYS_PROMPT,
)
from swarms.structs.agent import Agent
async def create_specialized_agents() -> List[Agent]:
"""Create a set of specialized agents for financial analysis"""
# Base model configuration
model = OpenAIChat(model_name="gpt-4o")
# Financial Analysis Agent
financial_agent = Agent(
agent_name="Financial-Analysis-Agent",
agent_description="Personal finance advisor agent",
system_prompt=FINANCIAL_AGENT_SYS_PROMPT
+ "Output the <DONE> token when you're done creating a portfolio of etfs, index, funds, and more for AI",
max_loops=1,
llm=model,
dynamic_temperature_enabled=True,
user_name="Kye",
retry_attempts=3,
context_length=8192,
return_step_meta=False,
output_type="str",
auto_generate_prompt=False,
max_tokens=4000,
stopping_token="<DONE>",
saved_state_path="financial_agent.json",
interactive=False,
)
# Risk Assessment Agent
risk_agent = Agent(
agent_name="Risk-Assessment-Agent",
agent_description="Investment risk analysis specialist",
system_prompt="Analyze investment risks and provide risk scores. Output <DONE> when analysis is complete.",
max_loops=1,
llm=model,
dynamic_temperature_enabled=True,
user_name="Kye",
retry_attempts=3,
context_length=8192,
output_type="str",
max_tokens=4000,
stopping_token="<DONE>",
saved_state_path="risk_agent.json",
interactive=False,
)
# Market Research Agent
research_agent = Agent(
agent_name="Market-Research-Agent",
agent_description="AI and tech market research specialist",
system_prompt="Research AI market trends and growth opportunities. Output <DONE> when research is complete.",
max_loops=1,
llm=model,
dynamic_temperature_enabled=True,
user_name="Kye",
retry_attempts=3,
context_length=8192,
output_type="str",
max_tokens=4000,
stopping_token="<DONE>",
saved_state_path="research_agent.json",
interactive=False,
)
return [financial_agent, risk_agent, research_agent]
async def main():
# Create specialized agents
agents = await create_specialized_agents()
# Create workflow with group chat enabled
workflow = create_default_workflow(
agents=agents,
name="AI-Investment-Analysis-Workflow",
enable_group_chat=True,
)
# Configure speaker roles
workflow.speaker_system.add_speaker(
SpeakerConfig(
role=SpeakerRole.COORDINATOR,
agent=agents[0], # Financial agent as coordinator
priority=1,
concurrent=False,
required=True,
)
)
workflow.speaker_system.add_speaker(
SpeakerConfig(
role=SpeakerRole.CRITIC,
agent=agents[1], # Risk agent as critic
priority=2,
concurrent=True,
)
)
workflow.speaker_system.add_speaker(
SpeakerConfig(
role=SpeakerRole.EXECUTOR,
agent=agents[2], # Research agent as executor
priority=2,
concurrent=True,
)
)
# Investment analysis task
investment_task = """
Create a comprehensive investment analysis for a $40k portfolio focused on AI growth opportunities:
1. Identify high-growth AI ETFs and index funds
2. Analyze risks and potential returns
3. Create a diversified portfolio allocation
4. Provide market trend analysis
Present the results in a structured markdown format.
"""
try:
# Run workflow with retry
result = await run_workflow_with_retry(
workflow=workflow, task=investment_task, max_retries=3
)
print("\nWorkflow Results:")
print("================")
# Process and display agent outputs
for output in result.agent_outputs:
print(f"\nAgent: {output.agent_name}")
print("-" * (len(output.agent_name) + 8))
print(output.output)
# Display group chat history if enabled
if workflow.enable_group_chat:
print("\nGroup Chat Discussion:")
print("=====================")
for msg in workflow.speaker_system.message_history:
print(f"\n{msg.role} ({msg.agent_name}):")
print(msg.content)
# Save detailed results
if result.metadata.get("shared_memory_keys"):
print("\nShared Insights:")
print("===============")
for key in result.metadata["shared_memory_keys"]:
value = workflow.shared_memory.get(key)
if value:
print(f"\n{key}:")
print(value)
except Exception as e:
print(f"Workflow failed: {str(e)}")
finally:
await workflow.cleanup()
if __name__ == "__main__":
# Run the example
asyncio.run(main())

@ -0,0 +1,52 @@
from swarms.communication.redis_wrap import RedisConversation
import json
import time
def print_messages(conv):
messages = conv.to_dict()
print(f"Messages for conversation '{conv.get_name()}':")
print(json.dumps(messages, indent=4))
# First session - Add messages
print("\n=== First Session ===")
conv = RedisConversation(
use_embedded_redis=True,
redis_port=6380,
token_count=False,
cache_enabled=False,
auto_persist=True,
redis_data_dir="/Users/swarms_wd/.swarms/redis",
name="my_test_chat", # Use a friendly name instead of conversation_id
)
# Add messages
conv.add("user", "Hello!")
conv.add("assistant", "Hi there! How can I help?")
conv.add("user", "What's the weather like?")
# Print current messages
print_messages(conv)
# Close the first connection
del conv
time.sleep(2) # Give Redis time to save
# Second session - Verify persistence
print("\n=== Second Session ===")
conv2 = RedisConversation(
use_embedded_redis=True,
redis_port=6380,
token_count=False,
cache_enabled=False,
auto_persist=True,
redis_data_dir="/Users/swarms_wd/.swarms/redis",
name="my_test_chat", # Use the same name to restore the conversation
)
# Print messages from second session
print_messages(conv2)
# You can also change the name if needed
# conv2.set_name("weather_chat")

@ -1,22 +1,22 @@
"""
- For each diagnosis, pull lab results,
- egfr
- for each diagnosis, pull lab ranges,
- pull ranges for diagnosis
- if the diagnosis is x, then the lab ranges should be a to b
- train the agents, increase the load of input
- medical history sent to the agent
- setup rag for the agents
- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elevated ->
- how to manage diseases and by looking at correlating lab, docs, diagnoses
- put docs in rag ->
- monitoring, evaluation, and treatment
- can we confirm for every diagnosis -> monitoring, evaluation, and treatment, specialized for these things
- find diagnosis -> or have diagnosis, -> for each diagnosis are there evidence of those 3 things
- swarm of those 4 agents, ->
- fda api for healthcare for commercially available papers
"""

@ -1,63 +0,0 @@
import os
import google.generativeai as genai
from loguru import logger
class GeminiModel:
"""
Represents a GeminiModel instance for generating text based on user input.
"""
def __init__(
self,
temperature: float = 1.0,
top_p: float = 0.95,
top_k: int = 40,
):
"""
Initializes the GeminiModel by setting up the API key, generation configuration, and starting a chat session.
Raises a KeyError if the GEMINI_API_KEY environment variable is not found.
"""
try:
api_key = os.environ["GEMINI_API_KEY"]
genai.configure(api_key=api_key)
self.generation_config = {
"temperature": temperature,
"top_p": top_p,
"top_k": top_k,
"max_output_tokens": 8192,
"response_mime_type": "text/plain",
}
self.model = genai.GenerativeModel(
model_name="gemini-1.5-pro",
generation_config=self.generation_config,
)
self.chat_session = self.model.start_chat(history=[])
except KeyError as e:
logger.error(f"Environment variable not found: {e}")
raise
def run(self, task: str) -> str:
"""
Sends a message to the chat session and returns the response text.
Raises an Exception if there's an error running the GeminiModel.
Args:
task (str): The input task or message to send to the chat session.
Returns:
str: The response text from the chat session.
"""
try:
response = self.chat_session.send_message(task)
return response.text
except Exception as e:
logger.error(f"Error running GeminiModel: {e}")
raise
# Example usage
if __name__ == "__main__":
gemini_model = GeminiModel()
output = gemini_model.run("INSERT_INPUT_HERE")
print(output)

@ -1,272 +0,0 @@
from typing import List, Dict
from dataclasses import dataclass
from datetime import datetime
import asyncio
import aiohttp
from loguru import logger
from swarms import Agent
from pathlib import Path
import json
@dataclass
class CryptoData:
"""Real-time cryptocurrency data structure"""
symbol: str
current_price: float
market_cap: float
total_volume: float
price_change_24h: float
market_cap_rank: int
class DataFetcher:
"""Handles real-time data fetching from CoinGecko"""
def __init__(self):
self.base_url = "https://api.coingecko.com/api/v3"
self.session = None
async def _init_session(self):
if self.session is None:
self.session = aiohttp.ClientSession()
async def close(self):
if self.session:
await self.session.close()
self.session = None
async def get_market_data(
self, limit: int = 20
) -> List[CryptoData]:
"""Fetch market data for top cryptocurrencies"""
await self._init_session()
url = f"{self.base_url}/coins/markets"
params = {
"vs_currency": "usd",
"order": "market_cap_desc",
"per_page": str(limit),
"page": "1",
"sparkline": "false",
}
try:
async with self.session.get(
url, params=params
) as response:
if response.status != 200:
logger.error(
f"API Error {response.status}: {await response.text()}"
)
return []
data = await response.json()
crypto_data = []
for coin in data:
try:
crypto_data.append(
CryptoData(
symbol=str(
coin.get("symbol", "")
).upper(),
current_price=float(
coin.get("current_price", 0)
),
market_cap=float(
coin.get("market_cap", 0)
),
total_volume=float(
coin.get("total_volume", 0)
),
price_change_24h=float(
coin.get("price_change_24h", 0)
),
market_cap_rank=int(
coin.get("market_cap_rank", 0)
),
)
)
except (ValueError, TypeError) as e:
logger.error(
f"Error processing coin data: {str(e)}"
)
continue
logger.info(
f"Successfully fetched data for {len(crypto_data)} coins"
)
return crypto_data
except Exception as e:
logger.error(f"Exception in get_market_data: {str(e)}")
return []
class CryptoSwarmSystem:
def __init__(self):
self.agents = self._initialize_agents()
self.data_fetcher = DataFetcher()
logger.info("Crypto Swarm System initialized")
def _initialize_agents(self) -> Dict[str, Agent]:
"""Initialize different specialized agents"""
base_config = {
"max_loops": 1,
"autosave": True,
"dashboard": False,
"verbose": True,
"dynamic_temperature_enabled": True,
"retry_attempts": 3,
"context_length": 200000,
"return_step_meta": False,
"output_type": "string",
"streaming_on": False,
}
agents = {
"price_analyst": Agent(
agent_name="Price-Analysis-Agent",
system_prompt="""Analyze the given cryptocurrency price data and provide insights about:
1. Price trends and movements
2. Notable price actions
3. Potential support/resistance levels""",
saved_state_path="price_agent.json",
user_name="price_analyzer",
**base_config,
),
"volume_analyst": Agent(
agent_name="Volume-Analysis-Agent",
system_prompt="""Analyze the given cryptocurrency volume data and provide insights about:
1. Volume trends
2. Notable volume spikes
3. Market participation levels""",
saved_state_path="volume_agent.json",
user_name="volume_analyzer",
**base_config,
),
"market_analyst": Agent(
agent_name="Market-Analysis-Agent",
system_prompt="""Analyze the overall cryptocurrency market data and provide insights about:
1. Market trends
2. Market dominance
3. Notable market movements""",
saved_state_path="market_agent.json",
user_name="market_analyzer",
**base_config,
),
}
return agents
async def analyze_market(self) -> Dict:
"""Run real-time market analysis using all agents"""
try:
# Fetch market data
logger.info("Fetching market data for top 20 coins")
crypto_data = await self.data_fetcher.get_market_data(20)
if not crypto_data:
return {
"error": "Failed to fetch market data",
"timestamp": datetime.now().isoformat(),
}
# Run analysis with each agent
results = {}
for agent_name, agent in self.agents.items():
logger.info(f"Running {agent_name} analysis")
analysis = self._run_agent_analysis(
agent, crypto_data
)
results[agent_name] = analysis
return {
"timestamp": datetime.now().isoformat(),
"market_data": {
coin.symbol: {
"price": coin.current_price,
"market_cap": coin.market_cap,
"volume": coin.total_volume,
"price_change_24h": coin.price_change_24h,
"rank": coin.market_cap_rank,
}
for coin in crypto_data
},
"analysis": results,
}
except Exception as e:
logger.error(f"Error in market analysis: {str(e)}")
return {
"error": str(e),
"timestamp": datetime.now().isoformat(),
}
def _run_agent_analysis(
self, agent: Agent, crypto_data: List[CryptoData]
) -> str:
"""Run analysis for a single agent"""
try:
data_str = json.dumps(
[
{
"symbol": cd.symbol,
"price": cd.current_price,
"market_cap": cd.market_cap,
"volume": cd.total_volume,
"price_change_24h": cd.price_change_24h,
"rank": cd.market_cap_rank,
}
for cd in crypto_data
],
indent=2,
)
prompt = f"""Analyze this real-time cryptocurrency market data and provide detailed insights:
{data_str}"""
return agent.run(prompt)
except Exception as e:
logger.error(f"Error in {agent.agent_name}: {str(e)}")
return f"Error: {str(e)}"
async def main():
# Create output directory
Path("reports").mkdir(exist_ok=True)
# Initialize the swarm system
swarm = CryptoSwarmSystem()
while True:
try:
# Run analysis
report = await swarm.analyze_market()
# Save report
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
report_path = f"reports/market_analysis_{timestamp}.json"
with open(report_path, "w") as f:
json.dump(report, f, indent=2, default=str)
logger.info(
f"Analysis complete. Report saved to {report_path}"
)
# Wait before next analysis
await asyncio.sleep(300) # 5 minutes
except Exception as e:
logger.error(f"Error in main loop: {str(e)}")
await asyncio.sleep(60) # Wait 1 minute before retrying
finally:
if swarm.data_fetcher.session:
await swarm.data_fetcher.close()
if __name__ == "__main__":
asyncio.run(main())
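The per-coin conversion inside `get_market_data` can be exercised offline without hitting CoinGecko. This sketch extracts that logic into a hypothetical `parse_market_payload` helper (the function name and the sample payload are illustrative assumptions, not part of the source):

```python
from dataclasses import dataclass


@dataclass
class CryptoData:
    symbol: str
    current_price: float
    market_cap: float
    total_volume: float
    price_change_24h: float
    market_cap_rank: int


def parse_market_payload(payload):
    """Convert a CoinGecko-style /coins/markets payload into CryptoData
    records, skipping malformed entries just like the fetcher above."""
    out = []
    for coin in payload:
        try:
            out.append(
                CryptoData(
                    symbol=str(coin.get("symbol", "")).upper(),
                    current_price=float(coin.get("current_price", 0)),
                    market_cap=float(coin.get("market_cap", 0)),
                    total_volume=float(coin.get("total_volume", 0)),
                    price_change_24h=float(coin.get("price_change_24h", 0)),
                    market_cap_rank=int(coin.get("market_cap_rank", 0)),
                )
            )
        except (ValueError, TypeError):
            continue  # malformed numeric field: skip this coin
    return out


sample = [
    {"symbol": "btc", "current_price": 97000.5, "market_cap": 1.9e12,
     "total_volume": 3.2e10, "price_change_24h": -1200.0, "market_cap_rank": 1},
    {"symbol": "eth", "current_price": "not-a-number"},  # malformed, skipped
]
coins = parse_market_payload(sample)
print([c.symbol for c in coins])  # ['BTC']
```

Keeping the parsing pure like this makes the error-handling path testable without an event loop or network access.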

File diff suppressed because it is too large

@ -0,0 +1,19 @@
from swarms.structs.conversation import Conversation

# Example usage: a short exchange with token counting enabled
conversation = Conversation(token_count=True)
conversation.add("user", "Hello, how are you?")
conversation.add("assistant", "I am doing well, thanks.")

# Inspect the conversation in different formats
print(conversation.return_json())
print(conversation.to_dict())

@ -1,22 +1,22 @@
"""
- For each diagnosis, pull lab results,
- egfr
- for each diagnosis, pull lab ranges,
- For each diagnosis, pull lab results,
- egfr
- for each diagnosis, pull lab ranges,
- pull ranges for diagnosis
- if the diagnosis is x, then the lab ranges should be a to b
- train the agents, increase the load of input
- train the agents, increase the load of input
- medical history sent to the agent
- setup rag for the agents
- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elavated ->
- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elavated ->
- how to manage diseases and by looking at correlating lab, docs, diagnoses
- put docs in rag ->
- put docs in rag ->
- monitoring, evaluation, and treatment
- can we confirm for every diagnosis -> monitoring, evaluation, and treatment, specialized for these things
- find diagnosis -> or have diagnosis, -> for each diagnosis are there evidence of those 3 things
- swarm of those 4 agents, ->
- swarm of those 4 agents, ->
- fda api for healthcare for commerically available papers
-
-
"""

@ -0,0 +1,21 @@
from swarms.structs.agent import Agent
from swarms.structs.council_judge import CouncilAsAJudge
# ========== USAGE EXAMPLE ==========
if __name__ == "__main__":
user_query = "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria?"
base_agent = Agent(
agent_name="Financial-Analysis-Agent",
system_prompt="You are a financial expert helping users understand and establish ROTH IRAs.",
model_name="claude-opus-4-20250514",
max_loops=1,
)
model_output = base_agent.run(user_query)
panel = CouncilAsAJudge()
results = panel.run(user_query, model_output)
print(results)

@ -0,0 +1,19 @@
from swarms.structs.agent import Agent
# Initialize the agent
agent = Agent(
agent_name="Clinical-Documentation-Agent",
agent_description="Specialized agent for clinical documentation and "
"medical record analysis",
system_prompt="You are a clinical documentation specialist with expertise "
"in medical terminology, SOAP notes, and healthcare "
"documentation standards. You help analyze and improve "
"clinical documentation for accuracy, completeness, and "
"compliance.",
max_loops=1,
model_name="claude-opus-4-20250514",
dynamic_temperature_enabled=True,
output_type="final",
)
print(agent.run("what are the best ways to diagnose the flu?"))

@ -10,7 +10,7 @@ agent = Agent(
system_prompt=FINANCIAL_AGENT_SYS_PROMPT
+ "Output the <DONE> token when you're done creating a portfolio of etfs, index, funds, and more for AI",
max_loops=1,
model_name="openai/gpt-4o",
model_name="claude-3-sonnet-20240229",
dynamic_temperature_enabled=True,
user_name="Kye",
retry_attempts=3,

Some files were not shown because too many files have changed in this diff
