[feats][swarms.communication] + [docs][cleanup] + [tests][cleanup and swarms.communication]

dependabot/pip/transformers-gte-4.39.0-and-lt-4.53.0
Kye Gomez 2 weeks ago
parent 3e243a943a
commit ab030d46b9

@@ -0,0 +1,19 @@
from swarms.structs.conversation import Conversation

# Example usage: a conversation with token counting enabled
conversation = Conversation(token_count=True)
conversation.add("user", "Hello, how are you?")
conversation.add("assistant", "I am doing well, thanks.")

# Tool outputs can also be added as structured messages:
# conversation.add(
#     "assistant", {"name": "tool_1", "output": "Hello, how are you?"}
# )

print(conversation.return_json())
print(conversation.to_dict())

# Other export helpers: to_json(), to_yaml(),
# get_last_message_as_string()

@@ -1,765 +0,0 @@
# Swarms API: Orchestrating the Future of AI Agent Collaboration
In today's rapidly evolving AI landscape, we're witnessing a fundamental shift from single-agent AI systems to complex, collaborative multi-agent architectures. While individual AI models like GPT-4 and Claude have demonstrated remarkable capabilities, they often struggle with complex tasks requiring diverse expertise, nuanced decision-making, and specialized domain knowledge. Enter the Swarms API, an enterprise-grade solution designed to orchestrate collaborative intelligence through coordinated AI agent swarms.
## The Problem: The Limitations of Single-Agent AI
Despite significant advances in large language models and AI systems, single-agent architectures face inherent limitations when tackling complex real-world problems:
### Expertise Boundaries
Even the most advanced AI models have knowledge boundaries. No single model can possess expert-level knowledge across all domains simultaneously. When a task requires deep expertise in multiple areas (finance, law, medicine, and technical analysis, for example), a single agent quickly reaches its limits.
### Complex Reasoning Chains
Many real-world problems demand multistep reasoning with multiple feedback loops and verification processes. Single agents often struggle to maintain reasoning coherence through extended problem-solving journeys, leading to errors that compound over time.
### Workflow Orchestration
Enterprise applications frequently require sophisticated workflows with multiple handoffs, approvals, and specialized processing steps. Managing this orchestration with individual AI instances is inefficient and error-prone.
### Resource Optimization
Deploying high-powered AI models for every task is expensive and inefficient. Organizations need right-sized solutions that match computing resources to task requirements.
### Collaboration Mechanisms
The most sophisticated human problem-solving happens in teams, where specialists collaborate, debate, and refine solutions together. This collaborative intelligence is difficult to replicate with isolated AI agents.
## The Solution: Swarms API
The Swarms API addresses these challenges through a revolutionary approach to AI orchestration. By enabling multiple specialized agents to collaborate in coordinated swarms, it unlocks new capabilities previously unattainable with single-agent architectures.
### What is the Swarms API?
The Swarms API is an enterprise-grade platform that enables organizations to deploy and manage intelligent agent swarms in the cloud. Rather than relying on a single AI agent to handle complex tasks, the Swarms API orchestrates teams of specialized AI agents that work together, each handling specific aspects of a larger problem.
The platform provides a robust infrastructure for creating, executing, and managing sophisticated AI agent workflows without the burden of maintaining the underlying infrastructure. With its cloud-native architecture, the Swarms API offers scalability, reliability, and security essential for enterprise deployments.
## Core Capabilities
The Swarms API delivers a comprehensive suite of capabilities designed for production-grade AI orchestration:
### Intelligent Swarm Management
At its core, the Swarms API enables the creation and execution of collaborative agent swarms. These swarms consist of specialized AI agents designed to work together on complex tasks. Unlike traditional AI approaches where a single model handles the entire workload, swarms distribute tasks among specialized agents, each contributing its expertise to the collective solution.
For example, a financial analysis swarm might include:
- A data preprocessing agent that cleans and normalizes financial data
- A market analyst agent that identifies trends and patterns
- An economic forecasting agent that predicts future market conditions
- A report generation agent that compiles insights into a comprehensive analysis
By coordinating these specialized agents, the swarm can deliver more accurate, nuanced, and valuable results than any single agent could produce alone.
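To make the composition concrete, here is a minimal sketch of such an agent roster as plain data. The agent names paraphrase the roles listed above; the exact configuration schema the API expects is shown in the Getting Started section later in this post.

```python
# Sketch: agent roster for a financial-analysis swarm.
# Names mirror the roles described above; field names follow the
# request format used in the examples later in this post.
financial_swarm_agents = [
    {"agent_name": "Data Preprocessor",
     "description": "Cleans and normalizes financial data"},
    {"agent_name": "Market Analyst",
     "description": "Identifies trends and patterns"},
    {"agent_name": "Economic Forecaster",
     "description": "Predicts future market conditions"},
    {"agent_name": "Report Generator",
     "description": "Compiles insights into a comprehensive analysis"},
]

# In a sequential pipeline, each agent's output feeds the next.
pipeline_order = [a["agent_name"] for a in financial_swarm_agents]
print(pipeline_order)
```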
### Automatic Agent Generation
One of the most powerful features of the Swarms API is its ability to dynamically create optimized agents based on task requirements. Rather than manually configuring each agent in a swarm, users can specify the overall task and let the platform automatically generate appropriate agents with optimized prompts and configurations.
This automatic agent generation significantly reduces the expertise and effort required to deploy effective AI solutions. The system analyzes the task requirements and creates a set of agents specifically designed to address different aspects of the problem. This approach not only saves time but also improves the quality of results by ensuring each agent is properly configured for its specific role.
### Multiple Swarm Architectures
Different problems require different collaboration patterns. The Swarms API supports various swarm architectures to match specific workflow needs:
- **SequentialWorkflow**: Agents work in a predefined sequence, with each agent handling specific subtasks in order
- **ConcurrentWorkflow**: Multiple agents work simultaneously on different aspects of a task
- **GroupChat**: Agents collaborate in a discussion format to solve problems collectively
- **HierarchicalSwarm**: Organizes agents in a structured hierarchy with managers and workers
- **MajorityVoting**: Uses a consensus mechanism where multiple agents vote on the best solution
- **AutoSwarmBuilder**: Automatically designs and builds an optimal swarm architecture based on the task
- **MixtureOfAgents**: Combines multiple agent types to tackle diverse aspects of a problem
- **MultiAgentRouter**: Routes subtasks to specialized agents based on their capabilities
- **AgentRearrange**: Dynamically reorganizes the workflow between agents based on evolving task requirements
This flexibility allows organizations to select the most appropriate collaboration pattern for each specific use case, optimizing the balance between efficiency, thoroughness, and creativity.
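In practice, choosing an architecture comes down to setting the `swarm_type` field in the request. A small sketch (the type names are transcribed from the list above; the validation helper is illustrative, not part of the API):

```python
# The swarm_type field of a request selects the collaboration pattern.
# Names transcribed from the list above.
SUPPORTED_SWARM_TYPES = {
    "SequentialWorkflow", "ConcurrentWorkflow", "GroupChat",
    "HierarchicalSwarm", "MajorityVoting", "AutoSwarmBuilder",
    "MixtureOfAgents", "MultiAgentRouter", "AgentRearrange",
}

def validate_swarm_type(swarm_config: dict) -> dict:
    """Fail fast if a config names an unknown architecture."""
    swarm_type = swarm_config.get("swarm_type")
    if swarm_type not in SUPPORTED_SWARM_TYPES:
        raise ValueError(f"Unknown swarm_type: {swarm_type!r}")
    return swarm_config

config = validate_swarm_type({"swarm_type": "GroupChat", "task": "Debate the proposal"})
print(config["swarm_type"])  # GroupChat
```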
### Scheduled Execution
The Swarms API enables automated, scheduled swarm executions, allowing organizations to set up recurring tasks that run automatically at specified times. This feature is particularly valuable for regular reporting, monitoring, and analysis tasks that need to be performed on a consistent schedule.
For example, a financial services company could schedule a daily market analysis swarm to run before trading hours, providing updated insights based on overnight market movements. Similarly, a cybersecurity team might schedule hourly security assessment swarms to continuously monitor potential threats.
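Computing the schedule timestamp for such a recurring job is plain `datetime` arithmetic; a sketch of the "daily pre-market run" case follows (the resulting ISO timestamp is what you would hand to a scheduling call, whose exact endpoint and parameters are not shown here):

```python
from datetime import datetime, timedelta

def next_daily_run(now: datetime, hour: int = 8) -> datetime:
    """Next occurrence of the given hour, e.g. a daily pre-market analysis run."""
    candidate = now.replace(hour=hour, minute=0, second=0, microsecond=0)
    if candidate <= now:
        # Today's slot has passed; schedule for tomorrow.
        candidate += timedelta(days=1)
    return candidate

run_at = next_daily_run(datetime(2025, 1, 6, 14, 30))
print(run_at.isoformat())  # 2025-01-07T08:00:00
```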
### Comprehensive Logging
Transparency and auditability are essential for enterprise AI applications. The Swarms API provides comprehensive logging capabilities that track all API interactions, agent communications, and decision processes. This detailed logging enables:
- Debugging and troubleshooting swarm behaviors
- Auditing decision trails for compliance and quality assurance
- Analyzing performance patterns to identify optimization opportunities
- Documenting the rationale behind AI-generated recommendations
These logs provide valuable insights into how swarms operate and make decisions, increasing trust and enabling continuous improvement of AI workflows.
### Cost Management
AI deployment costs can quickly escalate without proper oversight. The Swarms API addresses this challenge through:
- **Predictable, transparent pricing**: Clear cost structures that make budgeting straightforward
- **Optimized resource utilization**: Intelligent allocation of computing resources based on task requirements
- **Detailed cost breakdowns**: Comprehensive reporting on token usage, agent costs, and total expenditures
- **Model flexibility**: Freedom to choose the most cost-effective models for each agent based on task complexity
This approach ensures organizations get maximum value from their AI investments without unexpected cost overruns.
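A cost breakdown of the kind described above can be computed from per-agent token counts. The sketch below uses made-up usage records and example rates; neither the field names nor the prices reflect the API's actual log schema or pricing:

```python
# Sketch: per-agent cost breakdown from token usage.
# Field names and rates are illustrative, not the API's actual schema or pricing.
PRICE_PER_1K = {"input": 0.005, "output": 0.015}  # example rates only

usage = [
    {"agent": "Market Analyst", "input_tokens": 1200, "output_tokens": 800},
    {"agent": "Report Generator", "input_tokens": 2400, "output_tokens": 1600},
]

def cost_breakdown(records):
    """Map each agent to its token cost in dollars."""
    costs = {}
    for r in records:
        cost = (r["input_tokens"] / 1000) * PRICE_PER_1K["input"] \
             + (r["output_tokens"] / 1000) * PRICE_PER_1K["output"]
        costs[r["agent"]] = round(cost, 6)
    return costs

breakdown = cost_breakdown(usage)
print(breakdown)
print("total:", round(sum(breakdown.values()), 6))
```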
### Enterprise Security
Security is paramount for enterprise AI deployments. The Swarms API implements robust security measures including:
- **Full API key authentication**: Secure access control for all API interactions
- **Comprehensive key management**: Tools for creating, rotating, and revoking API keys
- **Usage monitoring**: Tracking and alerting for suspicious activity patterns
- **Secure data handling**: Appropriate data protection throughout the swarm execution lifecycle
These security features ensure that sensitive data and AI workflows remain protected in accordance with enterprise security requirements.
## How It Works: Behind the Scenes
The Swarms API operates on a sophisticated architecture designed for reliability, scalability, and performance. Here's a look at what happens when you submit a task to the Swarms API:
1. **Task Submission**: You send a request to the API with your task description and desired swarm configuration.
2. **Swarm Configuration**: The system either uses your specified agent configuration or automatically generates an optimal swarm structure based on the task requirements.
3. **Agent Initialization**: Each agent in the swarm is initialized with its specific instructions, model parameters, and role definitions.
4. **Orchestration Setup**: The system establishes the communication and workflow patterns between agents based on the selected swarm architecture.
5. **Execution**: The swarm begins working on the task, with agents collaborating according to their defined roles and relationships.
6. **Monitoring and Adjustment**: Throughout execution, the system monitors agent performance and makes adjustments as needed.
7. **Result Compilation**: Once the task is complete, the system compiles the results into the requested format.
8. **Response Delivery**: The final output is returned to you, along with metadata about the execution process.
This entire process happens seamlessly in the cloud, with the Swarms API handling all the complexities of agent coordination, resource allocation, and workflow management.
## Real-World Applications
The Swarms API enables a wide range of applications across industries. Here are some compelling use cases that demonstrate its versatility:
### Financial Services
#### Investment Research
Financial institutions can deploy research swarms that combine market analysis, economic forecasting, company evaluation, and risk assessment. These swarms can evaluate investment opportunities much more comprehensively than single-agent systems, considering multiple factors simultaneously:
- Macroeconomic indicators
- Company fundamentals
- Market sentiment
- Technical analysis patterns
- Regulatory considerations
For example, an investment research swarm analyzing a potential stock purchase might include specialists in the company's industry, financial statement analysis, market trend identification, and risk assessment. This collaborative approach delivers more nuanced insights than any single analyst or model could produce independently.
#### Regulatory Compliance
Financial regulations are complex and constantly evolving. Compliance swarms can monitor regulatory changes, assess their impact on existing policies, and recommend appropriate adjustments. These swarms might include:
- Regulatory monitoring agents that track new rules and guidelines
- Policy analysis agents that evaluate existing compliance frameworks
- Gap assessment agents that identify discrepancies
- Documentation agents that update compliance materials
This approach ensures comprehensive coverage of regulatory requirements while minimizing compliance risks.
### Healthcare
#### Medical Research Analysis
The medical literature grows at an overwhelming pace, making it difficult for researchers and clinicians to stay current. Research analysis swarms can continuously scan new publications, identify relevant findings, and synthesize insights for specific research questions or clinical scenarios.
A medical research swarm might include:
- Literature scanning agents that identify relevant publications
- Methodology assessment agents that evaluate research quality
- Clinical relevance agents that determine practical applications
- Summary agents that compile key findings into accessible reports
This collaborative approach enables more thorough literature reviews and helps bridge the gap between research and clinical practice.
#### Treatment Planning
Complex medical cases often benefit from multidisciplinary input. Treatment planning swarms can integrate perspectives from different medical specialties, consider patient-specific factors, and recommend comprehensive care approaches.
For example, an oncology treatment planning swarm might include specialists in:
- Diagnostic interpretation
- Treatment protocol evaluation
- Drug interaction assessment
- Patient history analysis
- Evidence-based outcome prediction
By combining these specialized perspectives, the swarm can develop more personalized and effective treatment recommendations.
### Legal Services
#### Contract Analysis
Legal contracts contain numerous interconnected provisions that must be evaluated holistically. Contract analysis swarms can review complex agreements more thoroughly by assigning different sections to specialized agents:
- Definition analysis agents that ensure consistent terminology
- Risk assessment agents that identify potential liabilities
- Compliance agents that check regulatory requirements
- Precedent comparison agents that evaluate terms against standards
- Conflict detection agents that identify internal inconsistencies
This distributed approach enables more comprehensive contract reviews while reducing the risk of overlooking critical details.
#### Legal Research
Legal research requires examining statutes, case law, regulations, and scholarly commentary. Research swarms can conduct multi-faceted legal research by coordinating specialized agents focusing on different aspects of the legal landscape.
A legal research swarm might include:
- Statutory analysis agents that examine relevant laws
- Case law agents that review judicial precedents
- Regulatory agents that assess administrative rules
- Scholarly analysis agents that evaluate academic perspectives
- Synthesis agents that integrate findings into cohesive arguments
This collaborative approach produces more comprehensive legal analyses that consider multiple sources of authority.
### Research and Development
#### Scientific Literature Review
Scientific research increasingly spans multiple disciplines, making comprehensive literature reviews challenging. Literature review swarms can analyze publications across relevant fields, identify methodological approaches, and synthesize findings from diverse sources.
For example, a biomedical engineering literature review swarm might include specialists in:
- Materials science
- Cellular biology
- Clinical applications
- Regulatory requirements
- Statistical methods
By integrating insights from these different perspectives, the swarm can produce more comprehensive and valuable literature reviews.
#### Experimental Design
Designing robust experiments requires considering multiple factors simultaneously. Experimental design swarms can develop sophisticated research protocols by integrating methodological expertise, statistical considerations, practical constraints, and ethical requirements.
An experimental design swarm might coordinate:
- Methodology agents that design experimental procedures
- Statistical agents that determine appropriate sample sizes and analyses
- Logistics agents that assess practical feasibility
- Ethics agents that evaluate potential concerns
- Documentation agents that prepare formal protocols
This collaborative approach leads to more rigorous experimental designs while addressing potential issues preemptively.
### Software Development
#### Code Review and Optimization
Code review requires evaluating multiple aspects simultaneously: functionality, security, performance, maintainability, and adherence to standards. Code review swarms can distribute these concerns among specialized agents:
- Functionality agents that evaluate whether code meets requirements
- Security agents that identify potential vulnerabilities
- Performance agents that assess computational efficiency
- Style agents that check adherence to coding standards
- Documentation agents that review comments and documentation
By addressing these different aspects in parallel, code review swarms can provide more comprehensive feedback to development teams.
#### System Architecture Design
Designing complex software systems requires balancing numerous considerations. Architecture design swarms can develop more robust system designs by coordinating specialists in different architectural concerns:
- Scalability agents that evaluate growth potential
- Security agents that assess protective measures
- Performance agents that analyze efficiency
- Maintainability agents that consider long-term management
- Integration agents that evaluate external system connections
This collaborative approach leads to more balanced architectural decisions that address multiple requirements simultaneously.
## Getting Started with the Swarms API
The Swarms API is designed for straightforward integration into existing workflows. Let's walk through the setup process and explore some practical code examples for different industries.
### 1. Setting Up Your Environment
First, create an account on [swarms.world](https://swarms.world). After registration, navigate to the API key management interface at [https://swarms.world/platform/api-keys](https://swarms.world/platform/api-keys) to generate your API key.
Once you have your API key, set up your Python environment:
```bash
# Install required packages
pip install requests python-dotenv
```
Create a basic project structure:
```
swarms-project/
├── .env # Store your API key securely
├── swarms_client.py # Helper functions for API interaction
└── examples/ # Industry-specific examples
```
In your `.env` file, add your API key:
```
SWARMS_API_KEY=your_api_key_here
```
### 2. Creating a Basic Swarms Client
Let's create a simple client to interact with the Swarms API:
```python
# swarms_client.py
import os

import requests
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Configuration
API_KEY = os.getenv("SWARMS_API_KEY")
BASE_URL = "https://api.swarms.world"

# Standard headers for all requests
headers = {
    "x-api-key": API_KEY,
    "Content-Type": "application/json",
}

def check_api_health():
    """Simple health check to verify API connectivity."""
    response = requests.get(f"{BASE_URL}/health", headers=headers)
    return response.json()

def run_swarm(swarm_config):
    """Execute a swarm with the provided configuration."""
    response = requests.post(
        f"{BASE_URL}/v1/swarm/completions",
        headers=headers,
        json=swarm_config,
    )
    return response.json()

def get_available_swarms():
    """Retrieve the list of available swarm types."""
    response = requests.get(f"{BASE_URL}/v1/swarms/available", headers=headers)
    return response.json()

def get_available_models():
    """Retrieve the list of available AI models."""
    response = requests.get(f"{BASE_URL}/v1/models/available", headers=headers)
    return response.json()

def get_swarm_logs():
    """Retrieve logs of previous swarm executions."""
    response = requests.get(f"{BASE_URL}/v1/swarm/logs", headers=headers)
    return response.json()
```
### 3. Industry-Specific Examples
Let's explore practical applications of the Swarms API across different industries.
#### Healthcare: Clinical Research Assistant
This example creates a swarm that analyzes clinical trial data and summarizes findings:
```python
# healthcare_example.py
import json

from swarms_client import run_swarm

def clinical_research_assistant():
    """
    Create a swarm that analyzes clinical trial data, identifies patterns,
    and generates comprehensive research summaries.
    """
    swarm_config = {
        "name": "Clinical Research Assistant",
        "description": "Analyzes medical research data and synthesizes findings",
        "agents": [
            {
                "agent_name": "Data Preprocessor",
                "description": "Cleans and organizes clinical trial data",
                "system_prompt": (
                    "You are a data preprocessing specialist focused on clinical trials. "
                    "Your task is to organize, clean, and structure raw clinical data for analysis. "
                    "Identify and handle missing values, outliers, and inconsistencies in the data."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Clinical Analyst",
                "description": "Analyzes preprocessed data to identify patterns and insights",
                "system_prompt": (
                    "You are a clinical research analyst with expertise in interpreting medical data. "
                    "Your job is to examine preprocessed clinical trial data, identify significant patterns, "
                    "and determine the clinical relevance of these findings. Consider factors such as "
                    "efficacy, safety profiles, and patient subgroups."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Medical Writer",
                "description": "Synthesizes analysis into comprehensive reports",
                "system_prompt": (
                    "You are a medical writer specializing in clinical research. "
                    "Your task is to take the analyses provided and create comprehensive, "
                    "well-structured reports that effectively communicate findings to both "
                    "medical professionals and regulatory authorities. Follow standard "
                    "medical publication guidelines."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
        ],
        "max_loops": 1,
        "swarm_type": "SequentialWorkflow",
        "task": (
            "Analyze the provided Phase III clinical trial data for Drug XYZ, "
            "a novel treatment for type 2 diabetes. Identify efficacy patterns across "
            "different patient demographics, note any safety concerns, and prepare "
            "a comprehensive summary suitable for submission to regulatory authorities."
        ),
    }

    # Execute the swarm
    result = run_swarm(swarm_config)

    # Print formatted results
    print(json.dumps(result, indent=4))
    return result

if __name__ == "__main__":
    clinical_research_assistant()
```
#### Legal: Contract Analysis System
This example demonstrates a swarm designed to analyze complex legal contracts:
```python
# legal_example.py
import json

from swarms_client import run_swarm

def contract_analysis_system():
    """
    Create a swarm that thoroughly analyzes legal contracts,
    identifies potential risks, and suggests improvements.
    """
    swarm_config = {
        "name": "Contract Analysis System",
        "description": "Analyzes legal contracts for risks and improvement opportunities",
        "agents": [
            {
                "agent_name": "Clause Extractor",
                "description": "Identifies and categorizes key clauses in contracts",
                "system_prompt": (
                    "You are a legal document specialist. Your task is to "
                    "carefully review legal contracts and identify all key clauses, "
                    "categorizing them by type (liability, indemnification, termination, etc.). "
                    "Extract each clause with its context and prepare them for detailed analysis."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Risk Assessor",
                "description": "Evaluates clauses for potential legal risks",
                "system_prompt": (
                    "You are a legal risk assessment expert. Your job is to "
                    "analyze contract clauses and identify potential legal risks, "
                    "exposure points, and unfavorable terms. Rate each risk on a "
                    "scale of 1-5 and provide justification for your assessment."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Improvement Recommender",
                "description": "Suggests alternative language to mitigate risks",
                "system_prompt": (
                    "You are a contract drafting expert. Based on the risk "
                    "assessment provided, suggest alternative language for "
                    "problematic clauses to better protect the client's interests. "
                    "Ensure suggestions are legally sound and professionally worded."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Summary Creator",
                "description": "Creates executive summary of findings and recommendations",
                "system_prompt": (
                    "You are a legal communication specialist. Create a clear, "
                    "concise executive summary of the contract analysis, highlighting "
                    "key risks and recommendations. Your summary should be understandable "
                    "to non-legal executives while maintaining accuracy."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
        ],
        "max_loops": 1,
        "swarm_type": "SequentialWorkflow",
        "task": (
            "Analyze the attached software licensing agreement between TechCorp and ClientInc. "
            "Identify all key clauses, assess potential risks to ClientInc, suggest improvements "
            "to better protect ClientInc's interests, and create an executive summary of findings."
        ),
    }

    # Execute the swarm
    result = run_swarm(swarm_config)

    # Print formatted results
    print(json.dumps(result, indent=4))
    return result

if __name__ == "__main__":
    contract_analysis_system()
```
#### Private Equity: Investment Opportunity Analysis
This example shows a swarm that performs comprehensive due diligence on potential investments:
```python
# private_equity_example.py
import json
from datetime import datetime, timedelta

from swarms_client import run_swarm

def investment_opportunity_analysis():
    """
    Create a swarm that performs comprehensive due diligence
    on potential private equity investment opportunities.
    """
    swarm_config = {
        "name": "PE Investment Analyzer",
        "description": "Performs comprehensive analysis of private equity investment opportunities",
        "agents": [
            {
                "agent_name": "Financial Analyst",
                "description": "Analyzes financial statements and projections",
                "system_prompt": (
                    "You are a private equity financial analyst with expertise in "
                    "evaluating company financials. Review the target company's financial "
                    "statements, analyze growth trajectories, profit margins, cash flow patterns, "
                    "and debt structure. Identify financial red flags and growth opportunities."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Market Researcher",
                "description": "Assesses market conditions and competitive landscape",
                "system_prompt": (
                    "You are a market research specialist in the private equity sector. "
                    "Analyze the target company's market position, industry trends, competitive "
                    "landscape, and growth potential. Identify market-related risks and opportunities "
                    "that could impact investment returns."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Operational Due Diligence",
                "description": "Evaluates operational efficiency and improvement opportunities",
                "system_prompt": (
                    "You are an operational due diligence expert. Analyze the target "
                    "company's operational structure, efficiency metrics, supply chain, "
                    "technology infrastructure, and management capabilities. Identify "
                    "operational improvement opportunities that could increase company value."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Risk Assessor",
                "description": "Identifies regulatory, legal, and business risks",
                "system_prompt": (
                    "You are a risk assessment specialist in private equity. "
                    "Evaluate potential regulatory challenges, legal liabilities, "
                    "compliance issues, and business model vulnerabilities. Rate "
                    "each risk based on likelihood and potential impact."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Investment Thesis Creator",
                "description": "Synthesizes analysis into comprehensive investment thesis",
                "system_prompt": (
                    "You are a private equity investment strategist. Based on the "
                    "analyses provided, develop a comprehensive investment thesis "
                    "that includes valuation assessment, potential returns, value "
                    "creation opportunities, exit strategies, and investment recommendations."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
        ],
        "max_loops": 1,
        "swarm_type": "SequentialWorkflow",
        "task": (
            "Perform comprehensive due diligence on HealthTech Inc., a potential acquisition "
            "target in the healthcare technology sector. The company develops remote patient "
            "monitoring solutions and has shown 35% year-over-year growth for the past three years. "
            "Analyze financials, market position, operational structure, potential risks, and "
            "develop an investment thesis with a recommended valuation range."
        ),
    }

    # Option 1: Execute the swarm immediately
    result = run_swarm(swarm_config)

    # Option 2: Schedule the swarm for tomorrow morning
    # (requires a scheduling helper, which is not part of the basic client above)
    tomorrow = (datetime.now() + timedelta(days=1)).replace(hour=8, minute=0, second=0).isoformat()
    # scheduled_result = schedule_swarm(swarm_config, tomorrow, "America/New_York")

    # Print formatted results from immediate execution
    print(json.dumps(result, indent=4))
    return result

if __name__ == "__main__":
    investment_opportunity_analysis()
```
#### Education: Curriculum Development Assistant
This example shows how to use the ConcurrentWorkflow swarm type:
```python
# education_example.py
import json

from swarms_client import run_swarm

def curriculum_development_assistant():
    """
    Create a swarm that assists in developing educational curriculum
    with concurrent subject matter experts.
    """
    swarm_config = {
        "name": "Curriculum Development Assistant",
        "description": "Develops comprehensive educational curriculum",
        "agents": [
            {
                "agent_name": "Subject Matter Expert",
                "description": "Provides domain expertise on the subject",
                "system_prompt": (
                    "You are a subject matter expert in data science. "
                    "Your role is to identify the essential concepts, skills, "
                    "and knowledge that students need to master in a comprehensive "
                    "data science curriculum. Focus on both theoretical foundations "
                    "and practical applications, ensuring the content reflects current "
                    "industry standards and practices."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Instructional Designer",
                "description": "Structures learning objectives and activities",
                "system_prompt": (
                    "You are an instructional designer specializing in technical education. "
                    "Your task is to transform subject matter content into structured learning "
                    "modules with clear objectives, engaging activities, and appropriate assessments. "
                    "Design the learning experience to accommodate different learning styles and "
                    "knowledge levels."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Assessment Specialist",
                "description": "Develops evaluation methods and assessments",
                "system_prompt": (
                    "You are an educational assessment specialist. "
                    "Design comprehensive assessment strategies to evaluate student "
                    "learning throughout the curriculum. Create formative and summative "
                    "assessments, rubrics, and feedback mechanisms that align with learning "
                    "objectives and provide meaningful insights into student progress."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
            {
                "agent_name": "Curriculum Integrator",
                "description": "Synthesizes input from all specialists into a cohesive curriculum",
                "system_prompt": (
                    "You are a curriculum development coordinator. "
                    "Your role is to synthesize the input from subject matter experts, "
                    "instructional designers, and assessment specialists into a cohesive, "
                    "comprehensive curriculum. Ensure logical progression of topics, "
                    "integration of theory and practice, and alignment between content, "
                    "activities, and assessments."
                ),
                "model_name": "gpt-4o",
                "role": "worker",
                "max_loops": 1,
            },
        ],
        "max_loops": 1,
        "swarm_type": "ConcurrentWorkflow",  # Experts work simultaneously before integration
        "task": (
            "Develop a comprehensive 12-week data science curriculum for advanced undergraduate "
            "students with programming experience. The curriculum should cover data analysis, "
            "machine learning, data visualization, and ethics in AI. Include weekly learning "
            "objectives, teaching materials, hands-on activities, and assessment methods. "
            "The curriculum should prepare students for entry-level data science positions."
        ),
    }

    # Execute the swarm
    result = run_swarm(swarm_config)

    # Print formatted results
    print(json.dumps(result, indent=4))
    return result

if __name__ == "__main__":
    curriculum_development_assistant()
```
### 5. Monitoring and Optimization
To optimize your swarm configurations and track usage patterns, you can retrieve and analyze logs:
```python
# analytics_example.py
from swarms_client import get_swarm_logs
import json
def analyze_swarm_usage():
"""
Analyze swarm usage patterns to optimize configurations and costs.
"""
# Retrieve logs
logs = get_swarm_logs()
# Print the raw logs for inspection; the exact structure may vary by API version
print(json.dumps(logs, indent=4, default=str))
return logs
if __name__ == "__main__":
analyze_swarm_usage()
```
### 6. Next Steps
Once you've implemented and tested these examples, you can further optimize your swarm configurations by:
1. Experimenting with different swarm architectures for the same task to compare results
2. Adjusting agent prompts to improve specialization and collaboration
3. Fine-tuning model parameters like temperature and max_tokens
4. Combining swarms into larger workflows through scheduled execution
The Swarms API's flexibility allows for continuous refinement of your AI orchestration strategies, enabling increasingly sophisticated solutions to complex problems.
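The first refinement step above can be sketched as plain configuration data: the variants below differ only in `swarm_type`, so the same task can be submitted once per architecture and the outputs compared side by side. The config fields and swarm-type names follow the earlier examples; the helper itself is an illustrative assumption, not part of the client library.

```python
import copy

# Base configuration shared by every variant (field names follow the
# earlier swarm_config examples in this article).
base_config = {
    "name": "Architecture Comparison",
    "task": "Summarize current trends in data science education.",
    "agents": [
        {
            "agent_name": "Analyst",
            "description": "General-purpose analyst",
            "system_prompt": "You are a careful analyst.",
            "model_name": "gpt-4o",
            "role": "worker",
            "max_loops": 1,
        }
    ],
    "max_loops": 1,
}


def make_variants(config, swarm_types):
    """Return one deep-copied config per architecture, differing only in swarm_type."""
    variants = []
    for swarm_type in swarm_types:
        variant = copy.deepcopy(config)
        variant["swarm_type"] = swarm_type
        variants.append(variant)
    return variants


variants = make_variants(
    base_config, ["SequentialWorkflow", "ConcurrentWorkflow"]
)
# Each variant can then be passed to run_swarm() and the results compared.
print([v["swarm_type"] for v in variants])
```

Each variant could then be executed with the same `run_swarm` helper used earlier, with the outputs scored against whatever quality criteria matter for your task.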
## The Future of AI Agent Orchestration
The Swarms API represents a significant evolution in how we deploy AI for complex tasks. As we look to the future, several trends are emerging in the field of agent orchestration:
### Specialized Agent Ecosystems
We're moving toward rich ecosystems of highly specialized agents designed for specific tasks and domains. These specialized agents will have deep expertise in narrow areas, enabling more sophisticated collaboration when combined in swarms.
### Dynamic Swarm Formation
Future swarm platforms will likely feature even more advanced capabilities for dynamic swarm formation, where the system automatically determines not only which agents to include but also how they should collaborate based on real-time task analysis.
### Cross-Modal Collaboration
As AI capabilities expand across modalities (text, image, audio, video), we'll see increasing collaboration between agents specialized in different data types. This cross-modal collaboration will enable more comprehensive analysis and content creation spanning multiple formats.
### Human-Swarm Collaboration
The next frontier in agent orchestration will be seamless collaboration between human teams and AI swarms, where human specialists and AI agents work together, each contributing their unique strengths to complex problems.
### Continuous Learning Swarms
Future swarms will likely incorporate more sophisticated mechanisms for continuous improvement, with agent capabilities evolving based on past performance and feedback.
## Conclusion
The Swarms API represents a significant leap forward in AI orchestration, moving beyond the limitations of single-agent systems to unlock the power of collaborative intelligence. By enabling specialized agents to work together in coordinated swarms, this enterprise-grade platform opens new possibilities for solving complex problems across industries.
From financial analysis to healthcare research, legal services to software development, the applications for agent swarms are as diverse as they are powerful. The Swarms API provides the infrastructure, tools, and flexibility needed to deploy these collaborative AI systems at scale, with the security, reliability, and cost management features essential for enterprise adoption.
As we continue to push the boundaries of what AI can accomplish, the ability to orchestrate collaborative intelligence will become increasingly crucial. The Swarms API is at the forefront of this evolution, providing a glimpse into the future of AI—a future where the most powerful AI systems aren't individual models but coordinated teams of specialized agents working together to solve our most challenging problems.
For organizations looking to harness the full potential of AI, the Swarms API offers a compelling path forward—one that leverages the power of collaboration to achieve results beyond what any single AI agent could accomplish alone.
To explore the Swarms API and begin building your own intelligent agent swarms, visit [swarms.world](https://swarms.world) today.
---
## Resources
* Website: [swarms.ai](https://swarms.ai)
* Marketplace: [swarms.world](https://swarms.world)
* Cloud Platform: [cloud.swarms.ai](https://cloud.swarms.ai)
* Documentation: [docs.swarms.world](https://docs.swarms.world/en/latest/swarms_cloud/swarms_api/)

@ -357,6 +357,7 @@ nav:
- Swarms API as MCP: "swarms_cloud/mcp.md"
- Swarms API Tools: "swarms_cloud/swarms_api_tools.md"
- Individual Agent Completions: "swarms_cloud/agent_api.md"
- Swarms API Python Client: "swarms_cloud/python_client.md"
- Pricing:
- Swarms API Pricing: "swarms_cloud/api_pricing.md"

@ -1,9 +0,0 @@
# Available Models
| Model Name | Description | Input Price | Output Price | Use Cases |
|-----------------------|---------------------------------------------------------------------------------------------------------|--------------|--------------|------------------------------------------------------------------------|
| **internlm-xcomposer2-4khd** | One of the highest-performing VLMs (Vision-Language Models). | $4/1M Tokens | $8/1M Tokens | High-resolution image processing and understanding. |
## What models should we add?
[Book a call with us to learn more about your needs:](https://calendly.com/swarm-corp/30min)

@ -1,352 +0,0 @@
# Swarm Cloud API Reference
## Overview
The AI Chat Completion API processes text and image inputs to generate conversational responses. It supports various configurations to customize response behavior and manage input content.
## API Endpoints
### Chat Completion URL
`https://api.swarms.world`
- **Endpoint:** `/v1/chat/completions`
- **Full URL:** `https://api.swarms.world/v1/chat/completions`
- **Method:** POST
- **Description:** Generates a response based on the provided conversation history and parameters.
#### Request Parameters
| Parameter | Type | Description | Required |
|---------------|--------------------|-----------------------------------------------------------|----------|
| `model` | string | The AI model identifier. | Yes |
| `messages` | array of objects | A list of chat messages, including the sender's role and content. | Yes |
| `temperature` | float | Controls randomness. Lower values make responses more deterministic. | No |
| `top_p` | float | Controls diversity. Lower values lead to less random completions. | No |
| `max_tokens` | integer | The maximum number of tokens to generate. | No |
| `stream` | boolean | If set to true, responses are streamed back as they're generated. | No |
#### Response Structure
- **Success Response Code:** `200 OK`
```json
{
"model": string,
"object": string,
"choices": array of objects,
"usage": object
}
```
### List Models
- **Endpoint:** `/v1/models`
- **Method:** GET
- **Description:** Retrieves a list of available models.
#### Response Structure
- **Success Response Code:** `200 OK`
```json
{
"data": array of objects
}
```
## Objects
### Request
| Field | Type | Description | Required |
|-----------|---------------------|-----------------------------------------------|----------|
| `role` | string | The role of the message sender. | Yes |
| `content` | string or array | The content of the message. | Yes |
| `name` | string | An optional name identifier for the sender. | No |
### Response
| Field | Type | Description |
|-----------|--------|------------------------------------|
| `index` | integer| The index of the choice. |
| `message` | object | A `ChatMessageResponse` object. |
#### UsageInfo
| Field | Type | Description |
|-------------------|---------|-----------------------------------------------|
| `prompt_tokens` | integer | The number of tokens used in the prompt. |
| `total_tokens` | integer | The total number of tokens used. |
| `completion_tokens` | integer | The number of tokens used for the completion. |
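Given a `UsageInfo` object, a per-request cost estimate is just the token counts weighted by the model's prices. The sketch below assumes illustrative prices quoted in dollars per 1M tokens, as in the pricing tables; substitute the real figures for your model.

```python
def estimate_cost(usage, input_price_per_m, output_price_per_m):
    """Estimate request cost in dollars from a UsageInfo-style dict.

    Prices are expressed in dollars per 1M tokens.
    """
    return (
        usage["prompt_tokens"] * input_price_per_m
        + usage["completion_tokens"] * output_price_per_m
    ) / 1_000_000


# Illustrative prices only: $4/1M input tokens, $8/1M output tokens.
usage = {"prompt_tokens": 1200, "completion_tokens": 300, "total_tokens": 1500}
print(estimate_cost(usage, 4.0, 8.0))  # 0.0072
```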
## Example Requests
### Text Chat Completion
```json
POST /v1/chat/completions
{
"model": "cogvlm-chat-17b",
"messages": [
{
"role": "user",
"content": "Hello, world!"
}
],
"temperature": 0.8
}
```
### Image and Text Chat Completion
```json
POST /v1/chat/completions
{
"model": "cogvlm-chat-17b",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this image"
},
{
"type": "image_url",
"image_url": "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD..."
}
]
}
],
"temperature": 0.8,
"top_p": 0.9,
"max_tokens": 1024
}
```
## Error Codes
The API uses standard HTTP status codes to indicate the success or failure of an API call.
| Status Code | Description |
|-------------|-----------------------------------|
| 200 | OK - The request has succeeded. |
| 400 | Bad Request - Invalid request format. |
| 500 | Internal Server Error - An error occurred on the server. |
## Examples in Various Languages
### Python
```python
import requests
import base64
from PIL import Image
from io import BytesIO
# Convert image to Base64
def image_to_base64(image_path):
with Image.open(image_path) as image:
buffered = BytesIO()
image.save(buffered, format="JPEG")
img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
return img_str
# Replace 'image.jpg' with the path to your image
base64_image = image_to_base64("your_image.jpg")
text_data = {"type": "text", "text": "Describe what is in the image"}
image_data = {
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{base64_image}"},
}
# Construct the request data
request_data = {
"model": "cogvlm-chat-17b",
"messages": [{"role": "user", "content": [text_data, image_data]}],
"temperature": 0.8,
"top_p": 0.9,
"max_tokens": 1024,
}
# Specify the URL of your FastAPI application
url = "https://api.swarms.world/v1/chat/completions"
# Send the request
response = requests.post(url, json=request_data)
# Print the response from the server
print(response.text)
```
### Example API Request in Node
```js
const fs = require('fs');
const https = require('https');
const sharp = require('sharp');
// Convert image to Base64
async function imageToBase64(imagePath) {
try {
const imageBuffer = await sharp(imagePath).jpeg().toBuffer();
return imageBuffer.toString('base64');
} catch (error) {
console.error('Error converting image to Base64:', error);
}
}
// Main function to execute the workflow
async function main() {
const base64Image = await imageToBase64("your_image.jpg");
const textData = { type: "text", text: "Describe what is in the image" };
const imageData = {
type: "image_url",
image_url: { url: `data:image/jpeg;base64,${base64Image}` },
};
// Construct the request data
const requestData = JSON.stringify({
model: "cogvlm-chat-17b",
messages: [{ role: "user", content: [textData, imageData] }],
temperature: 0.8,
top_p: 0.9,
max_tokens: 1024,
});
const options = {
hostname: 'api.swarms.world',
path: '/v1/chat/completions',
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Content-Length': Buffer.byteLength(requestData),
},
};
const req = https.request(options, (res) => {
let responseBody = '';
res.on('data', (chunk) => {
responseBody += chunk;
});
res.on('end', () => {
console.log('Response:', responseBody);
});
});
req.on('error', (error) => {
console.error(error);
});
req.write(requestData);
req.end();
}
main();
```
### Example API Request in Go
```go
package main
import (
"bytes"
"encoding/base64"
"encoding/json"
"fmt"
"image"
"image/jpeg"
_ "image/png" // Register PNG format
"io"
"net/http"
"os"
)
// imageToBase64 converts an image to a Base64-encoded string.
func imageToBase64(imagePath string) (string, error) {
file, err := os.Open(imagePath)
if err != nil {
return "", err
}
defer file.Close()
img, _, err := image.Decode(file)
if err != nil {
return "", err
}
buf := new(bytes.Buffer)
err = jpeg.Encode(buf, img, nil)
if err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(buf.Bytes()), nil
}
// main is the entry point of the program.
func main() {
base64Image, err := imageToBase64("your_image.jpg")
if err != nil {
fmt.Println("Error converting image to Base64:", err)
return
}
requestData := map[string]interface{}{
"model": "cogvlm-chat-17b",
"messages": []map[string]interface{}{
{
"role": "user",
"content": []map[string]interface{}{
{"type": "text", "text": "Describe what is in the image"},
{"type": "image_url", "image_url": map[string]string{"url": fmt.Sprintf("data:image/jpeg;base64,%s", base64Image)}},
},
},
},
"temperature": 0.8,
"top_p": 0.9,
"max_tokens": 1024,
}
requestBody, err := json.Marshal(requestData)
if err != nil {
fmt.Println("Error marshaling request data:", err)
return
}
url := "https://api.swarms.world/v1/chat/completions"
request, err := http.NewRequest("POST", url, bytes.NewBuffer(requestBody))
if err != nil {
fmt.Println("Error creating request:", err)
return
}
request.Header.Set("Content-Type", "application/json")
client := &http.Client{}
response, err := client.Do(request)
if err != nil {
fmt.Println("Error sending request:", err)
return
}
defer response.Body.Close()
responseBody, err := io.ReadAll(response.Body)
if err != nil {
fmt.Println("Error reading response body:", err)
return
}
fmt.Println("Response:", string(responseBody))
}
```
## Conclusion
This API reference provides the necessary details to understand and interact with the AI Chat Completion API. By following the outlined request and response formats, users can integrate this API into their applications to generate dynamic and contextually relevant conversational responses.

@ -1,103 +0,0 @@
## Migrate from OpenAI to Swarms in 3 lines of code
If you've been using GPT-3.5 or GPT-4, switching to Swarms is easy!
Swarms VLMs are available through our OpenAI-compatible API. If you have been building or prototyping with OpenAI's Python SDK, you can keep your code as-is and use Swarms' VLM models.
In this example, we will show you how to change just three lines of code to make your Python application use Swarms' open-source models through OpenAI's Python SDK.
## Getting Started
Migrate OpenAI's Python SDK example script to use Swarms' LLM endpoints.
These are the three modifications necessary to achieve our goal:
Redefine the OPENAI_API_KEY environment variable to use your Swarms key.
Redefine OPENAI_BASE_URL to point to `https://api.swarms.world/v1` (the SDK appends the endpoint path itself).
Change the model name to an open-source model, for example: cogvlm-chat-17b
## Requirements
We will be using Python and OpenAI's Python SDK.
## Instructions
Set up a Python virtual environment. Read Creating Virtual Environments here.
```sh
python3 -m venv .venv
source .venv/bin/activate
```
Install the pip requirements in your local python virtual environment
`python3 -m pip install openai`
## Environment setup
To run this example, there are simple steps to take:
Get a Swarms API token by following these instructions.
Expose the token in a new SWARMS_API_TOKEN environment variable:
`export SWARMS_API_TOKEN=<your-token>`
Switch the OpenAI token and base URL environment variables:
`export OPENAI_API_KEY=$SWARMS_API_TOKEN`
`export OPENAI_BASE_URL="https://api.swarms.world/v1"`
If you prefer, you can also directly paste your token into the client initialization.
## Example code
Once you've completed the steps above, the code below will call Swarms LLMs:
```python
import os
from dotenv import load_dotenv
from openai import OpenAI
load_dotenv()
# Read the key from the environment (or paste your token here directly)
openai_api_key = os.getenv("OPENAI_API_KEY")
openai_api_base = "https://api.swarms.world/v1"
model = "internlm-xcomposer2-4khd"
client = OpenAI(api_key=openai_api_key, base_url=openai_api_base)
# Note that this model expects the image to come before the main text
chat_response = client.chat.completions.create(
model=model,
messages=[
{
"role": "user",
"content": [
{
"type": "image_url",
"image_url": {
"url": "https://home-cdn.reolink.us/wp-content/uploads/2022/04/010345091648784709.4253.jpg",
},
},
{
"type": "text",
"text": "What is the most dangerous object in the image?",
},
],
}
],
temperature=0.1,
max_tokens=5000,
)
print("Chat response:", chat_response)
```
Note that you need to supply one of Swarms' supported LLMs as an argument, as in the example above. For a complete list of our supported LLMs, check out our REST API page.
## Example output
The code above produces the following object:
```python
ChatCompletionMessage(content=" Hello! How can I assist you today? Do you have any questions or tasks you'd like help with? Please let me know and I'll do my best to assist you.", role='assistant', function_call=None, tool_calls=None)
```

File diff suppressed because it is too large

@ -2,7 +2,7 @@
*Enterprise-grade Agent Swarm Management API*
**Base URL**: `https://api.swarms.world`
**Base URL**: `https://api.swarms.world` or `https://swarms-api-285321057562.us-east1.run.app`
**API Key Management**: [https://swarms.world/platform/api-keys](https://swarms.world/platform/api-keys)
## Overview

@ -1,56 +0,0 @@
import os
from dotenv import load_dotenv
from swarm_models import OpenAIChat
from swarms import Agent
from swarms.prompts.finance_agent_sys_prompt import (
FINANCIAL_AGENT_SYS_PROMPT,
)
from new_features_examples.async_executor import HighSpeedExecutor
load_dotenv()
# Get the OpenAI API key from the environment variable
api_key = os.getenv("OPENAI_API_KEY")
# Create an instance of the OpenAIChat class
model = OpenAIChat(
openai_api_key=api_key, model_name="gpt-4o-mini", temperature=0.1
)
# Initialize the agent
agent = Agent(
agent_name="Financial-Analysis-Agent",
system_prompt=FINANCIAL_AGENT_SYS_PROMPT,
llm=model,
max_loops=1,
# autosave=True,
# dashboard=False,
# verbose=True,
# dynamic_temperature_enabled=True,
# saved_state_path="finance_agent.json",
# user_name="swarms_corp",
# retry_attempts=1,
# context_length=200000,
# return_step_meta=True,
# output_type="json", # "json", "dict", "csv" OR "string" soon "yaml" and
# auto_generate_prompt=False, # Auto generate prompt for the agent based on name, description, and system prompt, task
# # artifacts_on=True,
# artifacts_output_path="roth_ira_report",
# artifacts_file_extension=".txt",
# max_tokens=8000,
# return_history=True,
)
def execute_agent(
task: str = "How can I establish a ROTH IRA to buy stocks and get a tax break? What are the criteria. Create a report on this question.",
):
return agent.run(task)
executor = HighSpeedExecutor()
results = executor.run(execute_agent, 2)
print(results)

@ -1,131 +0,0 @@
import asyncio
import multiprocessing as mp
import time
from functools import partial
from typing import Any, Dict, Union
class HighSpeedExecutor:
def __init__(self, num_processes: int = None):
"""
Initialize the executor with configurable number of processes.
If num_processes is None, it uses CPU count.
"""
self.num_processes = num_processes or mp.cpu_count()
async def _worker(
self,
queue: asyncio.Queue,
func: Any,
*args: Any,
**kwargs: Any,
):
"""Async worker that processes tasks from the queue"""
while True:
try:
# Non-blocking get from queue
await queue.get()
await asyncio.get_event_loop().run_in_executor(
None, partial(func, *args, **kwargs)
)
queue.task_done()
except asyncio.CancelledError:
break
async def _distribute_tasks(
self, num_tasks: int, queue: asyncio.Queue
):
"""Distribute tasks across the queue"""
for i in range(num_tasks):
await queue.put(i)
async def execute_batch(
self,
func: Any,
num_executions: int,
*args: Any,
**kwargs: Any,
) -> Dict[str, Union[int, float]]:
"""
Execute the given function multiple times concurrently.
Args:
func: The function to execute
num_executions: Number of times to execute the function
*args, **kwargs: Arguments to pass to the function
Returns:
A dictionary containing the number of executions, duration, and executions per second.
"""
queue = asyncio.Queue()
# Create worker tasks
workers = [
asyncio.create_task(
self._worker(queue, func, *args, **kwargs)
)
for _ in range(self.num_processes)
]
# Start timing
start_time = time.perf_counter()
# Distribute tasks
await self._distribute_tasks(num_executions, queue)
# Wait for all tasks to complete
await queue.join()
# Cancel workers
for worker in workers:
worker.cancel()
# Wait for all workers to finish
await asyncio.gather(*workers, return_exceptions=True)
end_time = time.perf_counter()
duration = end_time - start_time
return {
"executions": num_executions,
"duration": duration,
"executions_per_second": num_executions / duration,
}
def run(
self,
func: Any,
num_executions: int,
*args: Any,
**kwargs: Any,
):
return asyncio.run(
self.execute_batch(func, num_executions, *args, **kwargs)
)
# def example_function(x: int = 0) -> int:
# """Example function to execute"""
# return x * x
# async def main():
# # Create executor with number of CPU cores
# executor = HighSpeedExecutor()
# # Execute the function 1000 times
# result = await executor.execute_batch(
# example_function, num_executions=1000, x=42
# )
# print(
# f"Completed {result['executions']} executions in {result['duration']:.2f} seconds"
# )
# print(
# f"Rate: {result['executions_per_second']:.2f} executions/second"
# )
# if __name__ == "__main__":
# # Run the async main function
# asyncio.run(main())

@ -1,176 +0,0 @@
import asyncio
from typing import List
from swarm_models import OpenAIChat
from swarms.structs.async_workflow import (
SpeakerConfig,
SpeakerRole,
create_default_workflow,
run_workflow_with_retry,
)
from swarms.prompts.finance_agent_sys_prompt import (
FINANCIAL_AGENT_SYS_PROMPT,
)
from swarms.structs.agent import Agent
async def create_specialized_agents() -> List[Agent]:
"""Create a set of specialized agents for financial analysis"""
# Base model configuration
model = OpenAIChat(model_name="gpt-4o")
# Financial Analysis Agent
financial_agent = Agent(
agent_name="Financial-Analysis-Agent",
agent_description="Personal finance advisor agent",
system_prompt=FINANCIAL_AGENT_SYS_PROMPT
+ "Output the <DONE> token when you're done creating a portfolio of etfs, index, funds, and more for AI",
max_loops=1,
llm=model,
dynamic_temperature_enabled=True,
user_name="Kye",
retry_attempts=3,
context_length=8192,
return_step_meta=False,
output_type="str",
auto_generate_prompt=False,
max_tokens=4000,
stopping_token="<DONE>",
saved_state_path="financial_agent.json",
interactive=False,
)
# Risk Assessment Agent
risk_agent = Agent(
agent_name="Risk-Assessment-Agent",
agent_description="Investment risk analysis specialist",
system_prompt="Analyze investment risks and provide risk scores. Output <DONE> when analysis is complete.",
max_loops=1,
llm=model,
dynamic_temperature_enabled=True,
user_name="Kye",
retry_attempts=3,
context_length=8192,
output_type="str",
max_tokens=4000,
stopping_token="<DONE>",
saved_state_path="risk_agent.json",
interactive=False,
)
# Market Research Agent
research_agent = Agent(
agent_name="Market-Research-Agent",
agent_description="AI and tech market research specialist",
system_prompt="Research AI market trends and growth opportunities. Output <DONE> when research is complete.",
max_loops=1,
llm=model,
dynamic_temperature_enabled=True,
user_name="Kye",
retry_attempts=3,
context_length=8192,
output_type="str",
max_tokens=4000,
stopping_token="<DONE>",
saved_state_path="research_agent.json",
interactive=False,
)
return [financial_agent, risk_agent, research_agent]
async def main():
# Create specialized agents
agents = await create_specialized_agents()
# Create workflow with group chat enabled
workflow = create_default_workflow(
agents=agents,
name="AI-Investment-Analysis-Workflow",
enable_group_chat=True,
)
# Configure speaker roles
workflow.speaker_system.add_speaker(
SpeakerConfig(
role=SpeakerRole.COORDINATOR,
agent=agents[0], # Financial agent as coordinator
priority=1,
concurrent=False,
required=True,
)
)
workflow.speaker_system.add_speaker(
SpeakerConfig(
role=SpeakerRole.CRITIC,
agent=agents[1], # Risk agent as critic
priority=2,
concurrent=True,
)
)
workflow.speaker_system.add_speaker(
SpeakerConfig(
role=SpeakerRole.EXECUTOR,
agent=agents[2], # Research agent as executor
priority=2,
concurrent=True,
)
)
# Investment analysis task
investment_task = """
Create a comprehensive investment analysis for a $40k portfolio focused on AI growth opportunities:
1. Identify high-growth AI ETFs and index funds
2. Analyze risks and potential returns
3. Create a diversified portfolio allocation
4. Provide market trend analysis
Present the results in a structured markdown format.
"""
try:
# Run workflow with retry
result = await run_workflow_with_retry(
workflow=workflow, task=investment_task, max_retries=3
)
print("\nWorkflow Results:")
print("================")
# Process and display agent outputs
for output in result.agent_outputs:
print(f"\nAgent: {output.agent_name}")
print("-" * (len(output.agent_name) + 8))
print(output.output)
# Display group chat history if enabled
if workflow.enable_group_chat:
print("\nGroup Chat Discussion:")
print("=====================")
for msg in workflow.speaker_system.message_history:
print(f"\n{msg.role} ({msg.agent_name}):")
print(msg.content)
# Save detailed results
if result.metadata.get("shared_memory_keys"):
print("\nShared Insights:")
print("===============")
for key in result.metadata["shared_memory_keys"]:
value = workflow.shared_memory.get(key)
if value:
print(f"\n{key}:")
print(value)
except Exception as e:
print(f"Workflow failed: {str(e)}")
finally:
await workflow.cleanup()
if __name__ == "__main__":
# Run the example
asyncio.run(main())

@ -1,22 +1,22 @@
"""
- For each diagnosis, pull lab results,
- egfr
- for each diagnosis, pull lab ranges,
- For each diagnosis, pull lab results,
- egfr
- for each diagnosis, pull lab ranges,
- pull ranges for diagnosis
- if the diagnosis is x, then the lab ranges should be a to b
- train the agents, increase the load of input
- train the agents, increase the load of input
- medical history sent to the agent
- setup rag for the agents
- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elevated ->
- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elevated ->
- how to manage diseases and by looking at correlating lab, docs, diagnoses
- put docs in rag ->
- put docs in rag ->
- monitoring, evaluation, and treatment
- can we confirm for every diagnosis -> monitoring, evaluation, and treatment, specialized for these things
- find diagnosis -> or have diagnosis, -> for each diagnosis are there evidence of those 3 things
- swarm of those 4 agents, ->
- swarm of those 4 agents, ->
- fda api for healthcare for commercially available papers
-
-
"""

@ -1,22 +1,22 @@
"""
- For each diagnosis, pull lab results,
- egfr
- for each diagnosis, pull lab ranges,
- For each diagnosis, pull lab results,
- egfr
- for each diagnosis, pull lab ranges,
- pull ranges for diagnosis
- if the diagnosis is x, then the lab ranges should be a to b
- train the agents, increase the load of input
- train the agents, increase the load of input
- medical history sent to the agent
- setup rag for the agents
- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elevated ->
- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elevated ->
- how to manage diseases and by looking at correlating lab, docs, diagnoses
- put docs in rag ->
- put docs in rag ->
- monitoring, evaluation, and treatment
- can we confirm for every diagnosis -> monitoring, evaluation, and treatment, specialized for these things
- find diagnosis -> or have diagnosis, -> for each diagnosis are there evidence of those 3 things
- swarm of those 4 agents, ->
- swarm of those 4 agents, ->
- fda api for healthcare for commercially available papers
-
-
"""

@ -1,63 +0,0 @@
import os
import google.generativeai as genai
from loguru import logger
class GeminiModel:
"""
Represents a GeminiModel instance for generating text based on user input.
"""
def __init__(
self,
temperature: float = 1.0,
top_p: float = 0.95,
top_k: int = 40,
):
"""
Initializes the GeminiModel by setting up the API key and generation configuration, and starting a chat session.
Raises a KeyError if the GEMINI_API_KEY environment variable is not found.
"""
try:
api_key = os.environ["GEMINI_API_KEY"]
genai.configure(api_key=api_key)
self.generation_config = {
"temperature": temperature,
"top_p": top_p,
"top_k": top_k,
"max_output_tokens": 8192,
"response_mime_type": "text/plain",
}
self.model = genai.GenerativeModel(
model_name="gemini-1.5-pro",
generation_config=self.generation_config,
)
self.chat_session = self.model.start_chat(history=[])
except KeyError as e:
logger.error(f"Environment variable not found: {e}")
raise
def run(self, task: str) -> str:
"""
Sends a message to the chat session and returns the response text.
Raises an Exception if there's an error running the GeminiModel.
Args:
task (str): The input task or message to send to the chat session.
Returns:
str: The response text from the chat session.
"""
try:
response = self.chat_session.send_message(task)
return response.text
except Exception as e:
logger.error(f"Error running GeminiModel: {e}")
raise
# Example usage
if __name__ == "__main__":
gemini_model = GeminiModel()
output = gemini_model.run("INSERT_INPUT_HERE")
print(output)

@ -1,272 +0,0 @@
from typing import List, Dict
from dataclasses import dataclass
from datetime import datetime
import asyncio
import aiohttp
from loguru import logger
from swarms import Agent
from pathlib import Path
import json
@dataclass
class CryptoData:
"""Real-time cryptocurrency data structure"""
symbol: str
current_price: float
market_cap: float
total_volume: float
price_change_24h: float
market_cap_rank: int
class DataFetcher:
"""Handles real-time data fetching from CoinGecko"""
def __init__(self):
self.base_url = "https://api.coingecko.com/api/v3"
self.session = None
async def _init_session(self):
if self.session is None:
self.session = aiohttp.ClientSession()
async def close(self):
if self.session:
await self.session.close()
self.session = None
async def get_market_data(
self, limit: int = 20
) -> List[CryptoData]:
"""Fetch market data for top cryptocurrencies"""
await self._init_session()
url = f"{self.base_url}/coins/markets"
params = {
"vs_currency": "usd",
"order": "market_cap_desc",
"per_page": str(limit),
"page": "1",
"sparkline": "false",
}
try:
async with self.session.get(
url, params=params
) as response:
if response.status != 200:
logger.error(
f"API Error {response.status}: {await response.text()}"
)
return []
data = await response.json()
crypto_data = []
for coin in data:
try:
crypto_data.append(
CryptoData(
symbol=str(
coin.get("symbol", "")
).upper(),
current_price=float(
coin.get("current_price", 0)
),
market_cap=float(
coin.get("market_cap", 0)
),
total_volume=float(
coin.get("total_volume", 0)
),
price_change_24h=float(
coin.get("price_change_24h", 0)
),
market_cap_rank=int(
coin.get("market_cap_rank", 0)
),
)
)
except (ValueError, TypeError) as e:
logger.error(
f"Error processing coin data: {str(e)}"
)
continue
logger.info(
f"Successfully fetched data for {len(crypto_data)} coins"
)
return crypto_data
except Exception as e:
logger.error(f"Exception in get_market_data: {str(e)}")
            return []


class CryptoSwarmSystem:
    def __init__(self):
        self.agents = self._initialize_agents()
        self.data_fetcher = DataFetcher()
        logger.info("Crypto Swarm System initialized")

    def _initialize_agents(self) -> Dict[str, Agent]:
"""Initialize different specialized agents"""
base_config = {
"max_loops": 1,
"autosave": True,
"dashboard": False,
"verbose": True,
"dynamic_temperature_enabled": True,
"retry_attempts": 3,
"context_length": 200000,
"return_step_meta": False,
"output_type": "string",
"streaming_on": False,
}
agents = {
"price_analyst": Agent(
agent_name="Price-Analysis-Agent",
system_prompt="""Analyze the given cryptocurrency price data and provide insights about:
1. Price trends and movements
2. Notable price actions
3. Potential support/resistance levels""",
saved_state_path="price_agent.json",
user_name="price_analyzer",
**base_config,
),
"volume_analyst": Agent(
agent_name="Volume-Analysis-Agent",
system_prompt="""Analyze the given cryptocurrency volume data and provide insights about:
1. Volume trends
2. Notable volume spikes
3. Market participation levels""",
saved_state_path="volume_agent.json",
user_name="volume_analyzer",
**base_config,
),
"market_analyst": Agent(
agent_name="Market-Analysis-Agent",
system_prompt="""Analyze the overall cryptocurrency market data and provide insights about:
1. Market trends
2. Market dominance
3. Notable market movements""",
saved_state_path="market_agent.json",
user_name="market_analyzer",
**base_config,
),
}
        return agents

    async def analyze_market(self) -> Dict:
"""Run real-time market analysis using all agents"""
try:
# Fetch market data
logger.info("Fetching market data for top 20 coins")
crypto_data = await self.data_fetcher.get_market_data(20)
if not crypto_data:
return {
"error": "Failed to fetch market data",
"timestamp": datetime.now().isoformat(),
}
# Run analysis with each agent
results = {}
for agent_name, agent in self.agents.items():
logger.info(f"Running {agent_name} analysis")
analysis = self._run_agent_analysis(
agent, crypto_data
)
results[agent_name] = analysis
return {
"timestamp": datetime.now().isoformat(),
"market_data": {
coin.symbol: {
"price": coin.current_price,
"market_cap": coin.market_cap,
"volume": coin.total_volume,
"price_change_24h": coin.price_change_24h,
"rank": coin.market_cap_rank,
}
for coin in crypto_data
},
"analysis": results,
}
except Exception as e:
logger.error(f"Error in market analysis: {str(e)}")
return {
"error": str(e),
"timestamp": datetime.now().isoformat(),
            }

    def _run_agent_analysis(
self, agent: Agent, crypto_data: List[CryptoData]
) -> str:
"""Run analysis for a single agent"""
try:
data_str = json.dumps(
[
{
"symbol": cd.symbol,
"price": cd.current_price,
"market_cap": cd.market_cap,
"volume": cd.total_volume,
"price_change_24h": cd.price_change_24h,
"rank": cd.market_cap_rank,
}
for cd in crypto_data
],
indent=2,
)
prompt = f"""Analyze this real-time cryptocurrency market data and provide detailed insights:
{data_str}"""
return agent.run(prompt)
except Exception as e:
logger.error(f"Error in {agent.agent_name}: {str(e)}")
            return f"Error: {str(e)}"


async def main():
# Create output directory
Path("reports").mkdir(exist_ok=True)
# Initialize the swarm system
swarm = CryptoSwarmSystem()
while True:
try:
# Run analysis
report = await swarm.analyze_market()
# Save report
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
report_path = f"reports/market_analysis_{timestamp}.json"
with open(report_path, "w") as f:
json.dump(report, f, indent=2, default=str)
logger.info(
f"Analysis complete. Report saved to {report_path}"
)
# Wait before next analysis
await asyncio.sleep(300) # 5 minutes
except Exception as e:
logger.error(f"Error in main loop: {str(e)}")
await asyncio.sleep(60) # Wait 1 minute before retrying
finally:
if swarm.data_fetcher.session:
                await swarm.data_fetcher.close()


if __name__ == "__main__":
asyncio.run(main())
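The row-parsing loop in `DataFetcher.get_market_data` above follows a defensive-coercion pattern: each untyped JSON field is coerced explicitly, and rows that fail conversion are logged and skipped rather than failing the whole batch. A minimal, dependency-free sketch of that pattern (the `CryptoData` fields are trimmed down and the sample rows are hypothetical, not real API output):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class CryptoData:
    symbol: str
    current_price: float
    market_cap_rank: int


def parse_rows(rows: List[dict]) -> List[CryptoData]:
    """Coerce untyped API rows into typed records, skipping malformed ones."""
    out = []
    for coin in rows:
        try:
            out.append(
                CryptoData(
                    symbol=str(coin.get("symbol", "")).upper(),
                    current_price=float(coin.get("current_price", 0)),
                    market_cap_rank=int(coin.get("market_cap_rank", 0)),
                )
            )
        except (ValueError, TypeError):
            # Drop the bad row instead of failing the whole batch
            continue
    return out


rows = [
    {"symbol": "btc", "current_price": "97000.5", "market_cap_rank": 1},
    {"symbol": "eth", "current_price": None, "market_cap_rank": 2},  # dropped
]
print([c.symbol for c in parse_rows(rows)])  # → ['BTC']
```

The trade-off is silent data loss on malformed rows, which is why the original code logs each skipped coin before continuing.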

File diff suppressed because it is too large

@@ -1,22 +1,22 @@
"""
- For each diagnosis, pull lab results,
- eGFR
- for each diagnosis, pull lab ranges,
- pull ranges for diagnosis
- if the diagnosis is x, then the lab ranges should be a to b
- train the agents, increase the load of input
- medical history sent to the agent
- set up RAG for the agents
- run the first agent -> kidney disease -> don't know the stage -> stage 2 -> lab results -> indicative of stage 3 -> the case got elevated ->
- how to manage diseases by looking at correlating labs, docs, diagnoses
- put docs in RAG ->
- monitoring, evaluation, and treatment
- can we confirm for every diagnosis -> monitoring, evaluation, and treatment, specialized for these things
- find diagnosis -> or have a diagnosis -> for each diagnosis, is there evidence of those 3 things
- swarm of those 4 agents ->
- FDA API for healthcare for commercially available papers
"""

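The note above about confirming monitoring, evaluation, and treatment for every diagnosis can be sketched as a simple evidence check before any agent work is dispatched. This is a hypothetical illustration — the function, categories, and case data are illustrative, not taken from the codebase:

```python
# Evidence categories the notes say must exist for every diagnosis
REQUIRED = ("monitoring", "evaluation", "treatment")


def missing_evidence(diagnoses: dict) -> dict:
    """Map each diagnosis to the evidence categories it still lacks."""
    return {
        dx: [cat for cat in REQUIRED if not evidence.get(cat)]
        for dx, evidence in diagnoses.items()
    }


cases = {
    "CKD stage 2": {
        "monitoring": ["eGFR trend"],
        "evaluation": ["lab results"],
        "treatment": [],
    },
    "Hypertension": {
        "monitoring": ["BP log"],
        "evaluation": ["lab results"],
        "treatment": ["ACE inhibitor"],
    },
}
print(missing_evidence(cases))
# → {'CKD stage 2': ['treatment'], 'Hypertension': []}
```

A gatekeeper like this would let the swarm route only incomplete cases to the specialized agents described in the notes.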
@@ -10,7 +10,7 @@ agent = Agent(
system_prompt=FINANCIAL_AGENT_SYS_PROMPT
+ "Output the <DONE> token when you're done creating a portfolio of etfs, index, funds, and more for AI",
max_loops=1,
model_name="openai/gpt-4o",
model_name="claude-3-sonnet-20240229",
dynamic_temperature_enabled=True,
user_name="Kye",
retry_attempts=3,

Some files were not shown because too many files have changed in this diff
